All You Need To Know About Artificial Intelligence

A short primer on Artificial Intelligence: what it can do, what it cannot do, and what it means for marketing
What is Artificial Intelligence?

According to Wikipedia, “Artificial intelligence (AI) is intelligence exhibited by machines”.
In computer science, an ideal “intelligent” machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving“.

This is a great basic explanation, except that it does not really explain what AI does. So what does AI do?

The White House, or more specifically the National Science and Technology Council, recently tried to answer that question and came to the following conclusion: “There is no single definition of AI that is universally accepted by practitioners. Some define AI loosely as a computerized system that exhibits behavior that is commonly thought of as requiring intelligence. Others define AI as a system capable of rationally solving complex problems or taking appropriate actions to achieve its goals in whatever real world circumstances it encounters. Experts offer differing taxonomies of AI problems and solutions. A popular AI textbook used the following taxonomy:

(1) systems that think like humans (e.g., cognitive architectures and neural networks);
(2) systems that act like humans (e.g., pass the Turing test via natural language processing, knowledge representation, automated reasoning, and learning);
(3) systems that think rationally (e.g., logic solvers, inference, and optimization); and
(4) systems that act rationally (e.g., intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision-making, and acting).”
Sounds complicated!

“AI is whatever we can do that computers can’t… yet,” says science fiction writer Nancy Fulda.

As you can see, the answer to the question of what AI is depends on who you ask. The underlying technology, though, is nothing new and has been around for decades. Recent advances in computers and chips have made it possible to start taking advantage of these technologies. Some of the ones that are well understood and applied today are explained below.

Why is Artificial Intelligence so important?

AI is important because it is here and it is here to stay. Its consequences, mostly good, are already a reality, and the better we understand them, the better we can leverage them for a positive result.

The promise of AI and machine learning lies in their potential to improve people’s lives by helping to solve some of the world’s greatest challenges and inefficiencies. Many have compared the promise of AI to the transformative impact of advances in mobile computing.

Public- and private-sector investments in basic and applied R&D on AI have already begun reaping major benefits to the public in fields as diverse as health care, transportation, the environment, criminal justice, and economic inclusion.

The rapid growth of AI has dramatically increased the need for people with relevant skills to support and advance the field. An AI-enabled world demands a data-literate citizenry that is able to read, use, interpret, and communicate about data, and participate in policy debates about matters affected by AI.
AI knowledge and education are increasingly emphasized in Federal Science, Technology, Engineering, and Mathematics (STEM) education programs.

AI’s central economic effect in the short term will be the automation of tasks that could not be automated before. This will likely increase productivity and create wealth, but it may also affect particular types of jobs in different ways, reducing demand for certain skills that can be automated while increasing demand for other skills that are complementary to AI.
Analysis by the White House Council of Economic Advisors (CEA) suggests that the negative effect of automation will be greatest on lower-wage jobs, and that there is a risk that AI-driven automation will increase the wage gap between less-educated and more educated workers, potentially increasing economic inequality.

Am I already using Artificial Intelligence?

If you are using the web or mobile apps, chances are that you are using AI.

And if you’ve bought a product on Amazon, taken a ride with Uber, or booked a trip on Expedia, then you’ve used some of the most advanced AI available today.

Remarkable progress has been made on what is known as Narrow AI, which addresses specific application areas such as playing strategic games, language translation, self-driving vehicles, and image recognition.
Narrow AI underpins many commercial services such as trip planning, shopper recommendation systems, and ad targeting, and is finding important applications in medical diagnosis, education, and scientific research.

What are the challenges of Artificial Intelligence?

AI is a powerful force and a reality for everybody. It improves our healthcare, shopping and travel experiences and is starting to make inroads into the workplace and government.
With that, comes a responsibility to make sure AI is a force for good, and that its tremendous power does not create a new chasm in our society.
Our friends at the White House sum it up perfectly: “AI can be a major driver of economic growth and social progress, if industry, civil society, government, and the public work together to support development of the technology, with thoughtful attention to its potential and to managing its risks.
Government has several roles to play. It should convene conversations about important issues and help to set the agenda for public debate. It should monitor the safety and fairness of applications as they develop, and adapt regulatory frameworks to encourage innovation while protecting the public. It should support basic research and the application of AI to public goods, as well as the development of a skilled, diverse workforce. And government should use AI itself, to serve the public faster, more effectively, and at lower cost.”
Many areas of public policy, from education and the economic safety net, to defense, environmental preservation, and criminal justice, will see new opportunities and new challenges driven by the continued progress of AI.
Government must continue to build its capacity to understand and adapt to these changes.
As the technology of AI continues to develop, practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations. Researchers and practitioners have increased their attention to these challenges, and should continue to focus on them.
Developing and studying machine intelligence can help us better understand and appreciate our human intelligence.
Used thoughtfully, AI can augment our intelligence, helping us chart a better and wiser path forward!

How is AI impacting business?

As noted earlier, AI’s central economic effect in the short term will be the automation of tasks that could not be automated before. There is some historical precedent for waves of new automation from which we can learn, and some ways in which AI will be different. Government must understand the potential impacts so it can put in place policies and institutions that will support the benefits of AI, while mitigating the costs.

Like past waves of innovation, AI will create both benefits and costs. The primary benefit of previous waves of automation has been productivity growth; today’s wave of automation is no different. For example, a 2015 study of robots in 17 countries found that they added an estimated 0.4 percentage point on average to those countries’ annual GDP growth between 1993 and 2007, accounting for just over one tenth of those countries’ overall GDP growth during that time.
One important concern arising from prior waves of automation, however, is the potential impact on certain types of jobs and sectors, and the resulting impacts on income inequality. Because AI has the potential to eliminate or drive down wages of some jobs, especially low- and medium-skill jobs, policy interventions will likely be needed to ensure that AI’s economic benefits are broadly shared and that inequality is diminished and not worsened as a consequence.
The economic policy questions raised by AI-driven automation are important but they are best addressed by a separate White House working group.
The White House will conduct an additional interagency study on the economic impact of automation on the economy and recommended policy responses, to be published in the coming months.

Why is AI impacting marketing?

Many CMOs express a desire to leverage AI-based technology and solutions, but often lack a deeper understanding of the foundations of AI, as well as any starting points to introduce AI to their operations.

A recent study of CMOs from the U.S., U.K., and China from organizations with $500 million+ in revenue found that about two-thirds believe AI will play an essential role in the future of their marketing operations. But only a third say they have a solid understanding of how to apply AI to their operations.

The Impact

Long gone are the days when blasting universal marketing messages, ignorant of consumers’ demographics or preferences, was an acceptable communications strategy.
Today’s marketing requires engaging prospective and existing customers through highly personalized and relevant conversations. These should be based on myriad factors, such as observed individual behaviors and preferences, contextual awareness like the time of day or location, as well as extracted insights from data gathered from the customer base. Companies like Spotify, Amazon, and Netflix lead the way and set expectations in this regard. Observed user behavior and choices are leveraged instantly to optimize the experience and offering, which yields a more personal brand relationship and improved loyalty.

However, personalized, tailored marketing at scale quickly reaches a breaking point when relying on a manual workforce. Activities like identifying existing customer segments that require separate messaging, running social advertising targeted to the right demographics, or curating product recommendations based on a user’s other selections are time-consuming for humans and often beyond the scale of what a marketing department can accomplish.
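To make the last of those activities concrete, here is a minimal sketch of the kind of item-based recommendation logic such systems automate; the purchase data and product indices below are entirely invented for illustration.

```python
import numpy as np

# Hypothetical user-item purchase matrix: rows are users, columns are products.
# A 1 means the user bought the product. All of this data is invented.
purchases = np.array([
    [1, 1, 0, 0],  # user A bought products 0 and 1
    [1, 1, 1, 0],  # user B bought products 0, 1, and 2
    [0, 1, 1, 1],  # user C bought products 1, 2, and 3
])

def item_similarity(matrix):
    """Cosine similarity between item columns: items often bought together score high."""
    norms = np.linalg.norm(matrix, axis=0)
    return (matrix.T @ matrix) / np.outer(norms, norms)

sim = item_similarity(purchases)

# Recommend for a shopper who just bought product 0: rank the other items
# by how strongly they co-occur with it in past purchases.
ranked = np.argsort(-sim[0])
print([int(i) for i in ranked if i != 0])
```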

Enter artificial intelligence. To perform increasingly personalized marketing at scale, marketing workforces will need to rely on the input and efficiency that machine-driven AI applications can provide across all cycles of the customer journey. This does not necessarily imply a hands-off approach, where AI-based applications operate on their own without human involvement. Indeed, the performance of AI-based algorithms improves when they are reviewed, sanitized, and operationalized by real, live humans.

AI’s potential will soon allow existing and new market competitors to disrupt the landscape with a force similar to that of the mobile revolution; putting off the adoption of AI will set marketing back. Non-AI companies will inevitably engage with customers in a less personalized and impactful way, miss opportunities to increase the efficiency of day-to-day marketing tasks, spend advertising budgets in suboptimal ways, and miss out on data-driven insights that can directly impact sales.

Source: A Starter Guide to AI in Marketing.

Am I going to get replaced by AI?

We have all read it! AI will replace us all! All the doctors, lawyers, and accountants. The line workers, handymen, and middlemen. Teachers, drivers, and creatives.

Some reports forecast that 6% of the US workforce will be replaced by AI by the year 2021. For certain industries this share is closer to 60%, or more.

The reality is that the goal of any AI application must be to augment the human rather than to replace the human. As we’ve learned above, AI is still in its infancy, and even on a rapid growth curve it will be decades before it can truly replace the wits, experience, creativity, and judgment of a human.

The reality is also that we will all need to learn to work and live with AI. The rapid growth of AI has dramatically increased the need for people with relevant skills to support and advance the field. The AI workforce includes AI researchers who drive fundamental advances in AI, a larger number of specialists who refine AI methods for specific applications, and a much larger number of users who operate those applications in specific settings. For researchers, AI training is inherently interdisciplinary, often requiring a strong background in computer science, statistics, mathematical logic, and information theory. For specialists, training typically requires a background in software engineering and in the application area. For users, familiarity with AI technologies is needed to apply AI technologies reliably.

How to get started with AI?

Getting started with AI is easier than ever before. Google it, and the plethora of courses, conferences, groups, and associations to help you understand and learn about AI is endless.

For those of you who are more on the practical side of things, we obviously recommend that you head over to our good friends at IBM and check out their Watson solution, accessible through the Bluemix cloud. Experimenting with, building, and commercializing AI is literally as simple as flipping a switch, and best of all, it is free for beginners.

Google, Amazon, Microsoft and Facebook have similar, API driven clouds to help you get started!
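As a flavor of what these API-driven clouds look like in practice, here is a minimal sketch of calling a hosted AI service over REST. The endpoint URL, API key, request shape, and response fields below are placeholders, not the actual Watson, Google, Amazon, Microsoft, or Facebook API; consult your provider’s documentation for the real values.

```python
import requests

# Placeholder endpoint and key: substitute the real values from your chosen
# provider's documentation (IBM Watson, Google, Amazon, Microsoft, ...).
ENDPOINT = "https://api.example.com/v1/sentiment"
API_KEY = "your-api-key-here"

def analyze_sentiment(text):
    """POST text to a hosted AI service and return its (assumed) JSON result."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"sentiment": "positive", "score": 0.93}

print(analyze_sentiment("Getting started with AI was easier than I expected!"))
```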

Examples of AI use cases

The use cases for AI are plentiful – we’ve mentioned your Uber ride, the last product you bought on Amazon, and the travel recommendations on Expedia.

But AI is also improving how doctors do their jobs, how lawyers win their cases, and how bankers close deals.
Another important field of application is security, from the many known and not so well known government projects, to software security and community safety applications.
Head over to Forbes for 13 recent examples of how AI is currently changing your life.

General AI vs. Narrow AI

Also called strong and weak AI, this is an important distinction between two vastly differing approaches to AI.

Strong AI – “Artificial general intelligence, a hypothetical machine that exhibits behaviour at least as skilful and flexible as humans do, and the research program of building such an artificial general intelligence.”
Weak AI – “Non-sentient computer intelligence, typically focused on a narrow task”

In other words, a weak AI uses models of its problem domain given to it by programmers, while a strong AI figures out its own models based on raw input and nothing else. Most of the currently available AI applications rely on weak, or narrow, AI.

What is weak AI?

The principle behind Weak AI is simply the fact that machines can be made to act as if they are intelligent. For example, when a human player plays chess against a computer, the human player may feel as if the computer is actually making impressive moves. But the chess application is not thinking and planning at all. All the moves it makes were previously fed into the computer by a human, which is how the software is made to play the right moves at the right times.
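In code, this hand-fed approach boils down to a lookup table of responses written in advance by a programmer. A toy sketch, with an invented opening book:

```python
# A toy "weak AI" chess responder: every reply was hand-coded in advance.
# This invented opening book maps an opponent's move to a scripted answer.
OPENING_BOOK = {
    "e2e4": "e7e5",  # if White plays e4, answer e5
    "d2d4": "d7d5",  # if White plays d4, answer d5
    "g1f3": "g8f6",  # if White plays Nf3, answer Nf6
}

def reply(last_move):
    """Look up the scripted response; no thinking or planning happens here."""
    return OPENING_BOOK.get(last_move, "resign")  # no rule for it -> give up

print(reply("e2e4"))  # e7e5 -- looks clever, but it is pure table lookup
```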

What is strong AI?

The principle behind Strong AI is that machines could be made to think, or in other words, could represent human minds in the future. If that is the case, those machines will have the ability to reason, think, and do all the functions that a human is capable of doing. But according to most people, this technology will never be developed, or at least it will take a very long time. However, Strong AI, which is in its infant stage, promises a lot due to recent developments in nanotechnology. Nanobots, which can help us fight diseases and also make us more intelligent, are being designed. Furthermore, the development of an artificial neural network that can function like a proper human being is being looked at as a future application of Strong AI.

Natural Language Processing

Natural language processing is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages.
As such, NLP is related to the area of human–computer interaction. Many challenges in NLP involve natural language understanding, that is, enabling computers to derive meaning from human or natural language input; others involve natural language generation.

Modern NLP algorithms are based on machine learning, especially statistical machine learning. The paradigm of machine learning is different from that of most prior attempts at language processing. Prior implementations of language-processing tasks typically involved the direct hand coding of large sets of rules. The machine-learning paradigm calls instead for using general learning algorithms — often, although not always, grounded in statistical inference — to automatically learn such rules through the analysis of large corpora of typical real-world examples. A corpus (plural, “corpora”) is a set of documents (or sometimes, individual sentences) that have been hand-annotated with the correct values to be learned.
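A minimal sketch of this machine-learning paradigm, using the widely available scikit-learn library and a tiny hand-annotated corpus (the sentences and labels below are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny hand-annotated corpus: each sentence carries the label to be learned.
corpus = [
    "great product, love it",
    "terrible service, very slow",
    "fast shipping and helpful staff",
    "awful quality, do not buy",
]
labels = ["positive", "negative", "positive", "negative"]

# Instead of hand-coding rules, a general learning algorithm infers them
# from the word statistics of the annotated examples.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(corpus, labels)

print(model.predict(["helpful and fast"]))  # ['positive']
```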

Recent research has increasingly focused on unsupervised and semi-supervised learning algorithms. Such algorithms are able to learn from data that has not been hand-annotated with the desired answers, or using a combination of annotated and non-annotated data. Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data. However, there is an enormous amount of non-annotated data available (including, among other things, the entire content of the World Wide Web), which can often make up for the inferior results.
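For the unsupervised side, the sketch below clusters unlabeled sentences with k-means; no hand-annotated answers are involved (the documents are invented):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Unlabeled documents: no human has annotated the "right" answers.
docs = [
    "cheap flights to paris",
    "hotel deals in rome",
    "weekend city break offers",
    "machine learning tutorial",
    "neural network basics",
    "deep learning course online",
]

vectors = TfidfVectorizer().fit_transform(docs)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# With luck, the travel and AI topics separate without any labels at all.
for doc, cluster in zip(docs, clusters):
    print(cluster, doc)
```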

Machine Learning

Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task.
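The contrast with hand-coding is easy to see in a few lines. Rather than writing explicit if/else rules, you hand the algorithm labeled examples and let it infer its own rules; a minimal sketch with scikit-learn and invented customer data:

```python
from sklearn.tree import DecisionTreeClassifier

# Invented training data: [hours_of_use, support_tickets] -> churned (1) or not (0).
X = [[2, 5], [1, 7], [40, 0], [35, 1], [3, 6], [50, 0]]
y = [1, 1, 0, 0, 1, 0]

# No hand-written rule such as "if tickets > 4 then churn": the tree learns
# its own decision boundaries from the labeled examples.
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(model.predict([[45, 1], [2, 8]]))  # expected: [0 1] -- stays, churns
```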

Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches over the years have included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others. As we know, none achieved the ultimate goal of General AI, and even Narrow AI was mostly out of reach with early machine learning approaches.

As it turned out, one of the very best application areas for machine learning for many years was computer vision, though it still required a great deal of hand-coding to get the job done. People would go in and write hand-coded classifiers like edge detection filters so the program could identify where an object started and stopped; shape detection to determine if it had eight sides; a classifier to recognize the letters “S-T-O-P.” From all those hand-coded classifiers they would develop algorithms to make sense of the image and “learn” to determine whether it was a stop sign.

Good, but not mind-bendingly great, especially on a foggy day when the sign isn’t perfectly visible, or when a tree obscures part of it. There’s a reason computer vision and image detection didn’t come close to rivaling humans until very recently: it was too brittle and too prone to error.
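As an illustration of what “hand-coded” meant in practice, here is a classic Sobel-style edge-detection filter written directly in NumPy; the tiny image is invented, and this is a sketch rather than a production vision pipeline.

```python
import numpy as np

# A hand-coded Sobel kernel: a human chose these weights to respond to
# vertical edges. Nothing here is learned from data.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def filter2d(image, kernel):
    """Slide the kernel over every interior pixel (cross-correlation)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny invented image: dark on the left half, bright on the right half.
image = np.array([[0, 0, 10, 10]] * 4, dtype=float)
print(filter2d(image, SOBEL_X))  # large responses mark the vertical edge
```

A filter like this has no notion of fog, glare, or occlusion, which is exactly why such hand-built pipelines were brittle.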

Deep Learning

Deep learning is a branch of machine learning based on a set of algorithms that attempt to model high level abstractions in data.
In a simple case, you could have two sets of neurons: ones that receive an input signal and ones that send an output signal.
When the input layer receives an input it passes on a modified version of the input to the next layer.

In a deep network, there are many layers between the input and output (and the layers are not made of neurons but it can help to think of it that way), allowing the algorithm to use multiple processing layers, composed of multiple linear and non-linear transformations.
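A minimal sketch of such a stack in plain NumPy, with each layer applying a linear transformation followed by a non-linear one (the weights are random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """A simple non-linear transformation applied between layers."""
    return np.maximum(0, x)

# Three layers: 4 inputs -> 8 hidden units -> 8 hidden units -> 2 outputs.
# Each layer applies a linear transformation (matrix multiply plus bias),
# then a non-linearity, and passes the modified signal to the next layer.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    h1 = relu(x @ W1 + b1)   # first hidden layer
    h2 = relu(h1 @ W2 + b2)  # second hidden layer
    return h2 @ W3 + b3      # output layer

print(forward(np.array([0.5, -1.2, 3.0, 0.1])))
```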

Deep learning is part of a broader family of machine learning methods based on learning representations of data.
An observation (e.g., an image) can be represented in many ways such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of particular shape, etc.
Some representations are better than others at simplifying the learning task (e.g., face recognition or facial expression recognition).
One of the promises of deep learning is replacing handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.

Data Science

Data science, also known as data-driven science, is an interdisciplinary field about scientific processes and systems to extract knowledge or insights from data in various forms, either structured or unstructured. It is a continuation of data analysis fields such as statistics, machine learning, data mining, and predictive analytics, similar to Knowledge Discovery in Databases (KDD).
Turing Award winner Jim Gray imagined data science as a “fourth paradigm” of science (empirical, theoretical, computational, and now data-driven) and asserted that “everything about science is changing because of the impact of information technology” and the data deluge.
Data science employs techniques and theories drawn from many fields within the broad areas of mathematics, statistics, operations research, information science, and computer science, including signal processing, probability models, machine learning, statistical learning, data mining, databases, data engineering, pattern recognition and learning, visualization, predictive analytics, uncertainty modeling, data warehousing, data compression, computer programming, artificial intelligence, and high-performance computing.
Methods that scale to big data are of particular interest in data science, although the discipline is not generally considered to be restricted to such big data, and big data technologies are often focused on organizing and preprocessing the data instead of analysis.
The development of machine learning has enhanced the growth and importance of data science.
Data science affects academic and applied research in many domains, including machine translation, speech recognition, robotics, search engines, and the digital economy, as well as the biological sciences, medical informatics, health care, the social sciences, and the humanities. It heavily influences economics, business, and finance.
From the business perspective, data science is an integral part of competitive intelligence, a newly emerging field that encompasses a number of activities, such as data mining and data analysis.

Predictive Analytics

Predictive analytics encompasses a variety of statistical techniques from predictive modeling, machine learning, and data mining that analyze current and historical facts to make predictions about future or otherwise unknown events.
In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of the risk or potential associated with a particular set of conditions, guiding decision making for candidate transactions.

The defining functional effect of these technical approaches is that predictive analytics provides a predictive score (probability) for each individual (customer, employee, healthcare patient, product SKU, vehicle, component, machine, or other organizational unit) in order to determine, inform, or influence organizational processes that pertain across large numbers of individuals, such as in marketing, credit risk assessment, fraud detection, manufacturing, healthcare, and government operations including law enforcement.
Predictive analytics is used in actuarial science, marketing, financial services, insurance, telecommunications, retail, travel, healthcare, child protection, pharmaceuticals, capacity planning, and other fields.
One of the best-known applications is credit scoring, which is used throughout financial services. Scoring models process a customer’s credit history, loan application, customer data, etc., in order to rank-order individuals by their likelihood of making future credit payments on time.
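A minimal sketch of such a scoring model, using logistic regression from scikit-learn; the applicant features and repayment labels are invented:

```python
from sklearn.linear_model import LogisticRegression

# Invented applicant data: [income_k, years_of_credit_history, missed_payments].
X = [[30, 1, 3], [80, 10, 0], [55, 5, 1], [25, 2, 4], [90, 12, 0], [40, 3, 2]]
y = [0, 1, 1, 0, 1, 0]  # 1 = repaid on time, 0 = defaulted

model = LogisticRegression().fit(X, y)

# The model assigns each applicant a probability, which can then be
# rank-ordered into a credit score.
applicants = [[70, 8, 0], [28, 1, 5]]
for features, p in zip(applicants, model.predict_proba(applicants)[:, 1]):
    print(features, f"-> probability of on-time repayment: {p:.2f}")
```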