Artificial intelligence (AI) is increasingly appearing in applications like autonomous vehicles and medical assistance devices, but consumers don’t necessarily trust these applications. Research shows that operational safety and data security are decisive factors in getting people to trust new AI technology. Even more important is the balance between control and autonomy in the technology. And communication is key – it should be proactive and open in the early stages of introducing the public to the technology. Firms that effectively communicate the benefits of an AI application reduce consumers’ perceived risk, which results in greater trust and, ultimately, greater adoption of the technology.

Artificial intelligence (AI) is emerging in applications like autonomous vehicles and medical assistance devices. But even when the technology is ready to use and has been shown to meet customer demands, there’s still a great deal of skepticism among consumers. For example, a survey of more than 1,000 car buyers in Germany showed that only 5% would prefer a fully autonomous vehicle. A similar share of consumers are skeptical of AI-enabled medical diagnosis systems, such as IBM’s Watson. The public’s lack of trust in AI applications may cause us to collectively forgo the advantages we could gain from them.

To understand trust in the relationship between humans and automation, we have to explore it in two dimensions: trust in the technology and trust in the innovating firm.


In human interactions, trust is the willingness to be vulnerable to the actions of another person. But trust is an evolving and fragile phenomenon that can be destroyed even faster than it is created. Trust is essential to reducing perceived risk, which is a combination of uncertainty and the seriousness of the potential outcome. In the context of AI, perceived risk stems from giving up control to a machine. Trust in automation can evolve only from predictability, dependability, and faith.
