Much must happen before it can be achieved

trust [truhst]

noun

  • reliance on the integrity, strength, ability, surety, etc., of a person or thing; confidence.

verb (used without object)

  • to rely upon or place confidence in someone or something (usually followed by in or to):

    To trust in another's honesty; trusting to luck.

  • to have confidence; hope:

    Things work out if one only trusts.

techlash [tek-lash]

  • a strong reaction against the major technology companies, as a result of concerns about their power, users’ privacy, the possibility of political manipulation, etc.

If you are a human reading this, you know all too well the importance of trust in a relationship. It can take years to understand and trust a person before we open up to them, confident they will care for us and have our backs. Similarly, before we provide sensitive information to a business or other institution, we must believe that they will responsibly use and protect that information in exchange for helping us live a better, more productive and fulfilled life.

It’s through understanding that we can assess our sense of safety and security in someone or something else. It can take a long time to build trust, and a very short time to lose it.


Interview: Sri Shivananda

Senior Vice President & Chief Technology Officer, PayPal

“Just like with anything that is powerful, when people don't see it and they don't understand it, they end up fearing it and therefore avoiding it. ... A customer should be able to see why something happened ... and platforms need to be able to explain why any choice was made.”

Sri Shivananda

Senior Vice President and Chief Technology Officer, PayPal

For decades, consumers have placed more trust in the technology sector to do the right thing than in other industries such as energy, automotive, telecommunications, and financial services. Yet there are signs that this is changing. Data breaches, “deep fakes” on social media, blatant misuse of sensitive information for profit, and the growing dominance (some would say monopolistic tendencies) of technology companies in our daily lives have helped to erode this trust.

A recent Capgemini report demonstrates how this is impacting people’s trust in AI: 75 percent of respondents said they want more transparency when a service is powered by AI; 73 percent want to know if AI is treating them fairly; and 76 percent think there should be further regulation on how companies use AI.

Meanwhile, in a recent global study by Edelman, 61 percent of consumers felt the pace of change in technology is too fast; 66 percent worry technology will make it impossible to know if what people are seeing or hearing is real; and 61 percent feel their government does not understand emerging technologies enough to regulate them effectively.

“For consumers to be able to trust the [AI] experience, they have to trust the organizations that are actually dealing with all of their data. Data is the raw fuel on which AI runs. The relationship between a customer and a company is based on the trust they build over time,” says Sri Shivananda, SVP and CTO of PayPal. “When a customer can trust the platform or the company that is delivering experiences based on AI, they begin to implicitly trust the AI behind the experiences that are being put in front of them.”

The Capgemini study reinforces the point: 62 percent said they would place higher trust in a company whose AI interactions they perceived as ethical; 59 percent would have higher loyalty to the company; and 55 percent would purchase more products and provide high ratings and positive feedback on social media.

Transparency, then, becomes an essential lens through which AI developers, policymakers and end-users should approach AIX design. To ensure adequate information exchange between end-users and the technology, we must consider important questions about explainability, purpose and data management, but in a way that is distinct from the debate around ethics.


Inserting humans in the decision-making process

Interview: Christina Colclough

Founder of The Why Not Lab & former Director of Platform and Agency Workers, Digitalisation and Trade at UNI Global Union


Know thy customer


There are clear benefits to AI-enabled products and services across a number of industries. The healthcare sector, for one, stands to make great strides in using AI to deliver better patient care. Cities incorporate AI to improve traffic flow and aid in urban planning. At the consumer level, myriad products currently on the market and in development are having an impact, positive and negative, on people’s personal and professional lives. These early experiences do help improve trust, assuming everything goes well.

As AI becomes more ubiquitous, though, a clear communications plan outlining how it will impact consumers or citizens is more important than ever if we want to quell unrest and create trust.

“When it comes to building AI-based experiences for our customers, all of us should think of the trust with the customer as the final line not to cross,” adds Shivananda. “Trust must be demonstrated through everything that a customer sees about the company – the core value system, how we execute, how do we treat them when they call us, how do we make it right when something goes wrong. As long as it is all centered around the customer.”

A great part of the responsibility will lie with marketers and corporate communications professionals. They must put people at the center of any communications effort. Know thy customer! What are their fears, concerns, needs and wants? What is their level of understanding of AI? What perceptions do they hold, both positive and negative? And what other forces in society are influencing their opinions?

To address these questions, communications professionals should consider outward messaging that:

  • outlines the clear benefits of their AI product;

  • demonstrates how it works to achieve these benefits, with real examples;

  • ensures consumers feel part of the process by letting them interact with and “teach” the AI to better understand them, dispelling perceptions of inherent bias;

  • truthfully reacts to misinformation or preconceived notions; and

  • reassures them that the AI-enabled product or service has their best interests and safety in mind.

Jeff Poggi, Co-CEO of the McIntosh Group, sees simpler, more accessible communications as a key enabler of consumer adoption of AI.

  • “You have to have an honest, authentic conversation with your consumers so that they know exactly what's going on. The challenge with that is, unfortunately, the legal system, and it makes it really, really hard for businesses. Not one of us has ever read all the disclosures in the music service agreement we sign when we sign up for Spotify or Apple Music or whatever it may be. These disclosures may seem minor and simple, and people basically write them off today. But if the transaction of the future involves sharing more of my personal data, I probably need to understand what's going to happen with my personal data a little more. We need to find a way to bring that legal framework down to a very simple, easily digestible, understandable level, so that it's not too complex, because that's what will scare people away.”

Communications, then, as it relates to the larger theme of transparency, is an essential component of AI experience design. End-users will have much better experiences when they have a better understanding of, and realistic expectations for, the technology. Coupled with explainability, communication provides a continuing narrative by which end-users can relate their own experiences and derive the most value from AI services and products.


Power and potential aren’t enough


“How do we maintain our human rights, but also our right to be human? How do we avoid the commodification of people, so they're not just seen as numerous data points and algorithmic influences, but the human you are? How do you remain relevant and wanted and prioritized in this very digitalized world?”

Dr Christina J. Colclough

The Why Not Lab

“The human brain is a marvelous piece of computing equipment. And we don't quite fully understand all of the calculations that we are subconsciously making as we go about the world today,” says David Foster, Head of Lyft Bikes and Scooters. “Therefore, how do we model those [calculations] so that AI can make equivalently good decisions?”

It will become critical, then, to be transparent and openly communicate the “purpose” of an AI-enabled product or service, so that consumers can assess whether the AI is “successful,” or whether the assigned purpose is even the right one for them.

This translates into an easy equation for developers and companies building AI systems and products: AI without purpose is without value. And if AI doesn’t add value to our lives, we will see it as simply intrusive, and we will reject it.

Our purpose, and the purpose of our AI, will be ever more intertwined in the future. We had better ensure that they are also aligned.

Data Privacy

Improving the trust factor is key



Make it reliable, intuitive and easy to operate


Building trust and transparency in consumer AI

The Business of A.I.

Businesses around the world are turning to AI to streamline production, automate services, serve up better content and optimize their workforce. There are already thousands of companies driving the industry forward. But what is the industry worth? Here are five key stats about the business of AI.


Big Business

McKinsey estimates AI techniques have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries.


3 Sources of Value

According to Gartner, AI-derived business value is forecast to reach $3.9 trillion in 2020, drawn from three different sources: customer experience, new revenue and cost reduction. Meanwhile, PwC predicts that AI could contribute up to $15.7 trillion to global GDP by 2030.


AI Just Starting Up

Venture funding in AI companies reached a mind-blowing $61 billion from 2010 through the first quarter of 2020. SoftBank, for example, recently announced a second, AI-focused $108 billion Vision Fund.


Jobs Shifting

PwC estimates that 30 percent of jobs are at potential risk of automation by the mid-2030s, including 44 percent of jobs held by workers with low levels of education. At the same time, new highly skilled jobs are being created.


Big Spender

Statista estimates that global spending on cognitive and artificial intelligence (AI) systems in 2019 amounted to $13.5 billion for software, $12.7 billion for services and $9.6 billion for hardware.

Exchange Your Perspective

Want to be involved in shaping the future of AI experiences?
Share your email and we’ll keep you updated on what comes next.

This report is sponsored by LG Electronics and Element AI and produced by the BriteBirch Collective.

Contact Us

If you’re interested in collaborating on initiatives related to Artificial Intelligence
Experience (AIX) and the creation of a more equitable, safe and transparent future
through human-centric AI, please email us at

This work is licensed under CC BY-NC-SA 4.0
