Transparency

Transparency speaks to the need for clear, open communication among all stakeholders, and especially with the end-user. From explainable AI and clear feedback loops to managing consumer expectations, this theme is about building and maintaining trust through human-centred design.

Transparency

Much must happen before it can be achieved

trust [truhst]
noun
reliance on the integrity, strength, ability, surety, etc., of a person or thing; confidence.
verb (used without object)
to rely upon or place confidence in someone or something (usually followed by in or to): To trust in another’s honesty; trusting to luck.
to have confidence; hope: Things work out if one only trusts.
techlash [tek-lash]
a strong reaction against the major technology companies, as a result of concerns about their power, users’ privacy, the possibility of political manipulation, etc.

If you are a human reading this, you know all too well the importance of trust in a relationship. It can take years to understand and trust a person before we open up to them, confident they will care for us and have our backs. Similarly, before we provide sensitive information to a business or other institution, we must believe that they will responsibly use and protect that information in exchange for helping us live a better, more productive and fulfilled life.

It’s through understanding that we can assess our sense of safety and security in someone or something else. It can take a long time to build trust, and a very short time to lose it.


Interview Sri Shivananda

Senior Vice President & Chief Technology Officer, PayPal

“Just like with anything that is powerful, when people don’t see it and they don’t understand it, they end up fearing it and therefore avoiding it. … A customer should be able to see why something happened … and platforms need to be able to explain why any choice was made.” – Sri Shivananda, Senior Vice President and Chief Technology Officer, PayPal

For decades, consumers have placed more trust in the technology sector to do the right thing compared to other industries such as energy, automotive, telecommunications, and financial services. Yet there are signs that this is changing. Data breaches, “deep fakes” on social media, blatant misuse of sensitive information for profit, and the growing dominance, some would say monopolistic tendencies, of technology companies in our daily lives have helped to erode this trust.

A recent Capgemini report demonstrates how this is impacting people’s trust in AI: 75 percent of respondents said they want more transparency when a service is powered by AI; 73 percent want to know if AI is treating them fairly; and 76 percent think there should be further regulation on how companies use AI.

Meanwhile, in a recent global study by Edelman, 61 percent of consumers felt the pace of change in technology is too fast; 66 percent worry technology will make it impossible to know if what people are seeing or hearing is real; and 61 percent feel their government does not understand emerging technologies enough to regulate them effectively.

“For consumers to be able to trust the [AI] experience, they have to trust the organizations that are actually dealing with all of their data. Data is the raw fuel on which AI runs. The relationship between a customer and a company is based on the trust they build over time,” says Sri Shivananda, SVP and CTO, PayPal. “When a customer can trust the platform or the company that is delivering experiences based on AI, they begin to implicitly trust the AI behind the experiences that are being put in front of them.”

The Capgemini study reinforces the point – 62 percent said they would place higher trust in a company whose AI interactions they perceived as ethical; 59 percent would have higher loyalty to the company; and 55 percent would purchase more products and provide high ratings and positive feedback on social media.

Transparency, then, becomes an essential lens through which AI developers, policymakers and end-users should approach AIX design. To ensure there is adequate information exchange between end-users and the technology, we must consider important questions about explainability, purpose and data management, but in a way that is distinct from the debate around ethics.

Explainability

Inserting humans in the decision-making process

As machines continue to play a larger role in making decisions that impact a person’s life, it will become more important for these machines to explain the process by which they arrive at those decisions.

“We have to try and introduce or enable trust in these systems,” says Dr. Christina J. Colclough, who runs The Why Not Lab and is an advocate for global workers’ rights in the digital age. “And the way that we can do that is by having demands for transparency, fairness, and auditability, so that humans don’t feel that they are controlled by this algorithmic system, which knows more about me than I do.”

In essence, companies must open up the “black box” to let consumers and regulators know what’s going on under the hood and determine if AI’s recommendations are fair, accurate, reliable and in a person or society’s best interest.

“Just like with anything that is powerful, when people don’t see it and they don’t understand it, they end up fearing it and therefore avoiding it,” adds Shivananda. “It is important to make sure that as we build AI-based experiences with the use of technology, we need to build explainability into the process. A customer should be able to see why something happened on a product or an experience and platforms need to be able to explain why any choice was made.”

This becomes increasingly important in areas such as healthcare and autonomous driving, where an AI system’s decisions could be a matter of life and death. Many companies, including Google and IBM, have already made commitments in this area.

The extent to which we approach explainability will vary in an effort to balance the needs of privacy, security and accuracy – if we reveal too much, can the system be gamed or compromised? Without some effort to address explainability, however, developers will be leaving themselves open to questions and criticism.

Explainability, then, helps place humans in the decision-making process. First, by understanding how decisions are made, and then by providing the necessary context for users to further refine and optimize their AI experience. As a result, users can feel more confident not just in how AI may be advising them on more routine day-to-day activities, but also in more serious matters, such as critical healthcare or other areas where an autonomous decision will have far-reaching effects on a person’s wellbeing.
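
To make this concrete, consider how even a simple model can surface the “why” behind a decision. The sketch below (Python, scikit-learn) is purely illustrative – the loan-style features, data and model are invented for this example – and shows how a linear model’s score decomposes into per-feature contributions that can be surfaced to the user in plain language. Real systems would lean on richer, audited techniques such as SHAP values or counterfactual explanations.

```python
# A minimal, hypothetical sketch of per-decision explainability.
# The features, data and model are invented for illustration; production
# systems would use audited data and richer methods (e.g. SHAP).
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income_k", "debt_ratio", "years_at_job"]
X = np.array([[55, 0.42, 3],
              [82, 0.18, 7],
              [31, 0.61, 1],
              [64, 0.25, 4]])
y = np.array([1, 1, 0, 1])  # 1 = approved, 0 = declined

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(applicant: np.ndarray) -> None:
    """Print the decision plus each feature's signed contribution
    to the model's log-odds score, largest magnitude first."""
    contributions = model.coef_[0] * applicant
    verdict = "approved" if model.predict([applicant])[0] == 1 else "declined"
    print(f"Decision: {verdict}")
    for name, c in sorted(zip(FEATURES, contributions),
                          key=lambda pair: -abs(pair[1])):
        print(f"  {name}: {c:+.2f}")

explain(np.array([48, 0.50, 2]))
```

A linear model is chosen here precisely because its additive score makes contributions trivially readable; the harder design problem, as the interviewees note, is offering the same clarity for opaque models without exposing enough detail for the system to be gamed.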

Interview Christina Colclough

Founder of The Why Not Lab & former Director of Platform and Agency Workers, Digitalisation and Trade at UNI Global Union

Communication

Know thy customer

There are clear benefits to AI-enabled products and services across a number of industries. The healthcare sector, for one, stands to make great strides in using AI to deliver better patient care. Cities are incorporating AI to improve traffic flow and aid urban planning. At the consumer level, myriad products currently on the market and in development are shaping people’s personal and professional lives, for better or worse. These early experiences do help build trust, assuming everything goes well.

As AI becomes more ubiquitous, though, a clear communications plan outlining how it will impact consumers or citizens is more important than ever if we want to quell any unrest and create trust.

“When it comes to building AI-based experiences for our customers, all of us should think of the trust with the customer as the final line not to cross,” adds Shivananda. “Trust must be demonstrated through everything that a customer sees about the company – the core value system, how we execute, how we treat them when they call us, how we make it right when something goes wrong. As long as it is all centered around the customer.”

A great part of the responsibility will lie with marketers and corporate communications professionals. They must put people at the center of any communications effort. Know thy customer! What are their fears, concerns, needs and wants? What is their level of understanding of AI? What perceptions do they hold, both positive and negative? And what other forces in society are influencing their opinions?

To address these questions, communications professionals should consider outward messaging that outlines the clear benefits of their AI product; demonstrates how it works to achieve these benefits, with real examples; ensures consumers feel part of the process by letting them interact with and “teach” the AI to better understand them, countering the perception of inherent biases; truthfully reacts to misinformation or preconceived notions; and reassures them that the AI-enabled product or service has their best interests and safety in mind.

Jeff Poggi, Co-CEO of the McIntosh Group, sees simpler, more accessible communication as a key enabler of consumer adoption of AI. “You have to have an honest, authentic conversation with your consumers so that they know exactly what’s going on. The challenge with that is, unfortunately, the legal system, and it makes it really, really hard for businesses. There’s not one of us that has ever read all the disclosures in the music service agreement you sign when you sign up for Spotify or Apple Music or whatever it may be. The stakes of those disclosures seem minor, and people basically write them off today. But in the transaction of the future, if it’s sharing more of my personal data, I probably need to understand what’s going to happen with my personal data a little more. We need to find a way to bring that legal framework down to a very simple, easily digestible, understandable level, so that it’s not too complex, because that’s what will scare people away.”

Communication, then, as it relates to the larger theme of transparency, is an essential component of AI experience design. End-users will have much better experiences when they have a clear understanding of, and realistic expectations for, the technology. Coupled with explainability, communication provides a continuing narrative through which end-users can relate their own experiences and derive the most value from AI products and services.

Purpose

Power and potential aren’t enough

As AI advances, it will become more ubiquitous while constantly learning to seamlessly add value to a user’s life. Using local context and external sources of knowledge, this “purpose-driven AI” balances a user’s competing needs and interests and is able to take creative approaches to influence user behaviours, all in the service of the user’s higher purpose.

The technological and, more importantly, ethical considerations required to achieve this, however, strike at the very core of what it is to be a human being.

One only has to look at oneself to understand the complexity of asking users to adopt an AI-driven life – we have a persona for work, for home, for friends, for people in our professional network, at the grocery store, in a job interview, on vacation, and so on. AI must be mindful of the boundaries within each of these personas, and it must help the user without beginning to alter or influence their life in significant ways – by crossing work and home life, or by making recommendations in overtly commercial or subtly nefarious ways.

“How do we maintain our human rights, but also what I call our right to be human?” asks Dr. Colclough. “How do we avoid the commodification of people, so they’re not just seen as numerous data points and algorithmic influences, but the human you are – with your beauties, your bad sides, your good sides? How do you remain relevant and wanted and prioritized in this very digitalized world?”

You could consider all the internal and external inputs and outputs of data like a supply or value chain. Streams of data – contextual, emotional, factual, personal, commercial, cultural – are delivered on single or interconnected roads from myriad sources to AI-enabled products that now surround your life at work, home and play. The level of trust we are asking of consumers – to essentially let algorithms, and the developers behind them, run their lives – is immense.


“How do we maintain our human rights, but also our right to be human? How do we avoid the commodification of people, so they’re not just seen as numerous data points and algorithmic influences, but the human you are? How do you remain relevant and wanted and prioritized in this very digitalized world?” – Dr. Christina J. Colclough, The Why Not Lab

“The human brain is a marvelous piece of computing equipment. And we don’t quite fully understand all of the calculations that we are subconsciously making as we go about the world today,” says David Foster, Head of Lyft Bikes and Scooters. “Therefore, how do we model those [calculations] so that AI can make equivalently good decisions?”

It will become critical, then, to be transparent and openly communicate the “purpose” of an AI-enabled product or service, so that consumers can assess whether the AI is “successful” – or whether the assigned purpose is even the right one for them.

This can be translated into a simple equation for developers and companies building AI systems and products: AI without purpose is without value. And if AI doesn’t add value to our lives, we will see it as simply intrusive, and we will reject it.

Our purpose, and the purpose of our AI, will be ever more intertwined in the future. We had better ensure that they are also aligned.

Data Privacy

Improving the trust factor is key

Data privacy and security is one of the most pressing issues in business today. For decades, consumers have made a bargain with emerging technology companies – we will give up our personal data for free access to their apps and services, and we have done so happily.

It seems that bargain came with obscured risks – privacy breaches, information sold off to companies by social media platforms, rogue apps posing as harmless games while collecting user information for nefarious purposes – the list goes on and on.

A recent survey by PwC revealed that, when it comes to privacy, 60 percent of respondents say that they expect the companies they do business with to suffer a data breach some day, likely because 34 percent say that one or more companies that hold their data have already suffered a breach.

Upwards of 85 percent say they wish there were more companies they could trust with their data, and 83 percent want more control over their own data. More revealing, though, 76 percent call sharing personal information with companies a “necessary evil”, while 55 percent have continued to use or buy from companies even after learning that these companies suffered a breach. Consumers may regret that virtual handshake, but feel powerless to change the dynamic.

“If we assume that there’s going to be this massive influx of artificial intelligence in our private lives as citizens, as consumers, and as workers, we’re going to need to learn what questions to ask,” says Dr. Colclough. “But I think the majority of ordinary citizens and ordinary workers cannot even imagine the power and potential of these technologies. So we don’t know what questions to ask. We don’t know what the threats to our privacy and human rights are.”

Consumers can reluctantly shrug if their credit card or e-commerce account becomes compromised – after all, major companies in these areas have significant protections and recourse for customers, and in some cases the monetary resources to reimburse them. Unfortunately, data breaches are becoming the cost of doing business.

But will consumers simply shrug if their AI-enabled home, for example, is hacked, wreaking havoc on them and their families? Likely not. And the prospect of having your “personal space” – be it home, car or work – compromised will act as a serious barrier to adoption.

In her interview for AIX Exchange, Helena Leurent, Director General of Consumers International, explains AI’s trust dynamic as such: “As we look at consumer attitudes towards connected products, of course, many [people] are really excited about the way in which these products fulfill a need … [but] even for those who do buy these products, there is a little bit of a feeling that they are creepy. And when you try and unpick that lack of trust, it’s about ‘where does my data go?’ But also ‘am I the product?’ ‘What’s the business model behind this?’ There’s a really interesting sense amongst consumers about what a product really does to your environment and your experience. So, in order for greater use of those types of products, we would need to overcome that lack of trust. And what we’ve found is that you can build trust if you build in, from the very beginning of the design, the levels of transparency, security, attention to vulnerable consumers and attention to environmental impact – the things that perhaps should not be left to the very end of the process but considered at the very start. That openness and that consideration of a broader set of criteria can help build trust.”

Interface

Make it reliable, intuitive and easy to operate

The widespread adoption of consumer technology is tied to a number of factors, including access, price, perceived benefits and, perhaps most importantly, ease of use. From personal computers to smartphones to the internet, consumers flock to new technologies when it’s clear they don’t need a degree in computer science to operate them.

Trust in a company’s offering, and by extension the company itself, relies on addressing the fact that today’s consumers expect a product to be reliable, intuitive, and simple to operate. It’s table stakes.

The arrival of AI-enabled products and services in our lives introduces a new challenge in consumer adoption. If AI is to be truly ubiquitous, we need to evolve beyond the current, intrusive ways we interact with our technology – keyboard, mouse, clicks, searches, power cords, and anything else that ties us to a machine or a place. Voice and conversational AI, and a backend system that accurately pulls together and analyzes all of your personal data and habits from a variety of sources, are now key to a superior AI experience.

According to a study by Gartner, 70 percent of respondents feel comfortable with AI analyzing their vital signs and using voice and facial identification to keep transactions secure. However, 52 percent of respondents do not want AI to analyze their facial expressions to understand how they feel, and 63 percent do not want AI to take an always-on listening approach to get to know them better. It can then be assumed there are similar reservations about brain-computer interfaces.

“People will want to make sure that their data is being treated properly and not being used for any sort of malicious or ill-conceived intent,” says Poggi. “And people are obviously very sensitive with their personal data. We’re really crossing a bridge that we have not crossed before, because it’s much more personal in nature. The data we’re asking for is not just your name, address, and social security number; (we’re) now looking at your face or listening to your voice, which obviously gets a little bit more intimate.

“I think what’s going to become a key challenge is how we face the need to get massive amounts of personal data in order to build high-quality AI engines while, at the same time, treating that data with a high degree of respect for the individual.”

So how do developers and designers bridge this gap?

Conversational AI should, in theory, create a more personal, one-to-one experience than the way we currently interact with technology. Bad design, minimal transparency, and generally poor communication that isn’t aware of the user (demographics, region, culture) demonstrate that you haven’t taken the time to anticipate your customer’s needs and wants. As a result, consumers will likely turn away, or withhold the quality data needed to maximize the AI experience.

“We need to have as much data about people and about their experiences in order to have highly effective AI engines that are able to produce interesting results for us,” adds Poggi. “For instance, we need biometric data. We need voice data. We need visual data for facial recognition. All of these sensors have to provide us the data in a highly credible way that’s repeatable and robust, so that we’re not making bad decisions off of that data. I think that the evolution of the quality of those input devices is going to be a key enabler of successful AI propagation in a humanistic way.”


Building trust and transparency in consumer AI

For centuries, humans have been the masters of the tools and machines created to make them more productive at work and happier at home. Today, we are living in a unique moment in history where this relationship is changing. Machines now have the capability of being a much more dynamic part of our lives.

AI products and systems need data – personal data – to learn and become more effective in understanding a user. If users are reluctant to share this information, or “game it” in an effort to protect themselves from giving up too much, they will never realize the full potential of AI in their lives. A company’s best-laid plans for creating and delivering a truly superior AI product may live or die based on a most human characteristic – the ability to trust, or not.

Transparency will be key in this regard. At the highest levels, companies will need to have a very honest conversation about what they do, what they don’t do and why they do it. Consumers will need to understand the philosophy behind a company’s actions, its frame of reference when developing the algorithms it is asking us to trust, and what recourse they have when things go wrong.

Consumers will need to know what data is being collected, who is collecting it, where it is stored and how it is being used and reused. Companies will need to own up to and admit mistakes, which will happen, and clearly state how they will fix these missteps.
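
One way to make those answers knowable is to treat each data flow as a structured, machine-readable disclosure rather than burying it in legal text. The sketch below is a hypothetical illustration in Python – the class, field names and example values are invented, not any company’s actual schema – showing how the questions above (what, who, where, how, for how long) could be captured once and rendered in plain language for the end-user.

```python
# A hypothetical sketch of a machine-readable data-use disclosure.
# All names and values are invented for illustration; this is not any
# real company's schema or a regulatory format.
from dataclasses import dataclass, field

@dataclass
class DataUseDisclosure:
    data_category: str          # what is being collected
    collector: str              # who is collecting it
    storage_location: str       # where it is stored
    purposes: list[str]         # how it is used and reused
    retention_days: int         # how long it is kept
    shared_with: list[str] = field(default_factory=list)

voice_disclosure = DataUseDisclosure(
    data_category="voice recordings",
    collector="ExampleCo Smart Speaker",
    storage_location="EU data centre (encrypted at rest)",
    purposes=["wake-word detection", "speech-to-text"],
    retention_days=30,
    shared_with=[],  # empty list: not sold or shared with third parties
)

def summarize(d: DataUseDisclosure) -> str:
    """Render the disclosure as one plain-language sentence for end-users."""
    shared = ", ".join(d.shared_with) or "no one"
    return (f"{d.collector} collects {d.data_category}, stored in "
            f"{d.storage_location}, used for {', '.join(d.purposes)}, "
            f"kept {d.retention_days} days, shared with {shared}.")

print(summarize(voice_disclosure))
```

The point of the structure is that the same record can feed a user-facing summary, an audit log, and a regulator’s query – echoing Poggi’s call to bring the legal framework down to a simple, digestible level.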

If we want AI-driven products and services to serve people and the planet, governments and regulators must look at the whole ecosystem and its impact on people, then set demands on what these systems are intended to do and hold companies accountable, with data privacy and security at the forefront.

“We’re not going to be able to market AI as being instantly trustable or to prove from a technological perspective that it is trustable in all cases,” says Foster. “Trust is going to have to be earned over time.”

AIX design that brings developers together with policymakers and especially end-users is the first step in achieving that long-term trust for the AI industry.


The Business of A.I.

Businesses around the world are turning to AI to streamline production, automate services, serve up better content and optimize their workforce. There are already thousands of companies driving the industry forward. But what is the industry worth? Here are five key stats about the business of AI.
  • 01. Big Business
    McKinsey estimates AI techniques have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries.
  • 02. 3 Sources of Value
    According to Gartner, AI-derived business value is forecast to reach $3.9 trillion in 2020, flowing from three different sources: customer experience, new revenue and cost reduction. PwC, meanwhile, predicts that AI could contribute up to $15.7 trillion to global GDP by 2030.
  • 03. AI Just Starting Up
    Venture funding in AI companies reached a mind-blowing $61 billion from 2010 through the first quarter of 2020. SoftBank, for instance, recently announced a second, AI-focused $108 billion Vision Fund.
  • 04. Jobs Shifting
    PwC estimates that 30 percent of jobs are at potential risk of automation by the mid-2030s, with 44 percent of workers with low education at risk by the same period. At the same time, new highly skilled jobs are being created.
  • 05. Big Spender
    Statista estimates that 2019 global spending on cognitive and artificial intelligence (AI) systems amounted to $13.5 billion on software, $12.7 billion on services and $9.6 billion on hardware.
