Public Perception

Public Perception examines how our opinions, fears and expectations of personalized AI are shaped by pop culture, education and marketing, and questions the role of business, government and the broader public in driving a more realistic and consistent understanding of the technology.

Public Perception of AI

Dial F for Frankenstein

“I’m sorry, Dave. I’m afraid I can’t do that.” – HAL (Heuristically Programmed ALgorithmic Computer), the sentient computer that controls the Discovery One spacecraft in 2001: A Space Odyssey

“I am superior, sir, in many ways, but I would gladly give it up to be human.” – Lt. Cmdr. Data, the sentient android from Star Trek: The Next Generation

“I’ll be back.” – The Terminator

The roots of science fiction (sci-fi) date back to the 2nd century AD with the novel “A True Story” by Lucian of Samosata, a satirist in the Roman Empire. However, it was Mary Shelley’s Frankenstein (1818) and The Last Man (1826) that defined the sci-fi genre, which has since crept into every aspect of pop culture – from books and TV to movies, music and more.

Source: Aideal Hwa

Interview: Charles Lee Isbell

Dean of Computing and John P. Imlay Jr. Chair, Georgia Institute of Technology

Frankenstein offers an interesting starting point as we examine how the public’s perception of AI will affect not only how consumers adopt AI-enabled products and services, but also how companies and policy-makers address the technology’s transformative power.

Frankenstein is a complex story, but at its heart it is about how scientific ambition and the creation of artificial life can have disastrous consequences. Its theme has been used over and over again in pop culture, which has inevitably led us all to ask: what is the price of the rapid advancements in technology, and AI in particular? Are we inadvertently creating the next Frankenstein’s Monster? And how can we better understand how AI works so we can comfortably take advantage of all it has to offer to better our lives?

While we can dismiss the work of Hollywood as hyperbolic warnings to humanity, dismissal becomes harder when today’s leading thinkers and titans of business raise the risks, too.

Elon Musk certainly has his concerns: “AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.”
And so does Bill Gates: “The world hasn’t had that many technologies that are both promising and dangerous – you know, we had nuclear energy and nuclear weapons.”

Of course, all new advances in technology that change our way of life, work, and society at large are met with varying degrees of resistance or adoption. But studies have shown that the more we view advances in technology as giving us more control, flexibility and efficiency in life, the more likely we are to adopt them.

On the flip side, if we feel we lack control or an understanding of how things work, or harbor a general distrust of technology and the organizations offering it to us, we are less likely to get on board.

Designers and developers must confront two potential barriers to adoption. The first is driven by pop culture, mixed messaging, and a few very public false starts by technology companies. The second, and perhaps more important, is a healthy dose of skepticism among consumers about what the “inevitable” march of technology more broadly means for their lives and the world they want to live in.

“Technology is not necessarily evil nor is it necessarily good. We have to decide on the limits, the frames, the requirements to these technologies, and we have to imagine every single future scenario we can with the powers of these technologies,” says Dr. Christina J. Colclough, who runs The Why Not Lab and is an advocate for global workers’ rights in the digital age. “So we’re going to need to converse. We’re going to need to talk. We’re going to need to find out what is fair for you, what is fair for me, and how do we make this society work?”

Creators of AI-enabled products and services will be tasked with responsibly educating consumers on how this technology can have a positive impact on their lives – through transparency, accurate marketing and intuitive design. More importantly, they will need to consider taking a human-centric AI experience (AIX) approach to design, inviting end-users and other stakeholders to be part of the process.

Ironically, we can turn back to Frankenstein for inspiration on how a negative fictional story about AI resulted in something positive in the real world. After all, it was Arthur C. Clarke’s influential short story “Dial F for Frankenstein”, about a super-network of telephones that learn to speak to one another and eventually take over the world, that apparently helped inspire Tim Berners-Lee to invent the World Wide Web.

Interview: Helena Leurent

Director General of Consumers International

“Awareness and education for consumers is absolutely critical, but it goes beyond that. It’s about actually engaging with consumers in the design … it’s about bringing consumer advocacy upfront, as opposed to it being at the backstage of the process. It’s about having principles that think about vulnerable consumers … It’s about making a level playing field.” – Helena Leurent, Director General of Consumers International

News / Pop Culture

Comfort zone simply not there yet

AI is becoming increasingly complex. It is not easily explained and can be confusing or even daunting to most people outside the field. It wouldn’t be an exaggeration to suggest that for many people, their only exposure to the technology is the sometimes apocalyptic way AI is depicted in pop culture, or the negative news stories that latch on to AI projects gone bad.

But as consumers assess whether to embrace more advanced AI-enabled products and services becoming intertwined with their lives, pop culture’s impact cannot be ignored, as it shapes how people adopt the technology and whether they trust the developers behind it.

“There is some public perception on AI, which is driven either by headlines of where AI has gone wrong, in some sense, and by science fiction films over the years,” says David Foster, Head of Lyft Bikes and Scooters. “And none of that probably leads to a sense of calm and happiness with AI taking increasingly broad roles in the world around us.”

The industry has taken steps to better inform and educate consumers. People understand that AI is doing useful things like helping them take better pictures on their phones, surfacing more relevant content in their social feeds, cleaning their home with a robotic vacuum, or helping them find artists they may like on their media player.

However, an Ipsos study conducted in the UK found that 53 percent of respondents would not feel comfortable with AI making decisions that affect them.
A study by the Center for the Governance of AI at the University of Oxford found that 22 percent of Americans think the technology will be “on balance bad,” while 12 percent think it could be “extremely bad,” potentially leading to human extinction.

Adds Foster: “We have to be a little bit careful about pushing the words AI for AI’s sake, and instead to focus on the benefits that we’re actually bringing to consumers and to the public that are enabled by AI, but are not there purely as a means to serve the technology of AI.”

By inviting end-users into the process of AIX design, we can better gauge what is needed to educate them about a product or service’s value and understand how pop culture and news media are helping or hurting the technology’s adoption.

Source: Glenn Carstens-Peters

Language of AI

More than words

“When people say AI, they have this kind of notion in their head of the Terminator or some kind of intelligent computer telling you ‘I’m sorry, you can’t do that Dave’, and they’re not thinking about it as simply intelligence and automation augmenting what you’re doing, or simply being smart about what you are.” – Charles Isbell, Dean of Computing and John P. Imlay Jr. Chair, Georgia Institute of Technology

Linguists and psychologists have long known that words hold power and that the words we choose often convey unintended messages and drive unintended behaviour. It’s for this reason that we should be extra careful about the words we use to describe AI and its capabilities.

“Suitcase words,” according to renowned roboticist and AI researcher Rodney Brooks, are words that pack many different meanings into a single term, and so are easily overused and misinterpreted.

“I think people use words to describe what an AI system is doing that then get overgeneralized,” he explains. “When we say a system is learning something, people who are not familiar with AI may think of every possible use of the word ‘learning’. Learning can mean ‘I learned to ride a bicycle,’ which is very different from ‘I learned ancient Latin.’

“Learn applies to so many different things that companies need to be careful about saying that their AI system learns, because it’s going to provide too much promise that we don’t have capability of. It’s better to say that the system ‘adapts to a specific set of circumstances.’ That sort of promise sets the expectation for ordinary consumers, for how the AI system is going to change and how much.”

Marketers and researchers alike choose words to describe AI’s capabilities that can often lead to misunderstanding about how the systems function, and can cause consumers to be let down and turned off by the technology when their expectations aren’t met. It’s one of the key reasons tools like the AIX Framework were created: to standardize the language around AI at different levels of its development and capability.

Whether the word is ‘learning’, ‘predictive’, ‘realistic’ or ‘intuitive’, choosing our words more wisely will help us better deliver on the promise of AI and ensure end-users understand that the technology is essentially software. When we strip away the marketing catchphrases, what is left is a clearer understanding of the value being delivered.

Marketing

There are ways to avoid an ‘AI Winter’

Who can forget the HAPIFork, the “smart” fork that monitors how quickly and how much you eat, which took the 2013 Consumer Electronics Show by storm? Of course, utensils are not the only things apparently thinking for you – products deemed “smart” or “intelligent” now range from watches, health apps, home systems, bikes and toys to cars.

Each product promises to revolutionize the way we live in big and small ways, but have we hit peak AI hype? Some in the AI industry seem to think so, and that this will have consequences for the technology’s further adoption and development.

“Marketing that’s focused on texts and specs is going to fail,” says Jeff Poggi, Co-CEO of the McIntosh Group. “And if we keep the marketing focused on the engineers that are developing the technologies, it’s going to be a very difficult rollout. We’ve got to think about this in terms of consumer benefits.

“We’ve got to start with the idea of how is this going to impact the emotions and the experience of the consumer and give them realistic examples of how that’s going to impact their lives. Show them the truth. Tell them how it is going to benefit them and what the experience is going to be. That’s the way we need to do it, and it has to be done in an authentic way.”

Many existing products perform relatively simple tasks quite well, such as monitoring and automatically adjusting room temperature based on your schedule and comfort, or capturing physical activity levels and caloric intake to advise on diet and exercise.

But we are now moving into a realm where the industry is promising self-driving cars, refrigerators that monitor your food supply and diet and then order groceries for you, and robots that will take care of your household chores.

The issue is that these tasks require a tremendous amount of data and an AI system that can effectively process this information to provide real value. The challenge is to ensure we’re setting expectations about what these products can actually do right now.

This is an issue the industry has faced before. “AI Winter” refers to a setback in the development of AI resulting from a lack of enthusiasm among consumers and investors. Typically, it begins when the AI community itself starts to see its own limitations. This pessimistic outlook is then picked up by outside influencers, including the media, creating a negative feedback loop. It has already happened in the 1970s and 1980s.

It could be argued that this time, given the significant advances in technology across all facets of our lives, the backlash will come from consumers who simply expect more. Indeed, consumers are constantly flooded with endless promises of how technology will make their world a better place.

This has resulted in people trusting their devices more than they should – like letting their car drive itself for exceptionally long distances along busy highways, only to end up in a fatal crash. The car isn’t yet “smart” enough to handle all the variables thrown its way. While this is an extreme example, it is instances like these, where expectations do not align with reality, that serve as a wake-up call to the industry.

“In the companies that I’ve started, I’ve always told our engineering teams that what you’re going to deliver to the final customer is going to be really disappointing,” says Rodney Brooks, the Panasonic Professor of Robotics (emeritus) at MIT. “It’s not going to be at the cutting edge because we can’t afford to have our products grow up out there with a real user. So we’re going to be really careful with what we put out there and what we deliver. And we’re not going to be going as fast as science fiction movies expect us to be. It’s just the reality in order to provide something that people can feel comfortable with.”

Source: John Tekeridis

Using AI to Research Conversations on AI

For AIX Exchange, Phrasia, an NLP AI technology, was used to analyze over 60K samples of unstructured text data from published academic papers, Hacker News articles and Twitter posts. The goal was to understand how AI narratives differ between consumers, enthusiasts and published researchers. The findings were both surprising and insightful (a toy sketch of this kind of narrative analysis appears below):

Consumer narratives reflect fear-based perspectives on AI that mirror the language often used by media and in pop-culture. 28% of #AI tweets were connected to fear of AI.

Enthusiasts champion ethics in AI and are more practical about real-world applications of the technology. 80% of the narrative around ethics was led by enthusiasts.

Very few academic papers consider end-users or the ethical application of their research. Only 12% of academic publishing had a focus on ethical concerns.

What this means: a gap exists between researchers and end-users that could lead to the technology being deployed without proper scrutiny. Enthusiasts could play an important role in bridging this gap, ensuring more human-centric design for AI from the outset and better AI literacy for those who are most impacted.

Check out Phrasia’s interactive conversation map and more details about this research.
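Phrasia’s actual models and lexicons are not described in this report, so the sketch below is only a hypothetical illustration of the general shape of such an analysis: label each text sample with the narratives it expresses, then tally labels per source. The lexicons, labels and mini-corpus are invented stand-ins, not Phrasia’s methodology.

```python
# Toy narrative analysis over unstructured text, in the spirit of the
# study above. NOTE: the lexicons and corpus here are hypothetical
# stand-ins; a real analysis would use trained language models.
from collections import Counter

FEAR_TERMS = {"skynet", "terminator", "destroy", "apocalypse", "takeover"}
ETHICS_TERMS = {"ethics", "fairness", "bias", "transparency", "accountability"}

def classify(text):
    """Return the narrative labels whose lexicon terms appear in the text."""
    words = set(text.lower().replace("#", "").split())
    labels = set()
    if words & FEAR_TERMS:
        labels.add("fear")
    if words & ETHICS_TERMS:
        labels.add("ethics")
    return labels

def narrative_counts(samples):
    """Tally narrative labels per source across (source, text) pairs."""
    counts = {}
    for source, text in samples:
        counts.setdefault(source, Counter()).update(classify(text))
    return counts

# Tiny stand-in corpus (the real study analyzed 60K+ samples).
corpus = [
    ("twitter", "AI is basically Skynet and it will destroy us #AI"),
    ("twitter", "Loving the new #AI camera features on my phone"),
    ("hackernews", "We need transparency and accountability in AI systems"),
    ("academic", "We evaluate model accuracy on three benchmark datasets"),
]
print(narrative_counts(corpus))
# {'twitter': Counter({'fear': 1}), 'hackernews': Counter({'ethics': 1}), 'academic': Counter()}
```

In practice an analysis at this scale would rely on trained classifiers rather than keyword lists, but the aggregation step – comparing narrative shares across audiences – is the same idea behind the percentages reported above.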

Design

Visual, purposeful, powerful
Source: Ameer Basheer

Consumer-facing technologies have always tried to apply designs that reflect a futuristic aesthetic. One of the most popular components of auto trade shows, for example, is the concept car, which often prototypes new technologies and packages them in the era’s current fantasy of the future.
And while this can be a valuable market research exercise for automakers to understand which technologies are ready for consumer applications in vehicles, such an approach for consumer AI can very easily be misinterpreted, raising expectations about the power, function and role of AI in any given device or service.

From advertising to product packaging to news articles and children’s toys, the term ‘AI’ is as overused as the iconography that usually accompanies it. Search for ‘artificial intelligence’ in Google’s image search and the visual tropes associated with the technology become immediately clear: cybernetic-looking brains with circuit board synapses, superimposed over human-looking robots.

Perhaps this is a natural part of the hype that precedes new technologies meant to usher us into the future. However, it’s important that the media and businesses building consumer AI applications consider the implications of the visual language they employ, as it can lead to oversimplification of a technology with very real implications for privacy and security.

“I think the biggest barrier to the adoption of AI by end users out in the world is simply calling it AI,” says Charles Isbell, Dean of Computing at Georgia Tech. “When people say AI, they have this kind of notion in their head of the Terminator or some kind of intelligent computer telling you ‘I’m sorry, you can’t do that Dave’, and they’re not thinking about it as simply intelligence and automation augmenting what you’re doing, or simply being smart about what you are. But it’s also an illogical reaction, because people have been adopting AI for years and even decades, they just haven’t been calling it that.”

And so AIX design becomes an ever more important concept for researchers, designers, developers and companies to consider, as the user experience will be AI’s biggest marketing lever. A good experience that delights and surprises travels just as fast through social media as a poor one. It is why global design companies like IDEO are tasked by technology brands to take an interdisciplinary approach and apply design thinking early in a product’s development. Bo Peng is a Director at IDEO, helping lead the company’s approach to designing for AI experiences.

“I think human-centered design of AI products and services is really important. At its core, it’s very simple. It’s that when building out a product or service, to keep the end user in mind, but not just to keep them in mind in terms of having a list of requirements or having a faceless name up on a board, it’s really actually keeping them in mind as human beings. They have needs and wants, desires and pain points. It’s to think through how they might interact with your AI-driven product or service from a holistic point of view. It may not actually matter to them what kind of technology you’ve built on the back end or how it works, or even how you would like for them to use your product. To me building AI products is not just about using the best technology that researchers can offer… what I really care about is helping them see the effects that their technologies have on their prospective customers or their prospective end-users.”

Education

AI must be understood before it can flourish

One step in helping people move from a general distrust of AI towards acceptance lies in helping them understand how it all works. The more we talk about AI and see the incremental benefits it can have on society, the more willing people will be to accept it into their lives.

We need to foster technology literacy, help the general public understand how AI actually works and what their rights as consumers are, and show them how best to collaborate with AI to bridge the gap from sci-fi to reality. Further, companies, governments and academics need to create and openly promote a framework that protects consumers from privacy and security threats, and define our “data rights” in a way that is accessible to consumers and makes them feel empowered and safe.

“Technology literacy is not only important for the people who are building the systems, to help them think about the impact of what they’re building,” adds Isbell. “It is at least as important, perhaps more important, that we teach people who are not going to build those systems, but are going to be impacted by those systems, to think about what those implications are. And that has to start as early as possible.”

Even how we speak about AI should be taken into consideration. Tech-speak and programming jargon only serve those who are entrenched in the development of AI products and services.
A common understanding of AI and its capabilities that reflects different cultures, countries, and beliefs will help us better interpret and understand how AI is impacting our lives. This should be baked into education in the form of AI literacy so that consumers of all ages can better understand and harness AI’s potential without falling for the hype.

“Awareness and education for consumers is absolutely critical, but it goes beyond that,” says Helena Leurent, Director General, Consumers International. “It’s about actually engaging with consumers in the design of these new technologies or approaches or services. It’s about bringing consumer advocacy into the upfront stage, as opposed to it being at the backstage of the process. It’s about having principles that think about vulnerable consumers at the very start. It’s about making a level playing field so that the best practice becomes closer to the norm.”

Source: This is Engineering RAEng

From Perception to Reality

It is worth remembering that the AI story is in its early stages, particularly for the sorts of consumer-facing AI that are proliferating in our homes, cars, offices and public spaces.

If we are to believe Hollywood or product marketing, however, it is easy to understand why many people do not trust AI-powered products, as there has been a tendency to over-extend the metaphors and capabilities of the technology. As the Netflix documentary The Social Dilemma demonstrated, the industry and researchers may have good intentions when developing new technologies, but there can be real consequences for consumers, and real backlash for companies, when reality eventually catches up with the promise.

For this reason, public perception of AI is an important but often overlooked component of a burgeoning new technology, with implications for whether society trusts, fears and ultimately embraces it.

Longer-term, the industry should consider an AIX design approach that brings end-users into the process much earlier, building AI literacy through education and moderating marketing messages and imagery to properly inform and manage expectations, while ensuring a deeper understanding of the risks and rewards of AI as we adopt it into our lives.


A.I. in Pop Culture

Science fiction has long inspired, influenced and shaped the public’s view of AI. Here are five classic examples that have left an imprint on our current perception of the technology.
  • 01. A.I. Artificial Intelligence
    Directed by Steven Spielberg and inspired by sci-fi author Brian Aldiss’ short story ‘Super-Toys Last All Summer Long’, this movie depicts the first robotic child, David, programmed to love and coexist as a member of the family.
  • 02. 2001: A Space Odyssey
    Inspired by futurist Arthur C. Clarke’s short story ‘The Sentinel’, Stanley Kubrick’s Oscar-winning film inspired a generation of sci-fi filmmakers. The movie depicts a human expedition to Jupiter accompanied by a sentient supercomputer named HAL 9000, which takes control of the spacecraft.
  • 03. Terminator Films
    Forever enshrining the phrase ‘I’ll be back’, the original 1984 American sci-fi film was directed by James Cameron and starred Arnold Schwarzenegger. In the films, nearly unstoppable machines travel through time either to enable Skynet, an artificial general superintelligence, to eradicate human life, or to stop it from doing so.
  • 04. Star Wars
    Produced by George Lucas in 1977, this sci-fi film became a pop-culture sensation, expanding into television series, video games, novels and comics. Set a long time ago in a galaxy far, far away, many of the stories involve robots with varying degrees of intelligence. Two of the most famous, the golden C-3PO and the quirky R2-D2, displayed emotion and personality that have since made them iconic characters in the franchise.
  • 05. Star Trek: The Next Generation
    Set in outer space many years in the future, TNG features the franchise’s most famous android, the popular Lieutenant Commander Data, played by Brent Spiner. A synthetic lifeform with a positronic brain, Data’s curiosity about human behaviour eventually leads him to be equipped with an emotion chip.
