“I’m sorry, Dave. I’m afraid I can’t do that.”
– HAL (Heuristically Programmed ALgorithmic Computer), the sentient computer that controls the Discovery One spacecraft from 2001: A Space Odyssey.
“I am superior, sir, in many ways, but I would gladly give it up to be human.”
– Lt. Cmdr. Data, the sentient android from Star Trek: The Next Generation
“I’ll be back.”
– The Terminator
The roots of science fiction (sci-fi) date back to the 2nd century AD with the novel “A True Story” by Lucian of Samosata, a satirist in the Roman Empire. However, it was Mary Shelley’s Frankenstein (1818) and The Last Man (1826) that defined the sci-fi genre, which has since crept into every corner of pop culture, from books and TV to movies, music and more.
Frankenstein offers an interesting starting point as we examine how the public’s perception of AI will affect not only how consumers adopt AI-enabled products and services, but also how companies and policy-makers address the technology’s transformative power.
Frankenstein is a complex story, but at its heart it is about how scientific ambition and the creation of artificial life can have disastrous consequences. Its theme has been reused again and again in pop culture, inevitably leading us all to ask: what is the price of the rapid advancements in technology, and AI in particular? Are we inadvertently creating the next Frankenstein’s Monster? And how can we better understand how AI works so we can comfortably take advantage of all it has to offer to better our lives?
While we can dismiss the work of Hollywood as a hyperbolic warning to humanity, dismissal becomes harder when today’s leading thinkers and titans of business raise the same risks.
Elon Musk certainly has his concerns: “AI doesn't have to be evil to destroy humanity – if AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings."
And so does Bill Gates: “The world hasn’t had that many technologies that are both promising and dangerous – you know, we had nuclear energy and nuclear weapons.”
Of course, all new advances in technology that change our way of life, work, and society at large are met with varying degrees of resistance or adoption. But studies have shown that the more we view an advance as giving us control, flexibility and efficiency in life, the more likely we are to adopt it into our lives.
On the flip side, when we feel a lack of control, don’t understand how things work, or generally distrust the technology and the organizations offering it to us, we are less likely to get on board.
Designers and developers must confront two potential barriers to adoption. The first is driven by pop culture, mixed messaging, and a few very public false starts by technology companies. The second, and perhaps more important, is a healthy dose of skepticism among consumers about what the “inevitable” march of technology means for their lives and the world they want to live in.
“Technology is not necessarily evil nor is it necessarily good. We have to decide on the limits, the frames, the requirements to these technologies, and we have to imagine every single future scenario we can with the powers of these technologies,” says Dr. Christina J. Colclough, who runs The Why Not Lab and is an advocate for global workers’ rights in the digital age. “So we're going to need to converse. We're going to need to talk. We're going to need to find out what is fair for you, what is fair for me, and how do we make this society work?”
Creators of AI-enabled products and services will be tasked with responsibly educating consumers on how this technology can have a positive impact on their lives – through transparency, accurate marketing and intuitive design. More importantly, they will need to consider taking a human-centric AI experience (AIX) approach to design, inviting end-users and other stakeholders to be part of the process.
Ironically, we can turn back to Frankenstein for inspiration on how a negative fictional story about AI produced something positive in the real world. After all, it was Arthur C. Clarke’s influential short story “Dial F for Frankenstein”, about a super network of telephones that learn to speak to one another and eventually take over the world, that reportedly helped inspire Tim Berners-Lee to invent the World Wide Web.
Linguists and psychologists have long known that words hold power and that the words we choose often convey unintended messages and drive unintended behaviour. It’s for this reason that we should be extra careful about the words we use to describe AI and its capabilities.
“Suitcase words,” according to renowned roboticist and AI researcher Rodney Brooks, are words packed with so many meanings that they are easily overused and misinterpreted.
“I think people use words to describe what an AI system is doing that then get overgeneralized,” he explains. “When we say a system is learning something, people who are not familiar with AI may think of every possible use of the word ‘learning’. Learning can mean ‘I learned to ride a bicycle,’ which is very different from ‘I learned ancient Latin.’
“Learn applies to so many different things that companies need to be careful about saying that their AI system learns, because it’s going to provide too much promise that we don’t have capability of. It’s better to say that the system ‘adapts to a specific set of circumstances.’ That sort of promise sets the expectation for ordinary consumers, for how the AI system is going to change and how much.”
Marketers and researchers alike choose words to describe AI’s capabilities that often lead to misunderstandings about how the systems function, leaving consumers let down and turned off by the technology when their expectations aren’t met. It’s one of the key reasons tools like the AIX Framework were created: to standardize the language around AI at different levels of its development and capability.
Whether it is words such as ‘learning’, ‘predictive’, ‘realistic’ or ‘intuitive’, choosing our words more wisely will help us better deliver on the promise of AI and ensure end-users understand that the technology is essentially software. When we strip away the marketing catchphrases, what is left is a clearer understanding of the value being delivered.
Who can forget the HAPIFork, the “smart” fork that monitors how quickly and how much you eat, which took the 2013 Consumer Electronics Show by storm? Of course, utensils are not the only things apparently thinking for you – products deemed “smart” or “intelligent” range from watches and health apps to home systems, bikes, toys and cars.
Each product promises to revolutionize the way we live in big and small ways, but have we hit peak AI hype? Some in the AI industry seem to think so, and that this will have consequences for the technology’s further adoption and development.
“Marketing that's focused on texts and specs is going to fail,” says Jeff Poggi, Co-CEO of the McIntosh Group. “And if we keep the marketing focused on the engineers that are developing the technologies, it's going to be a very difficult rollout. We've got to think about this in terms of consumer benefits.
“We've got to start with the idea of how is this going to impact the emotions and the experience of the consumer and give them realistic examples of how that's going to impact their lives. Show them the truth. Tell them how it is going to benefit them and what the experience is going to be. That's the way we need to do it, and it has to be done in an authentic way.”
Many existing products have relatively simple tasks which they perform quite well, such as monitoring and automatically adjusting room temperature based on your schedule and comfort, or capturing physical activity levels and caloric intake to advise on diet and exercise.
But we are now moving into a realm where the industry is promising self-driving cars, refrigerators that monitor your food supply and diet and then order groceries for you, and robots that will take care of your household chores.
The issue is that these tasks require a tremendous amount of data and an AI system that can effectively process this information to provide real value. The challenge is to ensure we’re setting expectations about what these products can actually do right now.
This is an issue the industry has faced before. “AI Winter” refers to a setback in the development of AI resulting from a collapse of enthusiasm among consumers and investors. Typically, it begins when the AI community itself starts to see its own limitations. That pessimistic outlook is then picked up by outside influencers, including the media, creating a negative feedback loop. AI winters occurred in the 1970s and the 1980s.
It could be argued that this time, given the significant advances in technology across all facets of our lives, the backlash will come from consumers who simply expect more. Indeed, consumers are constantly flooded with endless promises of how technology will make their world a better place.
This has resulted in people trusting their devices more than they should – like letting a car self-drive for exceptionally long distances along busy highways, only to end up in a fatal crash. The car isn’t yet “smart” enough to handle all the variables thrown its way. While this is an extreme example, it is these instances, where expectations do not align with reality, that serve as a wake-up call to the industry.
“In the companies that I've started, I've always told our engineering teams that what you're going to deliver to the final customer is going to be really disappointing,” says Rodney Brooks, the Panasonic Professor of Robotics (emeritus) at MIT. “It's not going to be at the cutting edge because we can't afford to have our products grow up out there with a real user. So we're going to be really careful with what we put out there and what we deliver. And we're not going to be going as fast as science fiction movies expect us to be. It’s just the reality in order to provide something that people can feel comfortable with.”
For AIX Exchange, Phrasia, an NLP AI tool, analyzed over 60,000 samples of unstructured text data from published academic papers, Hacker News articles and Twitter posts to understand how AI narratives differ among consumers, enthusiasts and published researchers. What it found was both surprising and insightful:
Consumer narratives reflect fear-based perspectives on AI that mirror the language often used by media and in pop-culture. 28% of #AI tweets were connected to fear of AI.
Enthusiasts champion ethics in AI and are more practical about real-world applications of the technology. 80% of the narrative around ethics was led by enthusiasts.
Very few academic papers consider end-users or the ethical application of their research. Only 12% of academic publications had a focus on ethical concerns.

What this means:
A gap exists between researchers and end-users that could lead to the technology being deployed without proper scrutiny. Enthusiasts could play an important role in bridging this gap, ensuring more human-centric AI design from the outset and better AI literacy for those who are most impacted.
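To make the kind of analysis described above concrete, here is a minimal sketch of one way narratives can be compared across audiences: tag each text with themes via keyword matching, then compute how often each theme appears per source. This is an illustration only, not Phrasia's actual method; the theme keywords, source names and sample texts are all hypothetical.

```python
# Illustrative sketch: keyword-based theme tagging over short texts,
# then per-source theme shares. Not Phrasia's actual pipeline.
from collections import Counter

# Hypothetical theme lexicons (real systems use far richer models).
THEMES = {
    "fear": {"terrifying", "dangerous", "takeover", "destroy", "scary"},
    "ethics": {"ethics", "fairness", "bias", "accountability", "transparency"},
}

def tag_themes(text):
    """Return the set of themes whose keywords appear in the text."""
    words = set(text.lower().split())
    return {theme for theme, kws in THEMES.items() if words & kws}

def theme_shares(samples):
    """Fraction of samples mentioning each theme, grouped by source."""
    counts, totals = Counter(), Counter()
    for source, text in samples:
        totals[source] += 1
        for theme in tag_themes(text):
            counts[(source, theme)] += 1
    return {
        (source, theme): counts[(source, theme)] / totals[source]
        for source in totals for theme in THEMES
    }

# Hypothetical sample data, two sources with two texts each.
samples = [
    ("twitter", "AI is terrifying and could destroy jobs"),
    ("twitter", "love my new AI camera"),
    ("hackernews", "we need transparency and accountability in AI"),
    ("hackernews", "bias in training data is an ethics problem"),
]
shares = theme_shares(samples)
print(shares[("twitter", "fear")])       # 0.5
print(shares[("hackernews", "ethics")])  # 1.0
```

On toy data like this, half the Twitter samples carry a fear narrative while all the Hacker News samples discuss ethics, echoing (in miniature) the study's finding that different audiences tell very different stories about the same technology.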
Check out Phrasia's interactive conversation map and more details about this research.
Throughout history, science fiction has inspired, influenced and shaped the public's viewpoints towards AI. Here are five classic examples that have left an imprint on our current perception of the technology.
A.I. Artificial Intelligence (2001)
Directed by Steven Spielberg and inspired by sci-fi author Brian Aldiss’ short story ‘Super-Toys Last All Summer Long’, this movie depicts the first robotic child, David, programmed to love and coexist as a member of the family.
2001: A Space Odyssey (1968)
Inspired by futurist Arthur C. Clarke’s short story ‘The Sentinel’, Stanley Kubrick’s Oscar-winning film inspired a generation of sci-fi filmmakers. The movie depicts a human expedition to Jupiter accompanied by a sentient supercomputer named HAL 9000, who takes control of the spacecraft.
The Terminator (1984)
Forever enshrining the phrase ‘I’ll be back’, the original 1984 American sci-fi film was directed by James Cameron and starred Arnold Schwarzenegger. In the films, nearly unstoppable machines travel through time either to help an AI superintelligence called Skynet eradicate human life or to stop it.
Star Wars (1977)
Produced by George Lucas in 1977, this sci-fi film became a pop-culture sensation, expanding into television series, video games, novels and comics. Set a long time ago in a galaxy far, far away, many of its stories involve robots with varying degrees of intelligence. Two of the most famous, golden C-3PO and quirky R2-D2, displayed emotion and personality that have since made them iconic characters in the franchise.
Star Trek: The Next Generation (1987–1994)
Set in outer space many years in the future, TNG’s most famous android is the popular Lieutenant Commander Data, played by Brent Spiner. A synthetic lifeform with a positronic brain, Data’s curiosity about human behaviour eventually leads him to be equipped with an emotion chip.