Ethics is a hot topic in AI, but it hasn’t been explored thoroughly through a specific consumer lens. This theme addresses how AI should be developed inclusively, taking into consideration the differing values of individuals and cultures while raising questions about responsibility for privacy and security.

AI Ethics

Living rooms as laboratories

The old adage, ‘the road to hell is paved with good intentions,’ has proven true many times throughout history. Studies of business ethics show that most misbehaviour is not due to malice but to people’s inability to plan for their own missteps. Said another way: people tend to think they are good and right, even when they are wrong.

The problem with AI, however, is that it doesn’t truly think in the conventional sense and its intentions aren’t inherently good or bad. For AI, the road is often paved with messy data gleaned from imperfect humans, which then often leads to imperfect results.

The fact that AI systems are inherently fallible has been shown again and again in recent years. From the upsampling algorithm PULSE, which reconstructed a pixelated photo of Barack Obama as a white man, to a facial recognition system that claimed to predict whether someone is a criminal, AI is demonstrating, in many ways, not that the technology itself is biased, but that humans are, and not always in ways we expect. It’s the big data adage in action: ‘garbage in, garbage out.’
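The dynamic behind ‘garbage in, garbage out’ is easy to demonstrate: a system that learns from skewed historical decisions faithfully reproduces the skew. A minimal, entirely hypothetical sketch, in which the groups, rates and naive ‘model’ are invented for illustration:

```python
import random

random.seed(0)

# Hypothetical historical data: two groups with identical true qualification
# rates, but past decisions approved group "A" far more often than group "B".
def make_biased_history(n=10000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5            # identical in both groups
        approve_prob = 0.9 if group == "A" else 0.3  # biased past decisions
        approved = qualified and random.random() < approve_prob
        data.append((group, qualified, approved))
    return data

history = make_biased_history()

# A "model" that naively learns the approval rate per group from history
# reproduces the bias, even though qualification rates are identical.
def learned_rate(group):
    rows = [r for r in history if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

print(f"Approval rate learned for A: {learned_rate('A'):.2f}")
print(f"Approval rate learned for B: {learned_rate('B'):.2f}")
```

Both groups are equally qualified, yet the learned approval rates diverge sharply: the model inherits the bias of the past decisions it was trained on, not the underlying reality.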

While imaging applications present obvious ethical challenges, they only scratch the surface of the challenges we face in creating equitable, representative, socially and culturally fluid artificial intelligence systems and devices.

Yet the AI that is becoming commonplace in our pockets, on our bookshelves and in our cars is gathering ever more data with little industry oversight or accountability. So while ethics is already covered heavily by countless manifestos and frameworks, it can be argued that most are difficult to implement in practice.



Therefore, in the context of Artificial Intelligence experience (AIX) design, ethics must also take a practical, human-centric approach that accounts for what end-users consider ethical, dangerous or valuable.

The European Commission’s Ethics Guidelines for Trustworthy AI is a great example of the role that governments can play in shaping the industry and ensuring the protection of common citizens. Countries from Canada to China have similar lofty frameworks.
But as 2019 Turing Award-winning AI researcher Yoshua Bengio describes it: “Human centric means to take into consideration the human aspect of how the tools are going to be used, for what purpose, and what’s the consequence for humans who use the tool. And it’s important because those tools are becoming more and more powerful. The more powerful the tool is, the more we need to be careful about how to use it.”

When we imagine AI ethics, we may picture AI-enabled fighter pilots, like the one that decisively defeated a human combatant in a DARPA-sponsored simulated dogfight. But while the technology’s application in the theatre of war is progressing, there is also an acknowledgement of the need for accountability, as seen in the recent convening of 100 military officials from 13 countries to discuss the ethical use of AI in battle.

And as human-centric AI becomes foundational to our lives, the threats posed by AI will not be confined to any battlefield; they will be in our personal and public spaces.


AI for one or one AI?

When you see the general lack of diversity in science and technology, it is easy to deduce that under-representation in the laboratories of universities and tech companies is a key factor behind much of the bias we currently see in AI systems.
A study published last year by the AI Now Institute of New York University concluded that there is a “diversity disaster” perpetuating biases in AI. According to the report, only 15 and 10 percent of AI researchers at Facebook and Google, respectively, are women.

But simply encouraging inclusivity as a solution can actually cause additional harm. According to the same study, the focus on encouraging ‘women in tech’ is too narrow and likely to privilege white women over others. “We need to acknowledge how the intersections of race, gender, and other identities and attributes shape people’s experiences with AI,” the report states.

And while gender diversity is important, ethnicity, religion and disability are only some of the other factors that need to be considered when building inclusive AI systems. For instance, Black in AI and LatinX in AI are organizations dedicated to increasing the presence of Black and LatinX people in the field of Artificial Intelligence.

However, these organizations are rooted in the U.S., one of the major global AI hubs. As the technology becomes more commonplace in the personal and professional lives of people around the world, how can we ensure all populations, regardless of income level, are able to benefit from their collaboration and cohabitation with AI?

There are already many global initiatives that aim to support a more equitable future for AI by ensuring better representation in the lab. The African Institute for Mathematical Sciences hosts The Next Einstein Forum (NEF), which focuses on building a platform for Africa’s innovators to collaborate with the rest of the world.
This and many other such initiatives across Africa, Asia, the Middle East, and South America will hopefully provide greater perspective for the companies creating consumer-focused technologies. As Chioma Nwaodike, a Nigerian lawyer, puts it in a recent blog post, “It is essential that we engage in discussion about artificial intelligence from a developing country perspective in order to better understand how emerging technologies impact regions and countries differently depending on social, economic, and political conditions.”

Charles Isbell, Dean of Computing at Georgia Tech, is also an advocate for advancing diversity in computing. He believes diversity is a good place to start when looking at how we can build the most equitable and accessible AI solutions: “When we talk about human centered design, about bringing people into the conversation, we have to be very careful because we don’t just mean bringing in someone. We mean bringing in multiple people, because we’re not just designing something for a specific person or even a specific kind of person, but designing it for broad swaths of people. Otherwise you’re actually limiting not only what the capabilities of the system are, but you’re also limiting access to the support from a broad set of people. So if you don’t have diversity among the people who are doing the designing and the people doing the testing, the people who are involved in the process, then you’re all but guaranteed to have a narrow solution, which will, as these tools get more and more powerful, become more and more dangerous for more and more people, hurting more and more people because they are focused over here.”



End-users’ ideologies should come first


Having more inclusive and diverse teams creating consumer AI is important, but what happens when the most successful companies buy up smaller competitors, consolidating their power over the technology that is foundational in our lives? Even when other regions try to build their own AI industry, they still rely heavily on the big players and so adopt many of the values associated with those companies. Does AI then become a soft power tool like Hollywood or K-Pop?

The most obvious example is the contest for global AI supremacy between the U.S. and China. The technological decoupling between these countries is often justified on ethical grounds, rooted in fundamental differences in ideologies and values.

With decoupling, there is an expectation that technologies, including those related to financial systems, healthcare and AI-driven consumer products and services will be created in isolation. This will likely lead to multiple, incompatible systems that map to the values of the systems in which they were designed. And while both sides argue their moral superiority, as always, it will be everyday people who pay the highest price.

Concerns like these have resulted in increasing calls for AI to be designed in ways that take into consideration universal human values as well as more nuanced, cultural ones. According to an industry framework proposed by Element AI and LG Electronics at CES 2020, as AI advances to level three, systems will use causal learning to understand the root of certain patterns and behaviours, predicting and promoting positive actions.

AI at this stage will understand the larger interconnected system of our home or car and the function of different devices, sharing learning outcomes between them. But how will we ensure that the values of the individual, their culture, spirituality and social expectations are also shared between those systems, especially if they originate from competing companies with conflicting value systems?

“It isn’t just the government anymore that is taking public actions, but every single private entity; they have to feel this responsibility for society, because their products have a huge impact on the function of society,” explains Dr. Yuko Harayama of RIKEN and former Executive Member of the Council for Science, Technology and Innovation Cabinet Office of Japan. “That’s why it’s not just about maximizing their profits as business school taught you, but taking the responsibility, in the way that the action will have an impact on society. It’s up to us because we are all human beings, and that means you are responsible for your action, including your action within your company.”


Humans, not machines, must decide what data to share

As adjacent technologies such as the Internet of Things (IoT), 5G and edge computing continue to develop alongside AI, there are serious concerns regarding our personal information security and the overlap of AI systems owned and operated by our employers, our landlords, our educators and governments. There are many ethical questions that we need to answer regarding the prioritization of these various systems and the ability for users to override them.
In her recent work with the Japanese government, Dr. Harayama was one of the initiators of Society 5.0, a conceptualization of the city of the future in which AI systems and other technologies have become foundational.

To achieve a modern and safe Society 5.0, the research argues that we must establish environments with robust cyber-security and safety. It is especially essential to develop technology that enables us to choose how much personal data to share, the level of individual privacy to be protected, and what kind of information can be used publicly.
How then should researchers consider the development of AI technologies that enable people to control their own safety features, to explain the processes and logics of the calculations and decisions made by the AI and to provide interfaces that smoothly perform transitions of control from AI to humans, especially in emergencies?
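One way to make the question of transitions of control concrete is a priority rule in which human input always preempts the AI, and an emergency restricts the AI to safe actions. The arbitration logic below is a hypothetical sketch under those assumptions, not any production design:

```python
from dataclasses import dataclass

@dataclass
class Command:
    source: str   # "human", "ai", or "system"
    action: str

def arbitrate(commands, emergency=False):
    """Pick which command drives the system.

    Humans always preempt the AI; in an emergency the AI may only
    issue a safe-stop, otherwise the system falls back to safe-stop itself.
    """
    humans = [c for c in commands if c.source == "human"]
    if humans:
        return humans[0]
    ai = [c for c in commands if c.source == "ai"]
    if emergency:
        return next((c for c in ai if c.action == "safe_stop"),
                    Command("system", "safe_stop"))
    return ai[0] if ai else Command("system", "idle")

# A human braking overrides the AI's plan to keep driving.
print(arbitrate([Command("ai", "drive"), Command("human", "brake")]).action)
```

The design choice here is deliberate: the human path is checked first and unconditionally, so no AI state (including an emergency) can lock the user out.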

Even more concerning is the human-machine collaboration that may take place between powerful AI systems and bad actors. Recently, researchers from University College London (UCL) listed 20 AI-enabled crimes based on academic papers, news reports and popular culture. Many, such as using driverless vehicles as a weapon, illustrate that the biggest threat from AI is actually from humans. Do developers then need to consider all the ways that people can misuse their applications? What is the role of government? Can regulators even keep up with the pace of change?

“I think most of us who live in a modern democracy think there’s a place for government to make certain regulations, which make our lives safer,” explains renowned roboticist and inventor Rodney Brooks. “You know, we expect that there will be rules on freeways. Cars have to have certain safety features. People have to drive them under certain restrictions. But as we have AI coming along, often the rule makers are not as in tune with what is possible and what is real. And so the rules sometimes come down a little too heavy handed.”

Fellow AI researcher Dr. Max Welling, VP Technologies at Qualcomm Technologies Netherlands B.V., agrees: “When these devices become increasingly more complex, how are you going to certify something that is changing? We certify airplanes and cars, which are already extremely complex pieces of engineering. But if these things become self-learning, this becomes increasingly challenging to do. And so overcoming that barrier of certifying these things so that we can sort of guarantee that the device is safe and privacy-preserving for the people that will use it and also fair and all these other dimensions which are important…this is a truly interdisciplinary effort. We should not leave these questions to the technologists. I find this very important because we have a very limited view of the world.”


Data Privacy

The future risks of obsolete AI

Artificial intelligence systems need ‘checks and balances’ throughout development. But currently there are few safeguards for consumer AI, which is seen as less harmful than AI focused on industry or military use. However, as our personal AI systems become more sophisticated and interwoven with other systems, they may become more susceptible to abuse. This is why their human-centric design will become paramount in ensuring trust and adoption.

While many of Hollywood’s most entertaining examples of AI endangering humanity revolve around super-sentient computers and overly ambitious robots, one of the biggest threats for everyday people will likely come from the technology turning off.
“Human centric sometimes gets used to refer to paying attention to the end user who is using the system and/or is being impacted by the system,” says Alex Zafiroglu, Deputy Director at the 3A Institute. “To be truly human centric I think we need to be paying attention to humans across the entire development and deployment and decommissioning cycle of these AI solutions.”

As AI proliferates and becomes foundational to our lives, there will be many systems that we depend on for our well-being that run silently in the background. Whether it is a smart mirror that reminds you to take your medicine or the AI that controls the electricity usage in your home, what happens if the manufacturer goes out of business or neglects the older model that you’ve become accustomed to? In most developed countries, things like water, electricity and even the internet have become essential services. Will artificial intelligence systems be the same?

“When we think about the challenges to consumer adoption of AI-enabled services and solutions, I think one of the biggest things we need to consider is what data is being collected, who is collecting it, where it is staying and how it is being used and reused,” says Zafiroglu. “And so the barrier, I think, is transparency in the use and collection of data. We should be talking about how do you know what you know about the world and how do you categorize and make sense of the things that exist in the world, and then act upon those things.”


Data handshakes and firewalls

In some ways, data privacy and AI are like oil and water. On one hand, deep learning models require massive amounts of data to learn, improve and offer the sort of experience that we want from AI. Yet on the other hand, as more applications become available and every appliance in our homes becomes ‘smart,’ there is little understanding amongst the public that our living rooms are becoming laboratories, using our very own data to serve us up improved, more personalized experiences.
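One technique the industry uses to reconcile data-hungry models with privacy (not discussed in this report) is differential privacy: calibrated noise is added to aggregate statistics so a system can learn from a population without exposing any individual household. Below is a minimal sketch of the Laplace mechanism; the epsilon, sensitivity and smart-home data are illustrative assumptions only:

```python
import math
import random

random.seed(42)

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(values, predicate, epsilon=0.5, sensitivity=1.0):
    """Release a count with Laplace noise calibrated for epsilon-DP.

    One person changes the count by at most `sensitivity`, so noise with
    scale sensitivity/epsilon masks any individual's contribution.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(sensitivity / epsilon)

# Hypothetical smart-home data: minutes of TV watched per household.
viewing_minutes = [random.randint(0, 300) for _ in range(1000)]
noisy = private_count(viewing_minutes, lambda m: m > 120)
print(f"Noisy count of heavy viewers: {noisy:.1f}")
```

The aggregate stays useful to the manufacturer while any single household could deny its data is in the release, which is the trade the 'living room as laboratory' arguably demands.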

As the Netflix documentary The Social Dilemma recently highlighted for so many people, we the users are the product being sold, and our own data can be used against us. Our society constantly reinforces the importance of personal privacy, yet we readily hand it over for a chance to win a free trip. So what are the implications when AI that is meant to serve a useful purpose in our lives is also gathering, storing, sharing and using that data to become better at doing the things we want it to do?

Purpose is an important concept, as is the context of the data and systems that we share it with. We may feel comfortable sharing our health data with a health app and work data with our employer, but as the Covid-19 crisis has shown, our homes have now become the central hub for health and work, not to mention entertainment, food, education, banking and our social lives. So it becomes ever more important that AI experience design takes into account the ways that we use and share data so that we can build appropriate handshakes to enable our AI to achieve its purpose while creating appropriate firewalls that keep certain data in its place.
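Those handshakes and firewalls can be imagined as a purpose-bound access policy: each category of data carries the purposes a user has granted, and any system requesting data must declare its purpose. Everything below (the categories, purposes and policy table) is hypothetical, for illustration only:

```python
# Hypothetical purpose-bound data policy: data flows only when the
# requesting system's declared purpose matches the user's grants.
POLICY = {
    "health":   {"medication_reminders", "emergency_alerts"},
    "location": {"commute_planning"},
    "viewing":  {"content_recommendations"},
}

def handshake(category: str, declared_purpose: str) -> bool:
    """Return True only if this data category is granted for this purpose."""
    return declared_purpose in POLICY.get(category, set())

# The smart mirror gets health data for medication reminders...
assert handshake("health", "medication_reminders")
# ...but an advertiser's request for the same data is firewalled off.
assert not handshake("health", "ad_targeting")
```

Defaulting to an empty grant set means any undeclared category or purpose is denied, which keeps data "in its place" unless the user has explicitly said otherwise.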

Bo Peng, Director at IDEO, works with clients to ensure that clear purpose is baked into solutions and that the data those solutions are built on doesn’t betray that purpose. “The existence of bias in an AI-driven product or service is really an inevitability at first, in the sense that there is nothing that we can think through hard enough. Now, there are no silver bullets, but what I can offer is that, through many of the projects that we’ve been part of, we’ve come up with a couple of principles to help design this process. First, recognizing that data is not truth. At the end of the day, the mechanism that was designed to collect that data was architected by a human being. There are inherent biases, not only in the accuracy of the data, but also just in the decision of what data to collect and what data to not collect.”


Asimov was onto something

There is no shortage of ethics discussion around AI, and that’s a good thing. Already, we’re seeing changes that are having a positive impact. For example, according to an IBM study, 85 percent of global AI professionals think the industry has become more diverse in recent years, and most respondents said that diversity has had a positive impact on AI technology.

Similarly, questions about applying human values, an increased awareness of human-centric design and the creation of safeguards for ensuring security and deterring misuse of AI systems are being considered much earlier in the process. But we must remain vigilant in these efforts.

The many ethics councils and committees being set up around the world, while important, also risk perpetuating the challenges of technological colonialism. As Sri Shivananda, Senior Vice President and Chief Technology Officer at PayPal, says, “We already live in a world where experiences around us are being powered by AI. What keeps me up about AI is the immense power that it brings to the table. That power comes with a lot of responsibility, and that responsibility is something that all of us in the industry have to treat with respect.

“What we are all doing is taking from these first experiences, understanding the obligations that we have to the customers, to communities, and to the whole planet to make sure that we actually put guard rails around what AI can do. We must collaborate across the industry to create new standards and best practices around how AI should be implemented and then adhere to those codes of ethics.”

For consumer applications of AI to become successful in our most personal spaces, human-centric design becomes imperative, as does education and then accountability. Frameworks like the series of principles set by the Future of Life Institute, can be helpful guides, but the industry, policymakers and consumers themselves need to ensure they are followed.

As one of the earliest people to begin thinking about the ethical challenges posed by artificial intelligence, American writer Isaac Asimov once said, “I could not bring myself to believe that if knowledge presented danger, the solution was ignorance. To me, it always seemed that the solution had to be wisdom. You did not refuse to look at danger, rather you learned how to handle it safely.” Indeed, the path forward for AI should be guided by intellectual curiosity, care, and collaboration.

So how then do we address the ethical challenges posed by a technology that learns from our own fallible character? How can we ensure consumer AI is designed with the best intentions, with accessibility, inclusiveness and without human bias? Currently, there are no right or wrong answers, but there are many interesting questions that can help us set off in the right direction.

“Well, I’m by nature optimistic, but I also realize that you can’t just look at one side of the coin,” says Bengio. “You also have to look at the danger and listen to the people who are raising concerns. And so, I think we need companies, governments, citizens… we need to brainstorm together about how we organize the rules of society, which is the laws that govern our businesses so that we move together in a good direction.”


A.I. for Good

While many of AI’s depictions in film and TV show the technology threatening humans, and despite recent news of AI being employed for potentially harmful purposes, the fact is that it is a tool, and a powerful one at that. Indeed, AI is already making our lives better. Here are five examples of AI being used for good:
  • 01. Healthcare
    AI is increasingly being used to help medical professionals provide better patient care, advise on the best treatments, identify new drugs, speed up clinical trials and detect breast cancer, to name a few applications.
  • 02. Human Rights
    From finding missing people through facial recognition to tracking Twitter abuse against women, AI is impacting human rights issues that have previously been neglected.
  • 03. Education
    Colleges and universities face disengaged students, high dropout rates and the inefficiency of the old “one size fits all” approach to education. AI has helped by enabling personalized learning, giving students a tailored approach to studying based on individual needs.
  • 04. Global Hunger
    AI is playing a vital role in increasing agricultural productivity, helping to end global hunger and reach the UN Sustainable Development Goals. From predicting food shortages and recognizing pest outbreaks to improving yields, AI is helping put food on the table for billions of people.
  • 05. Climate Change
    From predicting how much energy we use to making supply chains more efficient, discovering new materials and removing CO2 from the atmosphere, there are many ways AI is being used to reverse the damage we’ve done.
