AI Ethics

Living rooms as laboratories

The old adage that ‘the road to hell is paved with good intentions’ has proven true many times throughout history. Studies of business ethics show that most misbehaviour is not due to malice but to people’s inability to plan for their own missteps. Said another way: people tend to think they are good and right, even when they are wrong.

The problem with AI, however, is that it doesn’t truly think in the conventional sense, and its intentions aren’t inherently good or bad. For AI, the road is paved with messy data gleaned from imperfect humans, which often leads to imperfect results.

The fallibility of AI systems has been demonstrated again and again in recent years. From the image-upscaling algorithm PULSE, which turned a pixelated photo of Barack Obama into the face of a white man, to a facial recognition system that claimed to predict whether someone is a criminal, AI is helping to show not that the technology itself is biased, but that humans are, and not always in ways that we expect. It’s the big data adage at work: ‘garbage in, garbage out.’

While imaging applications present obvious ethical challenges, they only scratch the surface of what we face in creating equitable, representative, socially and culturally fluid artificial intelligence systems and devices.

Yet the AI that is becoming commonplace in our pockets, on our bookshelves and in our cars is gathering ever more data with little industry oversight or accountability. And while ethics is already the subject of countless manifestos and frameworks, most are difficult to implement in practice.


Interview: Yoshua Bengio

Scientific Director, Mila

Therefore, in the context of Artificial Intelligence Experience (AIX) design, ethics must also take a practical, human-centric approach that accounts for what end-users regard as ethical, dangerous or valuable.

The European Commission’s Ethics Guidelines for Trustworthy AI is a great example of the role that governments can play in shaping the industry and protecting ordinary citizens. Countries from Canada to China have similar lofty frameworks.

But as 2018 Turing Award-winning AI researcher Yoshua Bengio describes it: “Human centric means to take into consideration the human aspect of how the tools are going to be used, for what purpose, and what's the consequence for humans who use the tool. And it's important because those tools are becoming more and more powerful. The more powerful the tool is, the more we need to be careful about how to use it.”

When we imagine AI ethics, we may picture AI-enabled fighter pilots, like the one that defeated a human combatant in a DARPA-sponsored simulated dogfight. But while the technology’s application in the theatre of war is progressing, there is also an acknowledgement of the need for accountability, as seen in the recent convening of 100 military officials from 13 countries to discuss the ethical use of AI in battle.

And as human-centric AI becomes foundational to our lives, the threats posed by AI will not be found on any battlefield; they will be in our personal and public spaces.

Inclusivity

AI for one or one AI?

Given the general lack of diversity in science and technology, it is easy to deduce that the lack of representation in the laboratories of universities and tech companies is a key factor behind much of the bias we currently see in AI systems.

A study published last year by the AI Now Institute at New York University concluded that there is a “diversity disaster” perpetuating biases in AI. According to the report, only 15 percent of AI researchers at Facebook and 10 percent at Google are women.

But simply encouraging inclusivity as the solution can actually cause additional harm. According to the same study, the focus on encouraging ‘women in tech’ is too narrow and likely to privilege white women over others. “We need to acknowledge how the intersections of race, gender, and other identities and attributes shape people’s experiences with AI,” the report states.

And while gender diversity is important, ethnicity, religion and disability are only some of the other factors that need to be considered when building inclusive AI systems. For instance, Black in AI and LatinX in AI are organizations dedicated to increasing the presence of Black and LatinX people in the field of Artificial Intelligence.

However, these organizations are rooted in the U.S., one of the major global AI hubs. As the technology becomes more commonplace in the personal and professional lives of people around the world, how can we ensure all populations, regardless of income level, are able to benefit from their collaboration and cohabitation with AI?

There are already many global initiatives that aim to support a more equitable future for AI by ensuring better representation in the lab. The African Institute for Mathematical Sciences hosts The Next Einstein Forum (NEF), which focuses on building a platform for Africa’s innovators to collaborate with the rest of the world.

This and the many other such initiatives across Africa, Asia, the Middle East, and South America will hopefully provide greater perspective for the companies creating consumer-focused technologies. As Chioma Nwaodike, a Nigerian lawyer, puts it in a recent blog post, “It is essential that we engage in discussion about artificial intelligence from a developing country perspective in order to better understand how emerging technologies impact regions and countries differently depending on social, economic, and political conditions.”

Charles Isbell, Dean of Computing at Georgia Tech, is also an advocate for advancing diversity in computing. He believes that diversity is a good place to start when looking at how we can build the most equitable and accessible AI solutions:

  • “When we talk about human centered design, about bringing people into the conversation, we have to be very careful because we don't just mean bringing in someone. We mean bringing in multiple people because we're not just designing something for a specific person or even a specific kind of person, but designing it for broad swaths of people. Otherwise you're actually limiting not only what the capabilities of the system are, but you're also limiting access to the support from a broad set of people.”

    “So if you don't have diversity among the people who are doing the designing, the people doing the testing, the people who are involved in the process, then you're all but guaranteed to have a narrow solution, which will, as these tools get more and more powerful, become more and more dangerous and hurt more and more people because they are focused over here.”

Interview: Yuko Harayama

Executive Director of International Affairs, RIKEN


Values

End-users’ ideologies should come first

"The more powerful the tool is, the more we need to be careful about how to use it.”

Yoshua Bengio

Scientific Director, Mila

Having more inclusive and diverse teams creating consumer AI is important, but what happens when the most successful companies buy up smaller competitors, consolidating their power over the technology that is foundational in our lives? Even when other regions try to build their own AI industry, they still rely heavily on the big players and so adopt many of the values associated with those companies. Does AI then become a soft power tool like Hollywood or K-Pop?

The most obvious example is the contest for global AI supremacy between the U.S. and China. The technological decoupling between these countries is often justified on ethical grounds rooted in fundamental differences in ideologies and values.

With decoupling, there is an expectation that technologies, including those related to financial systems, healthcare and AI-driven consumer products and services, will be created in isolation. This will likely lead to multiple, incompatible systems that map to the values of the societies in which they were designed. And while both sides argue their moral superiority, as always, it will be everyday people who pay the highest price.

Concerns like these have resulted in increasing calls for AI to be designed in ways that take into consideration universal human values as well as more nuanced, cultural ones. According to an industry framework proposed by Element AI and LG Electronics at CES 2020, as AI advances to level three, systems will use causal learning to understand the root causes of certain patterns and behaviours in order to predict and promote positive actions.

AI at this stage will understand the larger interconnected system of our home or car and the function of different devices, sharing learning outcomes between them. But how will we ensure that the values of the individual, along with their culture, spirituality and social expectations, are also shared between systems, especially if those systems originate from competing companies with conflicting value systems?

“It isn’t just the government anymore that is taking public actions, but every single private entity, they have to feel this responsibility for society, because their products have a huge impact on the function of society,” explains Dr. Yuko Harayama of RIKEN, a former Executive Member of the Council for Science, Technology and Innovation at Japan’s Cabinet Office.

“That's why it's not just about maximizing their profits as business school taught you, but taking the responsibility, in the way that the action will have an impact on society. It's up to us because we are all human beings and that means you are responsible for your action, including your action within your company.”

Governance

Humans, not machines, must decide what data to share


Data Privacy

The future risks of obsolete AI


Artificial intelligence systems need ‘checks and balances’ throughout development. But currently there are few safeguards for consumer AI, which is seen as less harmful than AI focused on industry or military use. However, as our personal AI systems become more sophisticated and interwoven with other systems, they may become more susceptible to abuse. This is why their human-centric design will become paramount in ensuring trust and adoption.

While many of Hollywood’s most entertaining examples of AI endangering humanity revolve around super-sentient computers and overly ambitious robots, one of the biggest threats for everyday people will likely come from the technology turning off.

“Human centric sometimes gets used to refer to paying attention to the end user who is using the system and/or being impacted by the system,” says Alex Zafiroglu, Deputy Director at the 3A Institute. “To be truly human centric, I think we need to be paying attention to humans across the entire development, deployment and decommissioning cycle of these AI solutions.”

As AI proliferates and becomes foundational to our lives, there will be many systems that we depend on for our well-being that run silently in the background. Whether it is a smart mirror that reminds you to take your medicine or the AI that controls the electricity usage in your home, what happens if the manufacturer goes out of business or neglects the older model that you’ve become accustomed to? In most developed countries, things like water, electricity and even the internet have become essential services. Will artificial intelligence systems be the same?

“When we think about the challenges to consumer adoption of AI-enabled services and solutions, I think one of the biggest things we need to consider is what data is being collected, who is collecting it, where it is staying and how it is being used and reused,” says Zafiroglu. “And so the barrier, I think, is transparency in the use and collection of data. We should be talking about how do you know what you know about the world and how do you categorize and make sense of the things that exist in the world, and then act upon those things.”

Purpose

Data handshakes and firewalls


Asimov was onto something

A.I. for Good

While many depictions of AI in film and TV show the technology threatening humans, and despite recent news of AI being employed for potentially harmful purposes, the fact is that it is a tool, and a powerful one at that. AI is already making our lives better. Here are five examples of AI being used for good:

01.

Healthcare

AI is increasingly being used to assist medical professionals: providing better patient care, advising on the best treatments, identifying new drugs, speeding up clinical trials and detecting breast cancer, to name a few applications.

02.

Human Rights

From finding missing people through facial recognition to tracking Twitter abuse against women, AI is impacting human rights issues that have previously been neglected.

03.

Education

Colleges and universities face challenges of disengaged students, high dropout rates and the inefficiency of the old-school “one size fits all” approach to education. AI has helped address this in some ways by enabling personalized learning, giving students a tailored approach to studying based on individual needs.

04.

Global Hunger

AI is playing a vital role in increasing agricultural productivity, helping to end global hunger and reach the UN Sustainable Development Goals. From predicting food shortages and recognizing pest outbreaks to improving yields, AI is helping put food on the table for billions of people.

05.

Climate Change

From predicting how much energy we use to making supply chains more efficient, discovering new materials and removing CO2 from the atmosphere, there are many ways that AI is being used to reverse the damage we’ve done.


This report is sponsored by LG Electronics and Element AI and produced by the BriteBirch Collective.

Contact Us

If you’re interested in collaborating on initiatives related to Artificial Intelligence Experience (AIX) and the creation of a more equitable, safe and transparent future through human-centric AI, please email us at aixexchange@lge.com.

This work is licensed under CC BY-NC-SA 4.0
