Understanding

Artificial Intelligence (AI) is a term that seems to be affixed to nearly every new technology or service today. And it is true that the last few years have seen many advancements in the field and in its applications. But AI isn't just an industry tool for businesses; it is also in your pocket, your home, your car and your public spaces. It is a new form of software, and it cuts across every domain.

You may not know it, but you depend on AI, interact with it, and take its advice almost every day.

This report was created because AI has a tremendous power to improve our lives, but it is advancing quicker than most people realize and, like any tool, has the potential to be mishandled and misunderstood.

As end-users of the technology, we should be more aware of how AI systems and products use our data, how AI arrives at its recommendations, and whether it is designed with the transparency, purpose and function that add value to our lives.

If you are new to AI, we have pulled together an overview of the basics, to prepare you for some of the concepts and topics mentioned in this report and interviews. There is also a glossary of terms, so you can always return to this page for definitions of key terms.

Source: Possessed Photography

What is Artificial Intelligence?

Source: This is Engineering RAEng

There are many definitions of AI and in many ways the term has become a catch-all for a variety of complex computational processes that mirror human intelligence in their systems, their outputs and outcomes.

The simplest description of AI is: human-like intelligence exhibited by machines.

Another, more specific definition is: AI is a system that simulates human intellectual functioning, computing rationally over datasets to produce an output in the form the programmer desires.

Types of Artificial Intelligence

It is important to understand that there are varying levels of subfields that sit under AI. If Artificial Intelligence is considered any technique by which computers mimic human behaviour, Machine Learning is a type of AI using algorithms that learn from data to make predictions.

Artificial Neural Networks are a type of machine learning that simulate the computational processes of the brain by recognizing patterns in big sets of structured data. And finally, Deep Learning is the layering of many neural networks that are designed to recognize patterns in big sets of unstructured data.
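
The "layering" idea can be sketched in a few lines of Python. This is an illustrative toy, not a real network: the weights and biases below are made-up numbers, whereas a real network learns them from data.

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: weighted sums of the inputs, passed through a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

x = [0.5, -1.2]                                        # input features
h = layer(x, [[0.8, -0.4], [0.3, 0.9]], [0.1, -0.2])   # hidden layer
y = layer(h, [[1.5, -1.1]], [0.05])                    # output layer
print(y)  # a single value between 0 and 1
```

Deep Learning stacks many such layers, so that each layer can build on the patterns detected by the one before it.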

Most of this report uses the terms AI and Machine Learning interchangeably, but there are many more sub-subfields, each with unique underlying techniques for achieving human-like intelligence, with more cropping up every day.

Artificial Intelligence Subfields


How Does AI Work?

Source: Aideal Hwa

Machine learning typically requires three key ingredients:

  • Datasets that provide the examples for training the machine.
  • Features that identify the important components of the data, which the machine is trained to pay attention to in order to find patterns.
  • Algorithms that are computational rules or steps that govern how the machine should achieve a task. In Deep Learning, these are layered in such a way that the system effectively infers connections.
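
A minimal sketch of these three ingredients, using a toy nearest-neighbour classifier in Python. The data, features and labels are invented for illustration:

```python
# 1. Dataset: labelled examples for training the machine.
dataset = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.1), "dog"),
]

# 2. Features: the two numbers describing each example are the
#    components of the data the algorithm pays attention to.

# 3. Algorithm: a rule for turning data into a prediction --
#    here, "answer with the label of the nearest training example".
def predict(features):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(dataset, key=lambda example: dist(example[0], features))
    return label

print(predict((4.9, 5.0)))  # nearest training examples are labelled "dog"
```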

AI, and more specifically Machine Learning, can be applied to identify patterns in nearly any dataset. And in today's digital world we are awash in data. For Deep Learning, the more data the better. To gather data, learn from it and apply what is learned, AI systems exhibit some or all of the following qualities:

  • Perception — Sensing the world through cameras, microphones, GPS, etc.
  • Communication — Learning from interaction and gathering information based on action and response.
  • Reasoning — Understanding concepts and relations between patterns in data.
  • Decision making — Optimizing how to process the patterns in data to achieve specific outcomes.
  • Interaction — Taking actions on these patterns to achieve specific goals.

How is AI Used?

You may not know it, but you use AI all the time. Here are some examples:

Your morning commute: Google Maps uses location data and neural networks to calculate the best route and predict your arrival time. Ride-hailing apps such as Uber and Lyft use machine learning to calculate dynamic pricing based on demand. Self-driving cars, which use AI in many of their systems, are only a few years away, according to many industry experts.

Your communications: Most email clients use AI to keep spam out of your inbox, while social media platforms such as Facebook use machine learning for recommendations and face matching to help with tagging friends in photos. Natural Language Processing is now more powerful than ever, with OpenAI's GPT-3 a recent example of powerful neural networks that can understand, generate and translate language.

Your shopping experience: Through the use of cookies (identifying tags that are placed on your phone or computer when browsing the web) AI learns what you like and serves you up relevant and timely ads. Whether shopping online or in the real world, AI and technologies such as augmented reality and image recognition are being used to help you pick the best outfit. When you make a purchase, AI protects you from fraud.

Relaxing at home: Smart home technology has come a long way, from AI-powered locks and thermostats to lights and music. AI assistants like Alexa and Google Home help connect it all together, learning when you arrive home and setting the temperature and lights accordingly. As for entertainment, Netflix's recommendation engine learns from you and people like you to predict which shows you will binge next, while video games push the limits of AI to create realistic virtual experiences.

Relaxing at work: Since Covid-19, many of us have been commuting to our living rooms. In many ways, AI has helped us transition to remote work through AI-powered applications and digital tools. Whether it is automating customer service support, scheduling your next meeting or optimizing your video conference, AI is likely involved. For those who are still on the factory floor, automated processes and robots are taking away much of the strain of menial tasks.

Source: David Leveque

Why Now?

Source: Skye Studios

Why is AI all of a sudden everywhere? The truth is that the concept has been around for some 70 years (check out this short timeline of AI's history). But there are a few reasons why we are only now seeing such advancements in the technology.

First, the digital age and the proliferation of sensors, the internet and mobile technology have created vast amounts of data for AI to train on. But only recently have we had the computing resources (notably GPUs) to process that data. What's more, the excitement around AI has drawn in a large community of researchers and resources, supporting innovations in hardware and more efficient training algorithms, allowing better use of available computation and hence better applications of AI.

What About the Future?

The future is exciting and scary at the same time, depending on how you look at it. Many technologies are converging to enable AI to become more powerful while new deep learning methods continue to push the technology forward. 

The Levels of AIX Framework is designed to help the industry, policymakers, workers, consumers – any end-user – better recognize the technological milestones that define advancements in AI, share a common language, and support a roadmap for the human-centric design of consumer AI.
(Check out the Levels of AIX Framework)

The AIX Framework

Names and Definitions

  • Level 1 (Efficiency): AI facilitates specific functions within systems and devices, making user interactions more efficient and effective.
  • Level 2 (Personalization): AI uses pattern learning to recognize, optimize and personalize functions in order to improve and simplify interactions for users.
  • Level 3 (Reasoning): AI uses causality learning to understand the cause of certain patterns and behaviours; this information is used to predict and promote positive outcomes for users.
  • Level 4 (Exploration): AI uses experimental learning to continuously improve; by forming and testing hypotheses it uncovers new inferences, seamlessly adding value to users' lives and enabling a deeper affinity.

Pervasiveness in Our Lives

  • Level 1 (Familiar): Systems and devices that utilize AI are appearing in users' everyday lives.
  • Level 2 (Common): AI is optimizing most devices at the edge and most systems through the cloud.
  • Level 3 (Universal): AI is everywhere and interconnected for the benefit of all devices and systems.
  • Level 4 (Foundational): AI forms a core component of the infrastructure for all devices and systems in society, which share and learn collectively.

Environmental Awareness

  • Level 1 (Perceives): Perceives specific, pre-defined information and acts on it accordingly to increase its efficiency.
  • Level 2 (Recognizes): Recognizes patterns and uses them to make better predictions that increase relevance for users.
  • Level 3 (Understands): Understands the patterns and principles across systems in order to meet predefined missions; uses reasoning to respond to new situations by applying unique approaches.
  • Level 4 (Explores): Seeks to test and validate the underlying conditions of a situation by analyzing data from a broader set of external sources to inform its inferences.

Collaboration

  • Level 1 (Independent): Works alone or relays commands from one system to another.
  • Level 2 (Connects): Connects with other devices within a user-controlled system so that the user can use one device to control others.
  • Level 3 (Coordinates): Understands the larger interconnected system and the function of different devices, and shares learning outcomes to achieve a broader mission.
  • Level 4 (Orchestrates): Identifies gaps in data and user understanding, then orchestrates across internal and external systems to find and apply new knowledge as it scrutinizes and optimizes its hypotheses.

User Understanding

  • Level 1 (Agent): Perceives user inputs and logs past inputs.
  • Level 2 (Assistant): Recognizes and distinguishes users and their unique behaviours and preferences.
  • Level 3 (Companion): Interprets the user's mood from contextual understanding of multiple data points, and reasons about social relations to predict and support how users will interact.
  • Level 4 (Sage): Understands how to influence users, enabling them to trust new information and approaches by providing evidence and nudging behaviours in service of a broader purpose.

Collaboration

  • Level 1 (Task-oriented; one-off actions): Can execute specific commands within specific parameters to achieve a specific task.
  • Level 2 (Goal-oriented; multiple actions): Works out various options for achieving a given goal and presents them to the user for selection, or is pre-programmed to meet the desired goal efficiently.
  • Level 3 (Mission-focused; long-term actions): Understands users and its environment in order to predict, recommend and execute solutions to assigned missions.
  • Level 4 (Purpose-driven; exploratory actions): Using local context and external sources of knowledge, it balances users' competing needs and interests and can take creative approaches to influence user behaviours in service of the user's higher purpose.

LGE and Element AI have partnered to research and develop this framework, proposing a shared definition for advancements in AI. Grounded in the imaginative work of foresight and research into the cutting edge of applied AI science and engineering, the framework consists of four clear levels. Each level represents a step-change in capability that will allow AI-powered products and services to provide new benefits to users and society.

AI’s Short History

AI has developed from imagination to reality in a relatively short time.
Here are 10 big milestones in the story of AI.

1950 & 1951

Asimov
& Turing

In the early 1950s, science fiction writer Isaac Asimov proposed the Three Laws of Robotics and published the influential sci-fi story collection 'I, Robot'. Around the same time, mathematician Alan Turing proposed the Turing Test, which measures a machine's ability to pass as human with a human judge.

Isaac Asimov "I, Robot" Book Cover
Source: HarperCollins Publishers

1954
(Granted 1961)

First Industrial
Robot

Inventor George Devol created Unimate, the first industrial robot, which transformed the manufacturing world. Unimate grew from the planning and business insights of Joseph Engelberger – the Father of Robotics.

Unimate
Source: UL Digital Library

1956

Dartmouth
Conference

John McCarthy organized a summer conference at Dartmouth College where the 'Logic Theorist' program was presented. It is considered the first AI program, designed to mimic the problem-solving skills of a human.

John McCarthy
Source: Jeff Kubina

1968

2001: A Space
Odyssey

AI academic Marvin Minsky advised Stanley Kubrick on the film 2001: A Space Odyssey, which went on to influence science fiction. Minsky also supported a 'top-down approach': the notion of pre-programming a computer with the rules that govern human behaviour.

2001: A Space Odyssey
Source: James Vaughan

1969

Shakey
the Robot

Shakey was the first general-purpose mobile robot able to make decisions about its own actions by reasoning about its surroundings. However, Shakey also made clear how slow and limited the AI of the era still was.

Shakey the Robot
Source: SRI International

1970s & 1980s

AI Blues

AI advancements slowed in the 1970s. Although there were proofs of concept, computers still couldn't store enough information or process it fast enough, and couldn't perform tasks like facial recognition. As Hans Moravec, a faculty member at the Robotics Institute of Carnegie Mellon University, put it, "computers were still a million times too weak to exhibit intelligence." In the 1980s, AI began to re-emerge: new funding was in place and 'expert systems' were developed by the AI pioneer Edward Feigenbaum.

1980s Computer
Source: Marcin Wichary

1990s & 2000s

Robots
at Home

AI scientist, MIT professor and iRobot co-founder Rodney Brooks argued that intelligence cannot be designed apart from a machine's interaction with its environment, and that AI design should be human-centric. His company iRobot created the Roomba, an autonomous vacuum cleaner and the first commercially successful robot for the home.

iRobot's Roomba vacuum cleaner
Source: Jo Zimny

2011

Rapid Progress

The new millennium saw a massive leap in AI's progress. IBM Watson beat Jeopardy! champions Ken Jennings and Brad Rutter, and the same year Apple introduced the intelligent personal assistant Siri on the iPhone 4S, followed by Google Now and Cortana, which have since become mainstream.

Siri on iPhone 4S
Source: Kārlis Dambrāns

Today

Deeper Learning

Realistic language models such as OpenAI's GPT-3, and the image and voice processing behind deepfakes, are only some of the new ways that AI is being applied. Today, task-oriented AI can beat humans at the game of Go, at complex video games and in military exercises. Even so, according to researchers and the Levels of AIX Framework, we are still in the early stages of AI's development. What will tomorrow bring?

Robot Creating Music
Source: Photos Hobby


Glossary of Terms

5G

Fifth-generation mobile network technology, offering faster, lower-latency communications.

Algorithm

A precise sequence of steps used to solve a task.

Artificial General Intelligence

AI that has general capabilities and functions on a similar level as a human, both logically and emotionally.

Artificial Intelligence Experience (AIX)

The term used in this report to describe the unique design principles that should be applied to creating great human experiences through AI.

Artificial Neural Network

An algorithm that mimics the structure of the brain.

Artificial Super Intelligence

A much-disputed level of AI that would far surpass human capabilities in all respects. It's worth noting that most researchers still consider this level of AI science fiction.

Big Data

A phenomenon brought on by the digital age whereby huge amounts of data are created, stored and used for applications such as training AI systems.

Blockchain

A robust digital ledger technology secured through cryptography that can support the transparent exchange and storage of digital information in records (blocks).

Cloud Computing

Virtual systems for storing and processing data, layered with software, that sit remotely across many servers and don't require direct physical management by the user.

Deep Learning

An Artificial Neural Network with many layers.

Human-Machine Collaboration

The application of AI not to replace humans, but to assist humans in performing complex, menial or dangerous tasks.

Internet of Things (IoT)

A term referring to the proliferation of smart devices and sensors that are connected wirelessly, collecting and storing data in the cloud.

Machine Learning

A set of algorithms that learn from data to make predictions.

Narrow AI

AI that is designed and trained to do a specific (narrow) task.

Quantum Computing

An experimental but promising new way of processing information using quantum mechanics, which allows for much more powerful computation than traditional binary digital systems.

Reinforcement Learning

Learning which actions to take from reward signals, often with an indication of correctness only at the end of a sequence of actions.

Supervised Learning

Learning known patterns from examples which provide desired outputs for given inputs.

Unsupervised Learning

Learning unknown patterns in input data when no specific output values are given.
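
The difference between the last two entries can be sketched on the same toy data. All numbers and labels below are invented for illustration: supervised learning is given the desired outputs, while unsupervised learning must find structure on its own.

```python
points = [1.0, 1.1, 0.9, 9.0, 9.2, 8.8]

# Supervised: each input comes with a desired output (a label),
# so we can fit a simple threshold rule between the two classes.
labels = ["low", "low", "low", "high", "high", "high"]
low_max = max(p for p, l in zip(points, labels) if l == "low")
high_min = min(p for p, l in zip(points, labels) if l == "high")
threshold = (low_max + high_min) / 2

def classify(x):
    return "high" if x > threshold else "low"

# Unsupervised: no labels given; one step of 2-means clustering
# groups the points by nearness to two centre guesses.
centres = [min(points), max(points)]
clusters = [[], []]
for p in points:
    nearest = 0 if abs(p - centres[0]) <= abs(p - centres[1]) else 1
    clusters[nearest].append(p)
centres = [sum(c) / len(c) for c in clusters]

print(classify(7.5), centres)
```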

Great AI Reads

If you’re looking for further insight and information on the AI industry,
start by picking up some of these important reads.

Human Compatible

Stuart Russell

This cutting-edge book sets out a new approach to AI, arguing that we must keep control of superintelligent machines so that humans and AI can coexist safely in the future.

AI Superpowers

Kai-Fu Lee

Written by AI pioneer Kai-Fu Lee, the book discusses the advances China and the US have made in AI, and paints a picture of the advantages and changes that AI will bring to humankind.

Life 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark

A thought-provoking dialogue about the present and distant-future possibilities of AI's impact, arguing that for AI to be non-destructive, its goals must be aligned with our own.

Homo Deus: A Brief History of Tomorrow

Yuval Noah Harari

An epic look into the future and how AI will impact human society. The book traces how we have developed through history and asks what our next phase of life will be like with machines and technology.

Big Mind

Geoff Mulgan

An enlightening read in which Mulgan gathers diverse perspectives from philosophy, computer science and biology, exploring how collective intelligence has the potential to solve the greatest challenges of our time.

Exchange Your Perspective

Want to be involved in shaping the future of AI experiences?
Share your email and we’ll keep you updated on what comes next.

This report is sponsored by LG Electronics and Element AI and produced by the BriteBirch Collective.

Contact Us

If you’re interested in collaborating on initiatives related to Artificial Intelligence
Experience (AIX) and the creation of a more equitable, safe and transparent future
through human-centric AI, please email us at aixexchange@lge.com.

This work is licensed under CC BY-NC-SA 4.0

Use of this website constitutes acceptance of the Legal and Privacy Policy.