According to Charles Lee Isbell Jr., Dean of Computing at Georgia Institute of Technology, for something to truly be intelligent, in a way that is meaningful, it has to be intelligent with people, not intelligent like people.
Fallibility is not a one-way street. The industry frequently talks about teaching AI about human beings, but what about the reverse? The fallibility of the end-user, and how best to limit their manipulation and exploitation of AI, must also be considered.
The relationship between humans and AI services and devices should be viewed not in the literal sense, but as a metaphor that describes two-way interaction, collaboration and the exchange of information and understanding. This distinguishes AI from earlier technologies, as we now work in tandem with these tools, learning from each other to achieve a common goal.
Artificial Intelligence Experience (AIX) is a concept that requires designers, developers, policymakers and end-users to share an understanding of the human-centric dimensions that must be considered for creating equitable, enjoyable and valuable AI products and services for end-users. This includes how we, as humans, might interact with technology that is becoming better at thinking and acting human and that begins to take a more meaningful role in our lives.
For Prof. Alex Zafiroglu, Deputy Director of the 3A Institute (3Ai) at the Australian National University, an anthropologist and formerly Intel’s foremost domain expert in homes and home life, end-users’ expectations need to be managed. She points out that machines are incredibly good at specific things but are limited to only that thing.
“They're very good at maths, for example, and humans are very good at other things, including relationships with other human beings,” she says. “When we mix those two things up and we expect our computing systems to do the hard work of sociality and connections between people, we are making a mistake at the level of who has responsibility for actions in the world, particularly as it relates to relationships among people.”
Indeed, we are seeing the early steps towards technology taking more active roles in our daily lives, whether vacuuming a rug or taking on the cognitive task of digitally codifying the world. But most of these cases show AI focused on a single task, an application of the technology that end-users can more readily understand and relate to. Seen this way, Artificial General Intelligence (AGI) is less likely; instead, these early instances of narrow AI should be imagined multiplied over and over to create emergent AI experiences.
One example of this is the Observatory for Human-Machine Collaboration (OHMC) at the University of Cambridge, which collaborated with domestic appliance firm Beko to train a robot to prepare an omelette from scratch.
The operative word here is train. The making of an omelette rests on narrow AI trained through data and repetition. But since the robot makes an omelette like a human, will end-users come to expect it to also make pancakes? Maybe. But what about making recommendations for music, or for healthcare treatments?
But AI applications will rarely be so specific. In the Levels of AIX Framework launched at CES 2020, various scenarios were used to help illustrate the increasing integration of AI in our lives. But we need to be comfortable with the technology in order for it to become useful.
Imagine: the weather forecast calls for snow, and the AI alerts the family to dress warmly, preheats the oven and orders the ingredients for their favourite meal.
Meanwhile, the car’s AI acts as an extension of the home: knowing that the user is running late, it suggests altering the usual route to ensure an appointment is not missed and provides a calming environment.
Finally, a car’s AI interfaces with the smart city to experiment with different routes, departure times and driving speeds, optimizing journeys based on daily user objectives and other goals such as fuel efficiency or journey time.
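The last scenario, choosing among routes according to user objectives, is at heart a weighted multi-objective choice. The sketch below is purely illustrative; the routes, weights and scoring function are invented for this example and do not reflect any real smart-city API:

```python
# Toy sketch of multi-objective route selection: each candidate route is
# scored as a weighted sum of journey time and fuel use, and the AI picks
# the route with the lowest score for the user's current priorities.

def score(route, weights):
    """Lower is better: weighted sum of journey time and fuel use."""
    return (weights["time"] * route["minutes"]
            + weights["fuel"] * route["litres"])

def best_route(routes, weights):
    return min(routes, key=lambda r: score(r, weights))

candidate_routes = [
    {"name": "motorway", "minutes": 25, "litres": 2.1},
    {"name": "city",     "minutes": 32, "litres": 1.4},
    {"name": "scenic",   "minutes": 41, "litres": 1.2},
]

# A user running late weights journey time heavily...
print(best_route(candidate_routes, {"time": 1.0, "fuel": 0.1})["name"])   # motorway
# ...while an economy-minded profile favours fuel efficiency.
print(best_route(candidate_routes, {"time": 0.05, "fuel": 1.0})["name"])  # city
```

In practice such a system would learn the weights from the user's behaviour rather than hard-coding them, which is exactly where the two-way learning discussed in this article comes in.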
These scenarios aren’t farfetched. In fact, they may happen sooner than we realize. What will be important to understand, however, is how AI will be designed to consider the end-user and the ways they will want to relate to technology that knows them ever more personally.
How does an AI device learn about us, and how do we learn about it? It is a fascinating question, and the best way to answer it, for now at least, is to show what might occur. Humans can adapt; it is part of our very being. But while AI is now learning about us, we as a society are falling behind the learning curve of how this technology will impact us. More importantly, we must learn to leverage AI rather than the other way around.
“Well, if we assume that there's going to be this massive influx of artificial intelligence in our private lives as citizens, as consumers, as workers as well, then of course, we're going to need to learn what questions to ask,” says Colclough. “But I think for the majority of ordinary citizens, for ordinary workers, we cannot even imagine the power and potential of these technologies. So, we don't know what questions to ask. We don't know what the threats to our privacy rights or human rights are.”
And what questions will AI need to ask about us?
When it comes to learning, a pivotal AI sub-category is Affective Computing (AC), described by Hayley Sutherland, senior research analyst of AI software platforms at IDC, as a combination of computer science, behavioural psychology and cognitive science. Sutherland stated in a blog released last year that AC uses hardware and software to identify human feelings, behaviours and cognitive states through the detection and analysis of facial, body language, biometric, verbal and/or voice signals.
Multidisciplinary approaches like this, and the ones advocated by Prof. Zafiroglu at 3Ai, allow AI systems to be built not only to learn the affective signals of a human, but to involve that same human, as the end-user, throughout the process of learning how best to use what is essentially an intelligent tool.
“We often find that when people are talking about AI, they are talking about super big systems and it sounds really big and scary, and it gets abstracted out to a level that you can't really tell what the impact is going to be on the individual person that's using the system, or it gets refined down so closely to an interaction between a human being and a device that you can't begin to see the connections between that person, that device, and the other systems that are also using that data.
We [at 3A Institute] are filling a role right in the middle there, trying to draw the threads together between how data is being used in the world and the types of systems that are enabling it, and making that usage of data and of those systems understandable and actionable for the wide variety of people who need to understand how that data is being used. We are training the next generation of practitioners to go out into the world and work in a variety of settings, from policy to industry, to academia, to education at the non-university level, to think tanks, to product teams, to strategy teams.”
AI is developing fast, but it isn’t doing it alone. In fact, many technologies are advancing simultaneously, further enabling the advancement of AI and creating an exciting future full of possibility.
Edge computing plays an essential role in the Internet of Things (IoT) by enabling data to be stored and processed as close to its source as possible, which increases operational efficiency and brings many advantages to the system.
Quantum computing is a new kind of computing that manipulates subatomic particles such as electrons or photons to provide significantly more powerful processing than even today's most powerful supercomputers.
Fifth-generation mobile networks (5G) provide much greater bandwidth, with download speeds of up to 10 gigabits per second. This will enable and accelerate the development of smart devices, the IoT and autonomous vehicles.
Blockchain is a decentralized, distributed ledger that records data in secure blocks that cannot be altered, ensuring the transparency and traceability of data between all parties in the chain.