As humanity’s relationship with AI grows, experts call for protective framework
Scientists have proposed a new international framework to keep ethics and human wellbeing at the forefront of our relationship with technology.
From gene therapy and AI-predicted disease to self-driving cars and 3D printing, advances in technology can improve health, free up time, and boost efficiency.
However, despite the best intentions of its creators, technology can lead to unintended consequences for individual privacy and autonomy.
There is currently no internationally agreed-upon regulation about who, for example, has access to the data recorded by black boxes in cars, smart TVs and voice-enabled personal assistants. Recent findings have also shown that technology can be used to influence voting behaviour.
Now, Imperial College London researchers have suggested a new regulatory framework with which governments can minimise unintended consequences of our relationship with technology. The comment piece is published in Nature Machine Intelligence.
The group of researchers, led by Imperial’s Professor Rafael Calvo, say their proposal could help ensure human interests like ethics, privacy, and wellbeing are prioritised as our relationship with technology grows.
They suggest using the Environmental Impact Assessment, which evaluates the likely environmental impacts of a proposed project or development, as a blueprint. The assessment would consider the inter-related socio-economic, cultural and human health impacts of AI and technology.
The proposed framework, known as the Human Impact Assessment for Technology (HIAT), would be designed to predict and evaluate the impact that new digital technologies have on society and individual wellbeing. This, they argue, should focus on ethical considerations like individual privacy, wellbeing and autonomy.
The assessment should also consider which parties are responsible for managing data and maintaining ethical standards, as well as who is responsible when things go wrong, say the researchers.
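The comment piece does not prescribe any particular implementation, but purely as an illustration, the sketch below shows how the kinds of questions such an assessment raises might be recorded as a structured checklist. Every class, field and category name here is a hypothetical assumption for the sake of the example, not part of the researchers' proposal.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: a minimal checklist structure for the kinds
# of questions a Human Impact Assessment for Technology (HIAT) might track.
# Names and categories are assumptions, not the authors' specification.

@dataclass
class HIATItem:
    question: str           # e.g. "Who can access the recorded data?"
    category: str           # e.g. "privacy", "autonomy", "wellbeing"
    responsible_party: str  # who manages the data and answers when things go wrong
    answered: bool = False

@dataclass
class HIATAssessment:
    technology: str
    items: list = field(default_factory=list)

    def add(self, question, category, responsible_party):
        self.items.append(HIATItem(question, category, responsible_party))

    def open_questions(self):
        # Items still awaiting an answer before the technology is deployed.
        return [i for i in self.items if not i.answered]

# Example usage, echoing the article's in-car "black box" scenario.
assessment = HIATAssessment(technology="in-car telematics black box")
assessment.add("Can driving-location data be sold to third parties?",
               "privacy", "insurance provider")
assessment.add("Who is accountable if premiums are miscalculated?",
               "autonomy", "insurance provider")
print(len(assessment.open_questions()), "questions still open")
```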
Professor Calvo, of Imperial’s Dyson School of Design Engineering, said: “We’re facing a fourth industrial revolution through the rapid development of AI and technology – but as our relationship with AI grows, so does its potential to disrupt our lives. Take, for example, evidence that AI is used by humans to manipulate emotions, attention, and voting behaviours, as well as legal, educational, and employment decisions.
“Now is the time to put together a framework to ensure our relationship with AI continues to be a positive one.”
Ethical conundrums
A HIAT framework would help emerging industries navigate the ethical conundrums that go hand in hand with using AI and storing large amounts of data.
According to the comment, questions that internationally agreed guidance could help answer include:
- Some AI assistants can call restaurants on a person’s behalf and use realistic human speech to make reservations. In these instances, what obligations should there be to make the human who picks up the phone aware that the caller is a machine rather than a human? What consent must be obtained to save the data gathered from these conversations?
- Some drivers fit black boxes to their cars that transmit information to insurance companies about the way they drive. This information is used to calculate insurance premiums. In these instances, how can we prevent the personal data gathered and processed by AI (where you go and when) from being sold to third parties?
- Some police services use Body Worn Video (BWV) while on duty to record video and audio that could later be used during investigations. There have also been recent trials of facial recognition software – but who has access to the data collected in these cases, and are the regulations robust enough to protect the public’s privacy?
Professor Calvo added: “Although we often benefit from technological progress, we can also suffer ethical, psychological and social costs.
“Impact assessments are an important tool for embedding certain values and have been successfully used in many industries including mining, agriculture, civil engineering, and industrial engineering.
“Other sectors too, such as pharmaceuticals, are accustomed to innovating within strong regulatory environments, and there would be little trust in their products without this framework.
“As AI matures, we need frameworks like HIAT to give citizens confidence that this powerful new technology will be broadly beneficial to all.”
“Advancing impact assessment for intelligent systems” by Rafael A. Calvo, Dorian Peters & Stephen Cave, published February 2020 in Nature Machine Intelligence.