“AI robots are increasingly being used to support human activity in many industries, such as healthcare, education, mobility and the military, but they must be held accountable,” the university said. “We need to create clear accountability guidelines to ensure that the use of AI robots remains ethical.”
“In a normal work environment, if a person makes a mistake or does something wrong, it is obvious in most circumstances who is responsible: either that person or the wider organization,” said marketing and management researcher Zsófia Tóth. “However, when you bring AI robots into the mix, it becomes much harder to pin down.”
The researchers reviewed the use of AI robots in different settings from an ethical perspective and identified four “accountability clusters” to help determine where responsibility for an AI robot’s actions lies.
A warning, dear reader: this article attempts to summarize philosophical research – which is far from Electronics Weekly’s home turf. Please consult the paper itself if the results of this study may be important to you.
The clusters are loosely termed:
- Illegal – any action that contravenes laws and regulations
Where AI robots are used for small-scale, routine everyday tasks such as heating or cleaning.
Robot designers and customers bear the greatest responsibility for proper use.
- Immoral – any action that only meets the minimum legal threshold
Where AI robots are used for demanding but routine tasks such as mining or farming.
A wider network of organizations bears the burden of responsibility.
- Permissible but unethical – any action that falls short of espoused claims of fairness or appropriateness
Where AI can make decisions with potentially major consequences, such as in healthcare management and crime-fighting.
Governments and regulators need to be involved in coordinating guidelines.
- Supra-territorial – where AI robots are used across borders, such as in the military or in driverless cars.
A broad range of government bodies, regulators, companies and experts share responsibility. Although accountability is widely distributed, “this does not mean that AI robots are usurping the role of ethical human decision-making,” according to the university, “but it is becoming increasingly difficult to attribute the outcomes of AI robots to specific individuals or organizations, and therefore these cases deserve special attention.”
Previously, accountability for such actions was a gray area, the university said, but a framework like this should help reduce the number of ethically problematic cases involving AI robots.
The work is described in “The dawn of the AI robots: Towards a new framework of AI robot accountability”, published in the Journal of Business Ethics. It is available in full, but expect a lot of philosophical language.