Laws and texts on artificial intelligence
Translation: Judy Fong
Aurora, artificial intelligence, and the standards board
Last winter, I applied for an extremely exciting course through the Aurora University Alliance. The teaching was online, and the subject was artificial intelligence (AI) and the legal aspects of its use. The course could hardly have come at a better time, as ChatGPT fever was just beginning to spread around the world.
For my final project in the course, I chose to study the intersection between the European Commission's new legislative proposal on artificial intelligence and several of the main international standards. Drafts of the EU legislation on AI were easy to find online and interesting to read. Access to the standards was more difficult, because standards are generally sold to manufacturers who need them to meet market requirements. Icelandic Standards granted me temporary read access to several international ISO standards that would otherwise have cost me a lot of money, for which I am truly thankful.
Asimov, the laws and standards
To make the project even more fun, I decided to compare the new laws and standards with Isaac Asimov's laws of correct robotic conduct, which he introduced over 80 years ago and popularized through his short story collection I, Robot. The three laws are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
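Read as logic, the three laws form a strict precedence hierarchy: each law applies only where the laws above it are silent. Below is a minimal sketch in Python of that ordering; the action model and all names are my own hypothetical illustration, not anything taken from Asimov or from the legislation discussed later.

    # A minimal sketch of the strict precedence in Asimov's three laws:
    # each law yields to the laws above it. The Action model and all
    # names here are hypothetical illustrations.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool        # would the action injure a human?
        allows_human_harm: bool  # would inaction let a human come to harm?
        ordered_by_human: bool   # was the action ordered by a human?
        endangers_robot: bool    # would the action endanger the robot itself?

    def permitted(action: Action) -> bool:
        # First Law: overrides everything below it.
        if action.harms_human or action.allows_human_harm:
            return False
        # Second Law: obey human orders unless the First Law forbids them.
        if action.ordered_by_human:
            return True
        # Third Law: self-preservation applies only when the first two are silent.
        return not action.endangers_robot

Even this toy model hints at the problem: reducing harm, obedience, and self-preservation to yes/no flags papers over exactly the ambiguities that later authors exploited.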
Asimov's laws have over time been criticized and picked apart in many other science fiction stories, where other authors have found ways for robots to break them, both wittingly and unwittingly. It is therefore nearly impossible to use these laws in practice, as they leave countless legal, practical, and ethical questions unanswered.
A new European legislative proposal on artificial intelligence is now in the approval process on the continent. It will likely be the first comprehensive law in the world regarding artificial intelligence, automation, and robots of all types. It may also be the first piece of European legislation called an "act" rather than a decision, regulation, or directive, the forms the EU has used until now. The European proposal is called a "Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence", also known as the Artificial Intelligence Act.
A review of the proposal quickly makes clear that it draws no single line on how artificial intelligence must be implemented. Instead, it takes a risk-based approach to how AI is used, dividing the risk into three categories:
1) unacceptable, such as artificial intelligence that directly violates other EU law.
2) some and limited, e.g. risks of bodily harm, reduced safety, or infringement of people's rights within the EU.
3) little to none, meaning artificial intelligence or similar automation that poses none of the risks in the previous items.
AI deemed an unacceptable risk will be banned outright under the Act, while AI posing some or limited risk must follow detailed guidelines and standards. AI and technology deemed to pose little or no risk to people and communities will be exempt from the law.
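As a rough illustration of this tiered structure, here is a minimal Python sketch of the three categories and the legal consequence the proposal attaches to each. The names and wording are my own summary of the description above, not definitions taken from the Act itself.

    # A rough sketch of the three risk tiers described above and the
    # consequence each carries under the proposal. Names and wording
    # are my own illustration, not text from the Act.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = 1     # e.g. AI that directly violates other EU law
        SOME_OR_LIMITED = 2  # e.g. risks to safety or to people's rights
        LITTLE_TO_NONE = 3   # none of the risks in the tiers above

    def legal_consequence(tier: RiskTier) -> str:
        if tier is RiskTier.UNACCEPTABLE:
            return "banned outright"
        if tier is RiskTier.SOME_OR_LIMITED:
            return "must follow detailed guidelines and standards"
        return "exempt from the Act"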
The new European legislation on AI is not meant to stand alone but to work together with other EU legislation, such as the Data Governance Act, the Digital Markets Act, and the Digital Services Act, as well as the GDPR. These regulations all refer to standards of some kind, supported by guidelines, for implementing the legislative requirements. It can therefore be said that the ethical and technical responsibility for implementing the regulations is handed over to the experts and industry stakeholders who work alongside the legislators.
Today there are over 20 standards with "artificial intelligence" in their names. Most are technical, but six of these standards contain good guidelines for minimizing the risks of AI and all kinds of robots. Two of them cover communication between people and advanced systems; these guidelines also address operating rules for automated machines that people use daily. Four standards concern technology connected with AI. They touch on various concerns about bias in software and in decision-making AI on the market, including the ethical and societal issues that can arise and how to minimize the risks of these problems.
These AI standards are rather new. Helga Sigrún, the head of Icelandic Standards, also pointed out a standard from 2010, translated into Icelandic in 2020, called "Guidelines for Societal Responsibility", which demonstrates how companies and organizations can cultivate various societal values, such as transparency, gender equality, sustainability, decision traceability (value chains), environmental protection, and protection of vulnerable groups. The standard also covers human rights and legal and regulatory responsibility, as well as animal protection and animal welfare, all grounded in ideas of ethical behavior. It can be said that this standard on societal responsibility adequately frames most of what the AI standards attempt to achieve. The biggest drawback of this otherwise fine standard is that, being a guideline, it is not certifiable, and certification against standards is important for demonstrating compliance.
Asimov’s laws fixed
The project consisted of comparing Asimov's laws of robotics, the European Commission's proposal on AI, and the main standards on AI and societal responsibility.
The conclusion is that the proposal and the newest international standards in the field leave Asimov's laws in the dust: it is these instruments that now regulate and shape new ideas about how to prevent problems caused by AI, automation, and robots. Such far more detailed ideas are likelier to succeed, because manufacturers, companies, and market forces need clear legal frameworks, implemented through standards, to guide them in preventing AI, in the form of algorithms, bots, and tyrants, from taking life and limb from people or choosing to cause other irreparable damage in the future.