As artificial intelligence systems become more common, the need to regulate their use becomes clearer. We have all seen how technologies such as facial recognition can be unreliable at best and biased at worst, or how policymakers can abuse artificial intelligence to infringe on human rights. The European Union is now considering imposing formal regulations on the use of AI.
On Wednesday, the European Commission proposed regulations that would restrict and guide how companies, organizations, and government agencies use artificial intelligence systems. If approved, it would be the first formal legislation governing AI usage. The EC says the rules are necessary to safeguard “the fundamental rights of people and businesses.” The legal framework would consist of four tiers of regulation.
The first tier covers AI systems considered an “unacceptable risk”: algorithms deemed a “clear threat to safety, livelihoods, and rights of people.” The bill would outright prohibit software intended to manipulate human behaviour, such as China’s social scoring system.
The second tier consists of “high-risk” AI technology. The European Commission’s concept of high-risk applications is broad, encompassing a wide variety of software, some of which is already in use. Law-enforcement software that employs AI in ways that can infringe on human rights would be strictly controlled; facial recognition is one example. In particular, all remote biometric identification systems fall into this category.
These programmes would be strictly supervised, requiring high-quality datasets for testing, operational records to trace results, accurate reporting, and “appropriate human oversight,” among other measures. The European Union would prohibit the use of most of this software in public places, although concessions would be made for matters of national security.
AIs with “limited risk” make up the third category. Chatbots and personal assistants, such as Google’s Duplex, are the most common examples. These systems must be transparent enough that they can be identified as non-human, and the end user must be able to choose whether or not to continue interacting with the AI.
Finally, there are services deemed “minimal risk.” These are AI programmes that pose little or no risk to human life or liberties. Email-filtering algorithms and the AI used in video games, for example, would be exempt from oversight.
Enforcement would take the form of fines of up to 6% of a company’s worldwide revenue. However, as European member states negotiate and hash out the specifics, it could take years for any of this to take effect.
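To make the scale of that penalty concrete, here is a minimal sketch of the maximum fine under the proposed 6% cap. The revenue figure is invented purely for illustration; the only number taken from the proposal is the 6% rate.

```python
MAX_FINE_RATE = 0.06  # 6% cap from the proposed EU regulation


def max_fine(worldwide_revenue: float) -> float:
    """Return the maximum possible fine for a given worldwide annual revenue."""
    return worldwide_revenue * MAX_FINE_RATE


# Hypothetical example: a company with €50 billion in worldwide revenue
# could face a fine of up to €3 billion.
print(f"{max_fine(50_000_000_000):,.0f}")  # → 3,000,000,000
```

Even for a mid-sized multinational, a penalty calculated against worldwide revenue rather than profit is a substantial deterrent.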