March 30, 2023

Brief introduction to ethical AI and AI ethics

Every few years a new technology triggers a new scare. It happens periodically, targeting new developments such as robots, self-driving cars, vaccines, or genetic manipulation.

Today people are puzzled by new AI technology such as ChatGPT. And society reacts with proposals to ban the new technology from certain environments, such as schools, or even to ban it altogether until we understand it better. Of course, some of this is provoked by competition, e.g. people at Google's DeepMind, the lab behind AlphaGo, just signed an open letter calling for a pause on developing large language models more powerful than GPT-4, while starting new internal research projects to develop large language models themselves.

So we start talking about ethical implications, and legal responsibility related to the new technology. 

But, the good news: ethical AI is not a new concept. Here is a quick introduction.


Theoretical research

There is already theoretical research on ethical AI, AI ethics, and artificial awareness. We are not talking about the responsibility of machines, of course: there is no machine liability, since there is as yet no machine awareness, intentionality, or free will.
But we can talk about ethical implications, where liability lies, and how those implications can be categorized and analyzed.
In fact, the legal and ethical implications of AI can be categorized even when there is no machine responsibility. E.g., (Moor, 2006) classifies artificial moral agents into four levels:
  • AMA-1. Built-in ethical issues. Any technology embodies underlying ethical decisions, related to how it is designed or used.
  • AMA-2. Machines that incorporate pre-programmed ethical decisions, for example blocking children from accessing the internet.
  • AMA-3. Machines that make ethical decisions through automated algorithms, such as self-driving cars.
  • AMA-4. Full moral agents: intentionality, moral principles, self-consciousness.
There is also theoretical research into artificial awareness. An interesting point, made by Padin Fazelian, is that everything comes down to the basic unanswered question: "what is human consciousness?" Machines could even tell us something new about human awareness itself.
Since ethical decisions are part of any technology, Katleen Gabriels proposes, in "Conscientious AI: Machines Learning Morals", a framework where ethical implications are analyzed even before new technology is designed.
Besides these theoretical implications, there are also practical issues such as explainability and interpretability. AI is a complex technology and we are not always fully able to explain how it works. While this behavior is sometimes seen as miraculous, it nevertheless poses practical problems such as difficulty in debugging malfunctioning products.
Legal implications and frameworks

The UN has already issued recommendations on the ethical use of artificial intelligence, accompanied by an international panel and debate.

According to UNESCO, "In November 2021, the 193 Member States at UNESCO’s General Conference adopted the Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject. It will not only protect but also promote human rights and human dignity, and will be an ethical guiding compass and a global normative bedrock allowing to build strong respect for the rule of law in the digital world."

There have also been studies and debates organized by the European Commission since 2019, producing deliverables such as the Ethics Guidelines for Trustworthy AI. The Commission's proposal for regulation is currently with the European Parliament: "Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts - COM/2021/206 final".

In 2021, the Commission stated: "The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:

  • a European legal framework for AI to address fundamental rights and safety risks specific to the AI systems;

  • a civil liability framework - adapting liability rules to the digital age and AI;

  • a revision of sectoral safety legislation (e.g. Machinery Regulation, General Product Safety Directive).

The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard".

In conclusion

We need a discussion on ethical AI and the ethical implications of AI. This discussion is neither new nor settled. But banning technology is often impractical and sometimes impossible; when a technology becomes ubiquitous, banning it may even seem absurd. Such measures should not be driven by commercial interests, nor by irrational social scares.
