Every few years, a new technology provokes a new scare. The pattern repeats periodically, targeting developments such as robots, self-driving cars, vaccines, or genetic manipulation.
Today people are puzzled by new AI systems such as ChatGPT. Society reacts with proposals to ban the new technology from certain environments, such as schools, or even to ban it altogether until we understand it better. Some of this, of course, is provoked by competition: researchers at Google's DeepMind, the lab behind AlphaGo, recently signed an open letter calling for a pause on the development of large language models such as GPT-4, while DeepMind itself started internal research projects to develop large language models of its own.
And so we start discussing the ethical implications of the new technology, and legal responsibility for it.
But here is the good news: ethical AI is not a new concept. As a quick introduction, the literature distinguishes several levels of artificial moral agents (AMAs):
- AMA-1. Built-in ethical issues. Any technology embodies underlying ethical decisions in how it is designed and used.
- AMA-2. Machines that enforce pre-programmed ethical decisions, for example software that prevents children from accessing the internet.
- AMA-3. Machines that make ethical decisions based on automated algorithms, such as self-driving cars.
- AMA-4. Full moral agents: machines with intentionality, moral principles, and self-consciousness.
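To make the AMA-2 level concrete, here is a toy sketch (site names and the age threshold are entirely hypothetical) of a machine that merely enforces an ethical decision made in advance by its programmer, rather than reasoning about ethics itself:

```python
# Toy AMA-2 agent: the ethical rule (minors may not access certain
# sites) is decided by a human and hard-coded; the machine only
# enforces it. All names and thresholds are illustrative.

ADULT_ONLY_SITES = {"casino.example", "adult.example"}  # hypothetical list

def may_access(site: str, user_age: int) -> bool:
    """Return True if the user is allowed to load the site."""
    if site in ADULT_ONLY_SITES and user_age < 18:
        return False  # pre-programmed ethical decision
    return True

print(may_access("news.example", 12))    # True
print(may_access("casino.example", 12))  # False
```

The contrast with AMA-3 is that a self-driving car cannot enumerate every situation in advance: its rules must generalize, which is where the ethical decision shifts from the programmer to the algorithm.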
The UN already has recommendations on the ethical use of artificial intelligence, preceded by an international panel and debate.
According to UNESCO, "In November 2021, the 193 Member States at UNESCO’s General Conference adopted the Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject. It will not only protect but also promote human rights and human dignity, and will be an ethical guiding compass and a global normative bedrock allowing to build strong respect for the rule of law in the digital world."
The European Commission has also organized studies and debates since 2019, producing deliverables such as the Ethics Guidelines for Trustworthy AI. The Commission's proposal for a regulation is currently before the European Parliament: "Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts - COM/2021/206 final".
In 2021, the Commission stated: "The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:
- a European legal framework for AI to address fundamental rights and safety risks specific to the AI systems;
- a civil liability framework - adapting liability rules to the digital age and AI;
- a revision of sectoral safety legislation (e.g. Machinery Regulation, General Product Safety Directive).
The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard".