The Human-centric Perspective in the Regulation of Artificial Intelligence


Abstract: The development of new emerging technologies, such as artificial intelligence, has sparked a scientific debate on their risks and benefits. This debate necessitates legal and regulatory considerations, particularly regarding the balance between technological growth and the protection of human rights. This Insight analyses the legal framework established by the European Union in its initial regulatory measures. The Insight highlights the importance of considering the human-centric perspective and adopting a risk-based methodology in the Artificial Intelligence Act. It also mentions the AI regulatory measures proposed by Member States, with a particular focus on Italy.

Keywords: artificial intelligence – European Union policies – Artificial Intelligence Act – human rights – EU regulation – technology.

 

I.   Introducing the impact of Artificial Intelligence

In the age of hyperintelligence, human-machine interaction plays a fundamental role in everyday life, and national and supranational institutions must therefore regulate its potential risks and prioritise the human being in this "artificial relationship". This is particularly important when considering the impact of rapidly developing forms of Artificial Intelligence (AI) on fundamental rights.[1]

We have come a long way since John McCarthy spoke of "Artificial Intelligence" for the first time at the Dartmouth Conference,[2] and today the enthusiastic appreciation of the benefits deriving from such technological advances is accompanied by a widespread fear of the risks associated with the numerous applications of AI across multiple sectors. There is a fear that it could become a technology capable of “enslaving” human beings.[3] Even OpenAI CEO Sam Altman, who was among the signatories of the statement “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”, has expressed concerns about the impact of AI applications on rights and social inequality.[4]

In this context, the AI Act appears to be moving towards a system of European standards capable of identifying the limits beyond which the use of artificial intelligence is deemed unacceptable. This is only the first step in regulating AI applications, a complex process that cannot be fully addressed by the AI Act alone.

Artificial intelligence has various applications and significant impacts in the social, cultural, ethical, and environmental fields; AI governance should therefore be based on principles of equity, accessibility, and inclusivity to avoid widening digital disparities and inequalities. This is the essential requirement for achieving a balanced coexistence between humans and artificial intelligence. Indeed, while AI can facilitate some processes of digital, environmental, and social transformation[5] – especially in less developed regions, by promoting their inclusion – it can also have potentially negative or controversial effects: on the labour market,[6] on the social system, where there is a risk that the digital divide will fuel new forms of inequality,[7] and on the energy system, where the energy-intensive impact of AI cannot be ignored.[8]

II.  The AI Act: the risk architecture

The increasing use of AI in various sectors that have a significant impact on social development and, consequently, on the protection of fundamental human rights, has highlighted the need for a regulatory framework. This framework should not hinder innovation but ensure development that truly corresponds to the increase in human well-being.[9]

Several new scenarios have emerged that require legal attention: for instance, issues arising from gender and race-based bias in facial and voice recognition systems, as well as considerations regarding privacy and data security. It could be argued that supranational legal instruments, such as the GDPR,[10] may not be fully equipped to regulate the emerging context of artificial intelligence.[11]

Europe has responded to this regulatory emergency with the AI Act, a regulation on artificial intelligence approved by the European Parliament on 13 March 2024 and awaiting final scrutiny by the Council of the EU before publication in the Official Journal and its entry into force.[12] The aim of the regulation is to balance the advancement of AI with the protection of people from potential risks, ensuring the free cross-border movement of AI-enabled goods and services while preventing Member States from imposing unnecessary restrictions. The European Union has proposed legislation on artificial intelligence systems using a risk-based approach that establishes different obligations for providers and users of AI systems depending on the level of risk.[13]

So, what is meant by risk? The term “risk” is defined in art. 3 of the AI Act as “the combination of the probability of an occurrence of harm and the severity of that harm”.[14] If an AI system presents an "unacceptable" level of risk, such as those used for social scoring or for the cognitive manipulation of vulnerable individuals, it will be prohibited; real-time biometric identification systems, such as facial recognition, are likewise prohibited.[15] AI systems that pose a “high risk”, for instance to health, education, law enforcement, security or fundamental rights, will require evaluation before being placed on the market and post-market monitoring afterwards, and will additionally be subject to stricter safety, traceability, and transparency obligations throughout their lifespan. Transparency obligations also apply to low-risk AI systems, i.e. those that interact with natural persons (e.g., chatbots) or that create or manipulate sounds, images, and videos (e.g., deepfakes). For low-risk systems, the responsibility for assessing personal risk rests with users, who may choose whether to use the tool at their own discretion.

The AI Act aims to create a secure internal market for consumers while balancing innovation with the protection of health, safety, essential rights, democracy, and the environment. Additionally, it seeks to address concerns regarding social manipulation, the spread of false information, and mass surveillance by technology companies.

Although the effective applicability of the European Regulation is still 24 months away from the publication in the Official Journal of the EU, it is worth noting that there are at least two positive aspects to this development.[16]

The first is the use of foresight as a method of regulation. Foresight is a practice that goes beyond mere forecasting, involving an active projection into the future by analysing trends, identifying possible evolutions, and assessing long-term implications. The use of foresight techniques is considered to be essential in the context of new technologies, particularly in relation to artificial intelligence systems. These techniques enable the legislator to anticipate significant changes, to adapt to emerging challenges and to regulate them effectively according to their concrete evolution, thereby also facilitating active participation in the definition of the evolutionary path.[17]

Furthermore, it should be noted that the AI Act has been crafted to incentivise Member States to evaluate expeditiously the potential implementation of the Regulation, a process facilitated by the use of foresight tools. In the days following its approval by the European Parliament, Member States began to contemplate the modalities of the future application of the Regulation, considering pre-existing national legislation on issues that are now also relevant to AI, such as the protection of privacy.[18]

For example, on 25 March 2024 the Italian Data Protection Authority submitted a report to Parliament and the Government regarding the Authority for AI. In the report, it recommended a joint effort to establish regulations that necessarily take into account the close relationship between artificial intelligence and data protection, and it proposed itself as the national authority competent to perform the task of supervisory authority for AI, with the attribution of the functions referred to in art. 70 of the Regulation, without prejudice, of course, to the powers of the Government regarding the general promotion and secondary regulation of the matter. In this regard, the Data Protection Authority declares that it “already possesses the requirements of competence and, at the same time, independence necessary to ensure an implementation of the Act consistent with the objective of guaranteeing a high level of protection of fundamental rights in the use of the A.I., as set out in Article 1, paragraph 1”.[19]

III. Further AI policies: development incentives and state regulatory frameworks

Prior to the approval of the AI Act, the European Union had already launched several initiatives aimed at the development of artificial intelligence, including by incentivising educational tools on the use of AI, an aspect that is also relevant for curbing some of the risks now envisaged and regulated in the AI Act. For instance, to help Member States improve the provision of specialised AI training, at the end of 2020 the European Commission awarded grants amounting to €6.5 million to four networks of universities, Small and Medium-sized Enterprises (SMEs), and centres of excellence for running advanced degree programmes in AI. The goal is to provide high-quality academic programmes that focus on AI applications in government, healthcare, and human-centred AI. This funding will also facilitate collaboration amongst the selected networks.

In addition, the European Parliament constantly monitors the implementation of the EU's Horizon programme, which funds research on artificial intelligence and other areas. It also oversees the 2030 policy programme “Path to the Digital Decade”, which aims for 75 per cent of EU firms to use cloud, AI, and big data by 2030. By doing so, the European Union maintains its active role in establishing policies within sectors significantly impacted by generative AI, including the creative economy, education, health, and a variety of industrial, social, and cultural fields.[20]

In pursuit of the same goal of digital training, the "Coordinated Plan on AI" was published as early as April 2021 to implement digital education initiatives from 2021 to 2027. The plan emphasises the ethical dimensions of the use of AI and data in teaching and learning. The aim is to help educators achieve their goals and to promote research and innovation in this field, so that vocational training courses provide opportunities for students and teaching staff to participate in internships in the digital sector. Furthermore, the plan supports the creation and delivery of specialised educational programmes, modules, and short-term training courses to provide professionals from diverse sectors with in-depth expertise in digital technologies,[21] and it seeks to incorporate doctoral programmes and modules focused on AI into degree programmes that are not necessarily linked to Information and Communication Technologies (ICT).

Moreover, the strong focus on training in the use and applications of AI is connected to one of the main feared risks, namely the access to and use of a wide range of personal data whose protection, in the face of new artificial intelligence systems, cannot yet be considered assured. Concerns have been raised regarding privacy and the security of data storage: data must be handled and used appropriately to avoid any breach that could compromise sensitive information.[22]

In response to this issue, the Italian Data Protection Authority ordered a provisional restriction on the processing of Italian users' data by OpenAI, the company behind the ChatGPT platform. Access was later restored after OpenAI implemented corrective actions, including an age verification system, a communication campaign to inform users, and options to opt out of data usage.[23]

This incident brings attention to the challenges of regulating privacy in the field of AI and raises questions about the adequacy of current legal instruments. While the General Data Protection Regulation (GDPR) in Europe is a step towards addressing these challenges, there may be a need for further measures to keep up with the rapid technological advancements.

Although the process of regulating AI is taking place predominantly at the European level, emphasis should also be placed on the attention given at the international level by the G7 leaders who, on 30 October 2023, endorsed the Global Guiding Principles for Artificial Intelligence and a voluntary code of conduct for AI developers. The Hiroshima AI Process is a global policy framework comprising four key pillars: i) the analysis of the risks, challenges and priority opportunities of generative AI; ii) the definition of the international guiding principles of the Hiroshima Process, valid for all actors in the AI ecosystem; iii) the definition of the Hiroshima Process international code of conduct for organisations developing advanced AI systems; and iv) the strengthening of project-based cooperation to support the development of responsible AI tools and best practices.[24] Through the Hiroshima AI Process, the leaders aim to create an open and enabling environment in which safe, secure, and trustworthy AI systems are designed, developed, deployed, and used to maximise the benefits of the technology while mitigating its risks. The goal is to achieve digital inclusion worldwide, including in developing and emerging economies, and to close digital divides.

The objectives set at international and European level are already reflected in some regulations – or regulatory initiatives – set at state level.

In fact, the AI Index analysis of legislative registers from 127 countries shows that the number of bills containing the term “artificial intelligence” that were passed into law grew from one in 2016 to 37 in 2022. An analysis of parliamentary documents on AI in 81 countries also shows that mentions of AI in global legislative proceedings have increased nearly 6.5-fold since 2016.[25]

Comprehensive legislation to regulate artificial intelligence has not yet been introduced in the United States. However, the government has launched a few federal initiatives to address the matter. One such initiative is a proposal for a charter of rights for artificial intelligence. The charter aims to provide guidance for the development, use, and implementation of automated systems, with the goal of safeguarding the rights of citizens in the United States.[26]

Another initiative is the “AI Risk Management Framework”, a tool developed by the National Institute of Standards and Technology to assist practitioners in managing the risks related to AI.[27] A few US states have also acted by introducing new laws in 2022: in Illinois, for instance, regulations have been implemented governing the use of AI in candidate selection. Moreover, task forces have been established at the state level to investigate the impact of artificial intelligence. Meanwhile, on 30 October 2023, US President Joe Biden signed an executive order (No 14110) to guide the development of artificial intelligence within the United States. The order sets forth new guidelines to ensure the safety and security of AI, while also safeguarding the privacy of Americans, promoting fairness and civil rights, protecting consumers and workers, encouraging innovation and competition, and strengthening international leadership.[28]

It is worth noting that other countries have also taken steps towards implementing regulations for artificial intelligence. However, it is important to acknowledge that these measures mainly consist of guidelines, draft laws that are still under discussion, and public consultation processes aimed at informing the government's attention on appropriate regulatory and policy responses.

For example, in March 2023 the White Paper “A Pro-Innovation Approach to AI Regulation” was approved in the United Kingdom. This guide is designed to strengthen the UK's position as a global leader in artificial intelligence, harness AI's ability to drive growth and prosperity, and increase public trust in these technologies.[29] In Brazil, the “Projeto de lei n° 2338 de 2023 Dispõe sobre o uso da Inteligência Artificial” has been presented,[30] while Australia launched a consultation on “Safe and Responsible AI in Australia” in June 2023 to encourage public participation and fully reap the benefits of AI.[31]

IV. Proposals for legislation and other measures in Italy

Several bills on AI have been presented in both Houses of the Italian Parliament. In October 2023, for example, Bill A.S. 917, on measures for the transparency of AI-generated content, was presented in the Senate. It aims to introduce an obligation for those responsible for the publication and dissemination of AI-generated content, in any transmission medium, to ensure that users can immediately recognise such content, in the manner defined by the Italian Communications Authority (AGCOM) in its own regulation.[32]

A second Bill presented in the Senate, A.S. 908, provides for the introduction of an annual law for the digital sector, adopted in order to remove regulatory obstacles to the digital transition, to promote the development of digital networks and services, and to ensure fair and sustainable development in the adoption of digital technologies and services and in the application of AI-based tools. In addition, it aims to protect pluralism and the guarantees and fundamental rights of citizens online, as well as individuals' sovereignty over their personal data and the rights of workers in transactions mediated by digital platforms.[33]

The Chamber of Deputies also carries out numerous activities related to the regulation of artificial intelligence, in the areas of justice, defence, culture, labour, productive activities and social affairs. To give one of the most recent examples, Bill A.C. 1514 contains provisions to ensure transparency in the publication and dissemination of content produced through artificial intelligence systems. Once again, the central concern is the identification of AI-produced content which, as such, must be clearly recognisable through a visible and easily understandable label and a notice to users indicating that the content has been created, in whole or in part, by an artificial intelligence system.[34]

In Italy, the measures that have been introduced are not limited to legislation alone. Several fact-finding surveys have been initiated to evaluate the impact of AI on various sectors, particularly those where these new technologies have a more disruptive effect.

On 23 November 2023, the 2nd Committee (Justice and Home Affairs) of the Senate resolved to conduct a fact-finding inquiry on the impact of artificial intelligence in the justice sector, exploring the issues of predictive justice and evidence formation.[35] Also in the Senate, the Committee on Environment and Public Works is conducting an inquiry into the use of digital technologies and artificial intelligence in mobility infrastructure, including a series of hearings with industry representatives, the most recent of which was held on 21 March 2024.[36]

An articulated and very interesting analysis, moreover, is the one conducted at the Chamber of Deputies by the 7th Committee as part of the fact-finding inquiry on the impact of digitisation and technological innovation on the culture, education, university, basic research, sport and publishing sectors. Its hearings – which began on 12 April 2023 and are still ongoing – have seen sector operators, academics and experts appear in turn, initiating a shared process of identifying, delineating and defining the issues, problems and interests at stake.

Already in 2021, moreover, the Italian Government had approved the document entitled “Programma Strategico. Intelligenza Artificiale 2022-2024”, with the objectives of enhancing skills and attracting talent to develop an artificial intelligence ecosystem within Italy.[37] This strategic plan, co-authored by three Ministries, sets out the 24 policies planned for the three-year period, including increased funding for advanced research in AI and encouragement of the adoption of AI and its applications in both the public administration and productive sectors. These policies aim to enhance Italy's global competitiveness in artificial intelligence by strengthening research and promoting technology transfer. They are also designed to reinforce Italy's AI research ecosystem by fostering collaboration between academia, research, industry, public bodies, and society.

It is therefore evident that there is a focus on AI issues at the Italian national level as well, presented in any case in a perspective of dialogue and comparison with other Member States, so much so that most of these internal initiatives were the subject of joint analysis during the Interparliamentary Conference on artificial intelligence and its future impacts, held in Brussels on 28 and 29 January 2024.[38]

V.  The human-centric approach

The analysis of AI regulatory instruments suggests a thoughtful and ethical approach aimed at promoting responsible and equitable use of technology. The objective is to create an informed and prepared society for AI. The EU, in particular, aims to establish a reliable and responsible regulatory framework for AI that enhances people's lives while preserving societal values. In the rapidly evolving landscape of AI, it is important to consider the establishment of appropriate regulations to guide its adoption and implementation.

The recently approved Artificial Intelligence Act is a significant step towards addressing the need to limit the potential abuse of AI. It also acknowledges the importance of striking a balance between regulation and innovation, which is critical to ensuring the responsible and beneficial application of AI.

The intention is to introduce a system in which AI is a human-centric technology that “should serve as a tool for people, with the ultimate aim of increasing human well-being”, as the European Parliament specifies in its considerations of 13 March 2024. Furthermore, art. 1 of the Regulation specifies that its aim is to enhance the operation of the internal market and encourage the adoption of human-centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety and fundamental rights, including non-discrimination, against the detrimental effects of artificial intelligence systems in the EU.

To build greater trust in the positive impact that AI can have on society,[39] it is essential that AI and its regulatory framework are developed in accordance with an anthropocentric vision, and therefore with the Union's values as enshrined in art. 2 of the Treaty on European Union (TEU), the fundamental rights and freedoms enshrined in the Treaties, and the Charter, as stated in art. 6 TEU.

With this aim, the EU is actively promoting responsible development and AI education at the national and international level: through collaborative networks, ethical frameworks, and grants, the EU invests in AI's future so that it aligns with human values and societal needs. The AI Act, moreover, has put at the centre of the discussion the challenges and new reflections surrounding the implementation of ethical and human-oriented artificial intelligence in the European Union. It highlights the importance of concepts such as human dignity and non-discrimination in this context, where non-discrimination extends beyond equal access to technologies and encompasses the absence of “algorithmic bias”.[40] AI algorithms can be influenced by biased or poor-quality training data, leading to discriminatory or unfair outcomes.[41] Empirical evidence indicates that artificial intelligence applications may result in discrimination against legally protected groups. This raises complex questions for EU law, as the existing categories of EU anti-discrimination law are not easily applicable to algorithmic decision-making.[42]

In the same way, human dignity, a concept already difficult to define constitutionally, takes on a new meaning here, as it is placed in relation to the autonomy of individuals and their ability to choose whether and when to interact with AI systems.

The responsible development of AI is being advocated globally by the EU Commission's Service for Foreign Policy Instruments (FPI) and the Directorate General for Communications Networks, Content and Technology (DG CONNECT). In partnership with the European External Action Service (EEAS), an international project with a focus on humanity has been initiated. The goal of this project is to create an ethical and credible system that promotes the growth of AI in line with ethical principles and values. It promotes the essential principles of civil coexistence – equality, security, non-interference in democratic processes, respect for privacy, and human dignity – and these values gain new definitions and rules in the context of human interaction with AI.

In conclusion, the European Union is committed to promoting the ethical and responsible use of AI through effective regulations and investments in digital education. It is important to approach the implementation of AI with caution and involve a variety of stakeholders to ensure a balance between innovation and societal principles. By adopting this approach, a harmonious balance between technological advancement and the protection of human well-being can be established. While AI presents exciting technological opportunities, it is important to remember that humans should be the focus of all regulation. It is crucial to acknowledge that humans are not only the recipients of AI progress but also the architects of it. To ensure that AI contributes to the well-being of society, a balance needs to be struck that requires human intelligence, shrewdness, and logic.

--------------------
European Papers, Vol. 9, 2024, No 1, European Forum, Insight of 20 May 2024, pp. 105-116
ISSN 2499-8249 - doi: 10.15166/2499-8249/745

* Full Professor of Public Law, Università degli Studi Niccolò Cusano, anna.pirozzoli@unicusano.it.

[1] J Lovelock, Novacene: The Coming Age of Hyperintelligence (Allen Lane 2019).

[2] J McCarthy, ML Minsky, N Rochester and CE Shannon, 'A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence' (1955) Dartmouth College, Hanover, New Hampshire jmc.stanford.edu.

[3] A D’Aloia, ‘Il diritto verso “il mondo nuovo”. Le sfide dell’Intelligenza Artificiale’ (2019) BioLaw Journal 3.

[4] The statement “Mitigating the risk of extinction by AI should be a global priority along with other societal-scale risks such as pandemics and nuclear war” is available at www.safe.ai.

[5] Artificial intelligence holds the potential to make a significant contribution towards global development, human well-being, and environmental protection (SDGs). This is what UN Secretary-General António Guterres also advocated on 27 October 2023 when he introduced the new ‘AI Advisory Body on Risks, Opportunities, and International Governance of Artificial Intelligence’, available at www.un.org. In addition, in December 2023 the UN Secretary-General's AI Advisory Body launched its ‘Interim Report: Governing AI for Humanity’, the text of which is available at www.un.org.

[6] The University of Pennsylvania and OpenAI conducted an analysis of the potential impact of AI models, including GPT, on the labour market. The report suggests that the introduction of AI could impact approximately 80 per cent of the US workforce, replacing at least 10 per cent of their work responsibilities with automation. Additionally, almost 19 per cent of employees may experience a reduction of 50 per cent or more in their job duties. The impact of AI is expected to affect workers of all income levels, with higher-paid professions being particularly at risk. DH Autor, ‘Why Are There Still So Many Jobs? The History and Future of Workplace Automation’ (2015) Journal of Economic Perspectives 3; T Eloundou, S Manning, P Mishkin and others, ‘GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models’ (23 March 2023) www.arxiv.org.

[7] T Groppi, 'Alle frontiere dello Stato costituzionale: innovazione tecnologica e intelligenza artificiale' (2020) Consulta online 677; A Goyal and R Aneja, ‘Artificial Intelligence and Income Inequality: Do Technological Changes and Worker's Position Matter?’ (2020) Journal of Public Affairs; E Stradella, ‘Stereotipi e discriminazioni: dall’intelligenza umana all’intelligenza artificiale’ (2020) Consulta online 6; D Rotman, ‘How to Solve AI’s Inequality Problem’ (April 2022) MIT Technology Review.

[8] AI is notoriously energy-intensive, necessitating powerful computing infrastructures. It has been calculated, for example, that the training process of a conventional neural network for AI used in natural language comprehension and processing generates carbon dioxide emissions equivalent to those produced by five cars during their lifetime, amounting to approximately 284 tonnes of carbon dioxide. E Strubell, A Ganesh and A McCallum, ‘Energy and Policy Considerations for Deep Learning in NLP’ in A Korhonen, D Traum and L Màrquez (eds), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Association for Computational Linguistics 2019) 3645; M Giles, ‘Is AI the Next Big Climate-Change Threat? We Haven’t a Clue’ (29 July 2019) MIT Technology Review www.technologyreview.com.

[9] L Antonini and A Sciarrone Alibrandi, ‘Alla ricerca di un habeas corpus per l’Intelligenza Artificiale’ in C Caporale and L Palazzani (eds), Intelligenza Artificiale: distingue frequenter. Uno sguardo interdisciplinare (Edizioni CNR 2023) 105.

[10] G Resta, ‘Cosa c’è di “europeo” nella Proposta di Regolamento UE sull’intelligenza artificiale?’ (2022) Diritto dell’Informazione e dell’Informatica 323.

[11] NT Nikolinakos, EU Policy and Legal Framework for Artificial Intelligence, Robotics and Related Technologies: The AI Act (Springer 2023) 412.

[12] The Artificial Intelligence Act, after obtaining the approval of the IMCO and LIBE committees of the European Parliament on 13 May 2023, on 14 June 2023 obtained the approval of the Amendments (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)) of the European Parliament of 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain legislative acts of the Union. With the Trilogue of 9 December 2023, the political agreement between the Member States was reached, and on 13 March 2024 the European Parliament approved the text in plenary with 523 votes in favour, 46 against and 49 abstentions.

[13] J Van der Heijden, ‘Risk as an Approach to Regulatory Governance: An Evidence Synthesis and Research Agenda’ (2021) Sage Open 11.

[14] For a more general discussion of the concept of risk, see M Florin and MT Bürkler, ‘Introduction to the IRGC Risk Governance Framework’ (2017) Losanna EPFL; T Aven and O Renn, ‘On Risk Defined as an Event where the Outcome Is Uncertain’ (2009) Journal of Risk Research 1.

[15] G González Fuster and MN Peeters, ‘Person Identification, Human Rights and Ethical Principles: Rethinking Biometrics in the Era of Artificial Intelligence’ (EPRS European Parliamentary Research Service 2021) www.europarl.europa.eu; G Sartor and F Lagioia, ‘Le decisioni algoritmiche tra etica e diritto’ in U Ruffolo (ed), Intelligenza artificiale. Il diritto, i diritti, l’etica (Giuffré 2020) 63.

[16] It will enter into force 20 days after its publication in the Official Journal of the EU and will apply 24 months after entry into force, except for: the prohibitions on banned AI practices, which will apply six months after entry into force; codes of conduct (nine months); rules for general-purpose AI systems, including governance (12 months); and obligations for high-risk systems (36 months).

[17] G Natale and AM Lazzè, ‘AI Act, il Regolamento europeo sull’intelligenza artificiale: punti di forza e punti di debolezza’ (2024) Rassegna Avvocatura dello Stato 13.

[18] M Cesluk-Grajewski, ‘Artificial Intelligence [What Think Tanks are Thinking]’ (2023) EPRS: European Parliamentary Research Service policycommons.net.

[19] The document entitled “Segnalazione al Parlamento e al Governo sull'Autorità per l'i.a.” is available on the official website of the Italian Data Protection Authority at the link www.garanteprivacy.it. For an analysis of how effectively the GDPR addresses the challenges posed by AI to the fundamental rights of privacy, protection of personal data and non-discrimination, see F Ufert, ‘AI Regulation Through the Lens of Fundamental Rights: How Well Does the GDPR Address the Challenges Posed by AI?’ (2020) European Papers europeanpapers.eu 1087.

[20] AG Higuera, ‘What if Generative Artificial Intelligence Became Conscious?’ (23 October 2023) epthinktank.eu.

[21] L Chen, P Chen and Z Lin, ‘Artificial Intelligence in Education: A Review’ (2020) IEEE Access 75264.

[22] F Pizzetti, ‘La protezione dei dati personali e la sfida dell’intelligenza artificiale’ in F Pizzetti (ed), Intelligenza artificiale, protezione dei dati personali e regolazione (Giappichelli 2018) 34; L Rinaldi, ‘Intelligenza artificiale, diritti e doveri nella Costituzione italiana’ (2022) DPCE online www.dpceonline.it.

[23] TE Frosini, ‘L’orizzonte giuridico dell’intelligenza artificiale’ (2022) BioLaw Journal www.biodiritto.org.

[24] G7 Leaders’ Statement on the Hiroshima AI Process of 30 October 2023 is available at www.digital-strategy.ec.europa.eu.

[25] N Maslej, L Fattorini, E Brynjolfsson and others, ‘The AI Index 2023 Annual Report’ (April 2023) AI Index Steering Committee, Institute for Human-Centered AI, Stanford University aiindex.stanford.edu 145.

[26] The White House, Blueprint for an AI Bill of Rights. Making Automated Systems Work for the American People (October 2022) www.whitehouse.gov.

[27] NIST, U.S. Department of Commerce, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (January 2023) nvlpubs.nist.gov.

[28] The White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (30 October 2023) www.whitehouse.gov.

[29] Secretary of State for Science, Innovation and Technology, A Pro-Innovation Approach to AI Regulation (29 March 2023) assets.publishing.service.gov.uk.

[30] Brazilian Senate, Projeto de Lei n° 2338 of 2023 www25.senado.leg.br.

[31] Australian Government, Department of Industry, Science and Resources, Safe and Responsible AI in Australia: Discussion Paper (2023) storage.googleapis.com.

[32] The text of the bill presented to the Senate on 19 October 2023 is available at www.senato.it.

[33] The text of the bill presented to the Senate on 12 October 2023 is available at www.senato.it.

[34] The text of the bill presented to the Chamber of Deputies on 25 October 2023 is available at documenti.camera.it.

[35] On predictive justice and the use of algorithms in judicial contexts, see L Di Majo, 'Le incognite sull’utilizzo dell’Intelligenza Artificiale nel processo costituzionale in via incidentale' (2023) La Rivista "Gruppo di Pisa" 48.

[36] Details of the ongoing hearings are available at the following link www.senato.it.

[37] Italian Government (Ministry of University and Research, Ministry of Economic Development and Minister for Technological Innovation and Digital Transition), Artificial Intelligence Strategic Programme 2022-2024 (2021) assets.innovazione.gov.it.

[38] The Dossier prepared by the Offices of the Senate of the Republic and the Chamber of Deputies with the documentation collected for the Interparliamentary Conference can be consulted at documenti.camera.it.

[39] On the possibility of a distorted relationship between man and machine see E Hickman and M Petrin, ‘Trustworthy AI and Corporate Governance: The EU’s Ethics Guidelines for Trustworthy Artificial Intelligence from a Company Law Perspective’ (2021) European Business Organization Law Review 593, where the authors state: “Beyond accountability concerns, it is arguable that even to the extent humans have the opportunity to input substantive rationality, there will be a tendency not to, because of a potential lack of willingness to contradict the machine. People tend to ‘overtrust’ decisions made by machines”.

[40] A Poggi, F Fabrizzi and F Savastano, Social Network, formazione del consenso, intelligenza artificiale: Itinerario di un percorso di ricerca di Beniamino Caravita (Sapienza Università Editrice 2023) 111; G Vilone and L Longo, ‘Explainable Artificial Intelligence: A Systematic Review’ (2020) arxiv.org.

[41] The use of facial and voice recognition systems has raised concerns about gender and racial bias. Google’s advertising system, for example, was found to display advertisements for high-paying jobs predominantly to male users, while female users were shown such advertisements far less frequently, an example of gender-based discrimination embedded in these systems.

[42] P Hacker, ‘Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies Against Algorithmic Discrimination under EU Law’ (2018) CMLRev 1143.
