- The Artificially Intelligent Enterprise
- Special Edition: EU AI Act Provisional Agreement
A deep dive on the first regulation on artificial intelligence and what it means for the AI Industry
I like to keep a weekly newsletter cadence, but this news is noteworthy enough that I wanted to share my analysis right away. On Friday, December 8th, 2023, the European Parliament and Council negotiators reached a provisional agreement on the Artificial Intelligence Act. I won’t varnish my opinion on the topic: I am typically skeptical when a regulatory body tries to pass legislation about technology. Too often, such legislation fails to govern technology effectively because its guidelines are overly broad and hard to enforce, and it cannot adapt to the fluid nature of developing technology.
Summary of the EU Artificial Intelligence Act
The proposed legislation is the Artificial Intelligence Act, which the European Commission laid out. The main goals of the legislation are to:
Promote the development and uptake of trustworthy AI that respects EU values and rights.
Ensure AI systems placed on the EU market are safe and respect existing laws on fundamental rights and safety requirements.
Harmonize rules around AI and facilitate cross-border AI development and uptake in the EU.
The legislation categorizes AI systems as high-risk or non-high-risk. High-risk systems pose significant risks to health, safety, or rights. These systems face stricter requirements around transparency, risk management, data quality, documentation, and human oversight.
Key requirements in the legislation include:
Mandatory risk management systems for high-risk AI systems
Logging capabilities to ensure traceability and technical documentation
Transparency for users when interacting with an AI system
Human oversight for high-risk systems to oversee technology and minimize risks
Requirements around high-quality data sets that are unbiased
The legislation also proposes the following:
An EU regulatory sandbox to facilitate innovation in AI
Codes of conduct to promote voluntary measures for non-high-risk AI systems
An EU database for registration of high-risk AI systems
Governance through a European Artificial Intelligence Board & national authorities
The EU's AI Act has been a topic of concern for open source efforts, with experts warning that the written legislation could impose onerous requirements on developing open AI systems (open source, not OpenAI). There are fears that the Act might lead to a chilling effect on open source AI contributors, potentially concentrating power over the future of AI in large technology companies.
While the Act contains carve-outs for some categories of open source AI, such as those exclusively used for research and with controls to prevent misuse, experts argue that it could be difficult to prevent these projects from being integrated into commercial systems, where malicious actors might abuse them.
Some have emphasized that open source developers should not be subjected to the same regulatory burden as those developing commercial software. The concerns revolve around the potential impact of the AI Act on open-source AI contributors and the balance between innovation and accountability in the AI landscape.
Guardrails for general artificial intelligence systems
It has been agreed that general-purpose AI (GPAI) systems and their models must follow transparency requirements proposed by Parliament. These requirements include creating technical documentation, complying with EU copyright law, and releasing detailed summaries about the content used for training.
For GPAI models with high impact and systemic risk, Parliament negotiators have managed to secure more strict obligations. If these models meet certain criteria, they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the Commission, guarantee cybersecurity, and report on their energy efficiency.
Members of the European Parliament (MEPs) have also emphasized that, until harmonized EU standards are published, GPAIs with systemic risk may comply with the regulation by relying on codes of practice.
Governments Aren’t Typically Effective at Creating Technology Legislation
I am not against technology regulation, but I am skeptical of how well governments can regulate complex and evolving technology. Here are a couple of examples of technology legislation that has fallen flat.
The CAN-SPAM Act was enacted in 2003 by the U.S. Congress to regulate unsolicited commercial email. While the law made most spam illegal and less attractive to spammers, it has been criticized as largely ineffective against malicious spammers. One study found that the Act had no observable impact on the volume of spam sent and did not significantly improve spammer compliance with its provisions.
However, the law I am even more critical of is the General Data Protection Regulation (GDPR), the law that makes us constantly click through cookie warnings to enter websites. This comprehensive EU law, in effect since May 25, 2018, aims to protect individuals' privacy rights by regulating the use of personal data and is considered the world's toughest privacy and security law. Its extensive and far-reaching requirements, however, have posed significant compliance challenges for businesses, particularly small and medium-sized enterprises (SMEs), and some argue the regulation has burdened them to the point of hindering their operations.
The EU AI Act TL;DR
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It’s a big deal because it sets the bar for AI legislation in other jurisdictions. The EU is also one of the most important markets for IT, and its rules will likely not be in lockstep with the other government regulations that are sure to come.
Overview of EU AI Act
The Act aims to regulate AI to ensure safety, transparency, traceability, non-discrimination, and environmental friendliness.
The Act classifies AI systems into different risk levels, with more or less regulation based on the risk.
The European Parliament has prioritized the safety and ethical use of AI.
Negotiations are ongoing to finalize the legislation, with the aim of reaching an agreement by the end of the year.
Banned applications
Recognizing the potential threat to citizens’ rights and democracy posed by specific applications of AI, the co-legislators agreed to prohibit:
biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
emotion recognition in the workplace and educational institutions;
social scoring based on social behavior or personal characteristics;
AI systems that manipulate human behavior to circumvent people's free will;
AI used to exploit people's vulnerabilities (due to age, disability, social or economic situation).
Law enforcement exemptions
Negotiators agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorization and limited to strictly defined lists of crimes. “Post-remote” RBI would be used strictly for the targeted search of a person convicted of or suspected of committing a serious crime.
“Real-time” RBI would be subject to strict conditions, and its use would be limited in time and location to:
targeted searches of victims (abduction, trafficking, sexual exploitation),
prevention of a specific and present terrorist threat or
the localization or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, environmental crime).
Debates and Negotiations:
The Act has been the subject of ongoing negotiations, with stakeholders including Hugging Face, Creative Commons, GitHub, Open Future, LAION, and EleutherAI advocating for a more transparent framework for open source and open science in the EU AI Act.
Provisions for Open Source:
The Act may largely exempt research activities and the development of free and open source AI components from compliance with its rules.
However, there are concerns that the Act would impose stringent requirements on open-source AI models, potentially undermining innovation in this space.
Impact on Businesses Developing AI Technology:
The Act's potential impact on businesses developing AI technology, especially in the open source space, is a subject of ongoing debate and analysis.
The Act's provisions for open source may exempt free and open-source AI components from strict regulation unless they are deemed high-risk or used for already banned purposes.
This exemption has been viewed as a potential victory for certain companies operating in the open-source space. However, there are concerns that the Act would impose the same stringent requirements on open-source foundation models as on closed-source models, potentially hindering innovation in open-source AI models.