
Mastering the EU AI Act: audit scheme for a risk-free AI implementation


[Image: a safe, with a human hand and a cyborg hand touching in front of it]

FOMO SWEETS & MASTER RÖHRICH

An audit scheme for the conformity of use cases with the EU AI Act sounds more like a dissertation topic than the headline or content of a blog article worth reading. But wait, before you push this into the world of superfluous academia: if you and/or your team/company are using or planning to use AI, you should read on. I do it differently than most YouTubers, who exploit our deep-seated fear of missing out - the well-known FOMO - to get you to watch their videos to the bitter end. After all, YouTube rewards not only the thumbs up in the virtual sky but, more importantly, the time you spend on the platform. As a FOMO bonus, I'm giving you the prospect of being one of the early adopters: one of those who, after reading, know that there is a new set of rules alongside the GDPR - one you don't need to keep under your pillow, but one you should do your best not to ignore once it takes effect. And with the audit scheme, you or your lawyers can quickly make a rule-of-thumb assessment. Otherwise, what will happen to you is what the old-timers, people of my biblical age, know from the old Werner movies. There was this gnarled Master Röhrich, the square-jawed boss of the witty protagonist, who always announced ominously in his unmistakable Hamburg slang:


"This is going to be expensive!"





TARGET GROUP


The outcome of the trilogue is still fresh. The smoke from the Brussels conclave is still floating through the air. The "Habemus Papam" for the AI world is (as of 2024-02-22 - 18:59) only a few months old.


You might ask yourself - and rightly so - who should be dealing with this right now?


I think, apart from law students - I was once one myself and was grateful for any scheme that helped me solve a case - it is people like me who, as software architects (and now also as lawyers again), have to check whether planned or operational AI systems will be legally compliant once the smoke has cleared and the EU AI Act is fully effective. Because, lo and behold, in addition to an AI Board and an AI Office, the law also comes with a hefty club of fines that seamlessly continues the tradition of the GDPR.

But it's not just the auditors and consultants who must fight through the jungle of the EU AI Act. The decision-makers, the movers, shakers, and strategists should also be aware of what is coming from Brussels. Companies that incorporate the EU AI Act into their day-to-day AI business early could not only secure a (competitive) advantage but also thrive in an environment strengthened by legal certainty.

But what is this EU AI Act anyway?


Ok, let's clarify a few terms first.


THE EU AI ACT


[Image: a butterfly and a mountain; the different colorings represent the risk classes under the EU AI Act]


DIRECT EFFECT


From a legal perspective, the EU AI Act is a planned EU regulation. Under Art. 288(2) TFEU, a regulation has general application, is binding in its entirety, and is directly applicable in all Member States. Therefore, unlike a directive, no implementing act is required, and there is no national leeway in implementation. The AI Act will thus apply equally in Germany, France, etc., as soon as it comes into force; there are no differences due to national transposition.


(HIGHER) OBJECTIVES


The main objective of the EU AI Act is to establish uniform, harmonized rules for the development, marketing, and use of AI systems in the EU: on the one hand, to promote the dissemination of AI technologies, and on the other, to ensure a high level of protection for public interests, health, safety, and the fundamental rights of citizens. In addition, the regulation is intended to support innovation and establish the EU as a global leader in developing safe, trustworthy, and ethical AI. Sounds like an advertising promise? Even in the world of law, where we often rightly question new regulations, we encounter critical voices here. If you want to delve deeper, I warmly recommend the brilliant article by Dr. David Bomhard and Dr. Jonas Siglmüller in RDI 2024, 25, entitled "AI Act - the trilogue result." It is a must-read for anyone who wants to take a serious look at the topic - it can also be found in the references.


For those who are more (or additionally) interested in the arguments of the various disciplines, and above all the practitioners, the references also include a video recording from the German Bundestag: "Experts assess EU regulation on AI differently."


HORIZONTAL AND RISK-BASED


The European Commission is pursuing a two-pillar strategy for AI: promoting and supporting AI development while simultaneously managing its risks. In creating a legal framework, it pursued a horizontal approach focused on clear rules and protecting users without hindering innovation. This risk-based approach categorizes AI applications according to their level of risk: specific applications such as social scoring are banned outright, while others that pose a high risk require a catalog of additional measures. Then there are the general-purpose systems, which include those with systemic risk as a subset.


THE EXAMINATION SCHEME



Hint: Most of the rules in the spotlight are primarily relevant to the financial industry. This is not (only) because I want to court my bankers over my overdrawn overdraft, but because most of my current business contacts are in the financial sector.


RECORD


SCOPE OF PROTECTION


TEMPORAL SCOPE OF PROTECTION

(Point can be deleted after entry into force, now: 2024-02-22 - 21:09)


Basic principle: The EU AI Act enters into force on the 20th day after its publication in the Official Journal of the EU and, as a rule, applies 24 months later.


Rules:


  • Art. 85 para. 2 EU AI Act: General implementation period of 24 months after entry into force.

  • Art. 85 para. 3 EU AI Act: Certain provisions become applicable at different times:


    • Prohibitions in Titles I and II (Art. 5) apply six months after entry into force.

    • Codes of conduct should be ready nine months after the EU AI Act comes into force.

    • Sanctions apply after 12 months.

    • GPAI models are given 12 months, or 24 months if they are already on the market.

    • Obligations for high-risk AI systems within the meaning of Art. 6 para. 1 (i.e., AI systems intended to be used as a safety component of a product, or AI systems listed in Annex II) apply after 36 months.

(A small date sketch at the end of this section makes these offsets concrete.)


Special features:


Member States must designate at least one notifying authority and one market surveillance authority and inform the Commission of the identity of the competent authorities.


Each member state is expected to set up at least one regulatory sandbox within 24 months of the EU AI Act coming into force (Art. 53)
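If you like to see deadlines as dates rather than month counts, here is a minimal Python sketch - my own illustration, not part of the Act - that derives the milestones from a placeholder entry-into-force date. Swap in the real date once it is published in the Official Journal:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping to the last valid day."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

# Hypothetical entry-into-force date (placeholder: in reality, the 20th day
# after publication in the EU Official Journal).
entry_into_force = date(2024, 8, 1)

# Month offsets as listed above (Art. 85 EU AI Act, trilogue version).
milestones = {
    "Prohibitions (Art. 5)": 6,
    "Codes of conduct ready": 9,
    "Sanctions": 12,
    "General application (Art. 85 para. 2)": 24,
    "High-risk systems under Art. 6 para. 1": 36,
}

for name, months in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{add_months(entry_into_force, months)}: {name}")
```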


MATERIAL SCOPE OF PROTECTION


Determine whether the technology used qualifies as an AI system as defined in the EU AI Act.


Rules: Art. 3 para. 1 EU AI Act as amended on 2024-02-06 (https://www.euaiact.com/article/3, last visited 2024-02-21 11:19)

"(1) 'AI system' is a - machine-based system - designed to operate with varying levels of autonomy - and that may exhibit adaptiveness after deployment - and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as - predictions, - content, - recommendations, - or decisions - that can influence physical or virtual environments;"
  • Output and input: The AI system must be able to generate outputs based on the input it receives. This means that the system's output is derived directly from the data fed into it.

  • Adaptivity: The system may exhibit adaptiveness after deployment, i.e., it can make adjustments after implementation and develop and improve over time. Note that the definition phrases this as optional ("may exhibit"), so adaptiveness is an indicator rather than a strict requirement.

  • Influence on environments: An AI system must potentially influence physical or virtual environments. However, this criterion is vague, as practically all software has some impact; the decisive factors are the scope and type of influence.

  • Autonomy: The main difference between an AI system and conventional software lies in the degree of independence. AI systems are designed to operate with different degrees of freedom and can act independently of human influence to a certain extent.

  • Recitals and preliminary drafts: The emphasis on autonomy in the definition was already contained in recital 6 of the European Commission's original draft and was included in the definition itself in the parliamentary draft.

  • Independence from human intervention: It is checked whether the system has a certain degree of autonomy from human involvement and the ability to operate without human intervention.

  • Cautious approach: Where there is uncertainty as to whether a system should be classified as an AI system, one should err on the side of caution and classify it as an AI system to ensure that all relevant legal requirements are considered and followed.
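For my fellow software architects: the criteria above can be read as a checklist. The following sketch is a hypothetical illustration of that reading - the field names and the helper function are mine, not the law's - with the cautious approach from the last bullet built in as the default for unclear cases:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Answers to the definitional criteria of Art. 3 para. 1 (hypothetical helper)."""
    machine_based: bool
    generates_output_from_input: bool  # predictions, content, recommendations, decisions
    operates_with_autonomy: bool       # some independence from human intervention
    may_adapt_after_deployment: bool   # optional per the definition ("may exhibit")
    can_influence_environments: bool   # physical or virtual

def is_ai_system(p: SystemProfile, unclear: bool = False) -> bool:
    # Cautious approach: in case of doubt, classify as an AI system.
    if unclear:
        return True
    # Adaptiveness is optional ("may exhibit"), so it is not a hard requirement here.
    return (p.machine_based
            and p.generates_output_from_input
            and p.operates_with_autonomy
            and p.can_influence_environments)
```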

NO EXCEPTION (SHORT CIRCUIT EVALUATION)

Short-circuit evaluation originates from computer science and describes a method for evaluating logical (Boolean) expressions in which evaluation stops as soon as the overall result is already determined.


The regulation excludes the use of AI systems in the context of purely personal, non-professional activities from its scope.


If this is affirmed => the end.
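To make the borrowed programming term concrete, here is a toy Python example (the names and structure are mine): with `and`, evaluation stops as soon as the personal-use exception settles the result, just as the audit scheme ends here when the exception applies:

```python
# The later steps of the audit scheme would be registered here.
REMAINING_CHECKS = []

def purely_personal_use(use_case: dict) -> bool:
    # Exception from the scope: purely personal, non-professional activity.
    return use_case.get("personal_and_non_professional", False)

def audit_continues(use_case: dict) -> bool:
    # Short-circuit evaluation: once `not purely_personal_use(...)` is False,
    # Python never evaluates the (potentially expensive) remaining checks.
    return not purely_personal_use(use_case) and all(
        check(use_case) for check in REMAINING_CHECKS
    )

print(audit_continues({"personal_and_non_professional": True}))  # False -> the end
```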


PERSONAL SCOPE OF PROTECTION


Notes:

When assessing the role of a provider, the intention to bring the AI system onto the market is decisive.


The obligations in the AI Act primarily concern the provider but also the operator, while other parties, such as retailers or end users, are only selectively included.


To be on the safe side, it is advisable to assume the role of provider or operator in order to ensure compliance.


Provider

  • Definition: Natural or legal persons, authorities, institutions, or other bodies that develop AI systems or have them created to bring them to market under their name or brand.

  • Rule: Art. 3 para. 1 AI Act

  • Recital: Recital 6 AI Act-COM; Recital 70a

  • Unique feature: Manufacturer in the sense of product law.


Operator (Deployer)

  • Definition: Natural or legal persons, authorities, institutions, or other bodies that use an AI system under their own authority, unless the use takes place in the course of a purely personal, non-professional activity.


Other parties based in the EU

Importers, distributors, authorized representatives, notified bodies, and end users: These actors are only affected selectively and have specific responsibilities defined in the regulation.

Rules: Various articles of the AI Act, depending on role and responsibility.

International stakeholders


Providers and operators from third countries

Applicability of the Regulation also to providers and operators based in third countries if the output of their systems is used in the EU.

Rule: Art. 2 para. 1 lit. c AI Act

GEOGRAPHICAL SCOPE OF PROTECTION

  • Basic principle: A broad understanding of the marketplace principle - effectively a technology import ban - to prevent providers in less regulated countries from circumventing the AI Act's requirements.

  • Rule: Art. 2 para. 1 AI Act

  • Unique feature: Use of the output in the EU as a decisive criterion, even if the AI systems are neither placed on the market nor operated in the EU.

RISK CLASSIFICATION

FORBIDDEN SYSTEMS (SHORT CIRCUIT EVALUATION)

  • Basic principle: Art. 5 of the EU AI Act prohibits AI applications in special cases - which, at least in Germany, are usually covered by other legal provisions (GDPR, BDSG, Sections 201 et seq. of the German Criminal Code, etc.).

  • Rule: The articles stipulate that specific AI systems are prohibited, e.g., those that use manipulative techniques or exploit personal weaknesses to persuade people to engage in harmful behavior.

  • Examples:

    • Systems that manipulate people or engage in social scoring

    • Systems for remote biometric identification in real-time

    • Systems that are used to oppress or exploit people

  • Unique feature: The exact use cases and the distinction between AI-controlled social media feeds and personalized advertising still need to be clarified. If one of the prohibitions applies, the review is complete, and the system may not be used.


HIGH RISK SYSTEMS (ART. 6 FF.)


If the AI system does not fulfill the criteria for prohibited systems, the next step is to check whether it must be classified as a high-risk system under Art. 6 et seq. EU AI Act. The following points are relevant here.


Basic principles: High-risk AI systems include machines, toys, medical devices, and other products used in the critical areas listed in Annexes II and III - such as critical infrastructure or employment and personnel management - as well as systems that influence risk assessment and pricing in insurance.


What is the difference between Annex II and III?

Annex II refers to AI systems considered a safety component of a product or a stand-alone product that falls under the EU harmonization legislation listed in Annex II and must be subject to third-party conformity assessment. On the other hand, Annex III lists specific areas of application in which AI systems are classified as high-risk, regardless of whether they are connected to products listed in Annex II.

To summarize, Annex II and Annex III provide relevant criteria for classifying AI systems as high risk, with Annex II referring to integration into products and Annex III covering specific areas of application where AI systems are considered high risk regardless of their connection to products.


  • Rules & Use Cases: Art. 6-51 define specific rules for high-risk AI systems that are either covered by particular EU harmonization legislation or are subject to conformity assessment.

    • Article 9: This article states that AI systems performing certain activities listed in Annex III are considered high-risk systems. Activities relevant to the financial sector are

      • Credit scoring (Art. 9 para. 1 lit. a)

      • Fraud prevention (Art. 9 para. 1 lit. b)

      • Investment advice (Art. 9 para. 1 lit. c)

      • Portfolio management (Art. 9 para. 1 lit. d)

      • Trading decisions (Art. 9 para. 1 lit. e)

      • Insurance

    • Annex III: This annex contains a more detailed list of areas where AI systems can be classified as high-risk.

    • Relevant points for the financial sector are

      • Biometric identification (e.g., for customer authentication) (Art. 3 lit. a)

      • Lending (Art. 7 lit. b)

      • Financial markets (Art. 8)

  • Unique feature: AI systems that do not significantly influence decision-making or are only used for preparatory acts do not fall under the high-risk category! // However, I would be highly cautious with my subsumption here; I would rather leave it to the courts to develop a reliable exemption!

GPAI-SYSTEMS


The newly created Title 8a (Art. 52a et seq.) now regulates "General Purpose AI Models" (GPAI models). This refers to an AI model "that displays significant generality and is capable of competently performing a wide range of distinct tasks."


  • Basic principle: GPAI models are AI models with significant generality and the competence to fulfill various tasks.

  • Rule: Title 8a (Art. 52a et seq.) of the EU AI Act specifically regulates GPAI models from the moment they are placed on the market.

  • Unique feature: The distinction between GPAI models and high-risk AI systems is complex, as GPAI models often act as the basic AI structure behind various AI systems.


TWO-TIER APPROACH: GPAI MODEL WITH SYSTEMIC RISK


  • Basic principle: GPAI models with systemic risk are general-purpose AI models with high-impact capabilities and the ability to fulfill a wide range of distinct tasks. They are classified as such if the cumulative compute used for their training exceeds 10^25 floating point operations (FLOPs).

  • FLOPs: Above all, they must have plenty of steam under the hood. In other words, they must have a high effective capacity:

    • Evaluation using suitable technical instruments and methods (e.g., indicators, benchmarks)

    • Cumulative computing effort for training > 10^25 floating point operations (FLOPs)

    • 10^25 FLOPs roughly corresponds to the computing power required to:

      • Render one hundred million years of HD-quality film.

      • Create 10 billion high-resolution images of the entire globe.

      • For comparison (these are rates, not cumulative totals): an average laptop delivers about 10^12 FLOP/s, and a supercomputer like Fugaku in Japan about 10^18 FLOP/s.

    • Who can match that today?

      • ChatGPT: The ChatGPT language model developed by OpenAI has not yet exceeded the limit of 10^25 FLOPs with a cumulative computing effort of 1.5T FLOPs (at least according to official "rumors" - as of 2024-02-23 08:51), but shows excellent potential in this direction.

      • Midjourney: Midjourney is an AI system that can generate images from text descriptions. The exact computational effort is unknown, but it is probably in the range of 10^24 FLOPs. But again, it's very unscientific, just rumor knowledge!

      • LaMDA: LaMDA (Language Model for Dialogue Applications) from Google AI is another language model with great potential. It was trained with a data set of 1.56T words and can conduct complex conversations and generate various creative text formats. The exact computational effort is unknown, but probably well over 10^25 FLOPs.

      • WuDao 2.0: WuDao 2.0 is a language model developed by the Beijing Academy of Artificial Intelligence (BAAI), trained on a data set of 1.75T words. It can handle text generation, translation, and question answering. Its cumulative computational effort is reported at 1.75T FLOPs, far below the limit of 10^25 FLOPs.

  • Conclusion: The cumulative computing effort of 10^25 FLOPs is enormous and corresponds to the computing power required for highly complex, computationally intensive tasks. AI models that exceed this threshold therefore have a high potential to influence society and should be regulated particularly strictly.

  • Rule: Special requirements for GPAI models with systemic risk are set out in Title 8a (Art. 52a et seq.) of the EU AI Act. These models must fulfill specific criteria, or be deemed by the Commission to meet such criteria, to be classified as such.

  • Summary: GPAI models with systemic risk pose particular challenges due to their broad applicability and high impact capacity. They require careful monitoring and regulation to avoid potential negative impacts on society. Keyword: AGI
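For illustration only, here is a tiny sketch that compares the rumor-level figures quoted above against the 10^25 FLOPs threshold; the numbers are the unverified ones from the text, not measurements:

```python
# Threshold above which a GPAI model is presumed to pose systemic risk
# (cumulative training compute, not a per-second rate).
SYSTEMIC_RISK_THRESHOLD = 10**25

# Rumor-level figures from the text above - unverified, for illustration only.
rumored_training_flops = {
    "ChatGPT": 1.5e12,     # "1.5T FLOPs"
    "Midjourney": 1e24,    # "probably in the range of 10^24"
    "LaMDA": 2e25,         # "probably well over 10^25" (placeholder value)
    "WuDao 2.0": 1.75e12,  # "1.75T FLOPs"
}

for model, flops in rumored_training_flops.items():
    verdict = "systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD else "below threshold"
    print(f"{model}: {flops:.1e} FLOPs -> {verdict}")
```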


OTHER (LOW RISK) AI SYSTEMS

All AI systems that do not fall under the above categories are considered other AI systems.


It is important to note that the distinction between the categories is not always clear. In case of doubt, you should contact the competent supervisory authority.


  • Basic principle: AI systems not classified as prohibited or high-risk are subject to less stringent regulation.

  • Rule: The requirements for these AI systems are less detailed in the legal text and usually result from more general principles and the application of standards.

  • Unique feature: For this category of AI systems, there is more flexibility and scope for design regarding compliance and integration into business processes.
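Putting the four categories together, the whole risk classification can be sketched as one decision flow. The predicate names below are hypothetical stand-ins for the legal tests discussed above - a rule-of-thumb aid, not a substitute for proper subsumption:

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited (Art. 5) - may not be used"
    HIGH_RISK = "high-risk (Art. 6 et seq.)"
    GPAI_SYSTEMIC = "GPAI model with systemic risk (Title 8a)"
    GPAI = "GPAI model (Title 8a)"
    OTHER = "other AI system (low risk)"

@dataclass
class Assessment:
    # Hypothetical answers to the legal tests discussed above.
    prohibited_practice: bool        # Art. 5: manipulation, social scoring, ...
    annex_ii_safety_component: bool  # safety component under Annex II legislation
    annex_iii_use_case: bool         # listed high-risk area (credit scoring, ...)
    general_purpose: bool            # significant generality, wide range of tasks
    training_flops: float = 0.0      # cumulative training compute

def classify(a: Assessment) -> RiskClass:
    # The order mirrors the scheme: prohibited practices end the audit at once.
    if a.prohibited_practice:
        return RiskClass.PROHIBITED
    if a.annex_ii_safety_component or a.annex_iii_use_case:
        return RiskClass.HIGH_RISK
    if a.general_purpose:
        # Presumption of systemic risk above the 10^25 FLOPs threshold.
        return RiskClass.GPAI_SYSTEMIC if a.training_flops > 10**25 else RiskClass.GPAI
    return RiskClass.OTHER
```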

LEGAL CONSEQUENCES

TECHNICAL REQUIREMENTS


FOR ALL AI SYSTEMS

  • Transparency:

    • Implementing functions to disclose the functioning of the AI system and its use of data.

    • Creating easily accessible and understandable explanations for end users and other interested parties.

    • Use of standardized formats and interfaces for the disclosure of information.

  • Ethical guidelines:

    • Adhering to the ethical principles set out in the EU AI Act Regulation, such as fairness, transparency, accountability, and harm avoidance. Develop and implement a company code of ethics for using AI systems.

    • Consideration of ethical aspects in all phases of the life cycle of an AI system, from development to use and decommissioning.

  • Supervisory authorities:

    • Knowledge of the tasks and powers of the competent supervisory authorities to enforce the EU AI Act.

    • Appointment of a contact person for cooperation with the supervisory authorities.

    • Regularly review compliance with the requirements of the EU AI Act.

  • Rights of data subjects:

    • Implementation of procedures to safeguard data subjects' rights, e.g., access, rectification, erasure, objection, and restriction of processing.

    • Provision of easily accessible and understandable information on the rights of data subjects.

    • Training employees in dealing with requests from data subjects (AI literacy).

  • Documentation:

    • Creating and maintaining comprehensive documentation about the AI system, including its architecture, functionality, data sources and use, risks, and ethical assessments.

    • Ensure that the documentation is up-to-date and accessible.

  • Reporting obligations: Fulfilling all reporting obligations set out in the EU AI Act Regulation. Keeping records of the use of the AI system and the associated risks.

ADDITIONAL REQUIREMENTS FOR GPAI SYSTEMS


  • Embedding in the Foundation Model:

    • Ensure that the risk classification measures and the resulting obligations are integrated into the existing organizational and technical structures of the Foundation Model.

    • Develop a consistent approach to compliance with the EU AI Act at the Foundation Model level.

    • Collaborate with other Foundation Model stakeholders to ensure compliance with the requirements.


ADDITIONAL REQUIREMENTS FOR GPAI SYSTEMS WITH SYSTEMIC RISK


  • Model evaluation: Implementation of a model evaluation.

  • Documentation and reporting of serious incidents: Tracking, documenting, and reporting serious incidents and remedial actions to the AI Office.

  • Cybersecurity and physical infrastructure: Ensuring an appropriate level of cybersecurity and physical infrastructure.

  • Systemic Risks at Union Level: Assessment and mitigation of potential systemic risks at Union level.

  • Special feature: Providers of GPAI models with systemic risk must inform the Commission immediately that the model fulfills or will fulfill the designated requirements. They must also comply with additional obligations under Art. 52d.

  • Commission decision-making process: The Commission decides, based on Annex YY/IXc, whether or not to classify the relevant GPAI model as posing a systemic risk based on the notifications of the providers or at its discretion.

ADDITIONAL REQUIREMENTS FOR HIGH-RISK SYSTEMS

  • Risk management: Implement a comprehensive risk management system to identify, assess, and manage risks connected to the AI system. Regularly review and update the risk management system.

  • Data and data governance:

    • Compliance with the data processing and data governance requirements set out in the EU AI Act Regulation.

    • Implementation of measures to ensure the quality, security, and integrity of data.

    • Establishment of a transparent system for data access rights.

  • Technical documentation:

    • Create detailed technical documentation of the AI system, including its architecture, algorithms, data sets, and model parameters.

    • Use of standardized formats and interfaces for technical documentation.

  • Human monitoring and feedback: Implement mechanisms for human monitoring and control of the AI system. Ensure that humans can intervene and make decisions in critical situations.

  • Audit trails:

    • Implement a system to track all activities of the AI system, including input data, decisions, and outputs.

    • Ensure that audit trails are accessible to regulators.

  • Transparency and communication:

    • Provide all interested parties with transparent information about the AI system and its use.

    • Develop a communication plan to inform the public about the AI system and its potential impacts.

  • CE conformity assessment: Conduct a CE conformity assessment by a notified body to confirm the conformity of the AI system with the requirements of the EU AI Act.
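To keep these obligations visible in day-to-day project work, a simple machine-readable checklist can help. The item names below are my own condensed mapping of the requirements above, not terms from the Act:

```python
# Hypothetical compliance checklist for a high-risk AI system, condensed
# from the requirements above; status values are free-form project notes.
high_risk_checklist = {
    "risk_management_system": "implemented, next review 2024-Q4",
    "data_governance": "data quality and access controls documented",
    "technical_documentation": "architecture, algorithms, datasets, parameters",
    "human_oversight": "intervention and override mechanisms defined",
    "audit_trails": "inputs, decisions and outputs logged",
    "transparency_and_communication": "public information plan drafted",
    "ce_conformity_assessment": "pending - notified body engaged",
}

# Quick view of what still blocks deployment.
open_items = [item for item, status in high_risk_checklist.items() if "pending" in status]
print("Open items:", open_items)
```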

ONGOING MONITORING


Regularly monitor developments in the EU AI Act and other relevant regulations and adapt internal processes and procedures to new requirements.

SOURCES


PRIMARY SOURCES


  • EU AI Act (trilogue version), consolidated text as amended on 2024-02-06: https://www.euaiact.com/article/3 (last visited 2024-02-21)



SECONDARY SOURCES


WEB


  • German Bundestag, video recording: "Experts assess EU regulation on AI differently" (cited above in the text)



BOOKS


  • Dr. Rudolf Streinz (Professor emeritus, Ludwig-Maximilians-Universität München): Schwerpunktbereich Europarecht, 12th revised edition, 2023


ESSAYS


  • RA Dr. David Bomhard / RA Dr. Jonas Siglmüller: "AI Act - das Trilogergebnis" ("AI Act - the trilogue result"), RDI 2024, 25 (excellent legal essay!)
