EC Artificial Intelligence Act

The European Commission published its draft regulation on the use of AI on 21 April 2021; it is still awaiting approval. The regulation applies to all organisations that do business in or with the European Commission's jurisdiction, and to citizens living in EC jurisdictions. Any company that operates in the European Union will therefore have to comply with this regulation (if approved by the European Parliament).

Regulation’s Key Points

For reference, some of the highlights of the proposal include:

Definition of AI

  • An AI system is defined by the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension.

  • AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded).

  • AI techniques and approaches this regulation applies to:

    • Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

    • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

    • Statistical approaches, Bayesian estimation, search and optimization methods.

  • The rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to users of AI systems established within the Union.

  • This Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union.

  • AI systems are categorised into four groups: unacceptable risk, high risk, limited risk and minimal risk.

  • AI systems in the unacceptable-risk category are prohibited from being placed on the market. These include:

    • Systems intended to distort human behaviour, whereby physical or psychological harm is likely to occur;

    • Systems providing social scoring of natural persons for general purpose by public authorities or on their behalf.

  • High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements.

    • AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.

  • There are no mandatory requirements for limited-risk (AI systems with specific transparency obligations) or minimal-risk systems. However, the EC encourages incentivising providers of non-high-risk AI systems to voluntarily meet the mandatory requirements.

  • Compliance Requirements include:

    • Risk management system (Article 9), data and data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency and provision of information to users (Article 13), human oversight (Article 14), and accuracy, robustness and cybersecurity (Article 15).

  • The conformity assessment (validation that the provider meets the mandatory requirements) of AI systems should, as a general rule, be carried out by the provider under its own responsibility, at least in an initial phase of application of this regulation.

  • For any AI system placed on the market, a specific natural or legal person, defined as the provider, takes responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system.

  • Non-compliance with the regulation can result in the following administrative fines (a worked example of the fine caps follows this list):

    • Administrative fines of up to 30 000 000 EUR or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, for:

      • non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5 (Prohibited Artificial Intelligence Practices);

      • non-compliance of the AI system with the requirements laid down in Article 10 (High-risk AI systems - Data and Data Governance).

    • Administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, for:

      • non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10.

    • Administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, for:

      • supplying incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
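Since each tier combines a fixed cap with a turnover percentage under a "whichever is higher" rule, the applicable maximum depends on the offender's size. The following minimal Python sketch is illustrative only; the tier keys and the max_fine function are our own names, not part of the regulation:

```python
# Illustrative fine-cap calculator. Tier names and the function below are
# hypothetical; the amounts and percentages come from the bullets above.

FINE_TIERS = {
    # tier: (fixed cap in EUR, share of worldwide annual turnover)
    "article_5_or_10": (30_000_000, 0.06),        # prohibited practices / data governance
    "other_obligations": (20_000_000, 0.04),      # any other requirement of the Regulation
    "misleading_information": (10_000_000, 0.02), # incorrect info to authorities
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of the fixed cap and the
    percentage of the preceding financial year's worldwide turnover."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Example: a company with EUR 1 billion turnover breaching Article 10 faces
# a cap of max(30 000 000, 0.06 * 1 000 000 000) = EUR 60 000 000.
print(max_fine("article_5_or_10", 1_000_000_000))  # 60000000.0
```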

AI System Classification Summaries

Four categories of AI systems in the EC's proposed regulation

The new rules will apply directly and in the same way across all Member States, based on a future-proof definition of AI. They follow a risk-based approach and define four risk categories for classifying AI systems and their deployment environments and applications.
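As a rough mental model, this tiering can be summarised in a few lines of code. The sketch below is purely illustrative (the enum, the mapping and the lookup function are our own names, not anything defined by the regulation); it pairs each risk tier with the consequence described in the sections that follow:

```python
# Illustrative model of the proposal's four risk tiers and the obligation each
# one triggers. All names are hypothetical, not taken from the regulation.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)
    HIGH = "high"                  # mandatory requirements + conformity assessment
    LIMITED = "limited"            # specific transparency obligations
    MINIMAL = "minimal"            # no mandatory requirements

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "banned from the Union market",
    RiskTier.HIGH: "meet Articles 9-15 and pass an ex-ante conformity assessment",
    RiskTier.LIMITED: "meet specific transparency obligations "
                      "(e.g. disclose that the user is interacting with an AI system)",
    RiskTier.MINIMAL: "no mandatory obligations; voluntary compliance encouraged",
}

def obligations_for(tier: RiskTier) -> str:
    """Look up the compliance consequence attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```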

Unacceptable risk

AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will (e.g., toys using voice assistance to encourage dangerous behaviour in minors) and systems that allow 'social scoring' by governments. [RN Comment: this does not include military use of AI applications, especially use in weaponry. Furthermore, social scoring and ranking can be carried out by non-governmental bodies, for example loyalty scores for using a particular service or product.]

High-risk

  • AI systems identified as high-risk include AI technology used in:

    • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;

    • Educational or vocational training, that may determine the access to education and professional course of someone's life (e.g. scoring of exams);

    • Safety components of products (e.g. AI application in robot-assisted surgery);

    • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);

    • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);

    • Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence);

    • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);

    • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

Examples of High-Risk Applications:

As identified by the EC regulation proposal

The list below contains a limited number of AI systems whose risks have already materialised or are likely to materialise in the near future. The EC has the authority to expand this list of high-risk AI systems used within certain pre-defined areas by applying a set of criteria and a risk assessment methodology.

  1. Biometric identification and categorisation of natural persons:

    1. AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;

  2. Management and operation of critical infrastructure:

    1. AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.

  3. Education and vocational training:

    1. AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;

    2. AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.

  4. Employment, workers management and access to self-employment:

    1. AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;

    2. AI systems intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.

  5. Access to and enjoyment of essential private services and public services and benefits:

    1. AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;

    2. AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;

    3. AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.

  6. Law enforcement:

    1. AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;

    2. AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;

    3. AI systems intended to be used by law enforcement authorities to detect deep fakes as referred to in Article 52(3);

    4. AI systems intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;

    5. AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;

    6. AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;

    7. AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data;

  7. Migration, asylum and border control management:

    1. AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;

    2. AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;

    3. AI systems intended to be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features;

    4. AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.

  8. Administration of justice and democratic processes:

    1. AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.

Key points to know about the high-risk classification:

  • High-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment.

  • The classification as high-risk depends not only on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.

  • Legal requirements for high-risk AI applications include data management and governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security (a compliance-checklist sketch follows this list).

  • AI systems intended to be used for applications that have existing regulatory requirements or are covered under the New Legislative Framework (e.g. machinery, toys, medical devices, etc.) will have to ensure compliance not only with the requirements established by sectoral legislation, but also with the requirements established by the EC Harmonised Rules on AI.

  • Applications approved under this regulation will be registered in a database managed by the Commission and publicly available, to increase public transparency and oversight and to strengthen ex-post supervision by competent authorities (regulators).

  • A new ex-ante re-assessment of conformity will be needed if a substantial modification is made to an AI system.

  • Monitoring and reporting obligations are placed on providers of AI systems with regard to post-market monitoring, and to reporting and investigating AI-related incidents and malfunctioning.
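To make the provider-side workflow concrete, the sketch below models a conformity checklist as a small data structure. It is a hypothetical illustration (none of these names come from the EC or the Seclea Platform), combining the seven mandatory requirements (Articles 9-15) with the rule that a substantial modification triggers a new conformity assessment:

```python
# Hypothetical conformity checklist for a high-risk AI system. All names are
# illustrative; the article/requirement pairs are those listed in this document.
from dataclasses import dataclass, field

REQUIREMENTS = {
    9: "Risk management system",
    10: "Data and data governance",
    11: "Technical documentation",
    12: "Record-keeping",
    13: "Transparency and provision of information to users",
    14: "Human oversight",
    15: "Accuracy, robustness and cybersecurity",
}

@dataclass
class ConformityRecord:
    system_name: str
    satisfied: set = field(default_factory=set)  # articles already evidenced

    def mark(self, article: int) -> None:
        """Record evidence that one mandatory requirement is met."""
        if article not in REQUIREMENTS:
            raise ValueError(f"Article {article} is not a mandatory requirement")
        self.satisfied.add(article)

    def ready_for_market(self) -> bool:
        """All seven requirements must be met before placing on the market."""
        return self.satisfied == set(REQUIREMENTS)

    def substantial_modification(self) -> None:
        """A substantial modification invalidates the assessment; start over."""
        self.satisfied.clear()

record = ConformityRecord("cv-screening-model")
for article in REQUIREMENTS:
    record.mark(article)
print(record.ready_for_market())  # True
```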

The Seclea Platform template for the EC Artificial Intelligence Act (AIA) consists of the following compliance categories:
