European Union, new rules on Artificial Intelligence

Artificial intelligence

The Commission is now proposing new rules and actions to turn Europe into the global hub for trustworthy artificial intelligence (AI). The combination of the first-ever legal framework on AI and a new plan coordinated with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on machinery will complement this approach by adapting safety rules to increase users' trust in a new and versatile generation of products.

The new AI regulation will ensure that European citizens can trust what AI has to offer, through proportionate and flexible rules that address the specific risks posed by AI systems and set the highest standard worldwide. The coordinated plan outlines the strategic changes and investments needed at Member State level to strengthen Europe's leading position in the development of human-centric, sustainable, secure, inclusive and trustworthy AI.

The new rules will apply directly and in the same way in all Member States, based on a future-proof definition of AI, and follow a risk-based approach.

Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of individuals will be prohibited. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will (e.g., toys that use voice assistance to encourage dangerous behaviour in children) and systems that allow governments to assign a "social score".

High risk: AI systems will be considered high risk when the technology is used:

  • in critical infrastructure (such as transport), as it could endanger the lives and health of citizens;
  • in vocational education or training, as it can determine the access to education and the professional path of a person’s life (for example, assignment of examination marks);
  • in product safety components (such as an application of AI used in robot-assisted surgery);
  • in the field of employment, worker management and access to self-employment (for example, CV selection software for recruitment procedures);
  • in essential public and private services (for example, credit scoring, which may deny citizens the possibility of obtaining a loan);
  • in law enforcement activities that may interfere with the fundamental rights of persons (for example, assessment of the reliability of evidence);
  • in the management of migration, asylum and border control (for example, verification of the authenticity of travel documents);
  • in the administration of justice and in democratic processes (for example, application of the law to a concrete set of facts).

High-risk AI systems will be subject to stringent requirements before they can be placed on the market:

  • appropriate risk assessment and mitigation systems;
  • high quality of the data sets that feed the system to minimise discriminatory risks and outcomes;
  • recording of activities to ensure traceability of results;
  • detailed documentation providing all necessary information on the system and its purposes for authorities to assess its compliance;
  • clear and appropriate information for the user;
  • appropriate human surveillance measures to minimise risks;
  • high level of robustness, safety and accuracy.

In particular, all remote biometric identification systems are considered high-risk and subject to strict requirements. Their use in real time for law enforcement purposes in publicly accessible spaces is in principle prohibited. There are a few strictly defined and regulated exceptions (for example, where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify or prosecute a perpetrator or suspect of a serious crime). Such use is subject to authorisation by a court or other independent body and to limits on time, geographical scope and the databases searched.

Limited risk, namely AI systems with specific transparency obligations: when using AI systems such as chatbots, users should be aware that they are interacting with a machine, so that they can make an informed decision as to whether or not to continue using them.

Minimal risk: the legislative proposal allows the free use of AI-based applications such as video games or spam filters. The vast majority of AI systems fall into this category. The draft regulation does not intervene in this case, as these AI systems pose only minimal or no risk to citizens' rights or safety.

In terms of governance, the Commission proposes that the competent national market surveillance authorities supervise the new rules, while the establishment of a European Artificial Intelligence Board will facilitate their implementation and drive the development of AI standards. Voluntary codes of conduct are also proposed for non-high-risk AI systems, as well as regulatory sandboxes to facilitate responsible innovation.

The European approach to excellence in AI

Coordination will strengthen Europe's leadership in human-centric, sustainable, secure, inclusive and trustworthy AI. In order to remain globally competitive, the Commission has committed to promoting innovation in the development and use of AI technologies across all industrial sectors and in all Member States.

First published in 2018 to define funding actions and instruments for the development and uptake of AI, the coordinated plan on AI has enabled a vibrant landscape of national strategies and EU funding for public-private partnerships and research and innovation networks. The comprehensive update of the coordinated plan proposes concrete joint actions to ensure that all efforts are aligned with the European AI strategy and the European Green Deal, while taking into account the new challenges posed by the coronavirus pandemic. It presents a vision to accelerate investment in AI, which can support the recovery. It also aims to spur the implementation of national AI strategies, remove fragmentation and address global challenges.

The updated coordinated plan will use the funding allocated through the Digital Europe and Horizon Europe programmes, as well as the Recovery and Resilience Facility, which includes a 20% digital spending target, and cohesion policy programmes with a view to:

  • creating the conditions for the development and adoption of AI through the exchange of strategic information, data sharing and investments in critical computing capabilities;
  • promoting AI excellence "from the lab to the market" by establishing a public-private partnership, building and mobilising research, development and innovation capacities, and making testing and experimentation facilities and digital innovation hubs available to SMEs and public administrations;
  • ensuring that AI works for people and is a force for good in society, by working at the forefront of the development and deployment of trustworthy AI, nurturing talents and skills through traineeships, doctoral networks and post-doctoral digital fellowships, integrating trust into AI policies and promoting the European vision of sustainable and trustworthy AI globally;
  • building strategic leadership in high-impact sectors and technologies, including the environment (focusing on AI's contribution to sustainable production), health (expanding the cross-border exchange of information), the public sector, mobility, home affairs, agriculture and robotics.
Research and innovation