New Rules and Actions on Artificial Intelligence from the European Commission: “A Europe Fit For the Digital Age”

Last week, the European Commission proposed new rules and actions that seek to make Europe the ‘global hub for trustworthy Artificial Intelligence’ (AI) (European Commission, 2021). The proposal seeks to guarantee the safety and rights of people and businesses as AI becomes an ever-larger part of day-to-day life and business.
What is the proposal on Artificial Intelligence?

The proposal aims to empower innovation and investment in Artificial Intelligence (AI) across the EU while safeguarding the future economy and society. It does so by combining the first-ever legal framework on AI with a new Coordinated Plan agreed with all EU Member States. In addition, the Machinery Directive will be updated to align with these new rules, covering hardware and robotic technology that also uses AI.

Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, said: “Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

Commissioner for Internal Market Thierry Breton said: “Today’s proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.” 

The new rules will apply directly in all Member States, based on a ‘future-proof’ definition of AI, and follow a risk-based approach:

Unacceptable Risk: AI systems considered a clear threat to the safety, livelihoods or rights of people will be banned. This includes AI systems or applications that manipulate human behaviour and systems that allow ‘social scoring’ by governments. 

High Risk: AI systems may be identified as ‘high risk’ when the technology is used in critical infrastructure (such as transport), educational or vocational training (such as the scoring of exams), safety components of products (such as AI in robot-assisted surgery), employment and worker management (such as recruitment procedures), essential private and public services (such as credit scoring that determines access to loans), law enforcement (such as verifying evidence), migration and border control management (such as verifying travel documents), and the administration of justice and democratic processes (such as applying the law to a concrete set of facts).

These systems will have to undergo adequate risk assessment and mitigation, and will have to meet a number of stringent requirements before they can be used in any of the above-mentioned situations.

Limited Risk: AI systems with specific transparency obligations fall into the limited-risk category. Chatbots are one example: users must be made aware that they are interacting with a machine so they can make an informed decision about whether to engage.

Minimal Risk: All other AI systems, such as AI-enabled video games or spam filters, are deemed minimal risk and free to use. The vast majority of AI systems fall into this category, and the draft regulation does not interfere here.
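The four tiers above form a simple classification scheme. As a purely illustrative sketch (the tier names and examples come from the proposal, but the `RiskTier` enum, the `EXAMPLES` mapping, and the `classify` function are our own and have no legal standing), the scheme could be modelled like this:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # stringent requirements before use
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Illustrative mapping of example applications named in the proposal
# to the tier they would fall under.
EXAMPLES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "exam scoring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(application: str) -> RiskTier:
    """Look up an application's risk tier. Unknown applications
    default to MINIMAL, reflecting that the vast majority of AI
    systems fall into that category."""
    return EXAMPLES.get(application, RiskTier.MINIMAL)
```

Note the asymmetry this captures: the regulation only imposes obligations as the tier rises, and everything not explicitly called out lands in the unregulated minimal-risk bucket.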

How will the EU govern compliance across Member States?

To ensure governance, the EC proposed that national competent market surveillance authorities supervise the new rules, and that a European Artificial Intelligence Board be created to facilitate their implementation and drive the development of AI standards.

To remain globally competitive, the EC has committed to increasing innovation in AI and has developed a Coordinated Plan across EU Member States that proposes concrete actions for collaboration. The plan also takes into account the new challenges brought by the COVID-19 pandemic and sets out a clear vision for accelerating investment in AI that could support the recovery from it.

Accelerating the deployment of AI systems and technology has enormous potential to deliver societal benefits and economic growth, and to enhance EU innovation and global competitiveness. However, specific characteristics of some AI systems may also create risks to user safety and rights.

As such, this legal framework will apply to public and private companies both inside and outside the EU, as long as the AI system is placed on the EU market or its use affects people located in the EU.

What should businesses bear in mind when placing AI systems onto the EU market?

Before placing a high-risk AI system on the EU market, providers must subject it to a conformity assessment, which allows them to demonstrate that the system complies with the mandatory requirements for trustworthy AI. If the system is substantially modified for its intended use, the assessment must be repeated.

For certain AI systems, an independent notified body will also have to be involved in this process. AI systems that are subject to third-party conformity assessment under existing sectoral legislation will always be deemed high-risk; for biometric identification systems, for example, a third-party conformity assessment is always required. Providers of high-risk AI systems will also have to implement quality and risk management systems to ensure compliance with the new requirements even after a product is placed on the market. Market surveillance authorities will support post-market monitoring through audits and by offering providers a channel to report serious incidents or breaches of which they become aware.

In addition, Member States hold key roles in enforcing the regulation. Each Member State should designate one or more national competent authorities to supervise the application and implementation of the regulation and to carry out market surveillance activities. Each Member State should also designate one national supervisory authority to represent the country on the European Artificial Intelligence Board.

The updates to the Machinery Directive (soon to become the Machinery Regulation) ensure that the new generation of machinery products guarantees the safety of users and consumers while encouraging innovation. The two regulations are complementary: the AI Regulation will address the safety risks of AI systems that provide safety functions in machinery, while the Machinery Regulation will ensure, where applicable, the safe integration of the AI system into the machinery as a whole, so that overall safety is not compromised.

Certification Simplified

Certification Experts is an independent product compliance organisation. With over 25 years of experience, Certification Experts evolved into a hub of expertise and subsequently established a valuable track record. If you would like to future proof your business and are planning on placing AI or robot technology products onto the marketplace, get in touch today.

