Risk-based regulation of AI proposed by EU policy makers
High-risk AI would be subject both to conformity assessment before those systems are sold or put into service and to a system of post-market monitoring that each system's provider would need to put in place and adhere to.

Compliance with the requirements of the AI Act would also be assessed by national regulators under the Commission's plans, with companies responsible for the most serious breaches facing fines of up to €20 million or 4% of their annual global turnover, whichever is higher.


The Commission has proposed to define ‘high-risk’ AI within the AI Act. AI systems that are stand-alone products or used as safety components within a product could fall within the definition.


One of the factors relevant to whether an AI system is characterised as high-risk would be the extent of its "adverse impact" on EU fundamental rights.


“Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration,” according to the Commission.
