EU begins landmark enforcement of AI Act as the first restrictions take effect
The European Union is so far the only jurisdiction globally to introduce comprehensive rules for artificial intelligence with its AI Act.
Jaque Silva | Nurphoto | Getty Images
The European Union on Sunday officially began enforcing its landmark artificial intelligence law, paving the way for tough restrictions and potentially large fines for violations.
The EU AI Act, the first regulatory framework of its kind for the technology, formally entered into force in August 2024.
Sunday marked the deadline for prohibiting certain artificial intelligence systems and for requirements to ensure sufficient AI literacy among staff.
That means companies must now comply with the restrictions and can face penalties if they fail to do so.
The AI Act bans applications of AI that it deems to pose an "unacceptable risk" to citizens.
Those include social scoring systems, real-time facial recognition and other forms of biometric identification that categorize people by race, sex life, sexual orientation and other attributes, as well as "manipulative" AI tools.
Companies face fines of up to 35 million euros ($35.8 million) or 7% of their global annual revenue, whichever amount is higher, for breaches of the EU AI Act.
The size of the penalty will depend on the infringement and the size of the company.
That is higher than the fines possible under the GDPR, Europe's strict digital privacy law, where companies face penalties of up to 20 million euros or 4% of annual global turnover for breaches.
'Not perfect' but 'very much needed'
It's worth noting that the AI Act is still not in full force; Sunday's deadline is only the first in a series of many upcoming implementation milestones.
Tasos Stampelos, head of EU public policy and government relations at Mozilla, previously told CNBC that, while "not perfect," the EU's AI Act is "very much needed."
"It's quite important to recognize that the AI Act is predominantly product safety legislation," Stampelos said at a CNBC-moderated panel in November.
"With product safety rules, the moment you have them, it's not a finished job. There are lots of things to come and follow after the adoption of the act," he said.
"Right now, compliance will depend on the standards, guidelines, secondary legislation or derivative instruments that follow the AI Act, which will actually determine what compliance looks like," Stampelos added.
In December, the EU AI Office, a newly created body regulating the use of models in accordance with the AI Act, published a second draft code of practice for general-purpose AI (GPAI) models, which refers to systems like OpenAI's GPT family of large language models, or LLMs.
The second draft contained exemptions for providers of certain open-source AI models, as well as a requirement for developers of "systemic" GPAI models to undergo risk assessments.
Setting a global standard?
Several technology executives and investors are unhappy with some of the more burdensome aspects of the AI Act, worrying that it could stifle innovation.
In June 2024, Prince Constantijn of the Netherlands told CNBC in an interview that he was "really concerned" about Europe's focus on regulating AI.
"Our ambition seems to be limited to being good regulators," Constantijn said. "It's good to have guardrails. We want to bring clarity to the market, predictability and all that. But it's very hard to do that in such a fast-moving space."
Still, some think that clear rules for AI could give Europe a leadership advantage.
"While the U.S. and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones," Diyan Bogdanov, director of engineering intelligence and growth at Bulgarian fintech firm Payhawk, said via email.
"The AI Act's requirements around bias detection, regular risk assessments and human oversight aren't limiting innovation; they're defining what good looks like," he added.