[Image: AI-generated illustration created with Midjourney by Molly-Anna MaQuirl]
In the latest AI news, the European Union has given final approval to the Artificial Intelligence Act, the first law of its kind, which will apply across all 27 member states and take effect later this year.
The law was first proposed five years ago, and European lawmakers have now voted in favor of this first-ever comprehensive regulation of AI technology. It takes a risk-based approach: companies offering AI products within the EU, regardless of where those products are developed, must comply with the law before releasing them to the public.
The timing also matters: major elections are happening this year, and bad actors can misuse AI to disseminate deepfakes and black propaganda, a risk that must be reduced at all reasonable cost.
Again, this is the world’s first legislation of its kind. Other nations may follow suit, making it a de facto standard and promoting the European approach to technology regulation worldwide.
The European Parliament is taking measures to ensure that all AI products used in the EU meet certain standards, including safety, traceability, transparency, non-discrimination, and environmental sustainability.
For the most part, the law focuses on how the technology is used rather than the technology itself, because AI can serve different purposes and have different impacts depending on the context in which it is deployed.
The legislation takes a risk-based approach: the higher the risk an AI system poses, the stricter the rules that apply. Obligations, in other words, are tiered by risk level.
The European Commission proposed the underlying regulatory framework in 2021. Under it, AI systems can be used but must be analyzed and classified according to their application and the risk they pose to consumers. There are four risk levels; this article focuses on the two most consequential: unacceptable risk and high risk.
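To make the tiered approach concrete, here is a minimal, purely illustrative Python sketch of a risk-tier lookup. The example use cases and their assignments are simplified assumptions for exposition, not legal classifications under the act.

```python
from enum import Enum

# Illustrative only: a toy mapping of hypothetical AI use cases to the
# act's four risk tiers. Assignments are simplified examples, not legal
# classifications.

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: risk assessments, high-quality data, oversight"
    LIMITED = "transparency obligations, e.g. disclosing AI-generated content"
    MINIMAL = "largely unregulated"

EXAMPLE_CLASSIFICATION = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "exam-scoring tool for schools": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```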
Unacceptable-risk AI systems, according to the EU Parliament, are those that pose a danger and threat to people. Once classified as such, these systems will be banned outright. They include:
- Cognitive behavioral manipulation of people or of specific vulnerable groups
- Social scoring, i.e., classifying people based on behavior, socio-economic status, or personal characteristics
- Biometric identification and categorization of people
- Real-time and remote biometric identification systems, such as facial recognition
Some exceptions apply: according to the Parliament’s website, “real-time” remote biometric identification may be allowed for law enforcement in a limited number of serious cases. The same goes for “post” remote biometric identification, which may be used to prosecute serious crimes, but only after court approval.
High-risk AI systems, on the other hand, are those used in critical sectors such as healthcare and education. These systems will be subject to close scrutiny and stricter obligations. The category also covers all products that fall under the EU’s product safety legislation (e.g., toys, medical devices, and cars).
It’s worth mentioning that generative AI tools, such as Gemini and ChatGPT, are not automatically considered high-risk. However, their makers still need to comply with transparency requirements and EU copyright law, which include:
- Disclosing that content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of the copyrighted data used for training
Google’s AI lab, Google DeepMind, has been working on watermarking its models’ output so that people are aware the content is AI-generated.
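DeepMind has not published its production watermarking code, but the general idea behind statistical text watermarking (the “green list” schemes described in the research literature) can be sketched in a few lines. The snippet below is a toy illustration of that family of techniques, not SynthID or any real DeepMind API.

```python
import hashlib
import random

# Toy sketch of "green list" text watermarking (research-literature style,
# not DeepMind's SynthID). At each step, a hash of the previous token seeds
# a pseudo-random split of the vocabulary; a watermarking generator would
# nudge probability toward the "green" half, and a detector that knows the
# hashing scheme tests whether a text contains suspiciously many green tokens.

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Derive the 'green' subset of the vocabulary from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(tokens: list[str]) -> float:
    """Detector: what share of tokens fall in their context's green list?"""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

# Ordinary text scores near the ~0.5 expected by chance; text generated
# with the green-list nudge would score well above it.
print(green_fraction("the cat sat on the mat".split()))
```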
Enforcement will be handled by a newly created AI Office in Brussels under the European Commission, which will oversee general-purpose models and monitor their capabilities and risks with the support of independent experts.
Violating the act could expose companies to significant fines, ranging from 7.5 million EUR (or 1.5% of global turnover) up to 35 million EUR (or 7% of global turnover), depending on the infringement.
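Because the penalty structure is reported as the greater of a fixed amount or a share of global annual turnover, the arithmetic is straightforward to sketch. Note that the middle tier below (15 million EUR / 3%) comes from the act’s published figures rather than this article, so treat all numbers as illustrative.

```python
# Hedged sketch of the fine arithmetic: each violation tier carries a fixed
# cap and a turnover share, and the applicable fine is the greater of the two.

FINE_TIERS = {
    "banned practice": (35_000_000, 0.07),       # EUR amount, turnover share
    "other obligations": (15_000_000, 0.03),
    "incorrect information": (7_500_000, 0.015),
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    fixed, share = FINE_TIERS[violation]
    return max(fixed, share * global_turnover_eur)

# A firm with 2 billion EUR global turnover engaging in a banned practice:
print(f"{max_fine('banned practice', 2e9):,.0f} EUR")  # -> 140,000,000 EUR
```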
The 27-nation bloc also voted for provisions meant to keep small businesses on a level playing field with bigger companies, saying the law offers opportunities for start-ups to develop and train AI models before releasing them to the general public.
However, reactions are sharply divided. The business sector considers the law overregulation that goes too far. Contrary to what Parliament has stated, many companies are convinced it will hamper competition and prevent SMEs from developing new AI solutions.
Some have even suggested that companies might leave the EU for the United States or Asia, where no comparable statute currently exists, to develop their applications.
In their view, the concern is not just the strict policies they will face but the European continent falling behind: the EU is regulating well ahead of its U.S. and Asian counterparts, whose tools are being developed and released significantly faster.
On the other hand, consumer protection groups argue the new regulation does not go far enough. The proposal states no clear directive on data protection, for example, even though products like Moxie (a conversational, GPT-powered learning robot for children) are already on the market.
Another issue raised was the clear labeling of deepfakes. For high-risk AI systems, such as those used in immigration or critical infrastructure, these groups want legislators to require risk assessments and the use of high-quality data, among other things.
The legislation is expected to become official by May or June 2024, with provisions taking effect in stages after it is published in the EU’s official journal.
As for generative AI tools such as Gemini and ChatGPT, the rules covering them will apply one year after the law takes effect. The complete set of rules, including the requirements for high-risk systems, will be in force between 2025 and 2027.
According to Brando Benifei, co-rapporteur of the proposal, more AI legislation is in the works and may be put forward after the EU elections this summer, including rules on workplace uses of AI and much more.
European lawmakers have also made the legislation adaptable to rapidly evolving technology: one provision empowers the European Commission to update the technical aspects of its definition of general-purpose AI models based on market and technological developments.
This flexibility ensures that the legislation remains up to date with the latest advancements in the field.
Although the Artificial Intelligence Act aims to address issues of transparency and ethics, it imposes significant responsibilities on all companies that use or develop artificial intelligence. Even though some adjustments are planned for start-ups and small businesses, such as regulatory sandboxes, the obligations remain substantial.