From the systems that recommend movies online to those that help doctors diagnose diseases, artificial intelligence is present in many areas of our lives.
But what happens when this technology is used in critical decisions, like the safety of medications or the operation of self-driving vehicles? Are we ready to integrate it safely into our daily lives?
The new European legislation regulating artificial intelligence, the AI Act, aims to answer these questions and tackle these challenges. And, since legal language can get (very) heavy, let's try to break down its content and make it more digestible.

What does the European AI law cover?
The law defines artificial intelligence as "systems that, through data processing techniques, can perform tasks that normally require human intelligence."
In other words, it covers the generative AI models that have exploded in the past year or so, like ChatGPT, but also systems that are already commonplace for us, such as Google Translate, Spotify's recommendation engine and many others. These systems can include machine-learning algorithms, neural networks and other computational approaches.
The law has three key objectives:
- Protect our fundamental rights: ensure that artificial intelligence respects our privacy, doesn't discriminate and is transparent in its decisions.
- Promote safety and trust: establish standards for safe, robust AI systems, especially those considered high-risk.
- Drive innovation: foster a favorable environment for the development and adoption of innovative, ethical AI technologies.
I want to clarify that these are the objectives that several EU communications and representatives have shared, although there are also critical voices arguing that the legislation will slow down technological development in the region. I'll leave it to you to decide which view you find more convincing.
The AI risk traffic light
One of the most relevant points of the law is that it classifies AI models and tools according to their risk level: high, medium or low. To make it easier, we can think of it as a risk traffic light:
- Green light: low risk
- Amber light: medium risk
- Red light: high risk
If you want to know which types of technologies belong to each group and the limitations they must meet, I recommend this post I published on my LinkedIn profile, where I explain the new European AI law so even my grandma could understand it.
The legislation also explicitly prohibits AI systems that use manipulative or subliminal techniques, those that exploit vulnerable groups (such as people with physical or mental disabilities), those that build social scores based on your behavior, and those that perform real-time biometric identification in public spaces (although the door is left open for some exceptions on serious security grounds). For these cases, let's say the law slams down an outright stop sign.
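To make the tiering concrete, here's a minimal Python sketch of the traffic-light idea. The enum labels and the example mapping are invented for illustration; the AI Act defines these categories in legal prose, and classifying a real system requires legal analysis, not a lookup table.

```python
from enum import Enum

# Illustrative only: tier names and descriptions are shorthand for the
# traffic-light metaphor, not the AI Act's legal wording.
class RiskTier(Enum):
    PROHIBITED = "stop sign: banned outright"
    HIGH = "red light: strict obligations"
    LIMITED = "amber light: transparency duties"
    MINIMAL = "green light: no extra obligations"

# Hypothetical mapping of example use cases to tiers; a real
# classification would follow the Act's annexes, not a dictionary.
EXAMPLES = {
    "social scoring based on behavior": RiskTier.PROHIBITED,
    "real-time biometric identification in public": RiskTier.PROHIBITED,
    "AI that screens job applicants": RiskTier.HIGH,
    "chatbot that generates text": RiskTier.LIMITED,
    "spam filter in an email client": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case} -> {tier.name} ({tier.value})")
```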
How does the AI Act protect us against high-risk systems?
High-risk AI systems are those that can have a significant impact on our rights and safety, like the ones used in healthcare or self-driving transport. The high-risk criteria established by the European AI legislation are:
- Potential to cause significant harm or adverse impact on people's fundamental rights.
- Potential to cause significant harm to public safety or health.
Here are some examples of uses that the AI Act considers high-risk:
- Biometric identification and categorization of people.
- Management of critical infrastructure (water, energy, transport).
- Education and vocational training (for example, scoring exams).
- Employment and worker management (for example, screening CVs).
- Access to essential public and private services (such as credit scoring).
- Law enforcement, migration and border control.
- Administration of justice and democratic processes.
The regulation establishes specific requirements to ensure their safety and transparency in development and use. Specifically, developers and providers of high-risk AI systems are responsible for:
- Assessing risks: identifying potential dangers and minimizing them.
- Designing safety measures: implementing safeguards to prevent risks.
- Being transparent: explaining how the AI works and how it makes decisions.
- Monitoring performance: ensuring their systems continuously comply with the regulation.
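As a thought experiment, those four obligations could be tracked like a checklist. The class and field names below are invented for illustration; the Act imposes these duties through legal text and conformity assessments, not code.

```python
from dataclasses import dataclass

@dataclass
class HighRiskChecklist:
    """Toy checklist mirroring the four obligations above.

    All names are hypothetical; nothing here comes from the AI Act's
    actual wording or any official compliance tooling.
    """
    risks_assessed: bool = False        # dangers identified and minimized
    safeguards_designed: bool = False   # measures in place to prevent risks
    workings_documented: bool = False   # how the AI decides is explained
    monitoring_active: bool = False     # ongoing compliance is checked

    def all_obligations_met(self) -> bool:
        return all((self.risks_assessed, self.safeguards_designed,
                    self.workings_documented, self.monitoring_active))

checklist = HighRiskChecklist(risks_assessed=True)
print(checklist.all_obligations_met())  # False: three duties still open
```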
How does the AI Act affect tools like ChatGPT?
ChatGPT, probably the world's best-known AI tool, is a text-generation model. As such, it's considered a limited-risk system (the amber light in our traffic-light terms), and the main requirement it must meet is transparency. Because it interacts with people through a chat interface and generates text and image content (among other things), it must always make clear to users that they are talking to a machine.
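In practice, that transparency duty is simple to honor at the product level. Here's a minimal sketch, assuming a hypothetical `generate` function standing in for whatever model backend a chatbot uses; the Act prescribes the disclosure, not any particular API.

```python
DISCLOSURE = "Heads up: you are chatting with an AI system, not a human."

def respond(user_message: str, generate) -> str:
    """Prefix every reply with the AI disclosure.

    `generate` is a placeholder for any text-generation callable;
    the transparency duty concerns what the user sees, not the backend.
    """
    return f"{DISCLOSURE}\n\n{generate(user_message)}"

# Demo with a stub instead of a real model:
print(respond("Hello!", lambda msg: f"Echo: {msg}"))
```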
Law, technology and ethics
At its core, the AI Act represents the intersection of three disciplines that, I suspect, will be increasingly interconnected: law, technology and ethics. So it covers aspects like fairness (or fighting bias), privacy, confidentiality, transparency and explainability.
Let me close with a personal reflection: without being an expert in law, economics or computing, and from the perspective of a journalist who has read (a lot) to write this article, I think the AI Act is a step in the right direction. We have on our hands a tremendously powerful technology that is shaking up the way we work and the way we interact with machines, comparable, in my view, to the arrival of the internet.
So it's urgent for legislative bodies to start taking action to guarantee its fair and safe expansion. Yes, the proposals are still imperfect, but we have to start legislating now, before the AI industry gains even more advantage and slips out of our control.
By the way, if you want to keep learning more about artificial intelligence applied to marketing and communication, follow me on my LinkedIn profile and subscribe to my newsletter.





