The new EU AI Act: Empire strikes back!
Posted: Thu 8th Aug 2024
We've all heard those iconic phrases: 'Taking back control', 'Vote Leave', 'It's time for real change', 'Oven-ready Brexit deal' and the classic 'Get Brexit done'.
We lived through an era so divisive it made the Hatfields and McCoys look like a garden party. Families disagreed, friends bickered and social media turned into a battleground. But through it all, here in the UK, we managed to disagree without falling out completely. We're a resilient bunch, aren't we? Let's agree to disagree, as they say!
The UK, our beloved rebel nation, has always prided itself on marching to the beat of its own drum. We're the ones who left the empire (well, Union) after all. We've been trained to think Federation is bad and freedom is good! Rule of law is boo and free trade is great! Thinking for ourselves, away from Europe, is what makes us special.
Now, let's talk about 'Taking back control'. The new EU AI Act came into force on 1 August. And what does taking back control mean in this context?
Well, it means we can happily ignore it for services within the UK. But for those of us who have business dealings with our European friends (remember, we're not part of the club anymore), we need to pay attention. The Empire (oh, sorry, the Union) wants to create the right conditions for AI development to benefit Europeans. And let's not forget, we've quit, so don't ask about UK interests.
Lucky for us, our brilliant contact Oliver Patel has taken on the Herculean task of digesting the EU AI Act's more than 90,000 words and creating a concise summary for everyone. It's been read 1.5 million times, so clearly it's a hit!
Here’s the EU AI Act cheat sheet:
And for your convenience, here are the parts broken down:
Part 1: Definitions, scope and applicability: https://lnkd.in/eeE7MG2J
Part 2: Prohibited AI systems: https://lnkd.in/efyvzChx
Part 3: High-risk AI systems: https://lnkd.in/eaM4Xua4
Part 4: Requirements for providers: https://lnkd.in/ePMiVtUM
Part 5: Requirements for deployers: https://lnkd.in/eYfyhiCZ
Part 6: General-purpose AI models: https://lnkd.in/dRPHRe5p
Part 7: Compliance and conformity assessment: https://lnkd.in/e3YS3-pg
Part 8: Governance and enforcement: https://lnkd.in/egHvxmPv
To give you more context, let’s go over each of the parts in a bit more detail:
Part 1: Definitions, scope and applicability
Understanding the definitions and scope is crucial. This part covers what exactly falls under the AI Act's purview. From machine learning algorithms to expert systems, it's essential to know if your AI application is affected. Make sure you get the nuances right to avoid any unwelcome surprises.
Part 2: Prohibited AI systems
This is where the fun begins. These are the big no-nos. Using AI to influence behaviour subliminally and exploiting people's vulnerabilities are just a couple of the prohibited practices. It's worth a detailed read to ensure your innovations are on the right side of the law.
Let’s delve a bit into the prohibited AI systems.
Breaching these provisions carries the largest potential fines under the AI Act: up to 7% of your total worldwide annual turnover, or €35 million, whichever is higher. Yes, you read that right, it's enough to send any company spiralling into the abyss. Ignorance, in this case, could be a very costly mistake.
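To put that ceiling into perspective, here's a minimal, purely illustrative sketch in Python. The function name and the turnover figure are made up for illustration; it simply applies the "7% or €35 million, whichever is higher" cap described above.

```python
# Illustrative sketch only: the AI Act caps fines for prohibited-practice
# breaches at 7% of total worldwide annual turnover or EUR 35 million,
# whichever is higher. The turnover figure below is hypothetical.

def max_prohibited_practice_fine(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for breaching the prohibited-AI provisions."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

if __name__ == "__main__":
    example_turnover = 2_000_000_000.0  # hypothetical EUR 2bn global turnover
    fine_cap = max_prohibited_practice_fine(example_turnover)
    print(f"Maximum fine: EUR {fine_cap:,.0f}")  # Maximum fine: EUR 140,000,000
```

For a smaller business turning over, say, €100 million, 7% would only be €7 million, so the €35 million figure would apply instead. Either way, not pocket change.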
You might be wondering, what are some examples of prohibited AI systems? Well, here are a few:
Causing harm by deploying subliminal or deceptive techniques: Think of the subtle behavioural nudges we've come to expect from our friends in the US at Facebook or Google
Causing harm by exploiting vulnerabilities: Imagine AI-enabled services targeting vulnerable groups, like AI agents calling disabled clients to push them into buying certain things
Social credit scoring systems: Forget about those Black Mirror episodes, recycling points and social credit scores are out
Predictive policing: Sorry Minority Report fans, pre-crime predictions are a no-go. Catching someone in your driveway before they steal your car? Not happening
Emotion recognition in the workplace and education: AI mental health surveys after a bad performance management season? Nope
Creating facial recognition databases via untargeted scraping: How many cameras are scanning our faces right now? A lot fewer, thanks to this rule
Biometric categorisation to infer protected characteristics: AI systems classifying race, political opinions, religious beliefs, sex, sexual orientation, etc, are off-limits. HR AI systems will need a serious overhaul
Part 3: High-risk AI systems
High-risk systems are those that can significantly impact safety or fundamental rights. If your AI is used in critical sectors like healthcare, transport, or law enforcement, pay extra attention here. The compliance requirements are stringent but necessary.
Part 4: Requirements for providers
Providers of AI systems must adhere to a set of requirements to ensure their products are safe and compliant. This includes proper documentation, risk management and quality assurance processes. It’s like doing your homework before presenting it to the teacher.
Part 5: Requirements for deployers
Deployers, those who actually use the AI systems, have their own set of rules to follow. This part ensures that the deployment of AI systems is done responsibly, with an emphasis on transparency and accountability.
Part 6: General-purpose AI models
General-purpose AI models, those not designed for a specific task, have unique considerations. This part addresses the challenges in ensuring such versatile systems comply with the regulations.
Part 7: Compliance and conformity assessment
This is where the rubber meets the road. Compliance and conformity assessments are crucial to ensure your AI systems meet all the necessary standards. Think of it as your final exam before graduation.
Part 8: Governance and enforcement
Finally, governance and enforcement cover how these rules will be implemented and monitored. This includes the roles of various regulatory bodies and the penalties for non-compliance.
Final thoughts
So, there you have it. A whirlwind tour of the EU AI Act. It's a lot to digest, but it's crucial for staying ahead in the AI game. Remember, compliance is not just about avoiding fines — it's about building trust and ensuring your innovations are both safe and ethical.
Finally, if you haven't read the EU AI Act yet, don't plan to, or weren't even aware of it, you really should take a look. Compliance is key and, as they say, may the force of compliance and enforcement be with you.