
Europe’s AI Act: The Regulation Tech Giants Fear the Most

  • Writer: Franco Fernandez
  • Nov 5
  • 4 min read

European context

The European Union has moved faster than anyone else to put rules around two of the most disruptive technologies of the decade: artificial intelligence and cryptoassets.


In 2024–2025, two landmark pieces of EU legislation took effect — the Artificial Intelligence Act (AI Act) and the Markets in Crypto-Assets Regulation (MiCA) — with a clear promise: protect users and give businesses legal certainty in an increasingly chaotic digital environment.


But that same ambition has triggered a fierce backlash. Big tech companies and startups warn that Europe may be trading innovation for control. Brussels is winning applause for its regulatory courage, but also drawing raised eyebrows from those who see a potential case of Europe shooting itself in the foot.


In this article, we look at how the AI Act and MiCA are reshaping the industry — and why many fear that Europe’s regulatory offensive could end up pushing away the very innovation it wants to attract.

[Cover image: the EU AI Act, the law regulating artificial intelligence and alarming Silicon Valley.]

The AI Act: Europe’s Bold AI Law That’s Making Silicon Valley Uneasy

In 2024, the European Union proudly passed the world’s first comprehensive law on artificial intelligence. The AI Act isn’t just about “bringing order” — it’s about protecting fundamental rights from intrusive uses of technology.


How it works

The regulation classifies AI systems according to the level of risk they pose. “Unacceptable” uses are flat-out banned: no mass biometric surveillance, no Chinese-style social scoring, no manipulative behavioral systems.


Next come high-risk applications — such as AI used in recruitment, healthcare, critical infrastructure, or public services. These can be deployed, but only under strict obligations: transparency, data quality, traceability, pre-market assessments, and above all, human oversight before launch.


The idea is simple: if you want to offer AI to 450 million Europeans, it must be safe, trustworthy, and aligned with EU values. That precautionary mindset — the opposite of the American “move fast and break things” — is exactly what’s setting off alarm bells in Silicon Valley.


A brewing backlash

Since its early drafts, the AI Act has faced fierce opposition. In May 2023, Sam Altman, CEO of OpenAI (the company behind ChatGPT), warned that if the law became too burdensome, he might withdraw services from Europe altogether.


He called the draft “over-regulatory” and claimed ChatGPT could vanish from the European market unless requirements were eased. Altman toured European capitals — meeting with leaders like Emmanuel Macron — to push for changes, even cancelling his Brussels stop in protest.

Members of the European Parliament were quick to respond. “We will not be blackmailed by American companies,” said MEP Kim van Sparrentak, adding that if OpenAI couldn’t meet basic transparency and safety standards, “its systems don’t belong in the European market.” Just months earlier, Italy had temporarily banned ChatGPT over GDPR breaches — a clear preview of Europe’s tough enforcement stance.


The tension rises

The rift between regulators and Silicon Valley widened as the law neared enforcement. By July 2025, one month before the obligations for general-purpose AI took effect, 45 major European companies — including Airbus, Siemens, Heineken, Lufthansa and AI startup Mistral — signed an open letter urging Brussels to “hit pause” on the AI Act.


They argued the rules were unclear, overlapping, and increasingly complex, warning that Europe risked suffocating its own innovation. The letter called for a two-year delay on the toughest provisions — both for high-risk systems (due in 2026) and for general-purpose AI models like GPT-4 (due August 2025) — until clearer technical guidance was available.


Brussels officially rejected a broad moratorium, keeping the August 2025 start date. But behind the scenes, officials admitted the industry might have a point: implementation standards weren’t ready, and timelines might need “flexibility.”


Voices from within Europe

Criticism hasn’t come only from companies. In his September 2024 report on European competitiveness, former Italian Prime Minister Mario Draghi urged the EU to pause the AI Act and even loosen certain privacy rules to avoid strangling growth. He warned that over-regulation — combined with slow investment — was already putting Europe behind the U.S. and China in the global AI race.


Draghi noted that generative AI models need vast amounts of data for training — something hard to achieve in Europe under GDPR. Since 2023, several national regulators have opened investigations into tools like ChatGPT under the data protection framework, creating what Draghi called “high levels of legal uncertainty for AI developers.” His recommendation: postpone parts of the Act “until we better understand its side effects,” and let Europe regain the speed, scale, and intensity needed to stay competitive.


“Regulate first, innovate later”

The EU’s “precaution first” philosophy divides opinion. Digital rights groups hail it as a moral victory — proof that Europe can lead ethically, banning dystopian practices and demanding accountability. But tech companies see a nightmare of bureaucracy, audits, and potential fines of up to €35 million or 7% of global turnover, warning that this could scare off investors and stall adoption.


The image of dozens of CEOs pleading for a regulatory timeout in 2025 captures a deeper anxiety: is Europe protecting its citizens at the cost of its own competitiveness?


The real test will come in 2026, when the toughest provisions take effect. Only then will we know whether the AI Act delivers a trustworthy European AI ecosystem — or drives innovation to more permissive shores.
