
The EU's Artificial Intelligence Act upends the free-for-all era of AI

It’s week one for the new regulation, which could cost big tech companies billions if they don’t comply

Jon Keegan

For American tech companies building AI, the past few years have been an unregulated free-for-all, with companies rapidly developing and releasing their powerful new technologies, turning the public marketplace into a sort of petri dish. 

In the race to stuff these expensive new tools into every existing digital product, discussions about potential harms, transparency, and intellectual property have taken a back seat.

But on August 1, the first comprehensive law regulating AI took effect in the EU, and together with other EU regulations aimed at big tech, it is already affecting American companies operating in Europe.

Citing concerns with the EU’s GDPR privacy law, Meta recently announced that it will withhold its multimodal Llama model (which handles images, text, audio, and video) from the region “due to the unpredictable nature of the European regulatory environment,” a Meta spokesperson said in a statement to Sherwood News.

Apple is also playing hardball with European regulators, threatening to withhold its new “Apple Intelligence” features and citing the Digital Markets Act, an EU law that seeks to enforce fair competition among the largest tech platforms.

While the new AI regulation gives companies a lot of time to comply (most provisions won’t be enforced for another two years), it lays down regulatory concepts that may make their way to the US, either at the federal or, more likely, state level.

Risky business

Because AI is such a broad catch-all term, covering everything from decades-old machine learning algorithms to today’s state-of-the-art large language models used across countless applications, the EU law starts by classifying AI systems according to the risks they pose to people. Here’s how the categories break down:

Unacceptable risk — These systems are flat-out prohibited by the law. The category covers systems that have already been seen causing real harm to humans, such as social scoring systems and AI that manipulates people or exploits their vulnerabilities.

High risk — This is the category that faces the most regulation, and it’s the meat of the law. AI tools used by insurance companies, banks, government agencies, law enforcement, and healthcare providers to make consequential decisions affecting people’s lives are likely to fall into this category. Companies developing or using these “high risk” AI systems would face increased transparency requirements and would have to allow for human oversight. The category includes any systems that involve:

  • Biometric identification systems

  • Critical infrastructure, such as internet backbones, the electric grid, and water and energy systems

  • Education and vocational training, such as grading assessments, dropout prediction, admissions or placement decisions, and behavioral monitoring of students

  • Employment, such as systematically filtering resumes or job applications, and employee monitoring

  • Government and financial services, such as insurance pricing algorithms, credit scoring systems, public assistance eligibility, and emergency response systems like routing 911 calls or triaging emergency healthcare

  • Law enforcement systems, such as predictive policing, polygraphs, evidence analysis (like DNA tests used in trials), and criminal profiling

  • Immigration application processing, risk assessment or profiling

  • Democratic processes, such as judicial decision making, elections and voting

Limited risk — This applies to generative AI tools like the chatbots and image generators that you might have actually used recently, such as ChatGPT or Midjourney. These tools face two main requirements:

  • Disclosure. When people are using these tools, they need to be informed that they are indeed talking to an AI-powered chatbot

  • Labeling AI-generated content, so other computers (and humans) can detect whether a work was generated by AI. This faces some serious technical challenges, as it has proven difficult to detect AI-generated content automatically

Minimal risk — These systems are left unregulated and include some of the AI that has been part of our lives for a while, such as spam filters, or AI used in video games. 
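
For readers who think in code, here is a minimal, illustrative Python sketch of the four-tier scheme described above. The example systems and the one-line obligation summaries are our own shorthand for the categories, not text from the law itself:

    from enum import Enum

    # Illustrative only: a rough encoding of the AI Act's four risk tiers.
    # The obligation summaries paraphrase the article, not the statute.
    class RiskTier(Enum):
        UNACCEPTABLE = "banned outright"
        HIGH = "transparency requirements plus human oversight"
        LIMITED = "disclosure and labeling of AI-generated content"
        MINIMAL = "no new obligations"

    # Hypothetical example systems, mapped to tiers per the breakdown above
    EXAMPLES = {
        "social scoring system": RiskTier.UNACCEPTABLE,
        "resume-screening tool": RiskTier.HIGH,
        "customer-service chatbot": RiskTier.LIMITED,
        "email spam filter": RiskTier.MINIMAL,
    }

    for system, tier in EXAMPLES.items():
        print(f"{system}: {tier.name} -> {tier.value}")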

General purpose AI

Another key concept in the regulation is the definition of “general purpose AI” systems: AI models that have been trained on a wide variety of content and are meant to be useful for a broad assortment of applications. The biggest models out there today, such as OpenAI’s GPT-4 or Google’s Gemini, would fall under this category.

Builders of these models are required to comply with the EU’s copyright laws, share a summary of the content used to train the model, release technical documentation about how it was trained and evaluated, and provide documentation for anyone incorporating the models into their own “downstream” AI products.

The EU law actually lessens the restrictions for open-source models, a category that would include Meta’s new Llama 3.1, particularly when a release also includes the “weights” (the numerical parameters a model learns during training). Open models — or, to use a term preferred by FTC Chair Lina Khan, “open-weights models” — would only need to comply with EU copyright laws and publish a summary of the training content.

When asked about Meta’s plans to comply with the EU AI Act, its spokesperson said, “We welcome harmonized EU rules to ensure AI is developed and deployed responsibly. From early on we have supported the Commission’s risk-based, technology-neutral approach and championed the need for a framework which facilitates and encourages open AI models and openness more broadly. It is critical we don’t lose sight of AI's huge potential to foster European innovation and enable competition, and openness is key here. We look forward to working with the AI Office and the Commission as they begin the process of implementing these new rules.”

An OpenAI spokesperson directed us to a company blog post about the EU law, which noted:

“OpenAI is committed to complying with the EU AI Act and we will be working closely with the new EU AI Office as the law is implemented. In the coming months, we will continue to prepare technical documentation and other guidance for downstream providers and deployers of our GPAI models, while advancing the security and safety of the models we provide in the European market and beyond.”

Penalties

The fines for violating the EU AI law can be steep. Like the fines for the EU’s privacy law or Digital Markets Act, the penalties are tied to a company’s annual global revenue. Companies deploying prohibited “unacceptable risk” AI systems could face fines of up to €35,000,000 or 7% of annual global revenue, whichever is higher. That’s a little lower than the 10% penalty for violating the DMA, which Apple is discovering as the first target of that act, with a possible $38 billion fine, but an EU AI Act violation could still mean a nearly $27 billion hit.
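
To make the “whichever is higher” arithmetic concrete, here is a minimal Python sketch; the €350 billion revenue figure is a hypothetical round number, not any company’s reported turnover:

    # Illustrative only: the AI Act's cap for prohibited-AI violations is
    # the *higher* of a flat 35 million euros or 7% of annual global revenue.
    def max_ai_act_fine(annual_global_revenue_eur: float) -> float:
        FLAT_CAP_EUR = 35_000_000
        REVENUE_SHARE = 0.07
        return max(FLAT_CAP_EUR, REVENUE_SHARE * annual_global_revenue_eur)

    # A hypothetical company with 350 billion euros in annual global revenue
    print(f"{max_ai_act_fine(350e9) / 1e9:.1f}B euros")  # prints 24.5B euros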

Google and Apple did not respond to a request for comment as of press time.

Updated at 5:30 PM to include OpenAI’s response.

More Tech


Apple poaches Meta’s chief legal officer

Just a day after Meta announced that it had hired away Apple’s user interface design lead, Apple has announced that it’s poached Jennifer Newstead, Meta’s chief legal officer, to become Apple’s new general counsel. Kate Adams, Apple’s general counsel since 2017, will be retiring late next year.

Apple also announced the retirement of Lisa Jackson, vice president for Environment, Policy, and Social Initiatives, who will leave the company in late January 2026.

The flurry of high-level management changes at Apple happens amid fervent speculation that CEO Tim Cook may be retiring soon.


EU calls for bids to build “AI gigafactories” in 2026

The European Union wants to shore up its domestic AI infrastructure and reduce its dependence on American tech companies.

To further this goal, the bloc is planning on accepting bids to build EU-based “AI gigafactories,” according to a report from The Wall Street Journal.

EU Executive Vice-President for Tech Sovereignty, Security and Democracy Henna Virkkunen announced that bids would begin in January or February, per the report.

As the AI arms race heats up, countries are racing to secure their own sovereign AI infrastructure, including building their own AI models that reflect their culture and language and offer control over cloud computing resources.

Europe is lagging behind the US and Asia in AI infrastructure. But it may be hard for the EU to fully break free of American tech: unlike the US and China, Europe has no homegrown alternative for the powerful GPUs needed to train and run AI models. It’s very likely that any AI gigafactories in the EU will be filled with GPUs from Nvidia.


Google’s AI chip business could be a $900 billion boon for the company

Google may be sitting on a massive new business that it has yet to fully exploit.

Google’s custom tensor processing unit (TPU) AI chips have been getting a lot of attention recently, making the tech world wonder whether there are other ways to power the industry’s AI dreams than Nvidia’s GPUs.

Bloomberg spoke with analysts who estimate that, if it decides to sell its chips to others, Google could capture 20% of the AI market, making it a $900 billion business (which implies a total market of roughly $4.5 trillion). For comparison, Google Cloud pulled in $43.2 billion of revenue last year.

Even if Google just sticks with renting access to its TPUs, it will continue to drive down costs and increase margins as it ekes out performance improvements, such as the 30x improvement in power efficiency that the latest generation of TPUs has delivered for the company.


OpenAI’s Sam Altman has explored bringing his feud with Tesla’s Elon Musk to space

Billionaires, they’re just like us: they want to bring their terrestrial beefs to outer space.

OpenAI CEO Sam Altman has explored buying or partnering with a rocket company to compete with Tesla CEO Elon Musk’s SpaceX, The Wall Street Journal reports. The two billionaires have had numerous public feuds over the years that have played out in the courts and on social media. They also both lead AI companies that have insatiable needs for data centers and have publicly discussed building data centers in space.

Altman seems to think this could be more than science fiction. He reportedly reached out to rocket maker Stoke Space about making equity investments that could give him a controlling stake, though the talks are no longer active, the WSJ reports.

Or perhaps he just wanted a Sherwood bobblehead of himself.


Report: Meta to slash metaverse, VR spending by up to 30%

Four years after changing its name to reflect its focus on the loosely defined “metaverse,” Meta is planning deep cuts to the company’s money-losing virtual reality efforts, according to a report from Bloomberg.

Meta’s Reality Labs division, home to the teams working on metaverse products — which include Quest VR headsets, Horizon Worlds, and its Ray-Ban Meta glasses — has lost about $70 billion since the company started breaking out the unit in 2020.

The company has struggled to get consumers to buy into CEO Mark Zuckerberg’s vision of working and playing in virtual reality worlds, like the company’s Horizon Worlds platform.

Investors seem to love the news of the pivot, as shares shot up as much as 5% in early trading.

Meta’s recent hiring spree of AI superstars from competitors for its Meta Superintelligence Labs shows that the company’s attention is now all in on AI.

