Apple and European Union (Photo by Metin Aktas/Anadolu via Getty Images)
Risky business

The EU's Artificial Intelligence Act upends the free-for-all era of AI

It’s week one for the new regulation, which could cost big tech companies billions if they don’t comply

Jon Keegan

For American tech companies building AI, the past few years have been an unregulated free-for-all, with companies rapidly developing and releasing their powerful new technologies, turning the public marketplace into a sort of petri dish. 

In the race to stuff these expensive new tools into every existing digital product, discussions about potential harms, transparency, and intellectual property have taken a back seat.

But on August 1, the first comprehensive law regulating AI took effect in the EU, and, together with other EU regulations aimed at big tech, it is already affecting American companies operating in Europe.

Citing concerns with the EU’s GDPR privacy law, Meta recently announced that it will withhold its multimodal Llama model (which handles images, text, audio, and video) from the region, “due to the unpredictable nature of the European regulatory environment,” a Meta spokesperson said in a statement to Sherwood News.

Apple is also playing hardball with European regulators, threatening to withhold its new “Apple Intelligence” features and citing the Digital Markets Act, an EU law that seeks to enforce fair competition among the largest tech platforms.

While the new AI regulation gives companies a lot of time to comply (most provisions won’t be enforced for another two years), it lays down regulatory concepts that may make their way to the US, either at the federal or, more likely, the state level.

Risky business

Because “AI” is such a broad catch-all term, covering everything from decades-old machine-learning algorithms to today’s state-of-the-art large language models used in countless applications, the EU law starts by classifying AI systems according to the risks they pose to people. Here’s how the categories break down:

Unacceptable risk — These systems are flat-out prohibited under the law. The category covers systems that have already been seen causing real harm to people, such as government social scoring and AI designed to manipulate people into harmful behavior.

High risk — This is the category that faces the most regulation, and it’s the meat of the law. AI tools used by insurance companies, banks, government agencies, law enforcement, and healthcare providers to make consequential decisions affecting people’s lives are likely to fall into this category. Companies developing or using these “high risk” AI systems face increased transparency requirements and must allow for human oversight. This includes any systems that involve:

  • Biometric identification systems

  • Critical infrastructure, such as internet backbones, the electric grid, water systems and other energy infrastructure

  • Education and vocational training, such as grading assessments, dropout prediction, admissions or placement, and behavioral monitoring of students

  • Employment, such as systematically filtering resumes or job applications, and employee monitoring

  • Government and financial services, such as insurance pricing algorithms, credit scoring systems, public assistance eligibility and emergency response systems such as 911 calls or emergency healthcare

  • Law enforcement systems, such as predictive policing, polygraphs, evidence analysis such as DNA tests in trials and criminal profiling

  • Immigration, such as application processing, risk assessment, or profiling

  • Democratic processes, such as judicial decision making, elections and voting

Limited risk — This applies to generative AI tools like the chatbots and image generators you might have actually used recently, such as ChatGPT or Midjourney. These tools face two main requirements:

  • Disclosure. When people use these tools, they need to be informed that they are indeed talking to an AI-powered chatbot.

  • Labeling AI-generated content, so other computers (and humans) can detect whether a work was generated by AI. This faces some serious technical challenges, as it has proven difficult to detect AI-generated content automatically.

Minimal risk — These systems are left unregulated and include some of the AI that has been part of our lives for a while, such as spam filters or AI used in video games.

General purpose AI

Another key concept in the regulation is the definition of “general purpose AI” systems: AI models that have been trained on a wide variety of content and are meant to be useful for a broad assortment of applications. The biggest models out there today, such as OpenAI’s GPT-4 or Google’s Gemini, would fall under this category.

Builders of these models are required to comply with the EU’s copyright laws, share a summary of the content used to train the model, release technical documentation about how it was trained and evaluated, and provide documentation for anyone incorporating the models into their own “downstream” AI products.

The EU law actually eases the restrictions for open-source models, a group that includes Meta’s new Llama 3.1, particularly because the release also includes the model’s “weights” (the numerical parameters the model learns during training). Open models — or, to use a term preferred by FTC Chair Lina Khan, “open-weights models” — like this would only need to comply with EU copyright laws and publish a summary of the training content.

When asked about Meta’s plans to comply with the EU AI act, its spokesperson said, “We welcome harmonized EU rules to ensure AI is developed and deployed responsibly. From early on we have supported the Commission’s risk-based, technology-neutral approach and championed the need for a framework which facilitates and encourages open AI models and openness more broadly. It is critical we don’t lose sight of AI's huge potential to foster European innovation and enable competition, and openness is key here. We look forward to working with the AI Office and the Commission as they begin the process of implementing these new rules.” 

An OpenAI spokesperson directed us to a company blog post about the EU law, which noted:

“OpenAI is committed to complying with the EU AI Act and we will be working closely with the new EU AI Office as the law is implemented. In the coming months, we will continue to prepare technical documentation and other guidance for downstream providers and deployers of our GPAI models, while advancing the security and safety of the models we provide in the European market and beyond. ”

Penalties

The fines for violating the EU AI law can be steep. Like the fines under the EU’s privacy law and Digital Markets Act, they are tied to a company’s annual global revenue. Companies deploying prohibited “unacceptable risk” AI systems could face fines of up to €35,000,000 or 7% of annual global revenue, whichever is higher. That’s a little lower than the 10% cap for violating the DMA (Apple, the first target of that act, is finding out what that means as it faces a possible $38 billion fine), but for a company of Apple’s size, an AI Act violation could still mean a nearly $27 billion hit.
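The math behind those headline numbers is just the higher of a flat cap and a revenue percentage. A minimal sketch of that calculation (the revenue figure and function name are illustrative assumptions, and it ignores euro/dollar conversion):

```python
def max_fine(annual_global_revenue: float, flat_cap: float, pct_cap: float) -> float:
    """EU-style penalty: the higher of a flat amount or a share of global revenue."""
    return max(flat_cap, pct_cap * annual_global_revenue)

# Illustrative only: assumes roughly $383B in annual revenue for an Apple-sized company
revenue = 383e9

ai_act_fine = max_fine(revenue, 35e6, 0.07)  # 7% cap -> roughly $26.8 billion
dma_fine = max_fine(revenue, 0.0, 0.10)      # 10% cap -> roughly $38.3 billion

# For a smaller company, the flat cap dominates:
# 7% of $100M is only $7M, so the fine would be the flat €35M
small_fine = max_fine(100e6, 35e6, 0.07)
```

The “whichever is higher” structure means the flat amount only matters for companies small enough that the percentage falls below it; for big tech, the revenue share is always the binding number.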

Google and Apple did not respond to requests for comment as of press time.

Updated at 5:30 PM to include OpenAI’s response.


Morgan Stanley thinks Tesla’s Terafab could cost an additional $35 billion to $45 billion in capex

Tesla’s Terafab project, which CEO Elon Musk said could launch this week, is poised to be one of the company’s most expensive bets yet. The facility is intended to manufacture the chips needed for Tesla’s autonomous vehicles and humanoid robots, and to avoid supply bottlenecks.

If the company reaches its long-term goal of producing 100 million humanoid robots annually, it could require more than 200 million chips a year — over 50x its current demand, Morgan Stanley said.

The firm estimates total capital expenditure for the facility could reach $35 billion to $45 billion, including construction costs and roughly $20 billion to $25 billion for wafer fabrication equipment alone. That spending is not included in Tesla’s already sizable $20 billion capex budget for this year. Morgan Stanley’s semiconductor analysts described the effort as a “Herculean task,” noting the difficulty of building leading-edge chip capabilities from scratch.

While Tesla would likely spread the investment out over several years — even on an aggressive timeline, initial output would likely not arrive until the latter part of the decade — the effort would still weigh heavily on free cash flow and mark a shift toward a more capital-intensive business model.

Tesla’s most expensive factory to date, its Nevada battery plant that it began building in 2014, is estimated to have cost about $10 billion over time — a fraction of the expected Terafab cost.


Lyft and Uber jump after announcing expanded robotaxi partnerships with Nvidia

Uber and Lyft both announced expanded AI and autonomous vehicle partnerships with Nvidia at the company’s GTC event, sending both ride-hailing stocks up after-hours on Monday and into Tuesday’s premarket session.

Uber is currently up more than 2%, while Lyft has risen around 1.3%.

Uber said Nvidia-powered Level 4 robotaxis will launch on its platform in Los Angeles and San Francisco in 2027, with plans to scale to 28 cities globally by 2028. Meanwhile, Lyft said it will use Nvidia’s AI infrastructure to improve ride-matching, mapping, and efficiency, while also using Nvidia’s DRIVE Hyperion platform as a foundation for future autonomous fleets.

Separately, Nvidia announced expanded autonomous driving partnerships with Kia and Hyundai.

The announcements highlight Nvidia’s growing push to provide the AI hardware and software powering next-generation robotaxi networks — packaging the technology needed for self-driving cars into a platform that other companies can use to compete with Tesla.

15

Tesla’s Robotaxi program has disclosed its 15th accident, Electrek reports, citing the latest filing from the National Highway Traffic Safety Administration. According to Electrek’s estimation, extrapolated from the last time Tesla disclosed mileage figures, that amounts to a crash every 57,000 miles — about 9x the rate for humans.

The latest crash involved a Model Y hitting a fixed object at 9 mph in January while the autonomous system was engaged.

Humans are very much still involved in Tesla’s so-called autonomous driving service. Although Tesla announced in January that it had started removing safety monitors from the front seats, only two unsupervised vehicles have been spotted in the past month, per Robotaxi Tracker. The fleet has also dwindled from around 50 vehicles to just 35. Their mileage is unavailable.

Rani Molla

Meta’s reported 20% layoff could bring headcount to its lowest level since 2021

Meta stock is rising Monday morning after Reuters reported the tech giant is planning to lay off 20% of its employees in an effort to use AI to make its workforce more efficient and offset its surging AI capex costs.

On the company’s last earnings call, CEO Mark Zuckerberg touted 30% efficiency gains for its software engineers and said some “power users” of the company’s AI coding tools saw productivity jump as high as 80% — what some saw as a veiled threat to employees who failed to use AI to boost their output.

Meta’s headcount was nearly 79,000 last quarter, having steadily risen since its layoffs during the self-described “year of efficiency” in 2023. A 20% cut would bring headcount to around 63,000 — the company’s lowest level since 2021.

Shares were recently up 2.7%.



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Derivatives, LLC, or Robinhood Money, LLC. Futures and event contracts are offered through Robinhood Derivatives, LLC.