OpenAI CEO Sam Altman talks with Apple senior Vice President of Services Eddy Cue during WWDC 2024
(Justin Sullivan/Getty Images)

Will Apple Intelligence make apps obsolete?

Apple’s AI moment arrives — and raises lots of questions about what this means for the future of its ecosystem.

Casey Newton

CUPERTINO — By the time Apple got around to talking about artificial intelligence on Monday, it had already delivered a full keynote’s worth of announcements about improvements to come across its proliferating array of operating systems. Here was a novel math app that solved your handwritten equations in real time; there was a feature that would let you operate your iPhone from your Mac.

These are the sort of welcome but modest improvements the company often shows off at its Worldwide Developers Conference: gently enhancing the user experience on your devices, but rarely offering anything that would send you running to the Apple Store to upgrade to a newer and more capable device.

Then, halfway through, the pre-recorded keynote arrived at the announcement Mark Gurman’s extraordinary reporting over the weekend had prepared us for: Apple Intelligence, a suite of operating system-level applications of AI that represents the company’s first major effort to integrate generative models into its product lineup.

It was a moment that has seemed inevitable since November 2022, when ChatGPT launched and catalyzed global interest in how AI can enhance products. In the 18 months since, impatient investors have worried that Apple might be letting the moment pass it by. Savvier observers have noted that this is how Apple has worked for decades now: approaching new technologies deliberately, and on its own time; developing its distinctive take on the product; and releasing it only when polished to the company’s quality standards. 

Judging from the preview, Apple Intelligence was created in just this way. The company took time to develop principles around what AI should do on its devices. It landed on a suite of AI features for the operating system, designed to make its devices more valuable by leveraging the massive amount of personalized data on your devices. (Sensitive to the implications of such an invasive technology, Apple also took pains to develop a more private approach to data processing for AI apps as well.)

The question now is how polished those features will feel at release. Will the new, more natural Siri deliver on its now 13-year-old promise of serving as a valuable digital assistant? Or will it quickly find itself in a Google-esque scenario where it’s telling anyone who asks to eat rocks?

It will be some time before we know. Journalists were not offered a chance to try any of the new features today, nor could we even ask questions of any of the executives. (Instead we were herded into the Steve Jobs Theater to watch the YouTuber iJustine lob carefully vetted softballs at Apple executives Craig Federighi and John Giannandrea for a half-hour.)

As a result, for now we can’t say how well any of it works. And so the most interesting question available to discuss is more like: what is all of this pointing to?

During the keynote, Federighi — Apple’s senior vice president of software engineering — laid out the company’s principles for AI. It should be helpful; it should be easy to use; it should be integrated into products you’re already using; and it should be personalized based on what it knows about you.

Much of what Apple showed off today has long been available for free in other apps. (Perhaps that’s why, as MG Siegler noted today, the company’s stock was actually down about 2 percent after the event.) You’ll be able to automatically generate text almost anywhere you can type in the operating system, Federighi said, whether that be in Pages, Mail, or a third-party app. Similarly, you can use text-to-image tools to create custom emoji or generate DALL-E style images using a new app called Image Playground. (Notably, Image Playground will not generate photorealistic images, likely in an effort to prevent misuse.)

Here the pitch is less about innovation than it is convenience. The present-day AI experience involves a lot of copying and pasting between apps; Apple Intelligence promises to do the work directly on your device and route the resulting data around the operating system for you.

But it’s in Federighi’s final principle — that AI should be personalized around what it knows about you — that Apple’s real advantage is apparent. It’s how the company distinguishes itself from (friendly) rivals like OpenAI or Anthropic, which at the moment offer you only a box to type into, and have limited memory of how you have used their chatbots. Apple can pull from your email, your messages, your contacts, and countless other surfaces throughout the operating system, and — in theory — can draw on them to help you more easily navigate the world.

Apple Intelligence also represents a chance to reboot Siri, its perpetually tone-deaf voice assistant. The company demonstrated Siri handling more difficult syntax than it could previously, and with a longer memory. It will also be able to control more parts of the operating system.

Still, it was not nearly as impressive as what OpenAI showed off last month with its emotionally intelligent, low-latency (and hugely controversial) voice mode for ChatGPT. I was left wondering whether Apple might be open to a deeper partnership with OpenAI to improve Siri, or whether the company still hopes to catch up over time.

Speaking of OpenAI, that company did get some limited time on screen Monday. ChatGPT will be integrated into Siri, but somewhat halfheartedly: Siri will still endeavor to answer questions on its own, while routing only some queries to OpenAI. In the demo, Siri makes you tap to confirm that you are OK with Siri doing this — which might be sensible from a privacy standpoint, but feels deeply annoying as a user experience. (Why not just map your iPhone’s action button to ChatGPT’s voice assistant and bypass it altogether?)

In any case, while the Apple partnership clearly represented a win for OpenAI after a bruising few weeks, I was also struck by the degree to which Apple played down its significance during the keynote. Sam Altman did not appear in the keynote, though he was present at the event. And at the iJustine event, Federighi took the unusual step of saying that other models — including Google Gemini — would likely be coming to the operating system.

“We think ultimately people will have a preference for which models they want to use,” he said. 

The most vexing part of the new Siri, at least as it was shown, is not whether it works but knowing what it can do. The company flashed a few screens of possibilities: add an address to a contact card; show certain very specific photos; “make this photo pop.” But how do you remember to do that in the moment? The invisible interface has always been the problem with voice assistants, and I wonder if Apple is doing enough to address it. (One employee said during the keynote that Siri can now answer thousands of questions about how to use Apple’s operating systems, so maybe that’s one way.)

There were also — if you squinted — hints of a much different future for computing. In the demo I found most compelling, an employee asked Siri “how long will it take to get to the restaurant,” and the OS consulted email, text messages, and maps to derive the answer. I’ve written a few times lately about how AI (or at least Google’s version of it) has put the web into a state of managed decline; today’s keynote raised the question of whether AI will induce a similar senescence in the app economy.

It’s kind of a grim thought for a developer conference — which is perhaps why Apple did not dwell on it.


Casey Newton writes Platformer, a daily guide to understanding social networks and their relationships with the world. This piece was originally published on Platformer.


Microsoft loses exclusive access to OpenAI’s models and tools while ending revenue-sharing deal with ChatGPT maker

Microsoft shares dropped as it announced a revised agreement with OpenAI.

The amended agreement ends revenue-sharing payments from Microsoft to OpenAI, and also ends Microsoft’s exclusive access to OpenAI’s intellectual property (i.e. models and products).

OpenAI’s revenue sharing with Microsoft will end in 2030, is subject to a total cap, and is no longer dependent on OpenAI achieving artificial general intelligence.

Amazon, a likely beneficiary of this lack of exclusivity, initially popped on the news but erased those gains.

This is a developing story.


China just blew up one of Meta’s key AI bets

China has ordered Meta to unwind its $2 billion acquisition of Manus, a Chinese startup (since relocated to Singapore) that makes AI agents and was central to Meta’s push to turn its massive AI investments into a real business. The move is part of the Chinese government’s effort to stop US firms from gaining access to Chinese talent and intellectual property, as Washington continues to restrict sales of advanced AI chips to Chinese companies.

Unlike its tech peers, which can sell AI through cloud services, Meta mainly uses AI to improve its existing ad business rather than as a stand-alone revenue driver. The decision strips away one of Meta’s clearest paths to monetizing AI — leaving it spending like a hyperscaler, without a hyperscaler business model.

Jon Keegan

DeepSeek releases new V4 series models highlighting efficiency and long context

Chinese AI lab DeepSeek has released a major new version of its eponymous open-source AI models that are nipping at the heels of leading frontier models in some areas.

Most significant, DeepSeek-V4 Pro and DeepSeek-V4 Flash both have a 1 million-token context window — the amount of information the model can actively work with in a single session — which is a crucial feature for complex, long-running coding tasks.

DeepSeek rebuilt how the models process information under the hood, making them substantially more efficient — and that efficiency is what makes the large context window actually usable.

Also, the new models’ coding skills have closed the gap with the major frontier models from Anthropic, OpenAI, and Google.

The authors of the model acknowledge some of V4’s shortcomings, such as its lower scores on reasoning benchmarks, saying that V4 “trails state-of-the-art frontier models by approximately 3 to 6 months.”

As open-weight models, the V4 releases can be run on a user’s own hardware, making them among the top-performing open-source models available. V4’s large context and token efficiency are especially significant among open-source models.

But as with earlier DeepSeek models, don’t ask it about Tiananmen Square.



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Derivatives, LLC, or Robinhood Money, LLC. Futures and event contracts are offered through Robinhood Derivatives, LLC.