Tech
OpenAI CEO Sam Altman talks with Apple senior vice president of services Eddy Cue during WWDC 2024
(Justin Sullivan/Getty Images)
Platformer

Will Apple Intelligence make apps obsolete?

Apple’s AI moment arrives — and raises lots of questions about what this means for the future of its ecosystem.

Casey Newton

CUPERTINO — By the time Apple got around to talking about artificial intelligence on Monday, it had already delivered a full keynote’s worth of announcements about improvements to come across its proliferating array of operating systems. Here was a novel math app that solved your handwritten equations in real time; there was a feature that would let you operate your iPhone from your Mac.

These are the sort of welcome but modest improvements the company often shows off at its Worldwide Developer Conference: gently enhancing the user experience on their devices, but usually not offering anything that would send you running to the Apple Store to upgrade to a newer and more capable device.

Then, halfway through, the pre-recorded keynote arrived at the announcement Mark Gurman’s extraordinary reporting over the weekend had prepared us for: Apple Intelligence, a suite of operating system-level applications of AI that represent the company’s first major effort to integrate generative models into its product lineup. 

It was a moment that has seemed inevitable since November 2022, when ChatGPT launched and catalyzed global interest in how AI can enhance products. In the 18 months since, impatient investors have worried that Apple might be letting the moment pass it by. Savvier observers have noted that this is how Apple has worked for decades now: approaching new technologies deliberately, and on its own time; developing its distinctive take on the product; and releasing it only when polished to the company’s quality standards. 

Judging from the preview, Apple Intelligence was created in just this way. The company took time to develop principles around what AI should do on its devices. It landed on a suite of AI features for the operating system, designed to make its devices more valuable by leveraging the massive amount of personalized data on your devices. (Sensitive to the implications of such an invasive technology, Apple also took pains to develop a more private approach to data processing for AI apps as well.)

The question now is how polished those features will feel at release. Will the new, more natural Siri deliver on its now 13-year-old promise of serving as a valuable digital assistant? Or will it quickly find itself in a Google-esque scenario where it’s telling anyone who asks to eat rocks?

It will be some time before we know. Journalists were not offered a chance to try any of the new features today, nor could we even ask questions of any of the executives. (Instead we were herded into the Steve Jobs Theater to watch the YouTuber iJustine lob carefully vetted softballs at Apple executives Craig Federighi and John Giannandrea for a half-hour.)

As a result, for now we can’t answer how well it works. And so the most interesting question available to discuss is more like: what is all of this pointing to?

During the keynote, Federighi — Apple’s senior vice president of software engineering — laid out the company’s principles for AI. It should be helpful; it should be easy to use; it should be integrated into products you’re already using; and it should be personalized based on what it knows about you.

Much of what Apple showed off today has long been available for free in other apps. (Perhaps that’s why, as MG Siegler noted today, the company’s stock was actually down about 2 percent after the event.) You’ll be able to automatically generate text almost anywhere you can type in the operating system, Federighi said, whether that be in Pages, Mail, or a third-party app. Similarly, you can use text-to-image tools to create custom emoji or generate DALL-E style images using a new app called Image Playground. (Notably, Image Playground will not generate photorealistic images, likely in an effort to prevent misuse.)

Here the pitch is less about innovation than it is convenience. The present-day AI experience involves a lot of copying and pasting between apps; Apple Intelligence promises to do the work directly on your device and route the resulting data around the operating system for you.

But it’s in Federighi’s final principle — that AI should be personalized around what it knows about you — that Apple’s real advantage is apparent. It’s how the company distinguishes itself from (friendly) rivals like OpenAI or Anthropic, which at the moment offer you only a box to type into, and have limited memory of how you have used their chatbots. Apple can pull from your email, your messages, your contacts, and countless other surfaces throughout the operating system, and — in theory — can draw from them to help you more easily navigate the world. 

Apple Intelligence also represents a chance to reboot Siri, its perpetually tin-eared and tone-deaf voice assistant. The company demonstrated Siri handling more difficult syntax than it did previously, and with a longer memory. It will also be able to control more parts of the operating system. 

Still, it was not nearly as impressive as what OpenAI showed off last month with its emotionally intelligent, low-latency (and hugely controversial) voice mode for ChatGPT. I was left wondering whether Apple might be open to a deeper partnership with OpenAI to improve Siri, or whether the company still hopes to catch up over time.

Speaking of OpenAI, that company did get some limited time on screen Monday. ChatGPT will be integrated into Siri, but somewhat halfheartedly: Siri will still endeavor to answer questions on its own, while routing only some queries to OpenAI. In the demo, Siri makes you tap to confirm that you are OK with it doing this — which might be sensible from a privacy standpoint, but feels deeply annoying as a user experience. (Why not just map your iPhone’s action button to ChatGPT’s voice assistant and bypass Siri altogether?)

In any case, while the Apple partnership clearly represented a win for OpenAI after a bruising few weeks, I was also struck by the degree to which Apple played down its significance during the keynote. Sam Altman did not appear in the keynote, though he was present at the event. And at the iJustine event, Federighi took the unusual step of saying that other models — including Google Gemini — would likely be coming to the operating system.

“We think ultimately people will have a preference for which models they want to use,” he said. 

The most vexing part of the new Siri, at least as it was shown, is not whether it works but how you discover what it can do. The company flashed a few screens of possibilities: add an address to a contact card; show certain very specific photos; “make this photo pop.” But how do you remember to do that in the moment? The invisible interface has always been the problem with voice assistants, and I wonder if Apple is doing enough to address it. (One employee said during the keynote that Siri can now answer thousands of questions about how to use Apple’s operating systems, so maybe that’s one way.) 

There were also — if you squinted — hints of a much different future for computing. In the demo that I found most compelling, an employee asked Siri “how long will it take to get to the restaurant” and the OS consulted email, text messages, and maps to derive the answer. I’ve written a few times lately about how AI (or at least Google’s version of it) has put the web into a state of managed decline; today’s keynote raised the question of whether AI will induce a similar senescence in the app economy. 

It’s kind of a grim thought for a developer conference — which is perhaps why Apple did not dwell on it.


Casey Newton writes Platformer, a daily guide to understanding social networks and their relationships with the world. This piece was originally published on Platformer.

More Tech


OpenAI races to release updated ChatGPT in response to Gemini, the WSJ reports

OpenAI could release an updated GPT-5.2 as soon as this week as it races to respond to Google’s Gemini 3 chatbot. Last week, OpenAI CEO Sam Altman declared a “code red” in response to the threat, as Google appeared to leap to the front of the pack with its high-scoring AI model.

Altman has directed OpenAI teams to pause work on its quest for AGI and the Sora 2 video generation app, and double down on its core flagship product, ChatGPT, as it faces new pressure from competitors, The Wall Street Journal reports.

Altman seems to be panicking that if the company’s core product falls out of favor, it may not be able to generate the cash needed to pay for the $1.4 trillion worth of deals it has signed, according to the report.

900M

OpenAI’s ChatGPT is nearing 900 million weekly active users, according to a new report from The Information, up from 800 million in October. The Microsoft-backed chatbot has notably more users than Google’s Gemini, which as of its last earnings call had 650 million monthly active users. (ChatGPT’s monthly number is likely much higher than its weekly stats.)

Still, The Information notes that thanks to the success of Google’s latest AI model, Gemini app downloads have jumped and visits to its website are growing much more swiftly than those to ChatGPT’s — stats that have contributed to a “code red” situation at OpenAI.


Falling behind its rivals and facing internal tension, Meta reportedly preps new “Avocado” AI model

2025 turned out to be quite a chaotic year for Meta’s big AI dreams.

This year was supposed to be all about Llama 4, Meta’s open-source AI model that never fully launched.

In a frenzied pivot to get back in the race, Mark Zuckerberg undertook an AI all-star hiring spree for his new Meta Superintelligence Labs, spending oodles of cash on NBA-sized pay packages to lure recruits.

So how’s it all going within the company? Things aren’t exactly humming along, according to CNBC.

It reports that the new team has encountered friction within Meta’s corporate structure. The cloistered team is apparently working on a new frontier AI model code-named “Avocado,” which, despite Mark Zuckerberg’s passionate open-source AI manifesto, might turn out to be a proprietary, closed-source model.

Per the report, many in the company expected Avocado to be released before the end of this year, but it’s now planned for Q1 2026.

A Meta spokesperson said, “Our model training efforts are going according to plan and have had no meaningful timing changes.”

Updated to include comments from Meta


Microsoft invests tens of billions for AI infrastructure in India and Canada

In two separate announcements this morning, Microsoft committed to investing tens of billions on AI infrastructure in India and Canada. In India, the company said it will invest $17.5 billion — its largest-ever investment in Asia — from 2026 to 2029, “to advance the country’s cloud and artificial intelligence (AI) infrastructure, skilling, and ongoing operations.”

Microsoft also said it’s adding to its investments in Canada for a total of CA$19 billion (roughly $13.73 billion) between 2023 and 2027 to “build new digital and AI infrastructure needed for the nation’s growth and prosperity.” This includes more than CA$7.5 billion in outlays over the next two years.

Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.