OpenAI CEO Sam Altman talks with Apple senior Vice President of Services Eddy Cue during WWDC 2024
(Justin Sullivan/Getty Images)
Platformer

Will Apple Intelligence make apps obsolete?

Apple’s AI moment arrives — and raises lots of questions about what this means for the future of its ecosystem.

Casey Newton
6/11/24 6:58AM

CUPERTINO — By the time Apple got around to talking about artificial intelligence on Monday, it had already delivered a full keynote’s worth of announcements about improvements to come across its proliferating array of operating systems. Here was a novel math app that solved your handwritten equations in real time; there was a feature that would let you operate your iPhone from your Mac.

These are the sorts of welcome but modest improvements the company often shows off at its Worldwide Developers Conference: gently enhancing the user experience on its devices, but usually not offering anything that would send you running to the Apple Store to upgrade to a newer and more capable device.

Then, halfway through, the pre-recorded keynote arrived at the announcement Mark Gurman’s extraordinary reporting over the weekend had prepared us for: Apple Intelligence, a suite of operating system-level applications of AI that represents the company’s first major effort to integrate generative models into its product lineup.

It was a moment that had seemed inevitable since November 2022, when ChatGPT launched and catalyzed global interest in how AI can enhance products. In the 18 months since, impatient investors have worried that Apple might be letting the moment pass it by. Savvier observers have noted that this is how Apple has worked for decades now: approaching new technologies deliberately, and on its own time; developing its distinctive take on the product; and releasing it only when polished to the company’s quality standards.

Judging from the preview, Apple Intelligence was created in just this way. The company took time to develop principles around what AI should do on its devices. It landed on a suite of AI features for the operating system, designed to make its devices more valuable by leveraging the massive amount of personalized data they hold. (Sensitive to the implications of such an invasive technology, Apple also took pains to develop a more private approach to data processing for AI apps.)

The question now is how polished those features will feel at release. Will the new, more natural Siri deliver on its now 13-year-old promise of serving as a valuable digital assistant? Or will it quickly find itself in a Google-esque scenario where it’s telling anyone who asks to eat rocks?

It will be some time before we know. Journalists were not offered a chance to try any of the new features today, nor could we even ask questions of any of the executives. (Instead we were herded into the Steve Jobs Theater to watch the YouTuber iJustine lob carefully vetted softballs at Apple executives Craig Federighi and John Giannandrea for a half-hour.)

As a result, for now we can’t answer how well it works. And so the most interesting question available to discuss is more like: what is all of this pointing to?

During the keynote, Federighi — Apple’s senior vice president of software engineering — laid out the company’s principles for AI. It should be helpful; it should be easy to use; it should be integrated into products you’re already using; and it should be personalized based on what it knows about you.

Much of what Apple showed off today has long been available for free in other apps. (Perhaps that’s why, as MG Siegler noted today, the company’s stock was actually down about 2 percent after the event.) You’ll be able to automatically generate text almost anywhere you can type in the operating system, Federighi said, whether that be in Pages, Mail, or a third-party app. Similarly, you can use text-to-image tools to create custom emoji or generate DALL-E style images using a new app called Image Playground. (Notably, Image Playground will not generate photorealistic images, likely in an effort to prevent misuse.)

Here the pitch is less about innovation than it is about convenience. The present-day AI experience involves a lot of copying and pasting between apps; Apple Intelligence promises to do the work directly on your device and route the resulting data around the operating system for you.

But it’s in Federighi’s final principle — that AI should be personalized around what it knows about you — that Apple’s real advantage is apparent. It’s how the company distinguishes itself from (friendly) rivals like OpenAI or Anthropic, which at the moment offer you only a box to type into, and have limited memory of how you have used their chatbots. Apple can pull from your email, your messages, your contacts, and countless other surfaces throughout the operating system, and — in theory — can draw from them to help you more easily navigate the world.

Apple Intelligence also represents a chance to reboot Siri, its perpetually tin-eared and tone-deaf voice assistant. The company demonstrated Siri handling more difficult syntax than it did previously, and with a longer memory. It will also be able to control more parts of the operating system. 

Still, it was not nearly as impressive as what OpenAI showed off last month with its emotionally intelligent, low-latency (and hugely controversial) voice mode for ChatGPT. I was left wondering whether Apple might be open to a deeper partnership with OpenAI to improve Siri, or whether the company still hopes to catch up over time.

Speaking of OpenAI, that company did get some limited time on screen Monday. ChatGPT will be integrated into Siri, but somewhat halfheartedly: Siri will still endeavor to answer questions on its own, while routing only some queries to OpenAI. In the demo, Siri makes you tap to confirm that you are OK with it doing this — which might be sensible from a privacy standpoint, but feels deeply annoying as a user experience. (Why not just map your iPhone’s action button to ChatGPT’s voice assistant and bypass Siri altogether?)

In any case, while the Apple partnership clearly represented a win for OpenAI after a bruising few weeks, I was also struck by the degree to which Apple played down its significance during the keynote. Sam Altman did not appear in the keynote, though he was present at the event. And at the iJustine event, Federighi took the unusual step of saying that other models — including Google Gemini — would likely be coming to the operating system.

“We think ultimately people will have a preference for which models they want to use,” he said. 

The most vexing question about the new Siri, at least as it was shown, is not whether it works but what it can do. The company flashed a few screens of possibilities: add an address to a contact card; show certain very specific photos; “make this photo pop.” But how do you remember to do that in the moment? The invisible interface has always been the problem with voice assistants, and I wonder if Apple is doing enough to address it. (One employee said during the keynote that Siri can now answer thousands of questions about how to use Apple’s operating systems, so maybe that’s one way.)

There were also — if you squinted — hints of a much different future for computing. In the demo that I found most compelling, an employee asked Siri “how long will it take to get to the restaurant?” and the OS consulted email, text messages, and maps to derive the answer. I’ve written a few times lately about how AI (or at least Google’s version of it) has put the web into a state of managed decline; today’s keynote raised the question of whether AI will induce a similar senescence in the app economy.

It’s kind of a grim thought for a developer conference — which is perhaps why Apple did not dwell on it.


Casey Newton writes Platformer, a daily guide to understanding social networks and their relationships with the world. This piece was originally published on Platformer.


