Apple will reportedly focus its first round of generative AI improvements on Siri’s conversational capabilities. Sources speaking with The New York Times say company executives realized early last year that ChatGPT was making Siri look outdated. The company is said to have decided that the Large Language Model (LLM) principles behind OpenAI’s chatbot could give the iPhone’s virtual assistant a much-needed boost. Apple will reportedly launch a new version of Siri powered by generative AI at its WWDC keynote on June 10.

Apple senior vice presidents Craig Federighi and John Giannandrea reportedly spent weeks testing ChatGPT before the company concluded that Siri seemed outdated. (I’d say the epiphany came about a decade late.) What followed is what the NYT described as “Apple’s most significant reorganization in more than a decade.”

The company sees generative AI as a once-in-a-decade field worth committing vast resources to. You may recall that Apple canceled its $10 billion “Apple Car” project earlier this year; many of those engineers have reportedly been reassigned to work on generative AI.

Apple executives are said to fear that AI models could eventually replace established software like iOS, making the iPhone a “dumb brick” by comparison. The awkward and generally unconvincing first wave of dedicated AI gadgets we reviewed, like the Humane AI Pin and Rabbit R1, isn’t good enough to pose a threat. But that could change as software evolves, other smartphone makers incorporate more AI into their operating systems, and other hardware makers have a chance to innovate.

So, at least for now, it looks like Apple isn’t releasing direct competitors to generative AI powerhouses like ChatGPT (words), Midjourney (images), or ElevenLabs (voices). Instead, it will start with a new Siri and updated iPhone models with expanded memory to better handle local processing. Additionally, the company is reportedly adding a text summarization feature to the Messages app.


Apple’s M4 chip (shown next to VP John Ternus) can help handle local Siri requests. (Apple)

If the NYT’s sources are correct, Apple’s first attempt at generative AI sounds like less of an immediate threat to creators than some imagined. At its iPad event in May, the company released a video featuring the new iPad Pro that showed various creative tools being crushed by a hydraulic press. The clip happened to serve as the perfect metaphor for the (legitimate) fears of artists, musicians, and other creators whose work AI models have been trained on — and who could be replaced by those same tools as they become more normalized for content creation.

On Thursday, Apple apologized for the ad and said it had canceled plans to air it on television.

Samsung and Google have already loaded their flagship phones with various generative AI features that go far beyond improving their virtual assistants. These include tools for photo editing, text generation, and transcription enhancement (among other things). Those features typically rely on cloud servers for processing, whereas Apple’s privacy-focused positioning suggests it will handle requests locally. So Apple appears set to start with a more streamlined approach that sticks to improving what’s already there, while keeping most or all of the processing on the device.

The New York Times’ sources add that Apple’s culture of internal secrecy and privacy-focused marketing have hindered its AI progress. Former Siri engineer John Burkey told the newspaper that the company’s tendency to limit the information its divisions share with each other is another major culprit in Siri’s inability to evolve far beyond what the assistant was when it launched — the day before Steve Jobs’ death in 2011.