Tim Cook says Apple Intelligence not completely spared from AI hallucinations

Rappler.com


TIM COOK. Apple CEO Tim Cook attends the annual developer conference event at the company's headquarters in Cupertino, California, USA, on June 10, 2024.

Carlos Barria/Reuters

Apple CEO Tim Cook, when asked about his confidence that Apple Intelligence would not hallucinate, admits it is 'short of 100%'

Apple CEO Tim Cook, when asked about his confidence in Apple Intelligence’s immunity to AI hallucinations, said it was “not 100 percent.”

In an interview with the Washington Post on Tuesday, June 11, Cook was asked, “What’s your confidence that Apple Intelligence will not hallucinate?”

Cook replied, “It’s not 100%. But I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we’re using it in. So I am confident it will be very high quality. But I’d say in all honesty that’s short of 100%. I would never claim that it’s 100%.”

Cook made this admission on the same day Apple held its 2024 Worldwide Developers Conference and announced its own AI model, Apple Intelligence.

Apple plans to integrate Apple Intelligence into its brand-new software features, with the goal of automating everyday tasks such as sorting messages, writing emails, or helping with creative pursuits.

Apple Intelligence will also enhance Apple’s digital assistant, Siri, allowing it to perform more actions than before.

AI hallucinations already plague today's AI models, and there are clear, well-documented examples.

CNET, in a report explaining what AI hallucinations are, said these occur when generative AI models answer questions they are unprepared for, leading the AI to give out false or misleading information. A study in JAMA Ophthalmology, meanwhile, found that 30% of the references ChatGPT provided for ophthalmic scientific abstracts were fake, demonstrating the tool's unreliability in academic research. Another report from Reuters told of New York lawyers being fined for submitting legal briefs containing verifiably inaccurate case citations generated by ChatGPT.

Given Apple’s partnership with OpenAI, more collaborations are expected in the future. While time will tell whether Apple Intelligence solves the hallucination issue, Cook is optimistic about the OpenAI collaboration and open to teaming up with other partners as well.

Said Cook in the Washington Post interview in reference to making OpenAI its first partner, “I think our customers want something with world knowledge some of the time. So we considered everything and everyone. And obviously we’re not stuck on one person forever or something. We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.” – Rav Ayag/Rappler.com
