Apple’s next-generation operating systems will feature Project Greymatter, bringing a host of AI-related improvements. We have new details on AI features planned for Siri, Notes and Messages.
AI will enhance some core apps with summarization and transcription features
After widespread claims and reports about AI-related improvements in iOS 18, AppleInsider has received additional information about Apple’s plans in the field of AI.
People familiar with the matter have revealed that the company is internally testing a number of new AI-related features ahead of its annual WWDC. Known by the project codename “Greymatter,” the company’s AI enhancements will focus on practical benefits for the end user.
In pre-release versions of Apple’s operating systems, the company worked on a notification summary feature known as “Greymatter Catch Up”. The feature is linked to Siri, meaning users will be able to search for and get an overview of their recent notifications through the virtual assistant.
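To illustrate the kind of output a notification digest like "Greymatter Catch Up" might produce, here is a minimal sketch that groups recent notifications by app and summarizes them. All names and the grouping logic here are hypothetical, purely for illustration; nothing about Apple's actual implementation is known.

```python
from collections import defaultdict

def catch_up(notifications):
    """Build a short digest from (app, message) pairs.

    Hypothetical sketch of a notification "catch up" summary;
    not Apple's actual implementation.
    """
    grouped = defaultdict(list)
    for app, message in notifications:
        grouped[app].append(message)
    lines = []
    for app, messages in grouped.items():
        lines.append(f"{app}: {len(messages)} notification(s), latest: {messages[-1]}")
    return "\n".join(lines)

digest = catch_up([
    ("Messages", "Dinner at 7?"),
    ("Mail", "Q3 report attached"),
    ("Messages", "Running late"),
])
print(digest)
```

In practice, an LLM would presumably produce a natural-language summary rather than a per-app tally, but the grouping step conveys the idea.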
Siri is expected to get significantly upgraded response generation capabilities through a new intelligent response framework, as well as an on-device large language model (LLM). When generating answers and summaries, Siri will be able to take into account entities such as people and companies, calendar events, locations, dates, and more.
In our previous reports on Safari 18, Ajax LLM, and the updated Voice Memos app, AppleInsider revealed that Apple plans to introduce AI-powered text summarization and transcription into its built-in apps. We have since learned that the company intends to bring these features to Siri as well.
This means Siri will eventually be able to answer questions on-device, create summaries of long articles, or transcribe audio, as in the updated Notes and Voice Memos apps. All of this would be done through Ajax LLM, with cloud-based processing reserved for more complex tasks.
We’re also told that Apple has been testing improved and “more natural” voices, along with text-to-speech improvements, which should ultimately result in a significantly better user experience.
Apple has also been working on interactive media and TV controls for Siri. This feature would allow someone to, for example, use Siri on the Apple Watch to play music on another device, though the feature isn’t expected until later in 2024.
The company has decided to integrate artificial intelligence into some of its core system applications, with different use cases and tasks in mind. One obvious area of improvement concerns photo editing.
Apple has developed generative AI software for enhanced image editing
iOS 18 and macOS 15 are expected to bring AI photo editing options to apps such as Photos. Internally, Apple has developed a new Clean Up feature, which will allow users to remove objects from images through the use of generative AI software.
The Clean Up tool will replace Apple’s current Retouch tool
Also related to Project Greymatter, the company has created an app for internal use known as Generative Playground. People familiar with the app revealed exclusively to AppleInsider that it can use Apple's generative AI software to create and edit images, and that it features iMessage integration in the form of a dedicated app extension.
In Apple’s testing environments, it is possible to generate an image through artificial intelligence and then send it via iMessage. There are indications that the company has a similar feature planned for end users of its operating systems.
This information is in line with another report claiming that users will be able to use AI to generate unique emoji, though additional image generation features also appear to be planned.
According to people familiar with the matter, pre-release versions of Apple’s Notes app also contain references to a generation tool, though it’s unclear whether that tool will generate text or images — as is the case with the Generative Playground app.
Notes will receive AI-powered transcription and summarization, along with Math Notes
Apple has prepared significant improvements to its built-in Notes app, set to make its debut with iOS 18 and macOS 15. The updated Notes will get support for in-app audio recording, audio transcription, and LLM-powered summarization.
iOS 18’s Notes app will support in-app audio recording, transcription, and summarization
Audio recordings, transcriptions, and text-based summaries will be available within a note, alongside any other material users choose to add. This means that a single note can contain, for example, a recording of an entire lecture or meeting, complete with pictures and whiteboard text.
These features would turn Notes into a true powerhouse, making it the go-to app for students and business professionals. Adding audio transcription and summarization features will also allow Apple's Notes app to better position itself against rival offerings such as Microsoft's OneNote or Otter.ai.
While support for in-app audio recording, along with AI-powered transcription and summarization, will greatly improve the Notes app, those aren't the only things Apple has been working on.
Math Notes — create graphs and solve equations through the use of AI
The Notes app will get a brand new addition in the form of Math Notes, which will bring support for proper math notation and enable integration with Apple's new GreyParrot Calculator app. We now have additional details on what Math Notes will include.
iOS 18’s Notes app will introduce support for AI-assisted audio transcription and Math Notes
People familiar with the new feature have revealed that Math Notes will allow the app to recognize text in the form of math equations and provide solutions to them. Support for graphing expressions is also in the works, which means we could see something similar to the Grapher app on macOS, but inside Notes.
Apple is also working on improvements focused on math-related data, in the form of a feature known as “Keyboard Math Predictions.” AppleInsider was told that the function would allow math expressions to be completed whenever they are recognized as part of text input.
This means that, within Notes, users will get an option to automatically complete their math equations in a similar way to how Apple currently offers predictive text or inline completions on iOS — which are also expected to make their way into visionOS later this year.
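The described behavior can be sketched roughly as follows: when typed text ends in a simple arithmetic expression followed by "=", offer its computed result as an inline completion. This is purely an illustration of the concept (written in Python for brevity), not Apple's implementation, and the function names are hypothetical.

```python
import ast
import operator

# Operators supported by this toy evaluator.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def _eval(node):
    """Safely evaluate a parsed arithmetic expression."""
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def suggest_completion(text):
    """If `text` ends with '<expression> =', return the computed result."""
    if not text.rstrip().endswith("="):
        return None
    expr = text.rstrip()[:-1]
    # Keep only the trailing arithmetic run (digits, operators, spaces).
    allowed = set("0123456789+-*/(). ")
    tail = ""
    for ch in reversed(expr):
        if ch not in allowed:
            break
        tail = ch + tail
    tail = tail.strip()
    if not tail:
        return None
    try:
        return _eval(ast.parse(tail, mode="eval"))
    except (ValueError, SyntaxError):
        return None

print(suggest_completion("Total cost: 12 * 4 + 5 ="))  # suggests 53
```

A production version would of course handle far richer notation (fractions, variables, units), but the detect-then-evaluate flow is the core of the idea.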
Apple’s visionOS will also see improved integration with Apple’s Transformer LM – the predictive text model that provides suggestions as you type. The operating system is also expected to receive a redesigned voice command interface, an indication of how much Apple values these input-related improvements.
The company is also looking to improve user input through the use of so-called “smart replies,” which will be available in Messages, Mail, and Siri. This would allow users to reply to messages or emails with basic text-based responses generated instantly by Ajax LLM on the Apple device.
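As a rough stand-in for what LLM-generated smart replies do, here is a toy, rule-based sketch that suggests short responses to an incoming message. It is purely illustrative; the real feature would use Ajax LLM rather than hand-written rules, and every name here is hypothetical.

```python
def smart_replies(message):
    """Suggest short canned replies for an incoming message.

    A rule-based toy illustrating the idea of "smart replies";
    a real implementation would generate these with an LLM.
    """
    text = message.lower().rstrip()
    if text.endswith("?"):
        return ["Yes", "No", "Let me check and get back to you"]
    if "thanks" in text or "thank you" in text:
        return ["You're welcome!", "Anytime!"]
    return ["Sounds good", "Got it, thanks"]

print(smart_replies("Are you free for lunch tomorrow?"))
```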
Apple AI vs. Google Gemini and other third-party products
AI has reached almost every application and device, and AI-focused products such as OpenAI’s ChatGPT and Google Gemini have seen a significant increase in overall popularity.
Google Gemini is a popular AI tool
While Apple has developed its own AI software to better position itself against the competition, the company’s AI isn’t as impressive as something like Google Gemini Advanced, AppleInsider has learned.
During the annual Google I/O developer conference on May 14, Google showcased an interesting use case for artificial intelligence — where users could ask a question in video form and receive an AI-generated answer or suggestion.
As part of the event, Google’s AI was shown a video of a broken record player and asked why it wasn’t working. The software identified the model of the record player and suggested that it may not have been working because it was not properly balanced.
The company also announced Google Veo – software that can generate video through the use of artificial intelligence. OpenAI also has its own video generation model known as Sora.
Apple’s Project Greymatter and Ajax LLM can’t generate or process video, meaning the company’s software can’t answer complex video-based queries about consumer products. This is why Apple has sought licensing agreements with companies such as Google and OpenAI, which would make more features available to its user base.
Apple will compete with products like the Rabbit R1 by offering vertically integrated AI software on custom hardware
Compared to dedicated AI hardware products, such as the Humane AI Pin or the Rabbit R1, Apple’s AI projects have a significant advantage: they work on devices users already own. This means that users will not have to buy a special AI device to enjoy the benefits of artificial intelligence.
Humane’s AI Pin and the Rabbit R1 are also commonly regarded as unfinished or only partially functional products, and the latter was even revealed to be little more than a custom Android app.
Apple’s AI-related projects are expected to make their debut at the company’s annual WWDC on June 10 as part of iOS 18 and macOS 15. Updates to the Calendar, Freeform, and System Settings apps are also in the works.