AI News

Apple Tests Multi-Command Siri Powered by Gemini-Derived AI

Apple Intelligence features

Key Highlights

  • Apple is internally testing a Siri upgrade that handles multiple requests within a single natural-language prompt — for example, a chained instruction covering calendar, messaging, and search in one sentence — according to reporting from Bloomberg and Reuters today
  • The capability is built on Gemini-derived AI technology, representing Apple’s clearest acknowledgment yet that its in-house AI development requires external architecture to compete with modern assistants
  • The feature is targeting iOS 27, iPadOS 27, and macOS 27, expected later this year — no confirmed release date has been given

Apple is internally testing a significant Siri upgrade ahead of iOS 27. The feature allows Siri to process and execute multiple requests contained within a single natural-language prompt, something the industry refers to as chained commands, instead of requiring users to issue each instruction separately as Siri currently does.

Competing assistants have handled this capability for some time. Google Assistant and modern ChatGPT voice mode both process compound instructions without requiring the user to break them into sequential prompts. Apple’s own internal testing suggests the company is treating this gap as a priority ahead of a major OS release cycle.

The Gemini Connection

According to the report, the multi-command capability is built on Gemini-derived AI architecture, meaning Apple has incorporated elements of Google’s model design into the system powering the new Siri behaviour.

For perspective, Apple has historically built its on-device AI capabilities in-house, treating its Neural Engine and proprietary models as a competitive advantage. The reliance on Gemini-derived architecture for a core Siri feature is a notable departure from that posture, and the clearest public signal yet that Apple’s own AI development has not kept pace with where the assistant market has moved.

The arrangement also raises a question that Apple has not yet answered publicly: whether this represents a licensing agreement, a technical adaptation of open architecture, or something else entirely.

What Chained Commands Actually Change

The practical impact for users is meaningful. Current Siri requires a separate prompt for each task: “Set a reminder”, then “Send a message to”, then “Search for”. The tested upgrade would allow a single instruction like “Remind me at 6pm to call James, and find a good Italian restaurant near his office in Bangalore” to execute as one coherent action rather than three separate requests.
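To make the idea concrete, here is a minimal sketch of what “chained command” decomposition involves: splitting one compound prompt into clauses and tagging each with an intent. This is purely illustrative — the function name, the keyword table, and the conjunction-splitting approach are our assumptions, not Apple’s implementation, which would rely on a learned model rather than keyword matching.

```python
import re
from dataclasses import dataclass

@dataclass
class Intent:
    action: str   # e.g. "reminder", "message", "search"
    text: str     # the clause that produced it

# Rough keyword map for illustration only; a production assistant
# would use a trained intent classifier, not substring matching.
KEYWORDS = {
    "remind": "reminder",
    "message": "message",
    "text": "message",
    "find": "search",
    "search": "search",
}

def parse_chained_command(prompt: str) -> list[Intent]:
    """Split a compound prompt on conjunctions and tag each clause."""
    clauses = [
        c.strip()
        for c in re.split(r",?\s+(?:and|then)\s+", prompt, flags=re.IGNORECASE)
        if c.strip()
    ]
    intents = []
    for clause in clauses:
        action = "unknown"
        for keyword, label in KEYWORDS.items():
            if keyword in clause.lower():
                action = label
                break
        intents.append(Intent(action, clause))
    return intents

intents = parse_chained_command(
    "Remind me at 6pm to call James, and find a good "
    "Italian restaurant near his office in Bangalore"
)
# The article's example prompt decomposes into two intents:
# a "reminder" clause and a "search" clause, each executable separately.
```

The hard part — and the likely focus of Apple’s internal testing — is not the splitting itself but resolving references across clauses (“his office” refers back to James), which is where a single coherent model outperforms sequential prompts.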

That closes the gap between Siri and the assistants it is most directly competing with, including Gemini Assistant on Android and increasingly capable voice modes from ChatGPT and Claude. Whether the implementation is reliable enough to change user behaviour, rather than merely being technically capable, is the test that matters, and internal testing results have not been made public.


Wrapping Up

Apple is not building the most capable AI assistant. It is building an AI assistant capable enough that its users do not leave. Multi-command processing is the floor for that standard in 2026, not a differentiator. Whether iOS 27 delivers it reliably enough to close the perception gap with Android’s assistant ecosystem is the question that will be answered at WWDC, or left open if the feature slips.

Abhijay Singh Rawat
Abhijay is the News Editor at TimesofAI, who loves to follow the latest tech and AI trends. After office hours, you would find him either grinding competitive ranked games or trekking his way up the hills of Uttarakhand.