Google is transforming Gemini into an agent capable of performing tasks directly within your apps.
Thu, 26 Feb 2026 at 02:30 PM


With a new update announced by Google, Gemini is taking on a new dimension on Android. Artificial intelligence is becoming a true agent capable of acting directly within applications. This evolution outlines the future of mobile assistance, where AI no longer simply responds, but executes.

This feature marks a further step in Google's strategy to make Gemini the heart of the Android experience. While the capability is currently in beta and limited to a handful of devices, it offers a very concrete glimpse of what everyday smartphone use could become…

An AI agent that takes control of applications

Gemini takes up space on Android – Source: Google

Integrated into Android, Gemini can now perform multi-step tasks on behalf of the user, such as ordering a ride-hailing service, booking a meal, or managing a family discussion before placing an order. The AI opens applications, navigates menus, enters the necessary information, and prepares the final action.

In practice, if a user requests an Uber to go home, Gemini launches the application, enters the destination, selects the vehicle type, and prepares the ride. The user then only needs to verify and confirm the payment. Importantly, the AI cannot confirm a transaction on its own: final validation always remains in human hands.

Technically, this automation relies on a secure "virtual window," where Gemini manipulates the graphical interfaces just as a human user would, right before your eyes. As with an AI browser such as ChatGPT Atlas, it's possible to follow each step in real time and interrupt the action at any moment. To reassure users, Google says the agent only has access to certain compatible applications and remains isolated from the rest of the smartphone's data.

Availability is still very limited: for the moment, only certain devices can test the new feature. The beta is available on the Google Pixel 10, Google Pixel 10 Pro, and Google Pixel 10 Pro XL, as well as on models in the Samsung Galaxy S26 range, where Perplexity will also be available.

Regarding the rollout, the United States and South Korea are the first to receive it, while France, like other countries, will have to wait.

At launch, compatible apps focus on food delivery and transportation services, such as Uber, DoorDash, and Grubhub, but Google promises to gradually expand this ecosystem.

In parallel, the company is rolling out other AI-powered updates, such as fraudulent call detection and an improved "Circle to Search" feature, now capable of identifying multiple elements displayed on the screen.

With these improvements, Gemini no longer simply assists, but actively takes action, provided that users agree to delegate their actions…
