
Google introduces A2UI: an open source protocol for agent driven interfaces

Tech | By Gavin Wallace | 22/12/2025 | 5 Mins Read

Google has released A2UI, an Agent-to-User Interface specification and set of libraries. It lets agents describe rich native interfaces using a declarative JSON syntax, while the client application renders them with its own components. The project addresses a very specific problem: how to present interactive, secure interfaces across trust boundaries without sending executable code.

What is A2UI

A2UI lets agents "speak UI". The agent does not output HTML, JavaScript, or other scripts. An A2UI payload is JSON that describes components, their properties, a data model, and so on. The client application reads this description and maps each component to a native widget, such as an Angular component, a Flutter widget, a web component, a React component, or a SwiftUI view.
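As a rough sketch of the client side of that mapping (the actual A2UI schema and renderer APIs differ; the component names, `props` structure, and `render` helpers here are hypothetical), a client could keep a registry of trusted component types and walk the declarative tree:

```python
# Hypothetical sketch of a client-side component registry.
# Real A2UI renderers differ; names and shapes here are illustrative only.

def render_card(props, children):
    return f"<card>{''.join(children)}</card>"

def render_text(props, children):
    return f"<text>{props.get('text', '')}</text>"

# The client controls this catalog; the agent can only reference these types.
CATALOG = {"Card": render_card, "Text": render_text}

def render(node):
    renderer = CATALOG.get(node["type"])
    if renderer is None:
        raise ValueError(f"Unknown component type: {node['type']}")
    children = [render(child) for child in node.get("children", [])]
    return renderer(node.get("props", {}), children)

# A declarative description, as the agent might send it.
ui = {"type": "Card", "children": [{"type": "Text", "props": {"text": "Hello"}}]}
print(render(ui))  # <card><text>Hello</text></card>
```

The point of the design is that the agent's JSON never becomes code: it is only data that selects from the client's own widget implementations.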

Why agents need to be able to communicate UI

Most chat agents reply with long text, which means many answers and turns for tasks like restaurant bookings or data entry. A2UI's launch post gives the example of a restaurant reservation: the user asks for a booking and is then asked several follow-up questions in plain text. A form with a date selector, time picker, and submit button would be better. A2UI lets the agent request that form as structured UI instead of narrating.
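To make the reservation example concrete, a hypothetical A2UI-style payload for that form might look like the following. The component types, property names, and flat `components`/`root` layout are illustrative assumptions, not the actual A2UI schema:

```python
import json

# Hypothetical declarative description of the reservation form.
# Component types and property names are illustrative only.
form = {
    "root": "form",
    "components": [
        {"id": "form", "type": "Column", "children": ["date", "time", "submit"]},
        {"id": "date", "type": "DatePicker", "props": {"label": "Date"}},
        {"id": "time", "type": "TimePicker", "props": {"label": "Time"}},
        {"id": "submit", "type": "Button", "props": {"label": "Book table"}},
    ],
}

# The agent would send this over the wire as JSON, not as code.
payload = json.dumps(form)
decoded = json.loads(payload)
print(decoded["root"])  # form
```

One structured message replaces several conversational turns: the client renders the form natively, and the user's answers come back as a single submission.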

A multi-agent mesh makes the problem harder. An orchestrator in one organisation can delegate work over A2A to a remote agent in another. The remote agent is not allowed to touch the Document Object Model of the host application; it can only send messages. In the past, that meant HTML or script inside an iframe, an approach that is cumbersome, visually inconsistent with the host site, and potentially risky. A2UI defines a format that is as safe as data, yet expressive enough to describe complex layouts.

Core Design: Secure by Structure, LLM-Compatible

A2UI’s focus is on safety, LLM-friendliness and portability.

  • Safety first. Unlike executable code, A2UI is a declarative data format. The client keeps a catalog of trusted components, such as Card, Button, and TextField, and the agent can only reference types from this catalog. Model output is never injected into the UI as code, so there is no risk of arbitrary scripts executing.
  • LLM-friendly representation. The UI is expressed as a flat list of components with identifiers, which makes it easier to generate or update interfaces in incremental steps and to stream updates. Agents can change part of a conversation's view without regenerating a full JSON tree.
  • Framework agnostic. A2UI payloads can be displayed by more than one client. The agent describes a component tree with an associated data model, and clients map this structure onto native widgets in frameworks such as Angular, Flutter, React, or SwiftUI. This allows agent logic to be reused across web, desktop, and mobile surfaces.
  • Progressive rendering. The format is designed for streaming, so clients can display partial interfaces while the agent is still computing. The interface appears in real time, and users do not have to wait for the complete response.
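The "safety first" point can be illustrated with a minimal allow-list check. This is a sketch under assumptions: the catalog contents and the payload shape are hypothetical, and a real A2UI validator would also check properties and data bindings:

```python
# Minimal sketch of catalog-based validation; illustrative only.
ALLOWED_TYPES = {"Card", "Button", "TextField"}  # client-controlled catalog

def validate(components):
    """Reject any payload that references a type outside the catalog."""
    for comp in components:
        if comp.get("type") not in ALLOWED_TYPES:
            raise ValueError(f"Rejected untrusted component type: {comp.get('type')!r}")
    return True

safe = [{"id": "b1", "type": "Button"}]
unsafe = [{"id": "x1", "type": "Script"}]  # e.g. an injection attempt

print(validate(safe))  # True
try:
    validate(unsafe)
except ValueError:
    print("blocked")  # blocked
```

Because the model's output can only select from types the client already trusts, a malicious or confused agent cannot smuggle executable content into the rendered interface.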

The Data Architecture

A2UI separates generation, transport, and rendering into a pipeline.

  1. An agent receives a message from a user via a chat window or another interface.
  2. The agent produces an A2UI response, generated with Gemini or any other model capable of emitting JSON. The response describes components, layout, and data bindings.
  3. The A2UI messages stream to the client over a transport such as the Agent2Agent (A2A) protocol or the AG-UI protocol.
  4. The client consumes the A2UI payload. The renderer parses it and resolves each component type against the host codebase.
  5. The agent receives events representing user actions, such as button clicks and form submissions, and can respond with a new A2UI message that updates the current interface.
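Steps 4 and 5 above can be sketched as a small client-side loop. The message shapes are hypothetical, not the actual A2UI wire format, but they show the flat-list, upsert-by-id update model and the event round trip:

```python
# Hypothetical sketch of the client-side update loop.
# Message shapes are illustrative, not the actual A2UI wire format.
components = {}  # flat map: component id -> component description

def apply_update(message):
    """Merge an incremental A2UI-style update into the current view state."""
    for comp in message["components"]:
        components[comp["id"]] = comp  # upsert by id; no full-tree regeneration

def user_event(component_id, action):
    """Shape of an event the client would send back to the agent."""
    return {"type": "event", "componentId": component_id, "action": action}

# Agent sends an initial button, then later updates just that one component.
apply_update({"components": [{"id": "btn", "type": "Button", "props": {"label": "Book"}}]})
apply_update({"components": [{"id": "btn", "type": "Button", "props": {"label": "Booked!"}}]})

print(components["btn"]["props"]["label"])  # Booked!
print(user_event("btn", "click"))
```

The upsert-by-id model is what makes streaming and incremental refinement cheap: the agent re-emits only the components that changed, and the client patches its flat map in place.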

Key Takeaways

  • A2UI is an open standard and library set from Google that lets agents "speak UI" by sending a declarative JSON specification for interfaces, while clients render them using native components in frameworks such as Angular, Flutter, or Lit.
  • The specification is security-focused, treating UI as data rather than code. Agents can only reference a component catalog controlled by the client, which reduces UI-injection risk and prevents arbitrary scripts from executing directly from model output.
  • Its incremental format lets agents refine the user interface throughout a session, and it supports the streaming and incremental updates that suit LLM generation.
  • A2UI has no transport requirements and can be used with A2A and AG-UI. This lets an orchestrator agent and sub-agents send UIs across trust boundaries, while the host application retains control of branding and layout.
  • The project is in an early public-preview stage, released under the Apache 2.0 license at version v0.8, with reference renderers, quickstart samples, and production integrations with projects such as Gemini Enterprise, Flutter GenUI, and Opal.

Take a look at the GitHub repo for the code and further technical details.


