Google’s FunctionGemma is a specialized version of the Gemma 3 model trained for function calling: an edge-focused agent model that maps natural-language requests to structured API calls.
FunctionGemma is a text-only, 270M-parameter transformer built on Gemma 3 270M. It is released openly under the Gemma license and shares Gemma 3’s structure, but its chat format and training objectives target function calls rather than free-form dialogue.
The model is designed to be fine-tuned for specific function-calling tasks rather than to serve as a general-purpose chat assistant. Its primary job is to translate user instructions and tool definitions into structured function calls and then, if desired, to summarize tool responses back to the user.
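The two-phase flow described above (instruction → structured call → tool execution → summary) can be sketched as a minimal loop. Everything here is illustrative: `fake_model` stands in for real inference, and the function name, argument shapes, and tool behavior are assumptions, not FunctionGemma’s actual output format.

```python
import json

# Hypothetical stand-in for model inference: first pass emits a structured
# call, second pass (given a tool response) emits a natural-language summary.
def fake_model(prompt: str) -> str:
    if "tool_response" in prompt:
        return "Your lunch event is set for 12:00 tomorrow."
    return json.dumps({"name": "create_event",
                       "args": {"title": "Lunch", "time": "12:00"}})

def run_tool(call: dict) -> dict:
    # Stand-in for a real device API the app would invoke.
    return {"status": "ok", "event_id": 42}

# Phase 1: user instruction + tool definitions -> structured function call.
call = json.loads(fake_model("user: Create a calendar event for lunch tomorrow"))
# Phase 2 (optional): serialized tool response -> user-facing summary.
result = run_tool(call)
summary = fake_model(f"tool_response: {json.dumps(result)}")
print(call["name"], "->", summary)
```

The key design point is that the application, not the model, executes the tool; the model only produces text on both sides of that boundary.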
FunctionGemma can be viewed as an ordinary causal language model: both its inputs and outputs are text sequences, with a 32K-token context per request shared between the prompt and the generated output.
Architecture and Training Data
The model follows the Gemma 3 architecture at the same parameter scale, and was trained with the same infrastructure and research that underpin Gemini, including JAX and ML Pathways running on large TPU clusters.
FunctionGemma inherits Gemma’s 256K-entry vocabulary, which handles JSON text and structured data efficiently. Token-efficient function schemas keep tool definitions and responses compact, and the reduced sequence length suits edge deployments with limited memory.
The model is trained on 6T tokens with an August 2024 knowledge cutoff. The dataset falls into two categories:
- public tool and API definitions
- tool-use interactions, including prompts, function calls, tool responses, and natural-language messages that summarize or clarify outputs
This signal teaches both syntax (which function to call and how to format its arguments) and intent (when and why to call a particular function or ask for further information).
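The “syntax” half of that signal amounts to producing calls that conform to the declared tool schemas. A minimal sketch of such a check, with made-up tool names and a deliberately simplified schema format (not the schema language FunctionGemma actually trains on):

```python
import json

# Illustrative tool registry: required/optional argument names per function.
TOOLS = {
    "set_alarm": {"required": ["time"], "optional": ["label"]},
    "send_sms": {"required": ["to", "body"], "optional": []},
}

def call_is_valid(raw: str) -> bool:
    """Check that a generated call is parseable JSON, names a known
    function, and supplies exactly the allowed arguments."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return False
    schema = TOOLS.get(call.get("name"))
    if schema is None:
        return False
    args = set(call.get("args", {}))
    allowed = set(schema["required"]) | set(schema["optional"])
    return set(schema["required"]) <= args <= allowed

print(call_is_valid('{"name": "set_alarm", "args": {"time": "07:00"}}'))  # True
print(call_is_valid('{"name": "set_alarm", "args": {}}'))                 # False
```

A validator like this is also a common guardrail at inference time, since even a well-trained small model occasionally emits malformed calls.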
Control tokens and conversation formats
FunctionGemma uses a strict chat template: conversations follow a fixed structure that separates role turns (developer, user, and model) from tool regions. Within those turns, the model relies on dedicated pairs of control tokens that delimit tool definitions, function calls, and serialized tool outputs.
This allows the model to distinguish natural-language text from execution results. In practice, Hugging Face’s apply_chat_template API and the official Gemma templates generate this structure automatically from a message list and a tool list.
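A hand-rolled sketch of that templated structure is shown below. The delimiter strings are placeholders, not FunctionGemma’s real control tokens; in real use you would call `tokenizer.apply_chat_template(messages, tools=...)` and let it emit the official markup rather than building the prompt yourself.

```python
# PLACEHOLDER delimiters, purely to illustrate the regions of the prompt.
def build_prompt(tools: list, user_msg: str) -> str:
    parts = [
        "<TOOL_DEFS>", *tools, "</TOOL_DEFS>",   # tool-definition region
        "<turn:user>", user_msg, "</turn>",      # user turn
        "<turn:model>",                          # model continues from here
    ]
    return "\n".join(parts)

prompt = build_prompt(['{"name": "get_weather", "args": {"city": "string"}}'],
                      "What's the weather in Paris?")
print(prompt)
```

The point of the rigid layout is that the model has only ever seen tool definitions, calls, and responses inside their dedicated regions, so deviating from the template at inference time sharply degrades reliability.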
Mobile Actions: Performance Through Fine-Tuning
FunctionGemma comes pre-trained for generic tool use, but the official Mobile Actions guide and model card stress that a model this small reaches production-level reliability only after fine-tuning.
Mobile Actions is a dataset that exposes a small set of Android tools, such as creating a contact, setting a calendar event, controlling the flashlight, and displaying maps. Fine-tuned on it, FunctionGemma learns to map utterances like “Create a calendar event for lunch tomorrow” or “Turn on the flashlight” to those tools with structured arguments.
The base FunctionGemma model reaches 58 percent accuracy on the Mobile Actions test set; after fine-tuning with the recipe from the public cookbook, accuracy rises to 85 percent.
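Figures like 58% and 85% are typically computed as exact-match accuracy of predicted calls against gold calls on a held-out set. A minimal sketch (the example data below is invented, and the official benchmark may use a more lenient matching rule):

```python
import json

def exact_match(pred: str, gold: str) -> bool:
    """Compare calls as parsed JSON so key order and whitespace don't matter."""
    try:
        return json.loads(pred) == json.loads(gold)
    except json.JSONDecodeError:
        return False

def accuracy(preds: list, golds: list) -> float:
    hits = sum(exact_match(p, g) for p, g in zip(preds, golds))
    return hits / len(golds)

golds = ['{"name": "toggle_flashlight", "args": {"on": true}}'] * 2
preds = ['{"name": "toggle_flashlight", "args": {"on": true}}',
         '{"name": "toggle_flashlight", "args": {"on": false}}']
print(accuracy(preds, golds))  # 0.5
```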
Edge Agents and Reference Demos
FunctionGemma’s primary deployment targets are edge agents running on local devices such as phones, laptops, and accelerators like the NVIDIA Jetson Nano. The small 270M parameter count and quantization allow inference on consumer hardware with little memory.
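Back-of-envelope arithmetic shows why 270M parameters fit comfortably on such devices. The figures below cover weights only; real footprints also include the KV cache, activations, and runtime overhead.

```python
# Approximate weight memory for 270M parameters at common precisions.
PARAMS = 270e6

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    mb = PARAMS * bits / 8 / 1e6  # bits -> bytes -> megabytes
    print(f"{name}: ~{mb:.0f} MB")
# fp16: ~540 MB, int8: ~270 MB, int4: ~135 MB
```

Even at full fp16 the weights are well under a gigabyte, which is why phones and single-board computers are viable targets.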
The Google AI Edge Gallery offers a number of reference experiences:
- Mobile Actions: a fully offline on-device assistant built with FunctionGemma, fine-tuned on the Mobile Actions dataset before deployment.
- Tiny Garden: a voice-controlled game in which the model decomposes commands like “Plant sunflowers in the top row and water them” into domain-specific functions such as plant_seed and water_plots with explicit grid coordinates.
- Physics Playground: a demo that runs entirely in the browser via Transformers.js, letting users solve physics puzzles with natural-language instructions that the model converts into simulation actions.
Together, the demos show that, with the right fine-tuning and tool interfaces, a 270M-parameter function caller can support multi-step logic without any server calls.
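A toy dispatcher in the spirit of the Tiny Garden demo makes the multi-step pattern concrete: a sequence of structured calls, as the model might emit for “Plant sunflowers in the top row and water them,” is executed against a small grid. The function names mirror the article’s plant_seed and water_plots, but the argument shapes are assumptions.

```python
# 2x2 garden grid keyed by (row, col).
grid = {(r, c): {"seed": None, "watered": False} for r in range(2) for c in range(2)}

def plant_seed(row: int, col: int, seed: str) -> None:
    grid[(row, col)]["seed"] = seed

def water_plots(cells: list) -> None:
    for r, c in cells:
        grid[(r, c)]["watered"] = True

DISPATCH = {"plant_seed": plant_seed, "water_plots": water_plots}

# Hypothetical model output for the sunflower command, one call per step.
calls = [
    {"name": "plant_seed", "args": {"row": 0, "col": 0, "seed": "sunflower"}},
    {"name": "plant_seed", "args": {"row": 0, "col": 1, "seed": "sunflower"}},
    {"name": "water_plots", "args": {"cells": [[0, 0], [0, 1]]}},
]
for call in calls:
    DISPATCH[call["name"]](**call["args"])

print(grid[(0, 0)])  # {'seed': 'sunflower', 'watered': True}
```

The loop is the whole agent: the model plans, the dispatcher executes, and no network round-trip is involved.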
What you need to know
- FunctionGemma is a text-only, 270M-parameter variant of Gemma 3 trained specifically for function calling rather than open-ended chat, released openly under Gemma’s terms of use.
- The model retains the Gemma 3 transformer architecture with a 256K-token vocabulary, supports 32K tokens per request shared between input and output, and was trained on 6T tokens.
- FunctionGemma uses a rigid chat template with control tokens dedicated to function declarations, calls, and responses; production systems must reproduce this template exactly to get reliable tool use.
- Mobile Actions benchmark: accuracy improves from 58 percent with the base model to 85 percent with task-specific fine-tuning, showing that small function callers need domain adaptation more than prompt engineering.
- FunctionGemma runs on Jetson devices as well as phones and laptops, and is integrated with ecosystems such as Hugging Face, Vertex AI, and LM Studio, alongside the edge demos Mobile Actions, Tiny Garden, and Physics Playground.
Asif Razzaq is the CEO of Marktechpost Media Inc. A visionary engineer and entrepreneur dedicated to harnessing the potential of artificial intelligence for social good, his most recent venture is Marktechpost, a platform known for its technically sound yet accessible coverage of machine-learning and deep-learning news, drawing over two million monthly views.

