FunctionGemma (270M)

Model Description

FunctionGemma is a lightweight, specialized language model from Google built for function calling (tool use).
It is based on the Gemma 3 270M foundation model and fine-tuned to reliably translate natural language instructions into structured tool calls using a dedicated formatting and control-token scheme.

FunctionGemma is designed as a strong base for building agentic systems and is often best used in a tool-calling loop (define tools → generate tool call → execute tool → feed result back).
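
A minimal sketch of that loop with Hugging Face Transformers is shown below. It assumes a recent Transformers release whose apply_chat_template accepts a tools argument; the repository ID and the tool-call parser are placeholders, since the exact call format follows FunctionGemma's own chat template and control tokens.

```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/functiongemma-270m"  # placeholder repo ID; substitute the actual one

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

# 1. Define tools. Recent Transformers releases can derive a JSON-schema tool
#    spec from a typed, documented Python function passed via `tools=`.
def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return json.dumps({"city": city, "temperature_c": 21, "condition": "sunny"})

messages = [{"role": "user", "content": "What's the weather in Tokyo right now?"}]

# 2. Generate a tool call.
input_ids = tokenizer.apply_chat_template(
    messages, tools=[get_weather], add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
raw_call = tokenizer.decode(output_ids[0][input_ids.shape[-1]:])

# 3. Parse the call, execute the tool, and feed the result back.
def parse_tool_call(text: str) -> dict:
    # Placeholder parser: the real output uses FunctionGemma's control tokens,
    # so parsing should mirror its chat template rather than this naive json.loads.
    return json.loads(text.strip())

call = parse_tool_call(raw_call)          # e.g. {"name": "get_weather", "arguments": {...}}
result = get_weather(**call["arguments"])

messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": call}]})
messages.append({"role": "tool", "name": call["name"], "content": result})
# A second apply_chat_template + generate over the updated history yields the final answer.
```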

Features

  • Function calling / tool use: produces structured function calls from tool definitions.
  • Lightweight (270M): practical for low-latency and edge-friendly deployments.
  • More reliable parsing: uses control tokens and formatting patterns for tool definitions/calls/results.
  • Ecosystem-friendly: works with common stacks like Hugging Face Transformers.
  • Fine-tune ready: intended to be adapted for domain-specific tools and schemas.

Use Cases

  • Agent tool orchestration (APIs, database queries, internal actions)
  • Mobile/desktop “actions” routing (e.g., calendar, messaging, reminders)
  • Structured extraction into JSON arguments for downstream systems
  • Predictable tool routing for assistants when paired with output constraints and validation (see the sketch after this list)
  • On-device prototypes where model size matters
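
For the constraints/validation pairing mentioned above, one lightweight option is to validate the generated arguments against the tool's JSON schema before executing anything. The sketch below is illustrative rather than part of the model card; it assumes the jsonschema package and a tool-call dict already parsed from the model output.

```python
from jsonschema import ValidationError, validate

# JSON schema for a hypothetical reminder tool's arguments.
set_reminder_schema = {
    "type": "object",
    "properties": {
        "text": {"type": "string"},
        "time": {"type": "string", "description": "ISO 8601 timestamp"},
    },
    "required": ["text", "time"],
    "additionalProperties": False,
}

# A tool call as parsed from the model's output (illustrative values).
tool_call = {
    "name": "set_reminder",
    "arguments": {"text": "Call Alice", "time": "2026-01-05T09:00:00"},
}

try:
    validate(instance=tool_call["arguments"], schema=set_reminder_schema)
except ValidationError as err:
    # Reject or re-prompt instead of executing a malformed call.
    print(f"Invalid tool call: {err.message}")
else:
    print("Arguments valid; safe to dispatch to the reminder backend.")
```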

Inputs and Outputs

Input:

  • Natural language instruction(s)
  • Tool definitions (typically JSON schema or structured tool specs)
  • Optional conversation history + tool results, following the FunctionGemma formatting

Output:

  • A structured tool/function call (tool name + arguments)
  • Optionally, follow-up text depending on your prompting/loop design
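
To make these shapes concrete, the following illustrative snippet (not taken from the model card) shows a tool definition in JSON-schema style, a user instruction, and the kind of structured call the model is expected to produce; the actual serialization uses FunctionGemma's formatting and control tokens rather than plain JSON.

```python
# Input: a tool definition plus a natural language instruction.
create_event_tool = {
    "name": "create_calendar_event",
    "description": "Create a calendar event.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "start_time": {"type": "string", "description": "ISO 8601 timestamp"},
            "duration_minutes": {"type": "integer"},
        },
        "required": ["title", "start_time"],
    },
}

user_message = "Put a 30 minute standup on my calendar tomorrow at 9am."

# Output: a structured tool call (tool name + arguments) for the host app to execute.
expected_call = {
    "name": "create_calendar_event",
    "arguments": {"title": "Standup", "start_time": "2026-01-06T09:00:00", "duration_minutes": 30},
}
```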

GGUF

Model size: 0.3B params
Architecture: gemma3
Quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit