Building Agents with Google ADK
Complete developer reference for building, orchestrating, evaluating, and deploying agents with Google ADK (Python). Covers tools, multi-agent systems, memory, callbacks, plugins, and production deployment.
A Developer's Reference for Production Agent Development
Version coverage: v1.27.2 (March 2026) · ADK 2.0 Alpha notes in Chapter 14
Chapter 1: Core Concepts
Google ADK is an open-source, code-first Python framework for building, evaluating, and deploying AI agents. It is model-agnostic (Gemini, Anthropic, Ollama, etc.) and deployment-agnostic (Cloud Run, Vertex AI Agent Engine).
The 7 Core Abstractions
| Abstraction | Role |
|---|---|
| Agent | Blueprint defining identity, instructions, and tools (declarative config object) |
| Tool | Python function the agent calls to interact with the world |
| Runner | Engine orchestrating the Reason-Act loop; manages LLM calls and tool execution |
| Session | Conversation state — holds history for a single continuous dialogue |
| Memory | Long-term recall across multiple sessions |
| Artifact Service | Manages non-textual data (files, blobs) |
| Skill | Reusable, packaged capability — a directory of instructions, scripts, and resources that agents discover and use on demand |
Skills are a first-class abstraction in ADK. A SkillToolset registers skills as tools the agent can activate:
from google.adk.tools.skill_toolset import SkillToolset
from google.adk import models
my_skills = [models.Skill(name="my-skill", ...)]
skill_tools = SkillToolset(skills=my_skills) # skills_dir= removed in 1.27.2
agent = Agent(
name="skilled_agent",
model="gemini-2.5-flash",
instruction="Use your skills to help users.",
tools=[skill_tools],
)
⚠️ 1.27.2 breaking change: `SkillToolset(skills_dir=...)` was removed. Pass an explicit `skills=[]` list, or drop `SkillToolset` and pass tool functions directly. See Chapter 16.
Skills differ from plain tools: a skill is a self-contained knowledge package (SKILL.md + optional scripts/references/assets) that the agent loads in context when relevant. Plain tools are Python functions the agent calls imperatively.
Installation & Scaffolding
pip install google-adk
adk create my_agent_project
adk create my_agent_project --model gemini-2.5-flash --api_key YOUR_API_KEY
Chapter 2: Building Your First Agent
import datetime
from zoneinfo import ZoneInfo
from google.adk.agents import Agent
def get_weather(query: str) -> str:
"""Returns weather for a location. Use for weather questions.
Args:
query: Location name (e.g. 'San Francisco').
Returns:
Weather description string.
"""
if "sf" in query.lower() or "san francisco" in query.lower():
return "It's 60 degrees and foggy."
return "It's 90 degrees and sunny."
def get_current_time(query: str) -> str:
"""Returns current time for a city.
Args:
query: City name.
Returns:
Current time string.
"""
if "sf" in query.lower() or "san francisco" in query.lower():
tz = ZoneInfo("America/Los_Angeles")
now = datetime.datetime.now(tz)
return f"The current time is {now.strftime('%Y-%m-%d %H:%M:%S %Z%z')}"
return f"No timezone info for: {query}."
root_agent = Agent(
name="root_agent",
model="gemini-2.5-flash", # Note: default temperature 1.0 — do not change unless you have a specific reason
instruction="You are a helpful AI assistant.",
tools=[get_weather, get_current_time],
)
Docstrings are tool contracts — the LLM reads only the docstring to understand what a tool does. Be precise with Args and Returns.
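Because the model sees only the signature and docstring, treat them like an API spec: name the triggering intent, constrain each argument, and describe the return shape. A minimal sketch of a well-specified tool (the function name and the hard-coded rate table are hypothetical stand-ins for a real data source):

```python
def get_exchange_rate(base: str, target: str) -> dict:
    """Returns the exchange rate between two currencies.

    Use this when the user asks to convert money or compare currencies.

    Args:
        base: ISO 4217 code of the source currency (e.g. 'USD').
        target: ISO 4217 code of the destination currency (e.g. 'EUR').

    Returns:
        dict with 'status' ('success' or 'error') and, on success,
        'rate' as a float. On error, 'message' explains what went wrong.
    """
    # Hard-coded table standing in for a real rate API.
    rates = {("USD", "EUR"): 0.92, ("USD", "JPY"): 151.3}
    key = (base.upper(), target.upper())
    if key not in rates:
        return {"status": "error", "message": f"No rate for {base}->{target}."}
    return {"status": "success", "rate": rates[key]}
```

Returning a structured dict with an explicit status field, rather than a bare string, gives the model an unambiguous success/error signal to reason about.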
CLI Commands
adk run path/to/my_agent
adk run path/to/my_agent --session_service_uri "sqlite:///sessions.db"
adk run path/to/my_agent --save_session
adk run path/to/my_agent --resume saved_session.json
adk web path/to/agents_directory --port 8080 --host 0.0.0.0
adk api_server path/to/agents_directory --port 8000 --with_ui
adk api_server path/to/agents_directory --auto_create_session # v1.25+
Server endpoints (v1.25+):
GET /health # → {"status": "ok"} — use for load balancer health checks
GET /version # → {"version": "1.27.2"}
Chapter 3: Tools Deep Dive
Tool Types
| Type | How | When |
|---|---|---|
| Function Tools | Plain Python function with typed args + docstring | Custom logic |
| Built-in Tools | GoogleSearchTool(), load_memory, preload_memory | Search, memory |
| OpenAPI Tools | Auto-generated from OpenAPI spec | REST API integration |
| MCP Tools | Model Context Protocol | External tool ecosystems |
| Google Cloud Tools | GCP service integrations | Cloud-native ops |
| AgentTool | Wraps another agent as a callable tool | Controlled delegation |
| SkillToolset | Loads a skills directory as tools | Reusable knowledge packages |
AgentTool — Agent as Tool
from google.adk.tools.agent_tool import AgentTool
tools=[AgentTool(plan_generator)] # wrap an agent so a parent can call it like a tool
YAML Tool Config
tools:
- name: ma_llm.check_prime # fully qualified Python module path
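Extending the fragment above, a fuller agent config might look like the following. The agent name and instruction are illustrative; exact keys follow the Agent Config schema of your ADK version:

```yaml
# root_agent.yaml -- illustrative Agent Config (names are examples)
agent_class: LlmAgent
name: prime_checker
model: gemini-2.5-flash
instruction: |
  Answer questions about prime numbers. Use the check_prime tool
  rather than computing primality yourself.
tools:
  - name: ma_llm.check_prime  # fully qualified Python module path
```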
Chapter 4: Multi-Agent Systems
ADK supports three orchestration patterns plus a full graph-based Workflow Runtime.
Orchestration Patterns
| Type | Import | Behavior |
|---|---|---|
| LlmAgent / Agent | google.adk.agents | LLM-driven, dynamic routing |
| SequentialAgent | google.adk.agents | Ordered steps, deterministic |
| ParallelAgent | google.adk.agents | Independent concurrent tasks |
| LoopAgent | google.adk.agents | Repeats sub-agents until a condition or max_iterations is reached |
Hierarchical Planner/Executor
from google.adk.agents import LlmAgent, SequentialAgent, LoopAgent
from google.adk.tools.agent_tool import AgentTool
research_pipeline = SequentialAgent(
name="research_pipeline",
description="Executes a pre-approved research plan.",
sub_agents=[
section_planner,
section_researcher,
LoopAgent(
name="iterative_refinement_loop",
max_iterations=3,
sub_agents=[research_evaluator, escalation_checker, enhanced_search_executor],
),
report_composer,
],
)
interactive_planner_agent = LlmAgent(
name="interactive_planner_agent",
model="gemini-2.5-flash",
description="Research planning assistant. Collaborates with user to create and execute research plans.",
instruction="""
1. Plan: Use plan_generator tool to create a draft research plan.
2. Refine: Incorporate user feedback until approved.
3. Execute: Once user gives EXPLICIT approval, delegate to research_pipeline.
Do not do the research yourself.
""",
sub_agents=[research_pipeline],
tools=[AgentTool(plan_generator)],
output_key="research_plan",
)
root_agent = interactive_planner_agent
YAML Multi-Agent Config
agent_class: LlmAgent
model: gemini-2.5-flash
name: root_agent
description: Learning assistant for code and math.
instruction: |
Delegate coding questions to code_tutor_agent and math questions to math_tutor_agent.
sub_agents:
- config_path: code_tutor_agent.yaml
- config_path: math_tutor_agent.yaml
Agent Registry (v1.26+)
For large systems with many agents, use the Agent Registry for centralized discovery:
from google.adk.registry import AgentRegistry
registry = AgentRegistry()
registry.register("greeter", greeter_agent)
registry.register("researcher", researcher_agent)
agent = registry.get("researcher") # lookup by name
Registry is a lookup service, not an orchestrator — routing is still done via sub_agents and AgentTool.
Workflow Runtime (v1.27+)
For complex control flows beyond SequentialAgent/LoopAgent, use the graph-based Workflow Runtime:
Capabilities: routing (conditional branching), fan-out/fan-in, loops with retry, state management, dynamic nodes, human-in-the-loop (HITL) pause/resume, nested workflows.
| Need | Use |
|---|---|
| Ordered steps | SequentialAgent |
| Conditional branching | Workflow Runtime |
| Fan-out + aggregation | Workflow Runtime |
| HITL pause/resume | Workflow Runtime |
| Retry with backoff | Workflow Runtime |
| Nested workflow graphs | Workflow Runtime |
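The Workflow Runtime's own API is not reproduced here. Purely as a concept illustration, fan-out/fan-in combined with a retry loop can be sketched in plain asyncio; every name below is illustrative, not an ADK API:

```python
import asyncio

async def fetch_source(name: str) -> str:
    # Stand-in for one branch of the fan-out (e.g. one research agent).
    await asyncio.sleep(0)
    return f"result from {name}"

async def with_retry(coro_fn, attempts: int = 3):
    # Loop-with-retry: re-run a failing node up to `attempts` times.
    for i in range(attempts):
        try:
            return await coro_fn()
        except Exception:
            if i == attempts - 1:
                raise
            await asyncio.sleep(0)  # real backoff delay would go here

async def workflow() -> str:
    # Fan-out: run independent branches concurrently.
    branches = [with_retry(lambda n=n: fetch_source(n)) for n in ("web", "docs", "news")]
    results = await asyncio.gather(*branches)
    # Fan-in: aggregate branch outputs into one state value.
    return "; ".join(results)

print(asyncio.run(workflow()))  # → result from web; result from docs; result from news
```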
Task API (v1.27+)
Structured agent delegation with multi-turn task mode and explicit state (pending → in_progress → completed/failed). Use for long-running tasks that need status tracking and controlled output.
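The Task API surface itself is not shown here; the lifecycle described above can be modeled as a small state machine. This sketch illustrates the pending → in_progress → completed/failed transitions only and is not the ADK Task API:

```python
from enum import Enum

class TaskState(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"
    FAILED = "failed"

# Legal transitions for the pending -> in_progress -> completed/failed lifecycle.
TRANSITIONS = {
    TaskState.PENDING: {TaskState.IN_PROGRESS},
    TaskState.IN_PROGRESS: {TaskState.COMPLETED, TaskState.FAILED},
    TaskState.COMPLETED: set(),   # terminal
    TaskState.FAILED: set(),      # terminal
}

class Task:
    def __init__(self, name: str):
        self.name = name
        self.state = TaskState.PENDING

    def advance(self, new_state: TaskState) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

task = Task("summarize_report")
task.advance(TaskState.IN_PROGRESS)
task.advance(TaskState.COMPLETED)
print(task.state.value)  # → completed
```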
Agent-to-Agent (A2A) Protocol
Connect agents across services using the A2A protocol:
from google.adk.agents.remote_a2a_agent import RemoteA2aAgent
remote_agent = RemoteA2aAgent(
name="remote_researcher",
agent_url="https://my-other-service.example.com/agent",
# v1.27+: request interceptors for auth, logging, tracing
)
root_agent = LlmAgent(
name="orchestrator",
model="gemini-2.5-flash",
instruction="Delegate research tasks to the remote researcher.",
sub_agents=[remote_agent],
)
Use A2A for microservice-style architectures where agents span multiple deployments. v1.27 added request interceptors for adding auth headers, tracing, and retry logic.
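The exact interceptor signature depends on your ADK version; conceptually, an interceptor is a callable that sees and may modify the outgoing request before it leaves the process. A hypothetical sketch, where the Request shape and the chaining helper are both illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    url: str
    headers: dict = field(default_factory=dict)

def auth_interceptor(request: Request) -> Request:
    # Attach a bearer token before the request goes out.
    request.headers["Authorization"] = "Bearer <token>"
    return request

def tracing_interceptor(request: Request) -> Request:
    # Propagate a trace id so remote agent calls can be correlated.
    request.headers.setdefault("X-Trace-Id", "trace-123")
    return request

def apply_interceptors(request: Request, interceptors) -> Request:
    # Chain: each interceptor receives the previous one's output.
    for interceptor in interceptors:
        request = interceptor(request)
    return request

req = apply_interceptors(
    Request(url="https://my-other-service.example.com/agent"),
    [auth_interceptor, tracing_interceptor],
)
print(sorted(req.headers))  # → ['Authorization', 'X-Trace-Id']
```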
Key rules:
- `description` drives delegation — the root agent picks sub-agents based solely on their `description`. Write it specific and actionable.
- `output_key` is the inter-agent data bus — it saves agent output to session state for downstream agents.
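The output_key mechanism amounts to a shared state dict: each agent writes its result under its output_key, and downstream agents read that key. A plain-Python sketch of the data flow, in which the two functions are stand-ins for agents, not ADK calls:

```python
# Session state acts as the data bus between pipeline steps.
state: dict = {}

def run_planner(state: dict) -> None:
    # Pretend LLM output, saved under the planner's output_key.
    state["research_plan"] = "1. search  2. evaluate  3. compose"

def run_composer(state: dict) -> str:
    # Downstream agent reads the upstream output from state.
    plan = state["research_plan"]
    return f"Report based on plan: {plan}"

run_planner(state)
print(run_composer(state))
```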
Chapter 5: Session Management & State
- Session = single conversation context (history + state dict)
- State = the `callback_context.state` dict — mutable within a session, accessible by all agents and callbacks
Session Backends
| Backend | Use |
|---|---|
| In-memory | Development only |
| SQLite | --session_service_uri "sqlite:///sessions.db" — local persistence |
| Vertex AI managed | Production — durable, scalable |
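To make the persistence distinction concrete, here is a toy sqlite-backed session store. It mirrors the idea behind the SQLite backend (state survives process restarts when a file path is used) but is not ADK's actual schema or API:

```python
import json
import sqlite3

class SqliteSessionStore:
    """Toy session store: one row per session, state stored as JSON."""

    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS sessions (id TEXT PRIMARY KEY, state TEXT)"
        )

    def save(self, session_id: str, state: dict) -> None:
        self.conn.execute(
            "INSERT OR REPLACE INTO sessions VALUES (?, ?)",
            (session_id, json.dumps(state)),
        )
        self.conn.commit()

    def load(self, session_id: str) -> dict:
        row = self.conn.execute(
            "SELECT state FROM sessions WHERE id = ?", (session_id,)
        ).fetchone()
        return json.loads(row[0]) if row else {}

store = SqliteSessionStore()
store.save("s1", {"research_plan": "draft"})
print(store.load("s1"))  # → {'research_plan': 'draft'}
```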