CandleKeep
Google Cloud

Building Agents with Google ADK

Tags: gemini, adk, google
Pages: 35
Format: markdown
Listed: March 21, 2026
Updated: March 21, 2026
Subscribers: 26

About

Complete developer reference for building, orchestrating, evaluating, and deploying agents with Google ADK (Python). Covers tools, multi-agent systems, memory, callbacks, plugins, and production deployment.

35 Chapters · 113 Topics · 35 Pages

Preview

Building Agents with Google ADK

A Developer's Reference for Production Agent Development

Version coverage: v1.27.2 (March 2026) · ADK 2.0 Alpha notes in Chapter 14


Chapter 1: Core Concepts

Google ADK is an open-source, code-first Python framework for building, evaluating, and deploying AI agents. It is model-agnostic (Gemini, Anthropic, Ollama, etc.) and deployment-agnostic (Cloud Run, Vertex AI Agent Engine).

The 7 Core Abstractions

| Abstraction | Role |
| --- | --- |
| Agent | Blueprint defining identity, instructions, and tools (declarative config object) |
| Tool | Python function the agent calls to interact with the world |
| Runner | Engine orchestrating the Reason-Act loop; manages LLM calls and tool execution |
| Session | Conversation state: holds history for a single continuous dialogue |
| Memory | Long-term recall across multiple sessions |
| Artifact Service | Manages non-textual data (files, blobs) |
| Skill | Reusable, packaged capability: a directory of instructions, scripts, and resources that agents discover and use on demand |

Skills are a first-class abstraction in ADK. A SkillToolset registers skills as tools the agent can activate:

```python
from google.adk.agents import Agent
from google.adk.tools.skill_toolset import SkillToolset
from google.adk import models

my_skills = [models.Skill(name="my-skill", ...)]
skill_tools = SkillToolset(skills=my_skills)   # skills_dir= removed in 1.27.2

agent = Agent(
    name="skilled_agent",
    model="gemini-2.5-flash",
    instruction="Use your skills to help users.",
    tools=[skill_tools],
)
```

⚠️ 1.27.2 breaking change: SkillToolset(skills_dir=...) was removed. Pass an explicit skills=[] list, or drop SkillToolset and pass tool functions directly. See Chapter 16.

Skills differ from plain tools: a skill is a self-contained knowledge package (SKILL.md + optional scripts/references/assets) that the agent loads in context when relevant. Plain tools are Python functions the agent calls imperatively.
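To make the distinction concrete, here is a minimal sketch of what such a skill package might look like on disk, using only the standard library. The skill name, file names, and contents are all hypothetical; only the SKILL.md-plus-supporting-files layout comes from the description above.

```python
import pathlib
import tempfile

def build_demo_skill(root: pathlib.Path) -> pathlib.Path:
    # Hypothetical skill package: SKILL.md plus an optional scripts/ directory.
    skill_dir = root / "csv-cleaning"
    (skill_dir / "scripts").mkdir(parents=True)
    (skill_dir / "SKILL.md").write_text(
        "# CSV Cleaning\n"
        "Normalize headers, drop empty rows, coerce date columns.\n"
    )
    (skill_dir / "scripts" / "clean.py").write_text("# cleaning helper\n")
    return skill_dir

def load_skill_context(skill_dir: pathlib.Path) -> str:
    # The agent loads SKILL.md into context only when the skill is relevant.
    return (skill_dir / "SKILL.md").read_text()

with tempfile.TemporaryDirectory() as tmp:
    skill = build_demo_skill(pathlib.Path(tmp))
    print(load_skill_context(skill).splitlines()[0])  # → # CSV Cleaning
```

A plain tool, by contrast, would just be a Python function in `tools=[...]`; nothing is loaded into context until it is called.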

Installation & Scaffolding

```shell
pip install google-adk
adk create my_agent_project
adk create my_agent_project --model gemini-2.5-flash --api_key YOUR_API_KEY
```

Chapter 2: Building Your First Agent

```python
import datetime
from zoneinfo import ZoneInfo
from google.adk.agents import Agent

def get_weather(query: str) -> str:
    """Returns weather for a location. Use for weather questions.
    Args:
        query: Location name (e.g. 'San Francisco').
    Returns:
        Weather description string.
    """
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

def get_current_time(query: str) -> str:
    """Returns current time for a city.
    Args:
        query: City name.
    Returns:
        Current time string.
    """
    if "sf" in query.lower() or "san francisco" in query.lower():
        tz = ZoneInfo("America/Los_Angeles")
        now = datetime.datetime.now(tz)
        return f"The current time is {now.strftime('%Y-%m-%d %H:%M:%S %Z%z')}"
    return f"No timezone info for: {query}."

root_agent = Agent(
    name="root_agent",
    model="gemini-2.5-flash",  # Note: default temperature 1.0 — do not change unless you have a specific reason
    instruction="You are a helpful AI assistant.",
    tools=[get_weather, get_current_time],
)
```

Docstrings are tool contracts — the LLM reads only the docstring to understand what a tool does. Be precise with Args and Returns.
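One way to tighten the contract further is to return a structured dict the model can branch on instead of a bare string. The pattern and all names below are illustrative, not a required ADK convention:

```python
def get_exchange_rate(base: str, target: str) -> dict:
    """Returns a fixed demo exchange rate between two currencies.

    Args:
        base: ISO 4217 code of the source currency (e.g. 'USD').
        target: ISO 4217 code of the target currency (e.g. 'EUR').

    Returns:
        dict with 'status' ('success' or 'error') and either
        'rate' (float) or 'error_message' (str).
    """
    rates = {("USD", "EUR"): 0.92, ("EUR", "USD"): 1.09}  # demo data only
    key = (base.upper(), target.upper())
    if key in rates:
        return {"status": "success", "rate": rates[key]}
    return {"status": "error", "error_message": f"No rate for {base}->{target}"}
```

Because the Returns section spells out both branches, the model knows to check `status` before using `rate`.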

CLI Commands

```shell
adk run path/to/my_agent
adk run path/to/my_agent --session_service_uri "sqlite:///sessions.db"
adk run path/to/my_agent --save_session
adk run path/to/my_agent --resume saved_session.json
adk web path/to/agents_directory --port 8080 --host 0.0.0.0
adk api_server path/to/agents_directory --port 8000 --with_ui
adk api_server path/to/agents_directory --auto_create_session   # v1.25+
```

Server endpoints (v1.25+):

```
GET /health   # → {"status": "ok"}   — use for load balancer health checks
GET /version  # → {"version": "1.27.2"}
```
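A minimal client-side probe of the health endpoint might look like the following; the pure parsing helper exists only so the logic is checkable without a running server, and the base URL is whatever you deployed to:

```python
import json
import urllib.request

def parse_health(body: bytes) -> bool:
    # True iff the /health response body reports {"status": "ok"}.
    return json.loads(body).get("status") == "ok"

def check_health(base_url: str) -> bool:
    # Probe a running adk api_server; suitable for a load balancer check.
    with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
        return parse_health(resp.read())
```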

Chapter 3: Tools Deep Dive

Tool Types

| Type | How | When |
| --- | --- | --- |
| Function Tools | Plain Python function with typed args + docstring | Custom logic |
| Built-in Tools | GoogleSearchTool(), load_memory, preload_memory | Search, memory |
| OpenAPI Tools | Auto-generated from OpenAPI spec | REST API integration |
| MCP Tools | Model Context Protocol | External tool ecosystems |
| Google Cloud Tools | GCP service integrations | Cloud-native ops |
| AgentTool | Wraps another agent as a callable tool | Controlled delegation |
| SkillToolset | Registers skills as tools | Reusable knowledge packages |

AgentTool — Agent as Tool

```python
from google.adk.tools.agent_tool import AgentTool

tools = [AgentTool(plan_generator)]   # wrap an agent so a parent can call it like a tool
```

YAML Tool Config

```yaml
tools:
  - name: ma_llm.check_prime   # fully qualified Python module path
```

Chapter 4: Multi-Agent Systems

ADK provides four orchestration primitives (an LLM-driven router plus three deterministic workflow agents: sequential, parallel, loop), along with a full graph-based Workflow Runtime.

Orchestration Patterns

| Type | Import | Behavior |
| --- | --- | --- |
| LlmAgent / Agent | google.adk.agents | LLM-driven, dynamic routing |
| SequentialAgent | google.adk.agents | Ordered steps, deterministic |
| ParallelAgent | google.adk.agents | Independent concurrent tasks |
| LoopAgent | google.adk.agents | Repeats sub-agents until a condition, capped at max_iterations |

Hierarchical Planner/Executor

```python
from google.adk.agents import LlmAgent, SequentialAgent, LoopAgent
from google.adk.tools.agent_tool import AgentTool

# section_planner, section_researcher, research_evaluator, escalation_checker,
# enhanced_search_executor, report_composer, and plan_generator are agents
# defined elsewhere in the project.

research_pipeline = SequentialAgent(
    name="research_pipeline",
    description="Executes a pre-approved research plan.",
    sub_agents=[
        section_planner,
        section_researcher,
        LoopAgent(
            name="iterative_refinement_loop",
            max_iterations=3,
            sub_agents=[research_evaluator, escalation_checker, enhanced_search_executor],
        ),
        report_composer,
    ],
)

interactive_planner_agent = LlmAgent(
    name="interactive_planner_agent",
    model="gemini-2.5-flash",
    description="Research planning assistant. Collaborates with user to create and execute research plans.",
    instruction="""
    1. Plan: Use plan_generator tool to create a draft research plan.
    2. Refine: Incorporate user feedback until approved.
    3. Execute: Once user gives EXPLICIT approval, delegate to research_pipeline.
    Do not do the research yourself.
    """,
    sub_agents=[research_pipeline],
    tools=[AgentTool(plan_generator)],
    output_key="research_plan",
)

root_agent = interactive_planner_agent
```

YAML Multi-Agent Config

```yaml
agent_class: LlmAgent
model: gemini-2.5-flash
name: root_agent
description: Learning assistant for code and math.
instruction: |
  Delegate coding questions to code_tutor_agent and math questions to math_tutor_agent.
sub_agents:
  - config_path: code_tutor_agent.yaml
  - config_path: math_tutor_agent.yaml
```

Agent Registry (v1.26+)

For large systems with many agents, use the Agent Registry for centralized discovery:

```python
from google.adk.registry import AgentRegistry

registry = AgentRegistry()
registry.register("greeter", greeter_agent)
registry.register("researcher", researcher_agent)

agent = registry.get("researcher")   # lookup by name
```

Registry is a lookup service, not an orchestrator — routing is still done via sub_agents and AgentTool.

Workflow Runtime (v1.27+)

For complex control flows beyond SequentialAgent/LoopAgent, use the graph-based Workflow Runtime:

Capabilities: routing (conditional branching), fan-out/fan-in, loops with retry, state management, dynamic nodes, human-in-the-loop (HITL) pause/resume, nested workflows.

| Need | Use |
| --- | --- |
| Ordered steps | SequentialAgent |
| Conditional branching | Workflow Runtime |
| Fan-out + aggregation | Workflow Runtime |
| HITL pause/resume | Workflow Runtime |
| Retry with backoff | Workflow Runtime |
| Nested workflow graphs | Workflow Runtime |
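The fan-out/fan-in row above can be sketched in plain asyncio to show the shape of the pattern. This is a conceptual illustration only, not the Workflow Runtime API; the source names are invented:

```python
import asyncio

async def research_source(source: str) -> str:
    # Stand-in for one branch of the fan-out (e.g. one sub-agent call).
    await asyncio.sleep(0)  # simulate I/O
    return f"notes from {source}"

async def fan_out_fan_in(sources: list[str]) -> str:
    # Fan-out: run all branches concurrently. Fan-in: aggregate the results.
    results = await asyncio.gather(*(research_source(s) for s in sources))
    return " | ".join(results)

print(asyncio.run(fan_out_fan_in(["web", "arxiv", "docs"])))
# → notes from web | notes from arxiv | notes from docs
```

The Workflow Runtime layers retry, state, and HITL pauses on top of this basic concurrent-branch shape.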

Task API (v1.27+)

Structured agent delegation with multi-turn task mode and explicit state (pending → in_progress → completed/failed). Use for long-running tasks that need status tracking and controlled output.
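That lifecycle can be pictured as a small state machine. The sketch below is conceptual, not the actual Task API; only the state names come from the description above:

```python
# Legal transitions for the pending → in_progress → completed/failed lifecycle.
ALLOWED = {
    "pending": {"in_progress"},
    "in_progress": {"completed", "failed"},
    "completed": set(),   # terminal
    "failed": set(),      # terminal
}

class Task:
    def __init__(self) -> None:
        self.state = "pending"

    def transition(self, new_state: str) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

t = Task()
t.transition("in_progress")
t.transition("completed")
print(t.state)  # → completed
```

Explicit states are what make long-running delegation trackable: a caller can poll the state instead of blocking on a single response.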

Agent-to-Agent (A2A) Protocol

Connect agents across services using the A2A protocol:

```python
from google.adk.agents.remote_a2a_agent import RemoteA2aAgent

remote_agent = RemoteA2aAgent(
    name="remote_researcher",
    agent_url="https://my-other-service.example.com/agent",
    # v1.27+: request interceptors for auth, logging, tracing
)

root_agent = LlmAgent(
    name="orchestrator",
    model="gemini-2.5-flash",
    instruction="Delegate research tasks to the remote researcher.",
    sub_agents=[remote_agent],
)
```

Use A2A for microservice-style architectures where agents span multiple deployments. v1.27 added request interceptors for adding auth headers, tracing, and retry logic.
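The interceptor idea reduces to a chain of callables that rewrite the outgoing request before it is sent. The following is a hypothetical pure-Python sketch of that pattern, not the ADK interface:

```python
from typing import Callable

Interceptor = Callable[[dict], dict]

def auth_interceptor(headers: dict) -> dict:
    # Placeholder token; a real interceptor would fetch fresh credentials.
    return {**headers, "Authorization": "Bearer <token>"}

def trace_interceptor(headers: dict) -> dict:
    # A real implementation would generate a unique ID per request.
    return {**headers, "X-Request-Id": "req-123"}

def apply_interceptors(headers: dict, chain: list[Interceptor]) -> dict:
    # Each interceptor sees the headers produced by the previous one.
    for interceptor in chain:
        headers = interceptor(headers)
    return headers

final = apply_interceptors({}, [auth_interceptor, trace_interceptor])
print(sorted(final))  # → ['Authorization', 'X-Request-Id']
```

Chaining keeps each concern (auth, tracing, retries) in its own small function rather than one monolithic request hook.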

Key rules:

- description drives delegation: the root agent picks sub-agents based solely on their description, so write it to be specific and actionable.
- output_key is the inter-agent data bus: it saves the agent's output to session state for downstream agents.
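The output_key mechanism can be illustrated with a plain dict standing in for session state; the plan text and function names are invented for the example, and this is not ADK's internal implementation:

```python
# A plain dict standing in for ADK session state.
session_state: dict[str, str] = {}

def run_planner() -> None:
    plan = "1. survey literature  2. synthesize findings"  # stand-in LLM output
    # This write is what output_key="research_plan" does automatically.
    session_state["research_plan"] = plan

def build_researcher_instruction() -> str:
    # A downstream agent's instruction can reference the stored value.
    return f"Execute this plan step by step: {session_state['research_plan']}"

run_planner()
print(build_researcher_instruction())
```

The upstream agent never calls the downstream one directly; the shared state slot is the only coupling between them.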

Chapter 5: Session Management & State

- Session = single conversation context (history + state dict)
- State = the callback_context.state dict, mutable within a session and accessible by all agents and callbacks

Session Backends

| Backend | Use |
| --- | --- |
| In-memory | Development only |
| SQLite | --session_service_uri "sqlite:///sessions.db" for local persistence |
| Vertex AI managed | Production: durable, scalable |
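To see what a persistent backend buys you, here is a conceptual sqlite3 sketch of saving and loading session history. It is not how ADK's session service is implemented; the schema and class name are invented:

```python
import json
import sqlite3

class SqliteSessionStore:
    # Conceptual sketch: one row per session, history stored as JSON.
    def __init__(self, path: str = ":memory:") -> None:
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS sessions (id TEXT PRIMARY KEY, history TEXT)"
        )

    def save(self, session_id: str, history: list[dict]) -> None:
        self.db.execute(
            "INSERT OR REPLACE INTO sessions VALUES (?, ?)",
            (session_id, json.dumps(history)),
        )
        self.db.commit()

    def load(self, session_id: str) -> list[dict]:
        row = self.db.execute(
            "SELECT history FROM sessions WHERE id = ?", (session_id,)
        ).fetchone()
        return json.loads(row[0]) if row else []

store = SqliteSessionStore()
store.save("s1", [{"role": "user", "content": "hi"}])
print(store.load("s1"))  # → [{'role': 'user', 'content': 'hi'}]
```

With a file path instead of `:memory:`, the history survives process restarts, which is exactly what the `--session_service_uri` flag provides out of the box.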