This article uses the Google Agent Development Kit (ADK) and the underlying Gemini LLM to build a multi-agent application in Python, with A2A protocol support, deployed to AWS Lambda.
Aren’t There a Billion Python ADK Demos?
Yes there are.
Python has traditionally been the main language for ML and AI tooling. The goal of this article is to provide a test bed for building, debugging, and deploying multi-agent applications.
Say It Ain’t So
So what is different about this lab compared to all the others out there?
This is one of the first deep dives into a multi-agent application leveraging the advanced tooling of Gemini CLI. The starting point for the demo was an existing Codelab, which was updated and re-engineered with Gemini CLI.
The original Codelab is here:
Building a Multi-Agent System | Google Codelabs
Python Version Management
One of the downsides of Python's wide deployment has been managing language versions across platforms and staying on a supported release.
The pyenv tool makes it easy to install consistent Python versions:
GitHub - pyenv/pyenv: Simple Python version management
As of this writing, the mainstream Python version is 3.13. To validate your current Python:
python --version
Python 3.13.13
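If you script your setup, you can add a programmatic guard as well. This minimal sketch just reports the running interpreter and warns when it predates the 3.13 line this article assumes:

```python
import sys

# Report the running interpreter and warn if it predates Python 3.13,
# the version line this article assumes.
major, minor = sys.version_info[:2]
print(f"Running Python {major}.{minor}")
if (major, minor) < (3, 13):
    print("Warning: the examples in this article assume Python 3.13+")
```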
Amazon Lambda
AWS Lambda is a serverless, event-driven compute service that enables users to run code without provisioning or managing servers. With Lambda, developers can focus solely on their code (functions), while AWS handles all underlying infrastructure management, including capacity provisioning, automatic scaling, and operating system maintenance.
Full details are here:
Serverless Computing Service - Free AWS Lambda - AWS
Gemini CLI
If not pre-installed you can download the Gemini CLI to interact with the source files and provide real-time assistance:
npm install -g @google/gemini-cli
Testing the Gemini CLI Environment
Once you have all the tools and the correct Node.js version in place, you can test the startup of Gemini CLI. You will need to authenticate with an API key or your Google Account:
▝▜▄ Gemini CLI v0.33.1
▝▜▄
▗▟▀ Logged in with Google /auth
▝▀ Gemini Code Assist Standard /upgrade no sandbox (see /docs) /model Auto (Gemini 3) | 239.8 MB
Node Version Management
Gemini CLI needs a consistent, up-to-date version of Node.js. The nvm tool can be used to set up a standard Node environment:
Agent Development Kit
The Google Agent Development Kit (ADK) is an open-source, Python-based framework designed to streamline the creation, deployment, and orchestration of sophisticated, multi-agent AI systems. It treats agent development like software engineering, offering modularity, state management, and built-in tools (like Google Search) to build autonomous agents.
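To give a flavor of what an ADK agent definition looks like, here is a minimal sketch based on the ADK documentation. The import is guarded so the snippet also runs where `google-adk` is not installed; treat the exact class and parameter names as illustrative:

```python
# Minimal ADK agent sketch (assumes the `google-adk` package; the import
# is guarded so this degrades to a plain config dict without it).
try:
    from google.adk.agents import Agent
except ImportError:
    Agent = None

INSTRUCTION = "Research the given topic and return structured findings."

def build_researcher():
    config = dict(name="researcher", model="gemini-2.5-flash",
                  instruction=INSTRUCTION)
    # Fall back to the raw config dict when the ADK is unavailable.
    return Agent(**config) if Agent is not None else config
```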
The ADK can be installed from here:
Agent Skills
Gemini CLI can be customized to work with ADK agents. Both an Agent Development MCP server, and specific Agent skills are available.
More details are here:
To get the Agent Skills in Gemini CLI:
> /skills list
Available Agent Skills:
And to check the ADK documentation MCP server:
> /mcp list
Configured MCP servers:
🟢 adk-docs-mcp (from adk-docs-ext) - Ready (2 tools)
Tools:
- mcp_adk-docs-mcp_fetch_docs
- mcp_adk-docs-mcp_list_doc_sources
Where do I start?
The strategy for multi-agent development is an incremental, step-by-step approach.
First, the basic development environment is set up with the required system variables and a working Gemini CLI configuration.
Then, the ADK multi-agent system is built, debugged, and tested locally. Finally, the entire solution is deployed to AWS Lambda.
Setup the Basic Environment
At this point you should have a working Python environment and a working Gemini CLI installation. All of the relevant code examples and documentation are available on GitHub.
The next step is to clone the GitHub repository to your local environment:
cd ~
git clone https://github.com/xbill9/gemini-cli-aws
cd gemini-cli-aws/multi-lambda
Then run init2.sh from the cloned directory.
The script will attempt to determine your shell environment and set the correct variables:
source init2.sh
If your session times out or you need to re-authenticate, you can run the set_env.sh script to reset your environment variables:
source set_env.sh
Variables like PROJECT_ID need to be set up for use by the various build scripts, so the set_env.sh script can be used to reset the environment after a time-out.
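Inside your own helper scripts, a defensive lookup makes a missing variable fail loudly instead of surfacing as a confusing build error later. This hypothetical helper is not part of the repository:

```python
import os

def require_env(name):
    # Fail fast with a pointer to set_env.sh rather than letting a
    # build script run with an empty value.
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; run 'source set_env.sh' first")
    return value
```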
Log in to the AWS console:
aws login --remote
Finally install the packages and dependencies:
make install
Verify The ADK Installation
To verify the setup, run the ADK CLI locally with the researcher agent:
xbill@penguin:~/gemini-cli-aws/multi-lambda/agents$ adk run researcher
/home/xbill/.pyenv/versions/3.13.13/lib/python3.13/site-packages/authlib/_joserfc_helpers.py:8: AuthlibDeprecationWarning: authlib.jose module is deprecated, please use joserfc instead.
It will be compatible before version 2.0.0.
from authlib.jose import ECKey
/home/xbill/.pyenv/versions/3.13.13/lib/python3.13/site-packages/google/adk/features/_feature_decorator.py:72: UserWarning: [EXPERIMENTAL] feature FeatureName.PLUGGABLE_AUTH is enabled.
check_feature_enabled()
Log setup complete: /tmp/agents_log/agent.20260422_134822.log
To access latest log: tail -F /tmp/agents_log/agent.latest.log
{"asctime": "2026-04-22 13:48:23,011", "name": "root", "levelname": "INFO", "message": "Logging initialized for researcher", "filename": "logging_config.py", "lineno": 54, "service": "researcher", "log_level": "INFO"}
{"asctime": "2026-04-22 13:48:23,013", "name": "researcher.agent", "levelname": "INFO", "message": "Initialized researcher agent with model: gemini-2.5-flash", "filename": "agent.py", "lineno": 85}
{"asctime": "2026-04-22 13:48:23,015", "name": "google_adk.google.adk.cli.utils.envs", "levelname": "INFO", "message": "Loaded .env file for researcher at /home/xbill/gemini-cli-aws/multi-lambda/.env", "filename": "envs.py", "lineno": 83}
{"asctime": "2026-04-22 13:48:23,016", "name": "google_adk.google.adk.cli.utils.local_storage", "levelname": "INFO", "message": "Using per-agent session storage rooted at /home/xbill/gemini-cli-aws/multi-lambda/agents", "filename": "local_storage.py", "lineno": 84}
{"asctime": "2026-04-22 13:48:23,016", "name": "google_adk.google.adk.cli.utils.local_storage", "levelname": "INFO", "message": "Using file artifact service at /home/xbill/gemini-cli-aws/multi-lambda/agents/researcher/.adk/artifacts", "filename": "local_storage.py", "lineno": 110}
{"asctime": "2026-04-22 13:48:23,017", "name": "google_adk.google.adk.cli.utils.service_factory", "levelname": "INFO", "message": "Using in-memory memory service", "filename": "service_factory.py", "lineno": 266}
{"asctime": "2026-04-22 13:48:23,047", "name": "google_adk.google.adk.cli.utils.local_storage", "levelname": "INFO", "message": "Creating local session service at /home/xbill/gemini-cli-aws/multi-lambda/agents/researcher/.adk/session.db", "filename": "local_storage.py", "lineno": 60}
Running agent researcher, type exit to exit.
[user]:
Test The ADK Web Interface
This tests the ADK agent interactions with a browser:
xbill@penguin:~/gemini-cli-aws/multi-lambda/agents$ adk web --host 0.0.0.0
/home/xbill/.local/lib/python3.13/site-packages/google/adk/features/_feature_decorator.py:72: UserWarning: [EXPERIMENTAL] feature FeatureName.PLUGGABLE_AUTH is enabled.
check_feature_enabled()
2026-04-12 16:43:14,152 - INFO - service_factory.py:266 - Using in-memory memory service
2026-04-12 16:43:14,153 - INFO - local_storage.py:84 - Using per-agent session storage rooted at /home/xbill/gemini-cli-aws/multi-eks/agents
2026-04-12 16:43:14,153 - INFO - local_storage.py:110 - Using file artifact service at /home/xbill/gemini-cli-aws/multi-eks/agents/.adk/artifacts
/home/xbill/.local/lib/python3.13/site-packages/google/adk/cli/fast_api.py:198: UserWarning: [EXPERIMENTAL] InMemoryCredentialService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.
credential_service = InMemoryCredentialService()
/home/xbill/.local/lib/python3.13/site-packages/google/adk/auth/credential_service/in_memory_credential_service.py:33: UserWarning: [EXPERIMENTAL] BaseCredentialService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.
super().__init__()
INFO: Started server process [32675]
INFO: Waiting for application startup.
Then use the web interface, either on the loopback interface 127.0.0.1 or the catch-all interface 0.0.0.0, depending on your environment:
Special note for Google Cloud Shell deployments: add a CORS allow_origins exemption to allow the ADK web interface to run:
adk web --host 0.0.0.0 --allow_origins 'regex:.*'
Multi Agent Design
The multi-agent deployment consists of five agents:
- Researcher
- Judge
- Orchestrator
- Content Builder
- Course Builder
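At its core, the orchestrator drives a Research-Judge loop and hands validated findings to the content stage. Stripped of the ADK machinery, the control flow looks roughly like this (the function arguments are hypothetical stand-ins for the remote agents):

```python
def run_pipeline(topic, research, judge, build_content, max_iterations=3):
    # Loop research -> judge until the judge passes the findings
    # (or the iteration cap is hit), then build the course content.
    findings = None
    for _ in range(max_iterations):
        findings = research(topic)
        if judge(findings) == "pass":
            break
    return build_content(findings)
```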
For a detailed analysis of the multi-agent architecture, this article provides the background information:
Multi-Agent A2A with the Agent Development Kit (ADK), Cloud Run, and Gemini CLI
Running/Testing/Debugging Locally
The main Makefile has been extended with targets for managing the agents in the local development environment.
First, check for any locally running agents:
xbill@penguin:~/gemini-cli-aws/multi-lambda$ make status
Checking status of locally running agents and servers...
--- Network Status ---
No services listening on expected ports (8000-8004, 5173).
--- Process Status ---
No matching processes found.
Then all the agents can be started together:
✦ The frontend has been built and all local services (Researcher, Judge, Content Builder, Orchestrator, App Backend, and Frontend dev server) have
been started in the background.
- Frontend: http://localhost:5173
- Backend: http://localhost:8000
Logs for each service are available in the project root: researcher.log, judge.log, content_builder.log, orchestrator.log, backend.log, and
frontend.log.
Then connect to the front end locally:
And the entire agent system will run in the local environment:
Local Logging / Debugging
Gemini CLI has full access to the local agent logs for debugging and troubleshooting:
> review the local logs for the multi-agent run
✦ I will check run_local.sh and the project structure to determine where logs are stored or how they are generated.
This will verify the local operation of the code:
✦ The local logs confirm a successful multi-agent run for the topic "the history of the internet." Here is a summary of the execution flow:
* Orchestrator (orchestrator.log): Successfully managed the pipeline. It captured the topic, coordinated the Research-Judge loop, and escalated to
content generation after the first iteration.
* Researcher (researcher.log): Gathered approximately 9.2KB of research data using the google_search tool. It completed its task with a 200 OK
response.
* Judge (judge.log): Evaluated the findings and gave a "pass" status. The judge noted that the research was "comprehensive, accurate, and
well-structured," making it an "excellent foundation for educational content."
* Content Builder (content_builder.log): Successfully transformed the validated research into a structured Markdown course module and finished
execution.
* Backend & Frontend (backend.log, frontend.log): The FastAPI backend (port 8000) and Vite frontend (port 5173) are running and communicating with
the orchestrator as expected.
Notable Observation:
* A minor warning appeared in researcher.log: Researcher callback: 'response' not found or has no parts. However, this did not impact the
successful completion of the research or the overall pipeline.
All services are currently active and logged at the DEBUG level as specified in run_local.sh.
Deploying to Amazon Lambda
The project-level Makefile has targets for managing the agent deployment to serverless endpoints. You can submit the build for Lambda deployment:
xbill@penguin:~/gemini-cli-aws/multi-lambda$ make deploy
chmod +x lambda/deploy-lambda.sh
./lambda/deploy-lambda.sh
Ensuring IAM role McpLambdaExecutionRole exists...
Logging in to Amazon ECR...
Once the containers are deployed, you can then get the endpoint:
xbill@penguin:~/gemini-cli-aws/multi-lambda$ make endpoint
https://wqv5reqmno6skv3xsqb64kgrsm0hletn.lambda-url.us-east-1.on.aws/
xbill@penguin:~/gemini-cli-aws/multi-lambda$ make status
Course Creator Lambda Status:
-----------------------------------------------------
---------------------------------------------
| GetFunction |
+---------------------------------+---------+
| Name | Status |
+---------------------------------+---------+
| course-creator-course-builder | Active |
+---------------------------------+---------+
URL: https://wqv5reqmno6skv3xsqb64kgrsm0hletn.lambda-url.us-east-1.on.aws/
-----------------------------------------------------
-------------------------------------------
| GetFunction |
+-------------------------------+---------+
| Name | Status |
+-------------------------------+---------+
| course-creator-orchestrator | Active |
+-------------------------------+---------+
URL: https://q5bciiujjktr6wris6tple6fra0yyrqc.lambda-url.us-east-1.on.aws/
-----------------------------------------------------
-----------------------------------------
| GetFunction |
+----------------------------+----------+
| Name | Status |
+----------------------------+----------+
| course-creator-researcher | Active |
+----------------------------+----------+
URL: https://gfhdoxhiiznflcz2cdhc65z2eq0cwimd.lambda-url.us-east-1.on.aws/
-----------------------------------------------------
------------------------------------
| GetFunction |
+-----------------------+----------+
| Name | Status |
+-----------------------+----------+
| course-creator-judge | Active |
+-----------------------+----------+
URL: https://kaen6rupkl5ph5kde2g6h7wgr40sirch.lambda-url.us-east-1.on.aws/
-----------------------------------------------------
----------------------------------------------
| GetFunction |
+----------------------------------+---------+
| Name | Status |
+----------------------------------+---------+
| course-creator-content-builder | Active |
+----------------------------------+---------+
URL: https://k5wt4o6vrdao3w4zjiabszdeue0kauxp.lambda-url.us-east-1.on.aws/
The service will be visible in the AWS console:
And the entire system can be tested:
xbill@penguin:~/gemini-cli-aws/multi-lambda$ make e2e-test-lambda
Fetching Lambda endpoint...
make[1]: Entering directory '/home/xbill/gemini-cli-aws/multi-lambda'
Running end-to-end test against https://wqv5reqmno6skv3xsqb64kgrsm0hletn.lambda-url.us-east-1.on.aws/...
Temporary JSON file content: {"message": "Create a short course about the history of the internet", "user_id": "e2e_test_user"}
Executing: curl -s -X POST https://wqv5reqmno6skv3xsqb64kgrsm0hletn.lambda-url.us-east-1.on.aws/api/chat_stream -H "Content-Type: application/json" -d @/tmp/tmp.vmxl0Dsf88 --no-buffer
{"type": "progress", "text": "\ud83d\ude80 Connected to backend, starting research..."}
{"type": "progress", "text": "\ud83d\ude80 Starting the course creation pipeline..."}
{"type": "progress", "text": "\ud83d\udd0d Research is starting..."}
{"type": "progress", "text": "\ud83d\udd0d Researcher is gathering information..."}
{"type": "progress", "text": "\u2696\ufe0f Judge is evaluating findings..."}
{"type": "progress", "text": "\u2696\ufe0f Judge is evaluating findings..."}
{"type": "progress", "text": "\u270d\ufe0f Building the final course content..."}
Running the Web Interface
Start a connection to the deployed app:
https://wqv5reqmno6skv3xsqb64kgrsm0hletn.lambda-url.us-east-1.on.aws/
Then connect to the app:
Then use the online course generator:
Final Gemini CLI Code Review
As a final step — Gemini CLI was used for a full code review of the project:
✦ The multi-lambda project is a well-engineered, distributed multi-agent system that effectively leverages the
Google Agent Development Kit (ADK) and the A2A protocol within an AWS Lambda environment. The architecture is
modular, resilient, and optimized for streaming AI interactions.
Key Strengths
* Robust Orchestration: The use of SequentialAgent and LoopAgent in agents/orchestrator/agent.py creates a
sophisticated Research-Judge pipeline. Custom components like StateCapturer and EscalationChecker provide
fine-grained control over the agent loop and state persistence.
* Infrastructure-Aware A2A: The middleware in shared/a2a_utils.py dynamically rewrites agent card URLs based on
x-forwarded-* headers, a critical feature for services running behind proxies or as Lambda Function URLs.
* Intelligent Agent Callbacks: Each sub-agent (researcher, judge, content_builder) utilizes
before_agent_callback to sanitize inputs and recover topics/findings from session history. This makes the
agents highly resilient to the "noise" of accumulated conversation history.
* Unified Deployment: The project employs a single Docker image (using the aws-lambda-adapter) for all
services. The deploy-lambda.sh script automates the complex task of deploying five distinct services,
managing their Function URLs, and wiring them together via environment variables.
* Modern Python Stack: The use of Python 3.13 and uvicorn ensures high performance, while the gateway's
merge_strings logic gracefully handles the complexities of streaming deduplication.
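The header-based URL rewriting called out above can be illustrated with a small helper. This is a simplified sketch of the idea, not the actual code in shared/a2a_utils.py:

```python
def effective_base_url(headers, default="http://localhost:8000"):
    # Behind a proxy or a Lambda Function URL, the externally visible
    # scheme and host arrive in x-forwarded-* headers, not on the socket.
    proto = headers.get("x-forwarded-proto")
    host = headers.get("x-forwarded-host")
    return f"{proto}://{host}" if proto and host else default
```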
Observations & Recommendations
* Session Persistence: Currently, the system appears to use ADK's default in-memory session storage. In a
Lambda environment, sessions may be lost across cold starts or scaling events. For production use,
integrating a persistent session store (e.g., DynamoDB or Redis) via the session_service_uri option is
recommended.
* Security: Inter-agent communication currently bypasses IAM authentication (using NONE auth type) and relies
on the authenticated_httpx.py bypass logic. While acceptable for a demo, production environments should
leverage AWS IAM or Google Identity Tokens for service-to-service authorization.
* State Management: The orchestrator's StateCapturer effectively "syncs" findings from the event stream into
the session state. This is a clever approach to distributed state management, ensuring the Orchestrator
remains the source of truth even when sub-agents are remote.
Overall, this is a sophisticated implementation that successfully bridges Google's AI agent framework with AWS's
serverless infrastructure.
Summary
The Agent Development Kit (ADK) was used to build a multi-agent system with A2A support on the Gemini Flash model. The application was tested locally with Gemini CLI and then deployed to AWS Lambda. Finally, Gemini CLI performed a complete project code review.
This article was originally published by DEV Community and written by xbill.