You’ve built your first intelligent agent with Google’s Agent Development Kit (ADK).
It works locally, passes its tests, and now it needs the final flourish: deployment.
Should you let `adk deploy cloud_run` do the heavy lifting, or drop to `gcloud` for full control?
And once it’s live, how do you automate updates and sanity-check your endpoints?
This guide walks through exactly that, using GetHired, the multi-agent job-search assistant we built for the Agent Development Kit Hackathon with Google Cloud, as the running example.
Table of Contents
- Prerequisites
- Option 1: `adk deploy cloud_run` (The Easy Button)
- Option 2: Custom Docker + `gcloud` (Full Control)
- CI/CD with GitHub Actions
- Handling Secrets Like a Pro
- Testing Your Service
Prerequisites
- Google Cloud project with billing enabled
- ADK-structured repository (sample layout)
- GitHub repo with Actions enabled
- Some knowledge of GitHub Actions
- Optional: fresh coffee—builds can take a minute
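One more setup step that tends to bite people: the relevant Google Cloud APIs must be enabled before the first deploy. A one-time sketch (the service names cover Cloud Run, Cloud Build, Artifact Registry, and Secret Manager, all used later in this guide):

```shell
# Enable the APIs this guide relies on (one-time, per project)
gcloud services enable \
  run.googleapis.com \
  cloudbuild.googleapis.com \
  artifactregistry.googleapis.com \
  secretmanager.googleapis.com \
  --project=YOUR_PROJECT_ID
```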
Option 1: adk deploy cloud_run (The Easy Button)
`adk deploy` hides most of the boilerplate: it builds your container, sets env vars, deploys to Cloud Run, and can toggle extras like the UI, tracing, and artifact storage.
# Install ADK
pip install google-adk
# Deploy from project root
adk deploy cloud_run \
  --project=YOUR_PROJECT_ID \
  --region=us-central1 \
  $AGENT_PATH
⚠️ If you decide to use this command, here are a couple of gotchas:
- **Interactive prompt**: `adk deploy` asks whether to allow unauthenticated traffic. In CI you'll need to pre-answer (e.g., `yes | adk deploy ...`).
- **Agent path sanity**: `$AGENT_PATH` must contain an `__init__.py` and your main agent file.
- **IAM bare minimum**: hopefully this table saves you a few iterations.
| Role | Why you need it |
|---|---|
| `roles/run.admin` | Create/update Cloud Run services |
| `roles/iam.serviceAccountUser` | Impersonate the service account that actually runs the container |
| `roles/cloudbuild.builds.editor` | Kick off Cloud Build jobs |
| `roles/artifactregistry.writer` | Push the built image |
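If you'd rather script those grants than click through the console, a loop like this works. A sketch, assuming a dedicated deployer service account; substitute your own project ID and SA email:

```shell
PROJECT_ID=YOUR_PROJECT_ID
DEPLOYER_SA="deployer@${PROJECT_ID}.iam.gserviceaccount.com"

# Grant each role from the table above to the deployer service account
for role in roles/run.admin roles/iam.serviceAccountUser \
            roles/cloudbuild.builds.editor roles/artifactregistry.writer; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:${DEPLOYER_SA}" \
    --role="$role"
done
```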
Option 2: Custom Docker + gcloud (Full Control)
This route gives you full control: you specify build steps, fine-tune IAM roles, tweak concurrency settings, and wrestle with YAML. More power, more responsibility; if something breaks, congrats, you now own it. Here's the Dockerfile we used:
FROM python:3.11-slim
WORKDIR /app
ENV PORT=8080 PYTHONUNBUFFERED=1 GOOGLE_GENAI_USE_VERTEXAI=true TIMEOUT=300
# OS deps
RUN apt-get update && apt-get install -y curl gnupg && rm -rf /var/lib/apt/lists/*
# Node (for Firebase MCP helper)
RUN curl -fsSL https://deb.nodesource.com/setup_lts.x | bash - && apt-get install -y nodejs && apt-get clean && rm -rf /var/lib/apt/lists/*
# Python deps
COPY jobsearch_agents/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Node deps
COPY jobsearch_agents/package*.json ./
RUN npm install && npm install @gannonh/firebase-mcp
# App code
COPY template/ ./template/
COPY jobsearch_agents/ .
EXPOSE ${PORT}
CMD ["python", "-m", "coordinator", "--host=0.0.0.0", "--port=8080"]
The companion `requirements.txt` lives in the same directory:
google-adk>=1.1.1
google-genai>=1.5.0
google-cloud-bigquery>=3.31.0
# …snip…
firebase-admin>=6.4.0
google-cloud-storage>=2.10.0
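Before wiring any of this into CI, it's worth a quick local smoke test of the image. A sketch (the build context and Dockerfile path match the layout above; the image tag is our choice):

```shell
# Build from the repo root so template/ is inside the build context
docker build -f jobsearch_agents/Dockerfile -t gethired-agents:dev .

# Run locally on the same port the container exposes
docker run --rm -p 8080:8080 \
  -e GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_ID \
  gethired-agents:dev
```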
CI/CD with GitHub Actions
Workflow Overview
| Trigger | Job | Target |
|---|---|---|
| `push` to `main` | build → deploy-production | Production |
| `pull_request` | build → deploy-staging | Staging |
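The trigger table above maps to a workflow skeleton along these lines. The workflow and job names are our choices, not anything ADK mandates, and the steps are trimmed:

```yaml
name: Deploy GetHired Agents

on:
  push:
    branches: [main]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    # steps: auth, build, and push (shown in the sections below)
  deploy-staging:
    if: github.event_name == 'pull_request'
    needs: build
    runs-on: ubuntu-latest
    # steps: auth, gcloud run deploy to the staging service
  deploy-production:
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    needs: build
    runs-on: ubuntu-latest
    # steps: auth, gcloud run deploy to the production service
```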
Authentication & Environment Setup
Both build and deployment jobs begin by authenticating with GCP and setting up the necessary CLI tools:
- id: auth
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
export_environment_variables: true
cleanup_credentials: true
- name: Set up Google Cloud SDK
uses: google-github-actions/setup-gcloud@v2
Key Points:
- GCP service account credentials are securely loaded via GitHub Secrets.
- These credentials are exported to the environment for use by the gcloud CLI, and they get cleaned up by the action post-run.
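To populate that `GCP_CREDENTIALS` secret in the first place, one option is a service-account key pushed via the GitHub CLI. A sketch, assuming the deployer service account from earlier and that you have `gh` authenticated against the repo:

```shell
# Create a key for the deployer service account
gcloud iam service-accounts keys create key.json \
  --iam-account=deployer@YOUR_PROJECT_ID.iam.gserviceaccount.com

# Store it as a GitHub Actions secret, then remove the local copy
gh secret set GCP_CREDENTIALS < key.json
rm key.json
```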
Docker Image Build & Push
The build job performs the following:
- Checks out code
- Builds and tags a Docker image
- Pushes the image to Google Container Registry
- name: Build and Push Docker Image
id: build
working-directory: ${{ github.workspace }}
run: |
# Build Docker image from project root to access template directory
IMAGE_NAME="gcr.io/$GOOGLE_CLOUD_PROJECT/gethired-agents"
IMAGE_TAG="${{ github.sha }}"
docker build -f jobsearch_agents/Dockerfile -t ${IMAGE_NAME}:${IMAGE_TAG} .
docker tag ${IMAGE_NAME}:${IMAGE_TAG} ${IMAGE_NAME}:latest
# Configure docker to use gcloud credentials
gcloud auth configure-docker gcr.io -q
# Push images to Google Container Registry
docker push ${IMAGE_NAME}:${IMAGE_TAG}
docker push ${IMAGE_NAME}:latest
echo "image=${IMAGE_NAME}:${IMAGE_TAG}" >> $GITHUB_OUTPUT
Output:
- Docker image tagged with SHA and latest
- Image URL passed to subsequent deploy jobs via outputs.image
Deploy to Google Cloud Run
Two deployment jobs handle release to staging and production:
✅ Deploy-Staging
Triggered by: pull_request
gcloud run deploy gethired-agents-staging \
--image=${{ needs.build.outputs.image }} \
--region=us-central1 \
--allow-unauthenticated \
--memory=2Gi \
--cpu=2.0 \
--set-env-vars=...
The snippet above is the staging/testing configuration we used to verify a change before promoting it. A few key takeaways:
- Uses moderate resources
- Deploys the same Docker image as built
- Sets the necessary environment variables, e.g.:
  - `GOOGLE_CLOUD_LOCATION`
  - `FIREBASE_SERVICE_ACCOUNT_KEY` (if you're using the Firebase MCP with your agents)
💡 For production, consider giving your containers more resources, setting `--min-instances` to reduce cold starts, and pairing your service with a secret manager.
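Concretely, a production deploy might look like this. The resource numbers are illustrative, not tuned values:

```shell
gcloud run deploy gethired-agents \
  --image=gcr.io/YOUR_PROJECT_ID/gethired-agents:latest \
  --region=us-central1 \
  --memory=4Gi \
  --cpu=4 \
  --min-instances=1 \
  --max-instances=10 \
  --allow-unauthenticated
```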
Speaking of which...
Handling Secrets Like a Pro
Use Google Secret Manager for secure credential management in production.
Via the `gcloud run deploy` command, you can pull stored secrets and expose them as environment variables, or mount them as files inside your Cloud Run containers. To do the latter, add this parameter to your `gcloud run deploy` command:
--update-secrets=MOUNT_PATH=secret-service-account-key:latest
Example:
gcloud run deploy my-service --update-secrets=/var/secrets/firebase=firebase-service-account-key:latest
At runtime, your container reads `/var/secrets/firebase`.
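For that flag to work, the secret has to exist and the service's runtime service account needs read access. Roughly (the secret name matches the example above; the runtime SA email is yours to fill in):

```shell
# Create the secret from a local key file
gcloud secrets create firebase-service-account-key \
  --data-file=firebase-key.json

# Let the Cloud Run runtime service account read it
gcloud secrets add-iam-policy-binding firebase-service-account-key \
  --member="serviceAccount:RUNTIME_SA_EMAIL" \
  --role="roles/secretmanager.secretAccessor"
```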
Testing Your Service
- **Swagger UI**: visit `<SERVICE_URL>/docs` to test your endpoints.
- **Curl one-liner**:
curl -X POST "$SERVICE_URL/chat" -H "Content-Type: application/json" -d '{"message":"Hello, agent!"}'
If the response looks good, your agent is live.
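To script that check, for example at the end of a deploy job, you can pull the service URL from gcloud and curl it. A sketch; the `/chat` endpoint and the expected 200 status are assumptions based on our agent's API:

```shell
# Look up the deployed service URL
SERVICE_URL=$(gcloud run services describe gethired-agents \
  --region=us-central1 --format='value(status.url)')

# Hit the chat endpoint and capture only the HTTP status code
STATUS=$(curl -s -o /dev/null -w '%{http_code}' \
  -X POST "$SERVICE_URL/chat" \
  -H "Content-Type: application/json" \
  -d '{"message":"Hello, agent!"}')

if [ "$STATUS" != "200" ]; then
  echo "Smoke test failed with HTTP $STATUS" >&2
  exit 1
fi
```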
Final Thoughts
You now have two paths to production:
- `adk deploy cloud_run` when speed matters more than fine-grained control and extra tooling in your environments.
- Custom Docker + `gcloud` when control matters more than convenience.
Either way, your agents are ready to power UIs, talk to other agents, or whatever else you dream up.
👀 Have you deployed an agent using the ADK yet? How was your experience?
Stay tuned for a follow-up on Agent Engine, Google Cloud’s managed runtime that handles sessions, scaling, and leaves you free to build smarter agents.