Mission Check 2 – Run Standalone Petstore Agent
Overview
🚀 Mission Status: As a newly arrived Mars Inhabitant, your first assignment is to manage the colony's biological companions and supply systems.
In this mission, you'll deploy a standalone Petstore AI agent to handle critical colony operations:
- 🐾 Companion Management: Track, care for, and manage colony animals that boost morale and assist with tasks
- 📦 Supply Operations: Monitor inventory, process resource orders, and analyze colony logistics
- 👨‍🚀 Inhabitant Management: Maintain records and manage access for fellow Mars inhabitants
- 🔍 Smart Search: Efficiently locate animals and supplies using advanced filtering systems
- ⚡ Response Optimization: Handle large datasets crucial for colony survival without system overload
Architecture Overview
The petstore agent can run in two different MCP (Model Context Protocol) modes, each with distinct advantages:
Key Differences Between the Modes
Mode | How it Works | Benefits
---|---|---
🔌 STDIO | The agent starts its own MCP server process and communicates with it directly through simple text commands (like a conversation through a pipe). | • Faster communication (no network delays) • Everything runs in one place • Simpler setup for development • No authentication needed
🌐 HTTP | The agent connects to a separate MCP server running elsewhere using web requests (like calling an API over the internet). | • The MCP server can serve multiple agents • Better for production deployments • Components can scale independently • Supports authentication and security
The following diagrams illustrate how the chat client connects to the petstore agent in each mode:
STDIO Mode | HTTP Mode |
---|---|
📝 NOTE: If you prefer to build and run the agent locally, refer to the section at the bottom of this page: [Optional] Steps 1-3: Build and run the petstore agent locally.
Step 1: Navigate to AI Platform Engineering Repository
cd $HOME/work/ai-platform-engineering
Step 2: Set Up Environment Variables
2.1: Copy the example environment file
cp .env.example .env
2.2: Edit the environment file with your LLM credentials
For this workshop, we will use Azure OpenAI. The API credentials are available in the .env_vars file in your home directory. Run the command below in the terminal to source the variables from .env_vars and update the .env file you just created:
source $HOME/.env_vars && \
sed -i \
-e 's|^LLM_PROVIDER=.*|LLM_PROVIDER=azure-openai|' \
-e "s|^AZURE_OPENAI_API_KEY=.*|AZURE_OPENAI_API_KEY=${AZURE_OPENAI_API_KEY}|" \
-e "s|^AZURE_OPENAI_ENDPOINT=.*|AZURE_OPENAI_ENDPOINT=${AZURE_OPENAI_ENDPOINT}|" \
-e "s|^AZURE_OPENAI_DEPLOYMENT=.*|AZURE_OPENAI_DEPLOYMENT=${AZURE_OPENAI_DEPLOYMENT}|" \
-e "s|^AZURE_OPENAI_API_VERSION=.*|AZURE_OPENAI_API_VERSION=${AZURE_OPENAI_API_VERSION}|" \
.env
💡 Tip: Check that your Azure credentials are set in your .env file:
cat .env | grep -Ei 'azure|llm' | sed -E 's/(=.{5}).+/\1****/'
Alternatively, you can verify that the variables have been set correctly by going to the IDE tab at the top right of this page and opening the .env file under the ai-platform-engineering/ directory.
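If you want a stricter check than eyeballing the masked output, a small shell helper can flag any required variable that is still missing or empty. `check_env` below is a hypothetical convenience function, not part of the workshop tooling; its variable list mirrors the sed command above:

```shell
# Hypothetical helper: flag variables that are missing or empty in an env file
check_env() {
  local file="${1:-.env}" var missing=0
  for var in LLM_PROVIDER AZURE_OPENAI_API_KEY AZURE_OPENAI_ENDPOINT \
             AZURE_OPENAI_DEPLOYMENT AZURE_OPENAI_API_VERSION; do
    # require at least one character after the '=' sign
    if ! grep -q "^${var}=." "$file" 2>/dev/null; then
      echo "MISSING: ${var}"
      missing=1
    fi
  done
  return "$missing"
}
```

Running `check_env .env` prints nothing and returns 0 when every variable has a value, which makes it easy to use in a script or a quick one-off check.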
Step 3: Run the Petstore Agent
💡 Mode Selection Tip:
- Use STDIO mode for local development and testing with minimal overhead
- Use HTTP mode for production environments or when you need to connect to remotely hosted MCP servers
You can run the petstore agent in two different MCP (Model Context Protocol) modes. For this workshop, we will use HTTP mode, but you can also use STDIO mode if you prefer (see Step 7: [Optional] Using MCP STDIO Mode).
3.1: Using Remote MCP Streamable HTTP Mode
HTTP mode enables network-based communication with remote MCP servers, useful for production deployments or when the MCP server is running separately. In this mode, the agent connects to a separately hosted internal MCP server running at https://petstore.outshift.io/mcp, which then handles the Petstore API operations.
3.1.1: Set the Petstore API key
PETSTORE_MCP_API_KEY=$(echo -n 'caiperocks' | sha256sum | cut -d' ' -f1) && \
sed -i "s|^PETSTORE_MCP_API_KEY=.*|PETSTORE_MCP_API_KEY=${PETSTORE_MCP_API_KEY}|" .env
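The API key here is simply the SHA-256 hex digest of the workshop passphrase. If you want to sanity-check the value before trusting the sed edit, remember that a SHA-256 hex digest is always 64 hexadecimal characters:

```shell
# Derive the key and confirm it has the shape of a SHA-256 hex digest
KEY=$(echo -n 'caiperocks' | sha256sum | cut -d' ' -f1)
echo "${#KEY}"   # prints 64
```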
3.1.2: Run the petstore agent
IMAGE_TAG=latest MCP_MODE=http docker compose -f workshop/docker-compose.mission2.yaml up
What happens:
- ⬇️ Downloads the petstore agent image with the latest tag from the registry
- 🔗 Connects to the remote MCP server via HTTP/streaming mode at https://petstore.outshift.io/mcp
- 🌐 Exposes the agent on http://localhost:8000
- 📜 Shows logs directly in the terminal
- 🚀 Advantage: Supports remote MCP servers, useful for production deployments, better separation of concerns
3.2: Expected Output (Both Modes)
Regardless of which mode you choose, you should see the following output:
...
===================================
PETSTORE AGENT CONFIG
===================================
AGENT_URL: http://0.0.0.0:8000
===================================
Running A2A server in p2p mode.
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
🎯 Success indicator: Regardless of the mode you choose, wait until you see the message: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
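If you'd rather not watch the logs by hand, you can poll the agent's well-known endpoint until it answers. `wait_for_agent` below is a hypothetical helper, not part of the workshop tooling; its default URL matches the health check used in Step 4:

```shell
# Hypothetical helper: poll a URL until it responds or retries run out
wait_for_agent() {
  local url="${1:-http://localhost:8000/.well-known/agent.json}"
  local tries="${2:-30}" i
  for i in $(seq 1 "$tries"); do
    if curl -sf "$url" > /dev/null 2>&1; then
      echo "agent ready"
      return 0
    fi
    sleep 1
  done
  echo "agent did not become ready after ${tries} attempts" >&2
  return 1
}
# Usage once the compose stack is starting up:
#   wait_for_agent
```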
Step 4: Test the Petstore Agent
Open a new terminal and run the following command to test the agent health. You should see the agent card with the agent's name, description, and capabilities, including example prompts that you can use to test the agent.
💡 Note: Click the + button on the terminal window to open a new terminal before running the following commands.
curl http://localhost:8000/.well-known/agent.json | jq
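The full card is fairly verbose. If you only want the highlights, jq can pull out individual fields; note that the field names below (`name`, `description`) are assumptions about the card schema, so adjust them to what your agent actually returns:

```shell
# Fetch the card and keep only selected fields (field names are assumptions)
curl -s http://localhost:8000/.well-known/agent.json | jq '{name, description}'
```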
Step 5: Connect Chat Client
Once you confirm the agent is running, start the chat client:
docker run -it --network=host ghcr.io/cnoe-io/agent-chat-cli:stable
💡 Tip: When asked to 💬 Enter token (optional):, just press Enter ⏎. In production, your system would use a JWT or Bearer token for authentication here.
The chat client will connect to the petstore agent on port 8000 and download the agent card from Step 4, which it uses to discover the agent's capabilities.
Wait for the agent's welcome message with example skills and the CLI prompt 🧑‍💻 You:. You can now start interacting with the agent.
Step 6: Interact with the Petstore Agent
6.1: Discovery Commands
Try these example interactions:
What actions can you perform?
Show me what you can do with pets
6.2: Pet Management Examples
ℹ️ Info: HTTP mode persists data, so you can try adding pets and then retrieving them. STDIO mode, however, uses a demo sandbox where data is not persisted, so create/update/delete operations may not be reflected in subsequent reads.
⚠️ Note (HTTP MCP mode): All lab users share the same remote Petstore endpoint. To avoid collisions, use unique pet names and random pet IDs when creating new pets.
(Admins will reset the data after the workshop.)
Find all available pets in the store
Get all cats that are available
Get a summary of pets by status
I want to add a new pet to the store
6.3: Store Operations
Check store inventory levels
Show me pets with 'rain proof' tag
Expected Behavior
- ✅ Fast responses - The agent uses optimized functions with response limits
- ✅ Smart search - Can handle combined criteria like "cats that are pending"
- ✅ Interactive guidance - The agent asks for required details when needed; e.g. ask to add a new pet and it will prompt for name, category, status, etc.
- ✅ Rich summaries - Shows counts and statistics without overwhelming data
Step 7: [Optional] Using MCP STDIO Mode
STDIO mode runs the MCP server embedded within the agent container, using standard input/output streams for internal communication. The embedded MCP server then connects to the external Petstore API.
📝 Note: If you are already running the agent in HTTP mode, first stop the docker compose:
docker compose -f $HOME/work/ai-platform-engineering/workshop/docker-compose.mission2.yaml down
IMAGE_TAG=latest MCP_MODE=stdio docker compose -f workshop/docker-compose.mission2.yaml up
What happens:
- ⬇️ Downloads the petstore agent image with the latest tag from the registry
- 🔗 Connects to the MCP server via STDIO mode, which in turn talks to https://petstore.swagger.io/v2, a public sandbox API
- 🌐 Exposes the agent on http://localhost:8000
- 📜 Shows logs directly in the terminal
- ⚡ Advantage: Lower latency, direct process communication
Step 8: Teardown the agent and chat client
🛑 Before You Proceed: Bring Down Your Docker Containers
- Important: Run docker compose down in your terminal to stop and remove all running containers for this demo before moving on to the next steps.
- This ensures a clean environment and prevents port conflicts or resource issues.
You can stop the agent and chat client by pressing Ctrl+C (or Cmd+C on Mac) in each terminal. If you have already closed the terminals, run the specific docker compose down command below to make sure the agent has stopped:
docker compose -f $HOME/work/ai-platform-engineering/workshop/docker-compose.mission2.yaml down
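To confirm the teardown really freed the agent's port, bash's /dev/tcp feature gives a dependency-free check. `port_free` below is a sketch, not part of the workshop tooling, and relies on bash-specific behavior:

```shell
# Hypothetical helper: succeed if nothing is listening on the given port
port_free() {
  local port="${1:-8000}"
  # bash treats /dev/tcp/HOST/PORT as a TCP connection attempt
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    return 1   # connection succeeded, so something is still listening
  fi
  return 0
}
if port_free 8000; then
  echo "port 8000 is free"
fi
```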
Mission Checks
🚀 Colony Mission Checklist
- Navigate to AI Platform Engineering repository
- Set up .env file with LLM credentials
- Run docker compose to pull the latest petstore agent image and run it on port 8000
- Connect chat client to the petstore agent and test the agent
- Test discovery: "What actions can you perform?"
- Test companion search: "Find all available companions"
- Test smart search: "Get all cats that are pending"
- Test interactive: "I want to add a new companion"
- Teardown the agent and chat client
Troubleshooting
Here are some common issues you may encounter and how to fix them.
Agent won't start
# Check if port 8000 is in use
lsof -i :8000
# Stop any existing containers
make stop
make clean
Chat client can't connect
# Verify agent health
curl http://localhost:8000/.well-known/agent.json
# Check if agent is running
make status
Environment issues
# Check environment variables
make show-env
# Rebuild with fresh environment
make run-rebuild
[Optional] Steps 1-3: Build and run the petstore agent locally
Set up environment variables
If you are using your local machine, first get the Azure OpenAI credentials from the lab environment:
cat $HOME/.env_vars
Then run the command below to copy the example environment file on your local machine, and update the .env file with the Azure OpenAI credentials:
cp .env.example .env
Build and run the petstore agent locally
You can also build and run the petstore agent locally:
MCP_MODE=stdio docker compose -f workshop/docker-compose.mission2.yaml -f workshop/docker-compose.dev.override.yaml --profile mission2-dev up
What happens:
- 🔧 Builds the Docker image from ai_platform_engineering/agents/template/build/Dockerfile.a2a
- 📁 Mounts code via volumes for live development
- 🌐 Exposes the agent on http://localhost:8000
- 📜 Shows logs directly in the terminal
The command above uses the dev override file to mount the code from your local machine and rebuild the petstore agent image on each change, which is useful for testing local changes to the agent code. You can now return to Step 4: Test the Petstore Agent to test the agent.