Run CAIPE with KinD (Kubernetes in Docker)

This guide gets you from zero to a running CAIPE (Community AI Platform Engineering) environment on your laptop using KinD (Kubernetes in Docker). No prior experience with CAIPE or Kubernetes is required.

What is CAIPE? CAIPE is an open-source platform for building and running AI agents that can use tools, talk to LLMs (like Claude or GPT), and work together in multi-agent systems. This setup gives you a local environment where you can try agents, add RAG (retrieval-augmented generation), and observe traces—all on your machine.


Step 1: Clone the repository

Open a terminal and clone the CAIPE repository. You need this repo to run the setup script.

git clone https://github.com/cnoe-io/ai-platform-engineering.git
cd ai-platform-engineering

You now have the project on your machine. The script we use next lives at the root of this repo: setup-caipe.sh.


Step 2: Prerequisites

Before running the setup script, install these tools if you don’t have them yet:

| Tool | Purpose |
| --- | --- |
| Docker | Runs containers (Docker Desktop or a compatible runtime) |
| Kind | Runs a small Kubernetes cluster inside Docker |
| kubectl | Command-line client for Kubernetes |
| Helm | Installs CAIPE and its components on the cluster |
| Python 3 | Used by the setup script |
| curl | Used for health checks |

The script will check for these at startup and can create a Kind cluster for you if one doesn’t exist.
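If you want to verify your environment before running the script, a quick preflight of your own can approximate these checks (a sketch; the setup script performs its own checks, which may differ):

```shell
#!/usr/bin/env sh
# Preflight: report which required tools are on PATH.
check_tools() {
  rc=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok:      $tool"
    else
      echo "missing: $tool"
      rc=1
    fi
  done
  return $rc
}

# Tool list from the table above:
check_tools docker kind kubectl helm python3 curl
```

A nonzero exit status means at least one tool is missing.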


Step 3: Run the setup script

From the repository root (the ai-platform-engineering folder you cloned), run:

./setup-caipe.sh

The script is interactive and will:

  1. Select or create a Kubernetes cluster — If you don’t have one, it can create a Kind cluster named caipe.
  2. Choose an LLM provider — Anthropic Claude is the recommended default; OpenAI and AWS Bedrock are also supported.
  3. Ask for your API key — You’ll enter the key when prompted (or the script can read it from a config file; see below).
  4. Optionally enable RAG and tracing — You can add a RAG stack and Langfuse tracing when asked.

When it finishes, you’ll have CAIPE running locally. The script will tell you how to open the UI and run your first queries.
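Once the script reports success, you can sanity-check the cluster yourself. A hedged sketch, assuming kubectl's current context points at the Kind cluster:

```shell
#!/usr/bin/env sh
# Quick post-install sanity check (assumes kubectl's context is the Kind cluster).
check_cluster() {
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "kubectl not found; see the prerequisites table"
    return 1
  fi
  # All CAIPE pods should eventually reach Running or Completed.
  kubectl get pods -A
}

check_cluster || echo "cluster check failed (is the Kind cluster running?)"
```

`./setup-caipe.sh status` (see the quick reference below) gives a similar view, including Helm releases.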

Tear down when you’re done

To remove the environment and free resources:

./setup-caipe.sh cleanup

Quick reference

What the script does

  • Deploys CAIPE (supervisor, agents, UI) on your Kind cluster
  • Configures your chosen LLM provider and stores credentials in Kubernetes secrets
  • Optionally deploys RAG (knowledge base) and Langfuse (tracing)
  • Can create the Kind cluster for you and run health checks

Useful commands

| Command | Description |
| --- | --- |
| `./setup-caipe.sh` | Full interactive setup (default) |
| `./setup-caipe.sh port-forward` | Start port-forwarding and run validation |
| `./setup-caipe.sh validate` | Run validation and sanity tests only |
| `./setup-caipe.sh cleanup` | Interactive teardown of all resources |
| `./setup-caipe.sh nuke` | Non-interactive teardown |
| `./setup-caipe.sh status` | Show pod status and Helm releases |

Non-interactive mode (CI or scripts)

For automation, use --non-interactive and environment variables:

# Create cluster and deploy with Claude (default)
./setup-caipe.sh --non-interactive --create-cluster

# Full stack: RAG + Langfuse tracing + auto-heal
./setup-caipe.sh --non-interactive --create-cluster --rag --tracing --auto-heal

# Non-interactive teardown
./setup-caipe.sh nuke

Credentials are read from ~/.config/claude.txt and ~/.config/openai.txt when those files exist. You can override them with ANTHROPIC_API_KEY or OPENAI_API_KEY.

Options

| Flag | Description |
| --- | --- |
| `--non-interactive` | Skip prompts; use env vars or defaults |
| `--create-cluster` | Create a Kind cluster if none exists (name: `caipe`) |
| `--rag` | Deploy the RAG stack (knowledge base, embeddings) |
| `--graph-rag` | Deploy Graph RAG (Neo4j + ontology agent; implies `--rag`) |
| `--tracing` | Deploy Langfuse and enable tracing |
| `--ingest-url=URL` | Ingest a URL into the RAG knowledge base (implies `--rag`; repeatable) |
| `--auto-heal` | Enable the auto-heal loop (every 30s) |
| `--yes`, `-y` | Auto-confirm cleanup prompts |
| `-h`, `--help` | Show help |

LLM providers

Anthropic Claude (default)

Run the script and follow the prompts. When asked for a provider, choose Anthropic Claude (or accept the default).

To avoid typing your key every time, put it in a file (one line, no extra spaces):

# Create the file and add your key (replace with your real key)
echo "sk-ant-your-key-here" > ~/.config/claude.txt
chmod 600 ~/.config/claude.txt

The script looks for your key in this order: ANTHROPIC_API_KEY env var → ~/.config/claude.txt → interactive prompt.
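That lookup order can be pictured as a small resolver (a hypothetical helper for illustration, not code from setup-caipe.sh):

```shell
#!/usr/bin/env sh
# Illustrative resolver for the lookup order described above:
# env var -> config file -> interactive prompt.
resolve_anthropic_key() {
  if [ -n "$ANTHROPIC_API_KEY" ]; then
    echo "$ANTHROPIC_API_KEY"
  elif [ -r "$HOME/.config/claude.txt" ]; then
    head -n 1 "$HOME/.config/claude.txt"
  else
    printf 'Enter Anthropic API key: ' >&2
    read -r key
    echo "$key"
  fi
}
```

Because the env var wins, exporting `ANTHROPIC_API_KEY` in CI takes precedence over the config file.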

Non-interactive:

ANTHROPIC_API_KEY=sk-ant-xxx ./setup-caipe.sh --non-interactive --create-cluster

OpenAI

Choose OpenAI when the script asks for a provider, or set:

LLM_PROVIDER=openai ./setup-caipe.sh

Store your key in ~/.config/openai.txt (one line) to skip the prompt.

Non-interactive:

LLM_PROVIDER=openai OPENAI_API_KEY=sk-xxx ./setup-caipe.sh --non-interactive --create-cluster

AWS Bedrock

Choose AWS Bedrock when prompted, or:

LLM_PROVIDER=aws-bedrock ./setup-caipe.sh

The script uses your AWS credentials (env vars, ~/.config/bedrock.txt, or ~/.aws/credentials).

Non-interactive:

LLM_PROVIDER=aws-bedrock AWS_PROFILE=my-profile ./setup-caipe.sh --non-interactive --create-cluster

Enabling RAG (knowledge base)

When you run the script interactively, it can enable RAG (retrieval-augmented generation)—a knowledge base that agents can query. You can also pass the flag:

./setup-caipe.sh --rag

RAG uses OpenAI embeddings by default. If your main LLM is Claude, the script will ask for both your Anthropic key (for the LLM) and an OpenAI key (for embeddings). It can read the OpenAI key from ~/.config/openai.txt or OPENAI_API_KEY.

Ingesting documentation

After RAG is deployed, you can ingest a documentation site:

./setup-caipe.sh --non-interactive --rag --ingest-url=https://cnoe-io.github.io/ai-platform-engineering/

You can monitor progress in the CAIPE UI under the Knowledge Base tab.


Environment variables

| Variable | Default | Description |
| --- | --- | --- |
| `LLM_PROVIDER` | `anthropic-claude` | `anthropic-claude`, `aws-bedrock`, or `openai` |
| `OPENAI_API_KEY` | — | OpenAI API key (also used for RAG embeddings when RAG is enabled) |
| `ANTHROPIC_API_KEY` | — | Anthropic API key |
| `KIND_CLUSTER_NAME` | `caipe` | Kind cluster name (used with `--create-cluster`) |
| `CAIPE_CHART_VERSION` | `latest` | Pin the Helm chart version |
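The defaults in the table behave like ordinary shell parameter defaults; conceptually (a sketch of the defaulting behavior, not the script's actual code):

```shell
#!/usr/bin/env sh
# How the table's defaults apply when a variable is unset (sketch only):
LLM_PROVIDER="${LLM_PROVIDER:-anthropic-claude}"
KIND_CLUSTER_NAME="${KIND_CLUSTER_NAME:-caipe}"
CAIPE_CHART_VERSION="${CAIPE_CHART_VERSION:-latest}"
echo "provider=$LLM_PROVIDER cluster=$KIND_CLUSTER_NAME chart=$CAIPE_CHART_VERSION"
```

Setting any of these before invoking `./setup-caipe.sh` overrides the corresponding default.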

Ports

| Service | Local port |
| --- | --- |
| Supervisor (A2A API) | 8000 |
| CAIPE UI | 3000 |
| Langfuse (tracing) | 3100 |
| RAG Server | 9446 |
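With `./setup-caipe.sh port-forward` running, you can probe these ports locally. A sketch only: a service that does not serve its root path may report "down" even when healthy.

```shell
#!/usr/bin/env sh
# Probe forwarded ports (assumes port-forwarding is active; treat "down"
# as a hint rather than a verdict).
probe_ports() {
  for port in "$@"; do
    if curl -fsS -o /dev/null --max-time 2 "http://localhost:$port" 2>/dev/null; then
      echo "port $port: up"
    else
      echo "port $port: down"
    fi
  done
}

probe_ports 8000 3000 3100 9446
```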

Next steps