Quickstart

Get BubuStack running on a local cluster and deploy your first workflow in under 10 minutes.

Before you start, make sure you have the prerequisites installed: kubectl, Helm, kind, and Docker.

Overview

1. Create a local cluster (kind)
2. Install cert-manager (webhook TLS)
3. Install S3 storage (payload offloading)
4. Install BubuStack controllers (bobrapet + bobravoz-grpc)
5. Install component templates (EngramTemplate / ImpulseTemplate)
6. Deploy an example (start experimenting)

Step 1: Create a cluster

kind create cluster --name bubustack

Verify the cluster is running:

kubectl cluster-info

Step 2: Install cert-manager

BubuStack admission webhooks require TLS certificates. cert-manager handles provisioning and rotation.

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.4/cert-manager.yaml

Wait for cert-manager to become ready:

kubectl wait --for=condition=ready pod \
-l app.kubernetes.io/instance=cert-manager \
-n cert-manager --timeout=300s

Step 3: Install SeaweedFS (S3 storage)

BubuStack offloads large payloads to S3-compatible storage. The quickstart uses SeaweedFS — a lightweight S3 server.

Add the Helm repo

helm repo add seaweedfs https://seaweedfs.github.io/seaweedfs/helm
helm repo update

Create the namespace and anonymous-access config

kubectl create namespace seaweedfs --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic seaweedfs-s3-anon-config -n seaweedfs \
--from-literal='seaweedfs_s3_config={"identities":[{"name":"anonymous","actions":["Read","Write","List","Tagging","Admin"]}]}' \
--dry-run=client -o yaml | kubectl apply -f -
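The secret's seaweedfs_s3_config value is a JSON identity list that grants the anonymous identity full access. A malformed JSON string here fails silently until the S3 gateway reads it, so it is worth validating locally first. This sketch assumes python3 is available (it ships with most systems that run the other prerequisites):

```shell
# Validate and pretty-print the anonymous-access identity config.
# python3 -m json.tool exits non-zero on invalid JSON, so this doubles as a lint.
cat <<'EOF' | python3 -m json.tool
{"identities":[{"name":"anonymous","actions":["Read","Write","List","Tagging","Admin"]}]}
EOF
```

If the command prints the pretty-printed document, the string is safe to embed in the kubectl create secret invocation above.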

Install SeaweedFS via Helm

helm upgrade --install seaweedfs -n seaweedfs \
seaweedfs/seaweedfs \
--set filer.s3.enabled=false \
--set s3.enabled=true \
--set s3.replicas=1 \
--set s3.port=8333 \
--set s3.enableAuth=true \
--set s3.existingConfigSecret=seaweedfs-s3-anon-config \
--set 's3.createBuckets[0].name=bubu-default' \
--set 's3.createBuckets[0].ttl=7d' \
--set 's3.createBuckets[0].objectLock=true' \
--set 's3.createBuckets[0].versioning=Enabled'

This creates a bubu-default bucket with 7-day TTL and object locking enabled.

Verify storage is running

kubectl get pods -n seaweedfs
# All pods should be Running/Ready

This storage backend is part of the runtime contract, not an optional add-on. Examples that offload trigger inputs, StoryRun inputs, or large step payloads require bobrapet's controller.storage.* settings to remain configured against this shared backend.
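If you change the storage setup, the same contract can be expressed as Helm values when installing bobrapet in Step 4. The key names under controller.storage below are a sketch, not the authoritative schema; the endpoint assumes SeaweedFS's in-cluster S3 service on port 8333. Check the bobrapet chart's values.yaml for the real key names before use:

```yaml
# Hypothetical bobrapet values fragment -- key names are illustrative.
# Verify against the chart's values.yaml before applying.
controller:
  storage:
    endpoint: http://seaweedfs-s3.seaweedfs.svc.cluster.local:8333
    bucket: bubu-default
```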

Step 4: Install BubuStack

Install the two core controllers via Helm:

Charts are published in the BubuStack Helm repo and indexed on Artifact Hub.

# Add the Helm repo
helm repo add bubustack https://bubustack.github.io/helm-charts
helm repo update

# Install the workflow operator
helm install bobrapet bubustack/bobrapet \
--namespace bobrapet-system \
--create-namespace

# Install the streaming transport hub
helm install bobravoz-grpc bubustack/bobravoz-grpc \
--namespace bobrapet-system

If you install bobrapet with a non-default Helm release name, install bobravoz-grpc with the matching shared CA issuer:

helm install bobravoz-grpc bubustack/bobravoz-grpc \
--namespace bobrapet-system \
--set sharedCAIssuerName=<bobrapet-release>-bobrapet-shared-ca

Optionally, install the web console:

helm install bubuilder bubustack/bubuilder \
--namespace bobrapet-system

Verify controllers are running

kubectl get pods -n bobrapet-system
# bobrapet-controller-manager and bobravoz-grpc-controller-manager should be Running

Verify CRDs are installed

kubectl api-resources | grep bubustack
# Should list: stories, storyruns, stepruns, engrams, impulses, transports, etc.

Step 5: Install the example templates

Examples create namespaced Engram and Impulse resources that point at cluster-scoped templates via spec.templateRef. Until the public registry release lands, install those templates from the published GitHub Release assets before you apply engrams.yaml or impulse.yaml.
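In practice the relationship looks like this: a cluster-scoped EngramTemplate is installed once, and each namespaced Engram selects it by name through spec.templateRef. The manifest below is a hedged sketch; the apiVersion, template name, and every field other than spec.templateRef are assumptions, so treat the released Engram.yaml files and the installed CRD schema as authoritative:

```yaml
# Illustrative only -- field names and values here are assumptions,
# not the real schema. Inspect the installed CRD before writing your own.
apiVersion: bubustack.io/v1alpha1   # check `kubectl api-resources` for the actual group/version
kind: Engram
metadata:
  name: my-http-request
  namespace: default
spec:
  templateRef:
    name: http-request   # must match an installed cluster-scoped EngramTemplate
```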

Hello World dependencies

kubectl apply -f https://github.com/bubustack/http-request-engram/releases/latest/download/Engram.yaml

LiveKit Voice dependencies

for repo in livekit-bridge-engram conversation-memory-engram openai-chat-engram \
openai-stt-engram openai-tts-engram silero-vad-engram; do
kubectl apply -f "https://github.com/bubustack/$repo/releases/latest/download/Engram.yaml"
done

kubectl apply -f https://github.com/bubustack/livekit-webhook-impulse/releases/latest/download/Impulse.yaml

Verify templates are registered

kubectl get engramtemplates
kubectl get impulsetemplates

Step 6: Deploy an example

Examples in the examples repository share a common shape, but not every example uses every file:

git clone https://github.com/bubustack/examples.git
cd examples

Expected:

  • examples/batch/ and examples/realtime/ directories exist.
  • bootstrap.yaml: Namespace plus shared RBAC, transport, or setup resources
  • secrets.yaml: User-supplied credentials (often paired with secrets.yaml.example)
  • engrams.yaml: Engram instances (component deployments)
  • prompts.yaml: Prompt or config maps used by the Story or Engrams
  • story.yaml: Workflow definition (DAG of steps)
  • impulse.yaml: Trigger (webhook, cron, or Kubernetes event)
  • README.md: Example-specific setup, verification, and demo guidance
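To make the shape concrete, a story.yaml wiring two steps into a DAG might look roughly like the sketch below. Every field name here beyond the file's role is a guess at the schema; use the hello-world example's actual story.yaml as the reference:

```yaml
# Hypothetical Story sketch -- step and field names are illustrative,
# not the real BubuStack schema.
apiVersion: bubustack.io/v1alpha1
kind: Story
metadata:
  name: hello-world
spec:
  steps:
    - name: fetch
      engram: my-http-request      # an Engram instance in the same namespace
    - name: report
      engram: my-http-request
      dependsOn: [fetch]           # report runs only after fetch completes
```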

Batch example: Hello World

cd examples/batch/hello-world

# Requires the http-request EngramTemplate from Step 5
kubectl apply -f bootstrap.yaml
kubectl apply -f engrams.yaml
kubectl apply -f story.yaml
kubectl apply -f storyrun.yaml

Realtime example: LiveKit Voice Assistant

cd examples/realtime/livekit-voice

# Requires the EngramTemplates and ImpulseTemplate from Step 5
cp secrets.yaml.example secrets.yaml
# edit secrets.yaml

kubectl apply -f bootstrap.yaml
kubectl apply -f secrets.yaml
kubectl apply -f engrams.yaml
kubectl apply -f prompts.yaml
kubectl apply -f story.yaml
kubectl apply -f impulse.yaml

Many credentialed examples ship secrets.yaml.example; copy it to secrets.yaml and fill in your credentials before applying.

Verify your workflow

# Check that Engrams are ready
kubectl get engrams -A
# Expected: Engram objects are listed in your target namespace.

# Check that the Story is registered
kubectl get stories -A
# Expected: Story resources are present and accepted by the API server.

# Watch for StoryRuns (triggered by the Impulse)
kubectl get storyruns -A --watch
# Expected: a StoryRun appears once the trigger condition is met.

What's next