Quick Start

Get Klarsicht running in your cluster in under 5 minutes.

Prerequisites

  • Kubernetes cluster (1.26+)
  • Helm 3.8+ (required for installing charts from OCI registries)
  • Grafana with unified alerting enabled
  • Prometheus (optional, for metrics correlation)

1. Install with Helm

helm install klarsicht oci://ghcr.io/outcept/klarsicht/helm/klarsicht \
  --namespace klarsicht --create-namespace \
  --set agent.llmApiKey=<your-api-key> \
  --set agent.metricsEndpoint=http://prometheus.monitoring.svc:9090

This deploys:

  • klarsicht-agent — FastAPI service that receives webhooks and runs investigations
  • klarsicht-dashboard — React UI for viewing incidents and RCA results
  • klarsicht-postgres — PostgreSQL for incident storage
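
For repeatable installs, the same settings can live in a values file instead of --set flags. A minimal sketch covering only the two keys used in the command above; any other chart options are not documented in this section:

```yaml
# values.yaml — equivalent to the --set flags in the install command
agent:
  llmApiKey: <your-api-key>                               # replace with your real key
  metricsEndpoint: http://prometheus.monitoring.svc:9090  # optional, for metrics correlation
```

Then pass it to Helm with -f values.yaml in place of the two --set flags.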

2. Configure Grafana

Create a webhook contact point in Grafana:

  1. Go to Alerting → Contact points → Add contact point
  2. Name: klarsicht
  3. Type: Webhook
  4. URL: http://klarsicht-agent.klarsicht.svc:8000/alert

Set it as the default receiver in Alerting → Notification policies.

Or use the one-click setup in the Klarsicht dashboard at /setup.
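
If your Grafana is managed declaratively, the same contact point can be created via Grafana's alerting provisioning files instead of the UI. A sketch using Grafana's standard provisioning format; only the name and URL come from the steps above, and the uid is an arbitrary choice:

```yaml
# e.g. /etc/grafana/provisioning/alerting/klarsicht.yaml
apiVersion: 1
contactPoints:
  - orgId: 1
    name: klarsicht
    receivers:
      - uid: klarsicht-webhook
        type: webhook
        settings:
          url: http://klarsicht-agent.klarsicht.svc:8000/alert
```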

3. Test it

Send a test alert to verify the pipeline:

# Option A: Mock alert via API
curl -X POST http://klarsicht-agent.klarsicht.svc:8000/test

# Option B: Deploy a pod that actually crashes
kubectl apply -f https://raw.githubusercontent.com/outcept/Klarsicht/main/examples/test-crashloop.yaml
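
For reference, a minimal crash-looping pod along the lines of Option B looks like the sketch below. This is hypothetical and not necessarily the contents of the repo's test-crashloop.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-crashloop
  labels:
    app: test-crashloop
spec:
  restartPolicy: Always            # kubelet restarts the container, producing CrashLoopBackOff
  containers:
    - name: crasher
      image: busybox:1.36
      command: ["sh", "-c", "echo 'fatal: simulated failure' && exit 1"]
```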

The investigation result typically appears in the dashboard within about 60 seconds.
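
To exercise the webhook path end to end instead of the built-in /test endpoint, you can hand-craft an alert and post it to /alert yourself. A sketch: the JSON mirrors Grafana's webhook notification shape, but every label and annotation value here is made up, and the exact fields Klarsicht reads are not documented in this section.

```shell
# Hypothetical alert payload in Grafana's webhook notification shape;
# alertname, namespace, and pod values are illustrative only.
cat > /tmp/test-alert.json <<'EOF'
{
  "receiver": "klarsicht",
  "status": "firing",
  "alerts": [
    {
      "status": "firing",
      "labels": {
        "alertname": "PodCrashLooping",
        "namespace": "default",
        "pod": "demo-6f7d9"
      },
      "annotations": {
        "summary": "Pod default/demo-6f7d9 is crash-looping"
      }
    }
  ]
}
EOF

# Sanity-check the JSON locally before sending it
python3 -m json.tool /tmp/test-alert.json > /dev/null && echo "payload ok"

# Post it to the agent (uncomment; adjust the host if you port-forward locally)
# curl -X POST -H "Content-Type: application/json" \
#   --data @/tmp/test-alert.json \
#   http://klarsicht-agent.klarsicht.svc:8000/alert
```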

4. Clean up the test resources

kubectl delete -f https://raw.githubusercontent.com/outcept/Klarsicht/main/examples/test-crashloop.yaml

What’s next