YAML-defined automation backed by Temporal
Wire your tools.
Ship automation.
Event-driven workflow orchestrator that connects your tools through YAML rules and AI agents. Write a rule, wire a webhook — Fiber handles the rest.
```yaml
# One YAML block: three agents chained, CI verification, notifications
rules:
  - name: deploy-pipeline
    on: plane.label_added
    if: { label: deploy }
    do:
      - agent: deploy
        wait: true
      - agent: code
        wait: true
      - agent: review
    verify:
      on: github.check.success
      within: 30m
    finally:
      - matrix.send: "Pipeline {_status} for {issue.title}"
```
Connectors
AI Agents
MCP Tools
Setup
The Problem
Your tools don't talk to each other
Issue trackers, CI pipelines, alert systems, chat — every tool has its own webhook, its own API, its own data model. You write glue code. It breaks. You write more glue code.
Alert fires at 3 AM
You manually create an issue, page someone on Slack, hope they check the dashboard.
CI fails on a PR
Someone notices eventually. Maybe. The issue tracker doesn't update. The author context-switches.
New service request
Days of scaffolding: repo, Helm chart, CI pipeline, DNS, ingress. Copy-paste from the last one.
How It Works
Write a rule. Wire a webhook.
Fiber receives events from your tools, matches them against YAML-defined rules, and executes multi-step workflows — all backed by Temporal for crash recovery.
Event arrives
Webhook from GitHub, Plane, Alertmanager, Matrix, or MQTT stream
Rules match
Engine evaluates conditions with full operator support: in, not, contains, regex, numeric
Workflow executes
Sequential or DAG steps, AI agents, shell commands — durable via Temporal
Outcome verified
Async verification: wait for a confirming event or escalate on timeout
```yaml
# Real example: alert fires -> issue created -> code agent fixes -> CI verified
rules:
  - name: alert-response
    on: alertmanager.firing
    if: { severity: critical }
    do:
      - plane.issue:
          title: "[ALERT] {alert.name}"
          priority: urgent
      - agent: code
        wait: true
      - agent: review
    verify:
      on: alertmanager.resolved
      within: 1h
      else:
        - plane.label.add: "escalate"
    finally:
      - matrix.send: "Alert response {_status}: {alert.name}"
```
Connectors
8 connectors, both ways
Every connector handles inbound events and outbound actions. Config is encrypted at rest. Add or update connectors via API — no restart required.
Plane
Inbound + Outbound
Issues, labels, states, comments. Full project management integration.
GitHub
Inbound + Outbound
PRs, reviews, CI checks, issues, labels, merge. Complete code lifecycle.
Alertmanager
Inbound + Outbound
Prometheus alert webhooks. Auto-create issues, dispatch agents on firing.
Matrix
Inbound + Outbound
ChatOps messages. Trigger workflows from chat, send notifications.
MQTT
Stream + Publish
Topic subscriptions with per-topic throttle. JSON auto-flattening.
LLM
Outbound
Any model via Bifrost gateway. Completions and agent dispatch.
Grafana
Outbound
Unified queries: metrics, logs, and traces via datasource proxy.
Core
Outbound
HTTP requests, shell commands, alerts, workflow invocation.
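The MQTT connector's "JSON auto-flattening" can be sketched in a few lines. Fiber's exact scheme isn't documented on this page, so this assumes the common convention of dotted keys with list indices:

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts/lists into a flat dict of dotted keys."""
    out = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            out.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            out.update(flatten(value, f"{prefix}{i}."))
    else:
        # Leaf value: strip the trailing dot from the accumulated prefix
        out[prefix[:-1]] = obj
    return out

payload = json.loads('{"sensor": {"temp": 21.5, "tags": ["lab", "rack2"]}}')
print(flatten(payload))
# {'sensor.temp': 21.5, 'sensor.tags.0': 'lab', 'sensor.tags.1': 'rack2'}
```

Flat keys like `sensor.temp` are what make nested MQTT payloads addressable from rule conditions and templates.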
AI Agents
AI agents as workflow steps
Dispatch Claude agents from YAML. Each agent runs in an isolated workspace with a timeout budget. Agents chain — each sees prior outputs. Memory persists across runs on the same issue.
Deploy agent
Scaffold repos, Helm charts, DNS, CI/CD. Full service provisioning end-to-end.
Code agent
Implement features, write tests, create PRs. Reads project conventions from CLAUDE.md.
Review agent
Structured code review: quality, security, testing, deployment readiness.
SRE agent
Investigate infrastructure incidents. Query Grafana, analyze logs, propose fixes.
Security agent
Scan for vulnerabilities on every PR. Scheduled patrols catch configuration drift.
Kaizen agent
Continuous improvement: test coverage, performance, code quality, observability.
Triage agent
Auto-classify incoming issues: type, priority, labels, routing.
Docs agent
Generate and update documentation. Keeps docs in sync with code changes.
Product agent
Analyze product requirements, user feedback, and feature specifications.
Plus: onboarding, finance, and growth agents. All configurable with reusable presets and custom instructions.
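Chaining the agents above uses the same `agent:`/`wait:` keys shown in the examples on this page; a sketch of a triage-then-fix-then-review pipeline might look like:

```yaml
do:
  - agent: triage      # classify the issue first
    wait: true         # block so the next agent sees this output
  - agent: code        # implement the fix, with triage context
    wait: true
  - agent: review      # review the resulting PR
```

The `wait: true` on intermediate steps is what makes each agent's output visible to the next one in the chain.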
Features
Built for production
Not a prototype. Durable execution, encrypted config, distributed tracing, multi-tenancy — the infrastructure you need to run automation at scale.
Durable execution
Temporal-backed workflows with crash recovery, retry, execution history, and timer-based outcome verification. Workflows survive restarts.
YAML DSL
Conditions, branching, DAG execution, template variables, workflow composition, verification blocks, cooldowns, schedules. No code required.
Observable
Prometheus metrics, distributed tracing, execution history, dry-run simulation, dead letter auto-retry. Know exactly what happened and why.
Multi-tenant
Per-tenant encrypted config, rate limiting, workflow isolation. Hot-reload on config changes. Tenants managed via API.
Encrypted at rest
Envelope encryption: per-row DEK wraps config, KEK wraps DEK. Online key rotation without touching ciphertext. Secrets never returned by API.
Four interfaces
HTTP gateway for webhooks and REST. Terminal UI for dashboards. MCP server with 20+ Claude-native tools. Generated Python SDK from OpenAPI.
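The envelope scheme described under "Encrypted at rest" can be illustrated with a stdlib-only toy. The XOR keystream below stands in for a real cipher (no authentication, no nonce) and is not Fiber's implementation; it only shows why KEK rotation never touches the config ciphertext:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed by SHA-256 blocks.
    Illustration only -- NOT real cryptography."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# 1. A random per-row DEK encrypts the connector config...
dek = secrets.token_bytes(32)
config = b'{"token": "example-secret"}'
ciphertext = keystream_xor(dek, config)

# 2. ...while the KEK wraps (encrypts) only the DEK.
kek_v1 = secrets.token_bytes(32)
wrapped_dek = keystream_xor(kek_v1, dek)

# 3. Key rotation: unwrap with the old KEK, re-wrap with the new one.
#    The config ciphertext is never touched.
kek_v2 = secrets.token_bytes(32)
wrapped_dek = keystream_xor(kek_v2, keystream_xor(kek_v1, wrapped_dek))

# Decryption still works through the new KEK.
assert keystream_xor(keystream_xor(kek_v2, wrapped_dek), ciphertext) == config
```

Because only the small wrapped DEK is rewritten, rotation stays online and cheap no matter how large the encrypted config rows are.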
YAML Reference
The DSL
Four layers per workflow: trigger, preparation, execution, verification. Every construct compiles to a durable Temporal workflow.
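Reusing only constructs that appear elsewhere on this page, one rule touching all four layers might look like this (the mapping of `if:` to the "preparation" layer is our reading, not documented here):

```yaml
rules:
  - name: layered-example
    on: plane.label_added            # 1. trigger
    if: { label: deploy }            # 2. preparation: condition matching
    do:                              # 3. execution
      - agent: deploy
        wait: true
    verify:                          # 4. verification
      on: github.check.success
      within: 30m
      else:
        - plane.label.add: "escalate"
```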
Conditional logic
```yaml
if:
  severity: { in: [critical, warning] }
  state: { not: Done }
  issue.text: { matches: "(?i)deploy|scaffold" }
  pr.additions: { gt: 500 }
  any_of:                      # OR of ANDs
    - { severity: critical }
    - severity: warning
      alert.name: { contains: OOM }
```
Parallel DAGs
```yaml
dag:
  start:
    - plane.comment: "Starting..."
      then: [lint, test]
  lint:
    - shell:lint: null
      then: [gate]
  test:
    - shell:test: null
      then: [gate]
  gate:
    - join: all
      then: [deploy]
  deploy:
    - agent: deploy
```
Named steps + chaining
```yaml
do:
  - llm: "Classify: {alert.name}"
    as: classify
  - plane.issue:
      title: "[ALERT] {alert.name}"
      state: "Triage"
    as: new_issue
  - matrix.send: "Created {new_issue.issue_id}"
```
Custom shell tools
```yaml
tools:
  cluster-health:
    command: "kubectl get nodes -o json"
    timeout: 15

rules:
  - name: health-check
    on: schedule
    schedule: 1h
    do:
      - shell:cluster-health: null
        as: health
      - plane.issue:
          title: "Cluster issue detected"
        if: { health: { contains: NotReady } }
```
Interfaces
Four ways in
HTTP gateway for webhooks. Terminal UI for dashboards. MCP server for Claude-native tools. Generated SDK for programmatic access.
fiber serve
Gateway
Webhook receiver + REST API. FastAPI-powered, Keycloak auth, Prometheus metrics.
fiber-tui
Terminal UI
Live dashboard for workflow execution, agent status, and system health in the terminal.
fiber-mcp
MCP Server
20+ Claude-native tools. Agent dispatch, workflow management, observability — all from Claude Code.
Python SDK
Generated
Typed Python client auto-generated from OpenAPI. Full programmatic access to every endpoint.
Quick Start
Get started in 5 minutes
Install
pip install fiber-orchestrator
Configure
```shell
cp .env.example .env
# Set TEMPORAL_SERVER_URL and INTEGRATION_SECRET_KEY
fiber seed-integrations integrations.json
```
Write your first rule
```yaml
rules:
  - name: pr-notify
    on: github.pr.opened
    do:
      - plane.comment: "PR opened: [{pr.title}]({pr.url})"
      - plane.state: "In Review"
```
Test and launch
```shell
fiber validate rules.d/*.yaml   # check your YAML
fiber serve                     # start the gateway
```