MCP SERVER
Semantic layer over structured logs. Describe what your services log in YAML — logdash builds correct KQL so agents never guess at field names, tag formats, or environment substitution.
| Tool | Purpose | Returns |
|---|---|---|
| list_processes | List all available process specs with team, description, and pattern counts | Process ID, name, team, source count, pattern count per spec |
| describe_process | Full process detail — sources, all patterns with meaning and field names | Metadata, ES source coordinates, pattern summaries with semantics |
| get_log_pattern | Single pattern with all fields resolved, including inherited scope fields | Complete pattern definition with provenance-tracked field list |
| get_source | ES coordinates for a process — namespace, container, cluster | Source map with $env placeholders intact |
| build_kql | Construct correct KQL from spec data — the primary tool | Ready-to-use KQL string, source coordinates, available filter fields |
Each YAML file in specs/ describes one process — a coherent sequence of observable events that may cross multiple services. A spec declares process identity, Elasticsearch source coordinates, reusable field scopes, and log patterns with semantic meaning. Specs validate against a JSON Schema at load time. Invalid specs are logged and skipped.
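A minimal spec might look like the following. The key names here are illustrative, a sketch of the shape described above; the authoritative structure is the JSON Schema the loader validates against.

```yaml
# specs/invoice-finalization.yaml -- hypothetical example; exact keys are
# defined by the repo's JSON Schema, not by this sketch.
id: invoice-finalization
name: Invoice finalization
team: billing
description: A draft invoice is finalized and posted to the ledger.
sources:
  default:
    namespace: billing-$env     # $env substituted at query time
    container: invoice-worker
    cluster: main
scopes:
  invoice-context:
    fields:
      invoice_id: { type: string, meaning: Invoice identifier set by the logger }
patterns:
  - id: invoice-finalized
    message: "invoice finalized"
    category: lifecycle
    severity: info
    meaning: Invoice moved from draft to finalized.
    scopes: [invoice-context]   # inherits invoice_id
    fields:
      amount_cents: { type: int, meaning: Invoice total in cents }
```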
Scopes model the common case where a logger sets context fields that all downstream log calls inherit. A scope defines a reusable field set — patterns reference scopes by ID and get their fields merged in. The loader resolves this at startup, tracking provenance so you know which fields came from which scope.
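Conceptually, resolution produces something like the following for each pattern. The shape is illustrative, not the actual tool payload; the point is that every field records where it came from.

```yaml
# Hypothetical resolved view of one pattern after scope merging.
pattern: request-completed
fields:
  request_id:
    type: string
    provenance: scope:request-context   # inherited from a referenced scope
  duration_ms:
    type: int
    provenance: pattern                 # declared on the pattern itself
```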
Each source maps to ES coordinates — namespace, container, cluster. Namespaces use $env placeholders substituted at query time. A process spanning multiple services defines multiple named sources; patterns bind to a source (defaulting to "default").
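A two-service process could declare its sources like this (keys illustrative, as above):

```yaml
# Hypothetical multi-source process spanning two services.
sources:
  default:
    namespace: payments-$env
    container: payment-api
    cluster: main
  ledger:
    namespace: ledger-$env
    container: ledger-writer
    cluster: main
patterns:
  - id: entry-posted
    source: ledger            # binds to a named source; omitted means "default"
    message: "ledger entry posted"
```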
A pattern is a known log message with typed fields, semantic category (lifecycle, metric, error, debug, audit), severity level, and a human-readable meaning. Patterns are the unit of KQL generation — build_kql matches on the exact message string.
Every pattern carries semantic metadata — what category of event it represents, what it indicates, and how severe it is. This lets agents reason about log patterns without reading raw messages. Categories distinguish lifecycle events from metrics from errors.
The core value proposition. Agents get three things wrong when writing KQL by hand: ES field names (kubernetes.namespace, not a guessed variant), tag format (tags:"key=value", not tags.key), and environment substitution in namespaces. build_kql eliminates all three failure modes by construction.
Provide a pattern_id and get KQL that matches the exact log message on the correct source. The query includes namespace, container, cluster, and message match. Optional tag filters narrow further — field names and enum values come from the resolved pattern, so agents pick from known-valid options instead of guessing.
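For a pattern on a source with namespace billing-$env and container invoice-worker, the generated query would look roughly like this. The exact field names depend on the ES mapping (kubernetes.container.name is an assumption here); the shape follows the conventions above: $env resolved, tags as key=value strings.

```kql
kubernetes.namespace:"billing-prod"
  and kubernetes.container.name:"invoice-worker"
  and message:"invoice finalized"
  and tags:"invoice_id=inv_0042"
```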
Omit the pattern ID and provide a source plus tag filters instead. The result is KQL matching source coordinates and tag constraints, with no message match. This supports cross-source tracing: find a correlation ID (such as a workflow ID or invoice ID) on a service other than the one where you first saw it. At least one tag filter is required, to prevent overly broad queries.
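A cross-source trace query is just the source coordinates plus the tag constraint. Again an illustrative sketch, with the same field-name assumptions as before:

```kql
kubernetes.namespace:"ledger-prod"
  and kubernetes.container.name:"ledger-writer"
  and tags:"invoice_id=inv_0042"
```

Note there is no message clause: the query finds every known or unknown log line on that source carrying the correlation tag.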
Process specs are hand-authored YAML checked into the repo. No runtime dependencies — specs load once at startup from the specs/ directory. The JSON Schema validates structure. The loader resolves scope inheritance and indexes patterns for fast lookup.
The KQL strings go to an Elasticsearch tool for execution. Logdash produces the query; a separate tool runs it. The domain mapping in AGENTS.md connects billing identifiers (invoice IDs, customer IDs, account IDs) across logdash fields, BigQuery tables, and Stripe objects.
View specs (views/*.yaml) define Grafana dashboard layout. A generator binary produces dashboard JSON from the same process specs that power the MCP tools. One source of truth — two output formats.
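A view spec might look something like this. The layout keys are hypothetical, since the actual view-spec schema is defined by the generator; the point is that panels reference pattern IDs from the same process specs.

```yaml
# views/billing-overview.yaml -- hypothetical layout; actual keys may differ.
title: Billing overview
rows:
  - panels:
      - type: logs
        pattern: invoice-finalized   # references a pattern ID from specs/
      - type: timeseries
        pattern: invoice-finalized
        field: amount_cents
```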
The server that never queries Elasticsearch — it just makes sure you do it right.