MCP SERVER

LOGDASH

Semantic layer over structured logs. Describe what your services log in YAML — logdash builds correct KQL so agents never guess at field names, tag formats, or environment substitution.

5 tools · 7 process specs · 2 KQL modes · 0 ES queries made
TOOLS
list_processes — List all available process specs with team, description, and pattern counts. Returns: process ID, name, team, source count, and pattern count per spec.
describe_process — Full process detail: sources, plus all patterns with meaning and field names. Returns: metadata, ES source coordinates, and pattern summaries with semantics.
get_log_pattern — Single pattern with all fields resolved, including inherited scope fields. Returns: complete pattern definition with a provenance-tracked field list.
get_source — ES coordinates for a process: namespace, container, cluster. Returns: source map with $env placeholders intact.
build_kql — Construct correct KQL from spec data; the primary tool. Returns: ready-to-use KQL string, source coordinates, and available filter fields.
PROCESS SPECS
The Spec Format

Each YAML file in specs/ describes one process — a coherent sequence of observable events that may cross multiple services. A spec declares process identity, Elasticsearch source coordinates, reusable field scopes, and log patterns with semantic meaning. Specs validate against a JSON Schema at load time. Invalid specs are logged and skipped.
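As a minimal sketch of what such a file might contain (the key names and values here are illustrative assumptions, not the actual schema the JSON Schema enforces):

```yaml
# Hypothetical spec sketch -- key names are illustrative.
process:
  id: invoice-finalization
  name: Invoice Finalization
  team: billing
sources:
  default:
    namespace: billing-$env      # $env substituted at query time
    container: invoice-worker
    cluster: main
patterns:
  - id: invoice-finalized
    message: "invoice finalized"
    category: lifecycle
    severity: info
    meaning: An invoice reached its terminal finalized state.
```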

Scopes and Inheritance

Scopes model the common case where a logger sets context fields that all downstream log calls inherit. A scope defines a reusable field set — patterns reference scopes by ID and get their fields merged in. The loader resolves this at startup, tracking provenance so you know which fields came from which scope.
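A hedged sketch of how scope inheritance might look in a spec (names are hypothetical):

```yaml
# Hypothetical scope inheritance -- names are illustrative.
scopes:
  - id: request-context
    fields: [request_id, user_id]
patterns:
  - id: payment-charged
    message: "payment charged"
    scope: request-context       # inherits request_id and user_id
    fields: [amount_cents]
# After resolution, the pattern carries all three fields:
# amount_cents attributed to the pattern itself, request_id and
# user_id attributed to scope request-context.
```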

Sources

Each source maps to ES coordinates — namespace, container, cluster. Namespaces use $env placeholders substituted at query time. A process spanning multiple services defines multiple named sources; patterns bind to a source (defaulting to "default").
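A multi-source process might declare its coordinates like this (a sketch with assumed names):

```yaml
# Hypothetical multi-source process -- names are illustrative.
sources:
  default:
    namespace: billing-$env      # e.g. resolves to billing-prod at query time
    container: invoice-worker
    cluster: main
  ledger:
    namespace: payments-$env
    container: ledger
    cluster: main
patterns:
  - id: ledger-entry-posted
    source: ledger               # binds to the named source; omit for "default"
    message: "ledger entry posted"
```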

Patterns

A pattern is a known log message with typed fields, semantic category (lifecycle, metric, error, debug, audit), severity level, and a human-readable meaning. Patterns are the unit of KQL generation — build_kql matches on the exact message string.
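A single pattern with typed fields might read as follows (a sketch; the field-definition keys are assumptions):

```yaml
# Hypothetical pattern definition -- field names are illustrative.
patterns:
  - id: batch-duration
    message: "batch completed"
    category: metric             # lifecycle | metric | error | debug | audit
    severity: info
    meaning: A processing batch finished; duration_ms reports elapsed time.
    fields:
      - name: duration_ms
        type: integer
      - name: status
        type: enum
        values: [ok, partial, failed]
```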

Semantics

Every pattern carries semantic metadata — what category of event it represents, what it indicates, and how severe it is. This lets agents reason about log patterns without reading raw messages. Categories distinguish lifecycle events from metrics from errors.

KQL GENERATION

The core value proposition. Agents get three things wrong when writing KQL manually: ES field names (kubernetes.namespace vs guessing), tag format (tags:"key=value" — not tags.key), and environment substitution in namespaces. build_kql eliminates all three failure modes by construction.

Pattern Mode

Provide a pattern_id and get KQL that matches the exact log message on the correct source. The query includes namespace, container, cluster, and message match. Optional tag filters narrow further — field names and enum values come from the resolved pattern, so agents pick from known-valid options instead of guessing.
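As an illustration only (the namespace, container, message, and tag values are hypothetical), a pattern-mode query could come back shaped like:

```
kubernetes.namespace : "billing-prod"
  and kubernetes.container.name : "invoice-worker"
  and message : "invoice finalized"
  and tags : "invoice_id=inv_1042"
```

Note the tag filter uses the tags:"key=value" form described above, not a nested tags.key field.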

Correlation Mode

Omit the pattern ID and provide a source plus tag filters instead. The result is KQL matching the source coordinates and tag constraints, with no message match. Use it for cross-source tracing: finding a correlation ID (such as a workflow ID or invoice ID) on a different service from where you first saw it. At least one tag filter is required, to prevent overly broad queries.
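A correlation-mode query might look like this (all values hypothetical): the same tag that appeared on one service is searched on another, with no message constraint.

```
kubernetes.namespace : "payments-prod"
  and kubernetes.container.name : "ledger"
  and tags : "workflow_id=wf_7f3a"
```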

INTEGRATION
Upstream
Where logdash gets its knowledge

Process specs are hand-authored YAML checked into the repo. No runtime dependencies — specs load once at startup from the specs/ directory. The JSON Schema validates structure. The loader resolves scope inheritance and indexes patterns for fast lookup.

Downstream
What consumes logdash output

The KQL strings go to an Elasticsearch tool for execution. Logdash produces the query; a separate tool runs it. The domain mapping in AGENTS.md connects billing identifiers (invoice IDs, customer IDs, account IDs) across logdash fields, BigQuery tables, and Stripe objects.

Dashboard Generation
Beyond MCP

View specs (views/*.yaml) define Grafana dashboard layout. A generator binary produces dashboard JSON from the same process specs that power the MCP tools. One source of truth — two output formats.
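A view spec might sketch out roughly like this (the layout keys are assumptions for illustration, not the actual view schema):

```yaml
# Hypothetical view spec -- layout keys are illustrative.
view:
  id: billing-overview
  title: Billing Overview
  rows:
    - panels:
        - pattern: invoice-finalized   # references a process-spec pattern
          viz: timeseries
        - pattern: batch-duration
          viz: stat
```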

The server that never queries Elasticsearch — it just makes sure you do it right.