Observability
mockd provides built-in metrics, log shipping, and tracing for monitoring, debugging, and integrating with your existing observability stack.
Prometheus Metrics
The admin API exposes Prometheus-compatible metrics at /metrics.
Enabling Metrics
Metrics are available by default on the admin port:
```sh
mockd serve --admin-port 4290
```
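To confirm the endpoint is reachable, you can scrape it by hand; this assumes mockd is running locally with the default admin port:

```sh
# Fetch the raw Prometheus exposition output from the admin API
curl http://localhost:4290/metrics
```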
Available Metrics

```
# Server uptime
mockd_uptime_seconds 3600

# HTTP request counters
mockd_http_requests_total{method="GET",path="/api/users",status="200"} 42

# Request latency histogram
mockd_http_request_duration_seconds_bucket{le="0.01"} 100
mockd_http_request_duration_seconds_bucket{le="0.1"} 150
mockd_http_request_duration_seconds_bucket{le="+Inf"} 155

# Go runtime metrics
go_goroutines 12
go_memstats_heap_alloc_bytes 4194304
```
Prometheus Configuration

```yaml
scrape_configs:
  - job_name: 'mockd'
    static_configs:
      - targets: ['localhost:4290']
    metrics_path: /metrics
    scrape_interval: 15s
```
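To try this scrape config without a full stack, one option is to run Prometheus in Docker with the file mounted in; this sketch assumes it is saved as prometheus.yml in the current directory:

```sh
# Run Prometheus locally against the scrape config above (assumes ./prometheus.yml)
docker run -d --name prometheus -p 9090:9090 \
  -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml" \
  prom/prometheus:latest
```

Note that inside the container, localhost refers to the container itself; on Docker Desktop you can point the scrape target at host.docker.internal:4290 instead.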
Grafana Dashboard

Example Grafana queries:
```
# Request rate
rate(mockd_http_requests_total[5m])

# Error rate
sum(rate(mockd_http_requests_total{status=~"5.."}[5m])) / sum(rate(mockd_http_requests_total[5m]))

# P95 latency
histogram_quantile(0.95, rate(mockd_http_request_duration_seconds_bucket[5m]))
```
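You can also evaluate these expressions against the Prometheus HTTP API directly, which is handy for checking them before building dashboards; this sketch assumes the Prometheus instance from the scrape config above is listening on localhost:9090:

```sh
# Evaluate the request-rate query via the Prometheus HTTP API
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=rate(mockd_http_requests_total[5m])'
```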
Loki Log Aggregation

Send mockd logs to Grafana Loki for centralized log aggregation.
Enabling Loki
```sh
mockd serve --loki-endpoint http://localhost:3100/loki/api/v1/push
```
Log Format

Logs are sent with the following labels:
| Label | Description |
|---|---|
| app | Always mockd |
| level | Log level (debug, info, warn, error) |
| component | Component name (server, admin, engine) |
Loki Configuration
Ensure Loki is running and accessible:
services: loki: image: grafana/loki:2.9.0 ports: - "3100:3100" command: -config.file=/etc/loki/local-config.yamlQuerying Logs in Grafana
Querying Logs in Grafana

```
# All mockd logs
{app="mockd"}

# Errors only
{app="mockd", level="error"}

# Request logs
{app="mockd"} |= "request"

# Filter by path
{app="mockd"} | json | path="/api/users"
```
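The same LogQL selectors work against Loki's HTTP API if you want to check ingestion without opening Grafana; a minimal sketch, assuming Loki on localhost:3100:

```sh
# Query recent mockd logs straight from Loki (defaults to roughly the last hour)
curl -sG http://localhost:3100/loki/api/v1/query_range \
  --data-urlencode 'query={app="mockd"}'
```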
Log Batching

Logs are batched for efficiency:
- Batch size: 100 entries or 1 second (whichever comes first)
- Automatic retry on failure
- Graceful shutdown flushes pending logs
OpenTelemetry Tracing
Send distributed traces to any OpenTelemetry-compatible backend (Jaeger, Zipkin, Tempo, etc.).
Enabling Tracing
```sh
mockd serve --otlp-endpoint http://localhost:4318/v1/traces
```
Trace Sampling

Control the sampling rate (default: 100%):
```sh
# Sample 10% of traces
mockd serve --otlp-endpoint http://localhost:4318/v1/traces --trace-sampler 0.1
```
Trace Attributes

Each span includes:
| Attribute | Description |
|---|---|
| http.method | HTTP method |
| http.url | Request URL |
| http.status_code | Response status code |
| mockd.mock_id | Matched mock ID |
| mockd.matched | Whether a mock matched |
Jaeger Setup
```yaml
services:
  jaeger:
    image: jaegertracing/all-in-one:1.50
    ports:
      - "16686:16686"  # UI
      - "4318:4318"    # OTLP HTTP
    environment:
      - COLLECTOR_OTLP_ENABLED=true
```

```sh
mockd serve --otlp-endpoint http://localhost:4318/v1/traces
# View traces at: http://localhost:16686
```
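If you prefer the command line to the UI, Jaeger's query service also serves traces over HTTP on the same port; the service name below is an assumption about how mockd registers itself, so check the first call's output and adjust:

```sh
# List services known to Jaeger, then fetch a few recent traces for one of them
curl -s http://localhost:16686/api/services
curl -s "http://localhost:16686/api/traces?service=mockd&limit=5"
```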
Grafana Tempo Setup

```yaml
services:
  tempo:
    image: grafana/tempo:latest
    command: ["-config.file=/etc/tempo.yaml"]
    ports:
      - "4318:4318"
```
Combined Setup

Run mockd with full observability:
```sh
mockd serve \
  --log-level debug \
  --log-format json \
  --loki-endpoint http://localhost:3100/loki/api/v1/push \
  --otlp-endpoint http://localhost:4318/v1/traces \
  --trace-sampler 1.0
```
Docker Compose Example

```yaml
version: '3.8'

services:
  mockd:
    image: ghcr.io/getmockd/mockd:latest
    ports:
      - "4280:4280"
      - "4290:4290"
    command: >
      serve
      --loki-endpoint http://loki:3100/loki/api/v1/push
      --otlp-endpoint http://tempo:4318/v1/traces
    depends_on:
      - loki
      - tempo

  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"

  tempo:
    image: grafana/tempo:latest
    ports:
      - "4318:4318"

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
```
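The mounted prometheus.yml can reuse the scrape config shown earlier, with one caveat: inside the compose network the scrape target should be the service name rather than localhost (mockd:4290, assuming the service names above). Bring the stack up with:

```sh
# Start the full observability stack in the background
docker compose up -d
```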
Request Streaming

For real-time request monitoring, use the SSE endpoint:
```sh
curl -N http://localhost:4290/requests/stream
```

See the Admin API Reference for details.
See Also
- Admin API Reference - Metrics and streaming endpoints
- CLI Reference - Logging and tracing flags
- Troubleshooting - Debugging issues