
# Server-Sent Events (SSE) and Streaming

mockd supports Server-Sent Events (SSE) and HTTP chunked transfer encoding for simulating streaming APIs like AI chat completions, real-time feeds, and large file downloads.

Create SSE mocks directly from the command line:

```sh
# Basic SSE with custom events
mockd add --path /events --sse \
  --sse-event 'connected:{"status":"ok"}' \
  --sse-event 'update:{"count":1}' \
  --sse-event 'update:{"count":2}' \
  --sse-delay 500

# OpenAI-compatible streaming
mockd add -m POST --path /v1/chat/completions --sse --sse-template openai-chat

# Notification stream template
mockd add --path /notifications --sse --sse-template notification-stream

# Infinite keepalive stream
mockd add --path /stream --sse \
  --sse-event 'ping:{}' \
  --sse-delay 1000 \
  --sse-repeat 0

# SSE with keepalive pings every 15 seconds
mockd add --path /long-poll --sse \
  --sse-event 'data:{"value":1}' \
  --sse-keepalive 15000
```

CLI SSE flags:

| Flag | Description | Default |
| --- | --- | --- |
| `--sse` | Enable SSE streaming | — |
| `--sse-event` | Event (`type:data`), repeatable | — |
| `--sse-delay` | Delay between events (ms) | `100` |
| `--sse-template` | Built-in template | — |
| `--sse-repeat` | Repeat count (`0` = infinite) | `1` |
| `--sse-keepalive` | Keepalive interval (ms) | `0` |
The same mocks can be defined in JSON configuration:

```json
{
  "id": "basic-sse",
  "matcher": { "method": "GET", "path": "/events" },
  "sse": {
    "events": [
      { "data": "Hello" },
      { "data": "World" }
    ],
    "timing": { "fixedDelay": 1000 }
  }
}
```

```json
{
  "id": "openai-mock",
  "matcher": { "method": "POST", "path": "/v1/chat/completions" },
  "sse": {
    "template": "openai-chat",
    "templateParams": {
      "tokens": ["Hello", "!", " How", " can", " I", " help", "?"],
      "model": "gpt-4",
      "finishReason": "stop",
      "includeDone": true,
      "delayPerToken": 50
    }
  }
}
```

Define events to send to clients:

```json
{
  "sse": {
    "events": [
      {
        "type": "message",
        "data": "Event payload",
        "id": "event-1",
        "retry": 3000
      }
    ]
  }
}
```

| Field | Description |
| --- | --- |
| `type` | Event type name (optional, for client filtering) |
| `data` | Event payload (string or JSON object) |
| `id` | Event ID (for `Last-Event-ID` resumption) |
| `retry` | Reconnection interval in milliseconds |
| `comment` | SSE comment (not delivered as an event) |
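These fields map directly onto the SSE wire format. As an illustration (a sketch following the WHATWG EventSource framing rules, not mockd's actual implementation), serializing one event definition might look like:

```js
// Illustrative sketch: serialize one event definition into SSE wire format.
function toWireFormat({ type, data, id, retry, comment }) {
  const lines = [];
  if (comment !== undefined) lines.push(`: ${comment}`);
  if (type !== undefined) lines.push(`event: ${type}`);
  if (id !== undefined) lines.push(`id: ${id}`);
  if (retry !== undefined) lines.push(`retry: ${retry}`);
  if (data !== undefined) {
    const payload = typeof data === 'string' ? data : JSON.stringify(data);
    // A multi-line payload becomes one "data:" line per line of text.
    for (const line of payload.split('\n')) lines.push(`data: ${line}`);
  }
  return lines.join('\n') + '\n\n'; // a blank line terminates each event
}
```

The example event above would thus arrive as `event: message`, `id: event-1`, `retry: 3000`, and `data: Event payload`, followed by a blank line.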

Control event delivery timing:

```json
{
  "sse": {
    "timing": {
      "initialDelay": 100,
      "fixedDelay": 500,
      "randomDelay": { "min": 100, "max": 500 },
      "burst": { "count": 5, "interval": 10, "pause": 1000 },
      "perEventDelays": [100, 200, 500]
    }
  }
}
```

| Field | Description |
| --- | --- |
| `initialDelay` | Delay before the first event (ms) |
| `fixedDelay` | Constant delay between events (ms) |
| `randomDelay` | Random delay range (`min`/`max`, ms) |
| `burst` | Send events in bursts (`count`/`interval`/`pause`, ms) |
| `perEventDelays` | Specific delay for each event (ms) |
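How these fields could combine is easiest to see in code. The sketch below is hypothetical: the precedence order is an assumption made for illustration, and `burst` handling is omitted.

```js
// Hypothetical resolver for the delay before event `index` (0-based).
// Assumed precedence: initialDelay (first event only), then perEventDelays,
// then randomDelay, then fixedDelay. `burst` is omitted for brevity.
function delayForEvent(timing, index) {
  if (index === 0 && timing.initialDelay !== undefined) return timing.initialDelay;
  if (timing.perEventDelays && timing.perEventDelays[index] !== undefined) {
    return timing.perEventDelays[index];
  }
  if (timing.randomDelay) {
    const { min, max } = timing.randomDelay;
    return min + Math.random() * (max - min);
  }
  return timing.fixedDelay ?? 0;
}
```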

Control connection behavior:

```json
{
  "sse": {
    "lifecycle": {
      "keepaliveInterval": 15,
      "timeout": 300,
      "maxEvents": 100,
      "connectionTimeout": 60,
      "termination": {
        "type": "graceful",
        "finalEvent": { "type": "close", "data": "Stream ended" },
        "closeDelay": 0
      }
    }
  }
}
```

| Field | Description |
| --- | --- |
| `keepaliveInterval` | Keepalive ping interval (seconds, minimum 5) |
| `timeout` | Connection timeout (seconds) |
| `maxEvents` | Maximum events before closing |
| `connectionTimeout` | Maximum stream duration (seconds) |
| `termination.type` | Termination type: `"graceful"`, `"abrupt"`, or `"error"` |
| `termination.finalEvent` | Event to send on graceful close |
| `termination.closeDelay` | Delay in ms before closing after the final event |

OpenAI Chat Completions streaming format:

```json
{
  "sse": {
    "template": "openai-chat",
    "templateParams": {
      "tokens": ["Hello", " World"],
      "model": "gpt-4",
      "finishReason": "stop",
      "includeDone": true,
      "delayPerToken": 50
    }
  }
}
```

Real-time notification stream:

```json
{
  "sse": {
    "template": "notification-stream",
    "templateParams": {
      "notifications": [
        { "type": "alert", "message": "System update" }
      ],
      "includeTimestamp": true,
      "includeId": true,
      "eventType": "notification"
    }
  }
}
```

Use placeholders in event data for dynamic values:

| Placeholder | Description | Example |
| --- | --- | --- |
| `$random:min:max` | Random integer | `$random:1:100` |
| `$uuid` | UUID v4 | `550e8400-e29b-41d4-a716-446655440000` |
| `$timestamp` | ISO 8601 timestamp | `2024-01-15T10:30:00Z` |
| `$pick:a,b,c` | Random choice | `$pick:red,green,blue` |

Example:

```json
{
  "data": {
    "id": "$uuid",
    "value": "$random:1:100",
    "status": "$pick:active,pending,complete",
    "timestamp": "$timestamp"
  }
}
```

For non-SSE streaming (file downloads, NDJSON):

```json
{
  "chunked": {
    "data": "Large content to stream in chunks...",
    "chunkSize": 1024,
    "chunkDelay": 100
  }
}
```

```json
{
  "chunked": {
    "format": "ndjson",
    "ndjsonItems": [
      { "id": 1, "name": "Alice" },
      { "id": 2, "name": "Bob" }
    ],
    "chunkDelay": 100
  }
}
```
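On the client side, chunk boundaries can split NDJSON lines at arbitrary points, so a consumer should buffer the trailing partial line between chunks. A minimal sketch (the `ndjsonFeed` helper name is illustrative):

```js
// Feed one chunk of NDJSON text; returns complete parsed objects plus the
// unparsed remainder to carry into the next call.
function ndjsonFeed(buffer, chunk) {
  const lines = (buffer + chunk).split('\n');
  const rest = lines.pop(); // last element is a partial (or empty) line
  const objects = lines.filter((l) => l.trim() !== '').map((l) => JSON.parse(l));
  return { objects, rest };
}
```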
The admin API exposes endpoints for inspecting and managing active SSE connections:

```
GET    /sse/connections
GET    /sse/connections/{id}
DELETE /sse/connections/{id}
GET    /sse/stats
GET    /mocks/{id}/sse/connections
DELETE /mocks/{id}/sse/connections
GET    /mocks/{id}/sse/buffer
DELETE /mocks/{id}/sse/buffer
```
Test the streams with curl (`-N` disables output buffering):

```sh
# Basic SSE
curl -N -H "Accept: text/event-stream" http://localhost:4280/events

# OpenAI streaming
curl -N -X POST \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"stream": true, "messages": [{"role": "user", "content": "Hi"}]}' \
  http://localhost:4280/v1/chat/completions

# Chunked download
curl -N http://localhost:4280/download/file

# NDJSON stream
curl -N http://localhost:4280/api/logs/stream
```
Consume the stream in the browser with `EventSource`:

```js
const source = new EventSource('/events');

source.onmessage = (event) => {
  console.log('Message:', event.data);
};

source.addEventListener('custom-type', (event) => {
  console.log('Custom event:', event.data);
});

source.onerror = (error) => {
  console.error('Error:', error);
};
```
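`EventSource` can only issue GET requests, so POST streaming endpoints such as `/v1/chat/completions` are typically consumed with `fetch` and a stream reader instead. The sketch below is illustrative: the `streamChat` helper is not part of any API here, and for simplicity it assumes each decoded chunk contains whole `data:` lines.

```js
// Minimal parser for the "data:" lines in a chunk of SSE text.
function extractData(chunk) {
  return chunk
    .split('\n')
    .filter((line) => line.startsWith('data: '))
    .map((line) => line.slice(6));
}

// Hypothetical consumer for an OpenAI-style POST streaming endpoint.
async function streamChat(url, body) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Accept: 'text/event-stream' },
    body: JSON.stringify(body),
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const data of extractData(decoder.decode(value, { stream: true }))) {
      if (data === '[DONE]') return; // end-of-stream sentinel
      console.log(JSON.parse(data).choices?.[0]?.delta?.content ?? '');
    }
  }
}
```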