# Integration Testing Example
This guide shows how to use mockd in integration tests for various languages and frameworks.
## Overview

mockd is ideal for integration testing because:
- **Isolation**: Tests don’t depend on external services
- **Speed**: No network latency to real APIs
- **Predictability**: Responses are always consistent
- **Control**: Easy to simulate errors and edge cases
## Test Setup Pattern

### 1. Start mockd Before Tests
```bash
# Start in background
mockd start --config test-mocks.json &
MOCKD_PID=$!

# Run tests
npm test

# Cleanup
kill $MOCKD_PID
```
### 2. Reset State Between Tests

```bash
# Reset a specific resource to its seed data
curl -X POST http://localhost:4290/state/resources/users/reset

# Or clear a resource (remove all items, no seed data restored)
curl -X DELETE http://localhost:4290/state/resources/users
```
### 3. Point Application to mockd

```bash
API_BASE_URL=http://localhost:4280 npm test
```
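In test code it helps to resolve the base URL in one place so every suite honors the same fallback. A minimal sketch (the helper names are ours; the env var and default port are the ones used throughout this guide):

```javascript
// Resolve the API base URL from the environment, defaulting to the local
// mockd port used in this guide. Helper names are illustrative.
function apiBase() {
  return process.env.API_BASE_URL || 'http://localhost:4280';
}

// Join a request path onto the base; new URL() normalizes the slashes.
function apiUrl(path) {
  return new URL(path, apiBase()).toString();
}
```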
## JavaScript / Node.js

### Jest Setup

`jest.setup.js`:
```js
const { spawn } = require('child_process');

let mockdProcess;

beforeAll(async () => {
  // Start mockd
  mockdProcess = spawn('mockd', ['start', '--config', 'test-mocks.json'], {
    stdio: 'pipe',
  });

  // Wait for server to be ready
  await waitForServer('http://localhost:4280/health');
});

afterAll(() => {
  if (mockdProcess) {
    mockdProcess.kill();
  }
});

beforeEach(async () => {
  // Reset stateful resources to seed data
  await fetch('http://localhost:4290/state/resources/users/reset', {
    method: 'POST',
  });
});

async function waitForServer(url, timeout = 5000) {
  const start = Date.now();
  while (Date.now() - start < timeout) {
    try {
      await fetch(url);
      return;
    } catch {
      await new Promise((r) => setTimeout(r, 100));
    }
  }
  throw new Error('Server did not start');
}
```
### Example Tests

```js
const API = process.env.API_BASE_URL || 'http://localhost:4280';

describe('User Service', () => {
  test('fetches user by ID', async () => {
    const response = await fetch(`${API}/api/users/1`);
    const user = await response.json();

    expect(response.status).toBe(200);
    expect(user).toEqual({ id: 1, name: 'Alice', email: 'alice@example.com' });
  });

  test('handles user not found', async () => {
    const response = await fetch(`${API}/api/users/999`);

    expect(response.status).toBe(404);
  });

  test('creates new user', async () => {
    const response = await fetch(`${API}/api/users`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ name: 'Charlie', email: 'charlie@example.com' }),
    });

    expect(response.status).toBe(201);

    const user = await response.json();
    expect(user.id).toBeDefined();
    expect(user.name).toBe('Charlie');
  });
});
```
### Testing Error Scenarios

```js
describe('Error Handling', () => {
  test('handles server errors gracefully', async () => {
    // Add temporary mock for error
    await fetch('http://localhost:4290/mocks', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        type: 'http',
        http: {
          matcher: { method: 'GET', path: '/api/flaky' },
          response: { statusCode: 500, body: '{"error": "Internal error"}' },
        },
      }),
    });

    const response = await fetch(`${API}/api/flaky`);

    expect(response.status).toBe(500);
    // Test your app's error handling
  });

  test('handles timeout', async () => {
    await fetch('http://localhost:4290/mocks', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        type: 'http',
        http: {
          matcher: { method: 'GET', path: '/api/slow' },
          response: { statusCode: 200, delayMs: 10000, body: '{}' },
        },
      }),
    });

    const controller = new AbortController();
    const timeout = setTimeout(() => controller.abort(), 1000);

    await expect(
      fetch(`${API}/api/slow`, { signal: controller.signal })
    ).rejects.toThrow();

    clearTimeout(timeout);
  });
});
```
## Python

### pytest Setup

`conftest.py`:
```python
import subprocess
import time

import requests
import pytest


@pytest.fixture(scope="session")
def mockd_server():
    """Start mockd server for the test session."""
    proc = subprocess.Popen(
        ["mockd", "start", "--config", "test-mocks.json"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )

    # Wait for server
    for _ in range(50):
        try:
            requests.get("http://localhost:4280/health")
            break
        except requests.ConnectionError:
            time.sleep(0.1)
    else:
        raise RuntimeError("mockd failed to start")

    yield "http://localhost:4280"

    proc.terminate()
    proc.wait()


@pytest.fixture(autouse=True)
def reset_state():
    """Reset mockd stateful resources before each test."""
    requests.post("http://localhost:4290/state/resources/users/reset")
    requests.post("http://localhost:4290/state/resources/tasks/reset")
```
### Example Tests

```python
import requests
import pytest


def test_get_users(mockd_server):
    response = requests.get(f"{mockd_server}/api/users")

    assert response.status_code == 200
    users = response.json()
    assert len(users) >= 1


def test_create_user(mockd_server):
    response = requests.post(
        f"{mockd_server}/api/users",
        json={"name": "Test User", "email": "test@example.com"},
    )

    assert response.status_code == 201
    user = response.json()
    assert "id" in user
    assert user["name"] == "Test User"


def test_user_not_found(mockd_server):
    response = requests.get(f"{mockd_server}/api/users/99999")

    assert response.status_code == 404


class TestStatefulOperations:
    def test_crud_workflow(self, mockd_server):
        # Create
        create_resp = requests.post(
            f"{mockd_server}/api/tasks",
            json={"title": "Test task", "status": "todo"},
        )
        assert create_resp.status_code == 201
        task_id = create_resp.json()["id"]

        # Read
        get_resp = requests.get(f"{mockd_server}/api/tasks/{task_id}")
        assert get_resp.json()["title"] == "Test task"

        # Update
        requests.patch(
            f"{mockd_server}/api/tasks/{task_id}",
            json={"status": "done"},
        )
        get_resp = requests.get(f"{mockd_server}/api/tasks/{task_id}")
        assert get_resp.json()["status"] == "done"

        # Delete
        delete_resp = requests.delete(f"{mockd_server}/api/tasks/{task_id}")
        assert delete_resp.status_code == 204
```
## Go

### Testing Setup

```go
package integration_test

import (
	"encoding/json"
	"net/http"
	"os"
	"os/exec"
	"strings"
	"testing"
	"time"
)

var baseURL = "http://localhost:4280"
var adminURL = "http://localhost:4290"

func TestMain(m *testing.M) {
	// Start mockd
	cmd := exec.Command("mockd", "start", "--config", "test-mocks.json")
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Wait for ready
	waitForServer(baseURL + "/health")

	// Run tests
	code := m.Run()

	// Cleanup
	cmd.Process.Kill()
	os.Exit(code)
}

func waitForServer(url string) {
	for i := 0; i < 50; i++ {
		if _, err := http.Get(url); err == nil {
			return
		}
		time.Sleep(100 * time.Millisecond)
	}
	panic("server did not start")
}

func resetState(t *testing.T) {
	t.Helper()
	req, _ := http.NewRequest("POST", adminURL+"/state/resources/users/reset", nil)
	http.DefaultClient.Do(req)
	req, _ = http.NewRequest("POST", adminURL+"/state/resources/tasks/reset", nil)
	http.DefaultClient.Do(req)
}
```
### Example Tests

```go
func TestGetUsers(t *testing.T) {
	resetState(t)

	resp, err := http.Get(baseURL + "/api/users")
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != 200 {
		t.Errorf("expected 200, got %d", resp.StatusCode)
	}

	var users []map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&users)

	if len(users) == 0 {
		t.Error("expected users")
	}
}

func TestCreateTask(t *testing.T) {
	resetState(t)

	body := strings.NewReader(`{"title": "Test", "status": "todo"}`)
	resp, err := http.Post(baseURL+"/api/tasks", "application/json", body)
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != 201 {
		t.Errorf("expected 201, got %d", resp.StatusCode)
	}

	var task map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&task)

	if _, ok := task["id"]; !ok {
		t.Error("expected id field")
	}
}
```
## Headless Engine for CI (`mockd engine`)

For CI/CD pipelines where you don’t need an admin API, use `mockd engine`: it’s lighter, has no dependencies, and can auto-assign ports to avoid conflicts in parallel jobs.
```bash
# Auto-assign a port, write the chosen URL to a file, and capture it
mockd engine --config test-mocks.yaml --port 0 --print-url > mockd-url.txt &
sleep 1
MOCKD_URL=$(cat mockd-url.txt)

# Run tests against the engine
API_BASE_URL=$MOCKD_URL pytest tests/
kill %1
```

### Why `mockd engine` over `mockd start`?
- No admin API → smaller attack surface, fewer ports
- No PID files or disk persistence → clean ephemeral containers
- `--port 0` auto-assigns → no port conflicts in parallel CI jobs
- `--print-url` outputs the URL for easy programmatic capture
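If you capture the `--print-url` output, a tiny parser can recover the auto-assigned port for the test runner. A sketch, assuming the engine prints a single line containing the URL (the exact output format is not specified here):

```javascript
// Pull the auto-assigned URL and port out of the `--print-url` output so a
// test runner can target the right instance. We assume the engine prints a
// single line containing the URL; adjust the parsing if the format differs.
function parseEngineUrl(line) {
  const match = line.trim().match(/https?:\/\/\S+/);
  if (!match) {
    throw new Error(`no URL found in engine output: ${JSON.stringify(line)}`);
  }
  const url = new URL(match[0]);
  return { url: match[0], port: Number(url.port) };
}
```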
## Seeded Responses for Deterministic Tests

Use `?_mockd_seed=<number>` to make faker functions and random values deterministic:
```js
test('returns consistent user data across runs', async () => {
  // Same seed = same faker output every time
  const resp1 = await fetch(`${API}/api/random-user?_mockd_seed=42`);
  const resp2 = await fetch(`${API}/api/random-user?_mockd_seed=42`);

  const user1 = await resp1.json();
  const user2 = await resp2.json();

  // Identical — same seed produces same faker.name, faker.email, uuid, etc.
  expect(user1).toEqual(user2);
});
```

Or set seed in the config for always-deterministic responses:
```yaml
mocks:
  - id: test-user
    type: http
    http:
      matcher: { method: GET, path: /api/test-user }
      response:
        statusCode: 200
        seed: 42
        body: '{"name": "{{faker.name}}", "email": "{{faker.email}}"}'
```

This eliminates flaky tests caused by random data while still using realistic faker output.
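When seeding per request, a small helper keeps the `_mockd_seed` parameter from clobbering an existing query string. A sketch (the helper name is ours; `_mockd_seed` is the parameter documented above):

```javascript
// Append the _mockd_seed query parameter to any request URL, preserving
// whatever query string is already there.
function withSeed(url, seed) {
  const u = new URL(url);
  u.searchParams.set('_mockd_seed', String(seed));
  return u.toString();
}
```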
## Docker Compose

For CI/CD environments:
```yaml
version: '3.8'

services:
  mockd:
    image: ghcr.io/getmockd/mockd:latest
    ports:
      - "4280:4280"
      - "4290:4290"
    volumes:
      - ./test-mocks.json:/mocks/config.json
    command: start --config /mocks/config.json

  app-tests:
    build: .
    depends_on:
      - mockd
    environment:
      - API_BASE_URL=http://mockd:4280
    command: npm test
```
## GitHub Actions

`.github/workflows/test.yml`:
```yaml
name: Integration Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Install mockd
        run: |
          curl -sSL https://github.com/getmockd/mockd/releases/latest/download/mockd-linux-amd64 -o mockd
          chmod +x mockd
          sudo mv mockd /usr/local/bin/

      - name: Start mockd
        run: |
          # Option A: Full server with admin API
          mockd start --config test-mocks.json &
          sleep 2

          # Option B: Headless engine (lighter, no admin API)
          # mockd engine --config test-mocks.json --port 0 --print-url > mockd-url.txt &
          # sleep 1

      - name: Run tests
        run: npm test
        env:
          API_BASE_URL: http://localhost:4280
```
## Best Practices

### 1. Seed Data for Tests

```yaml
tables:
  - name: users
    seedData:
      - id: "1"
        name: "Test User"
        email: "test@example.com"

mocks:
  - id: list-users
    type: http
    http:
      matcher: { method: GET, path: /api/users }
      response: { statusCode: 200 }
  - id: create-user
    type: http
    http:
      matcher: { method: POST, path: /api/users }
      response: { statusCode: 201 }
  - id: get-user
    type: http
    http:
      matcher: { method: GET, path: /api/users/{id} }
      response: { statusCode: 200 }

extend:
  - { mock: list-users, table: users, action: list }
  - { mock: create-user, table: users, action: create }
  - { mock: get-user, table: users, action: get }
```
### 2. Test Different Scenarios

Create multiple config files:

- `mocks-success.json` - Happy path
- `mocks-errors.json` - Error scenarios
- `mocks-slow.json` - Timeout testing
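A test runner can then pick the config by scenario name instead of hard-coding paths. A sketch (the mapping mirrors the filenames above; the helper is ours):

```javascript
// Map a scenario name to one of the config files listed above, failing
// loudly on unknown scenarios instead of silently starting with no mocks.
const SCENARIO_CONFIGS = {
  success: 'mocks-success.json',
  errors: 'mocks-errors.json',
  slow: 'mocks-slow.json',
};

function configFor(scenario) {
  const file = SCENARIO_CONFIGS[scenario];
  if (!file) {
    throw new Error(
      `unknown scenario "${scenario}"; expected one of: ` +
        Object.keys(SCENARIO_CONFIGS).join(', ')
    );
  }
  return file;
}
```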
### 3. Parallel Test Safety

Reset state in each test to ensure isolation:
```js
beforeEach(async () => {
  await fetch('http://localhost:4290/state/resources/users/reset', {
    method: 'POST',
  });
});
```

### 4. Dynamic Mocks for Edge Cases
Section titled “4. Dynamic Mocks for Edge Cases”Add mocks at runtime for specific test scenarios:
```js
test('handles rate limiting', async () => {
  await fetch('http://localhost:4290/mocks', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      type: 'http',
      http: {
        matcher: { method: 'GET', path: '/api/limited' },
        response: {
          statusCode: 429,
          headers: { 'Retry-After': '60' },
          body: '{"error": "Rate limited"}',
        },
      },
    }),
  });

  // Test your rate limit handling
});
```

## Mock Verification
Section titled “Mock Verification”After running your tests, verify that your code made the expected API calls. mockd tracks every request matched to a mock, so you can assert call counts and inspect invocation details.
### Assert Call Counts

```js
test('payment endpoint is called exactly once', async () => {
  // Create mock
  const res = await fetch('http://localhost:4290/mocks', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      type: 'http',
      http: {
        matcher: { method: 'POST', path: '/api/payments' },
        response: { statusCode: 201, body: '{"id": "pay_123"}' },
      },
    }),
  });
  const { id: mockId } = await res.json();

  // Run your application code...
  await myApp.processOrder({ amount: 49.99 });

  // Verify: payment endpoint called exactly once
  const verify = await fetch(`http://localhost:4290/mocks/${mockId}/verify`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ exactly: 1 }),
  });
  const result = await verify.json();
  expect(result.passed).toBe(true);
});
```

### Inspect Invocations
Section titled “Inspect Invocations”# View every request that hit a specific mockcurl http://localhost:4290/mocks/http_a1b2c3d4/invocationsReturns timestamps, request headers, bodies, and matched response details for each call.
### Reset Between Tests

```js
beforeEach(async () => {
  // Reset all verification data (call counts + invocation history)
  await fetch('http://localhost:4290/verify', { method: 'DELETE' });
});
```

### Verification API Reference

| Endpoint | Method | Description |
|---|---|---|
| `/mocks/{id}/verify` | GET | Get call count and last-called timestamp |
| `/mocks/{id}/verify` | POST | Assert call count (`exactly`, `atLeast`, `atMost`, `never`) |
| `/mocks/{id}/invocations` | GET | List all request/response pairs |
| `/mocks/{id}/invocations` | DELETE | Reset invocations for one mock |
| `/verify` | DELETE | Reset all verification data |
For full details, see the Mock Verification guide.
## Next Steps

- Basic Mocks - Simple mock examples
- CRUD API - Stateful API example
- Mock Verification - Full verification guide
- Admin API - Runtime management