# API Mock Server Generator
I'll help you generate API mock servers and stub services for testing and development, based on OpenAPI specifications or code analysis.
**Mock Server Tools:**
- json-server: Simple REST API mocking
- MSW (Mock Service Worker): Browser and Node.js request interception
- Prism: OpenAPI-based mock server
- WireMock: Advanced API simulation
## Token Optimization
This skill uses mock generation-specific patterns to minimize token usage:
1. API Spec Detection Caching (600 token savings)
Pattern: Cache API specification locations and structure
- Store spec analysis in `.api-mock-cache` (1 hour TTL)
- Cache: spec location, endpoints, schemas, examples
- Read cached spec on subsequent runs (50 tokens vs 650 tokens fresh)
- Invalidate on spec file changes
- Savings: 92% on repeat mock generations
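The freshness rule above (1-hour TTL plus invalidation on spec changes) can be sketched in a few lines of shell; the spec file name here is illustrative:

```shell
# Freshness check for the spec-analysis cache: honor the 1-hour TTL and
# invalidate when the spec changes (spec file name is illustrative).
workdir=$(mktemp -d)
cd "$workdir"
printf 'paths: {}\n' > openapi.yaml
printf 'endpoints: []\n' > .api-mock-cache
sleep 1
touch openapi.yaml   # simulate an edit to the spec after caching

cache_fresh=true
# TTL: a cache file older than 60 minutes is stale
if [ -n "$(find .api-mock-cache -mmin +60 2>/dev/null)" ]; then cache_fresh=false; fi
# Invalidation: a spec newer than the cache is stale too
if [ openapi.yaml -nt .api-mock-cache ]; then cache_fresh=false; fi
echo "cache_fresh=$cache_fresh"
```

When the cache is fresh, the analysis is read from `.api-mock-cache` instead of re-parsing the spec.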
2. OpenAPI Spec Parsing via Bash (1,800 token savings)
Pattern: Use yq/jq for OpenAPI parsing instead of LLM
- Extract endpoints: `yq '.paths | keys'` (200 tokens)
- Extract schemas: `yq '.components.schemas'` (200 tokens)
- No Task agents for spec parsing
- Savings: 90% vs LLM-based OpenAPI analysis
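On a JSON spec the same extraction works with jq; a self-contained sketch (toy spec, assumes jq is on PATH):

```shell
# Extract endpoints and schema names with jq rather than reading the whole
# spec into context (toy spec; assumes jq is installed).
spec=$(mktemp)
cat > "$spec" << 'JSON'
{
  "openapi": "3.0.0",
  "paths": {
    "/users": { "get": {} },
    "/users/{id}": { "get": {} }
  },
  "components": { "schemas": { "User": { "type": "object" } } }
}
JSON
if command -v jq >/dev/null 2>&1; then
  endpoints=$(jq -r '.paths | keys[]' "$spec")
  schemas=$(jq -r '.components.schemas | keys[]' "$spec")
  echo "endpoints: $endpoints"
  echo "schemas: $schemas"
fi
```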
3. Template-Based Mock Generation (2,500 token savings)
Pattern: Use predefined mock server templates
- Standard templates: json-server, MSW, Prism configurations
- Endpoint → mock handler mapping templates
- No creative mock logic generation needed
- Savings: 85% vs LLM-generated mock code
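A minimal illustration of the template idea: a fixed handler template is stamped out per endpoint, with no generative step. The endpoint list, output path, and handler shape below are hypothetical:

```shell
# Stamp a fixed MSW-style handler template per endpoint (endpoint list and
# output path are illustrative).
outdir=$(mktemp -d)
endpoints="/api/users /api/posts"
{
  echo "import { rest } from 'msw';"
  echo "export const handlers = ["
  for ep in $endpoints; do
    printf "  rest.get('%s', (req, res, ctx) => res(ctx.status(200), ctx.json([]))),\n" "$ep"
  done
  echo "];"
} > "$outdir/handlers.js"
cat "$outdir/handlers.js"
```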
4. Example-Based Response Generation (1,000 token savings)
Pattern: Use examples from OpenAPI spec
- Extract existing response examples from spec (300 tokens)
- Use spec examples directly in mocks
- Generate random data only when no examples
- Savings: 75% vs generating all mock responses
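The example-first rule might look like this with jq (toy spec; the fallback branch is where random data would be generated):

```shell
# Pull an existing response example straight from the spec (toy spec;
# assumes jq is installed).
spec=$(mktemp)
cat > "$spec" << 'JSON'
{
  "paths": {
    "/users": {
      "get": {
        "responses": {
          "200": {
            "content": {
              "application/json": {
                "example": [{"id": 1, "name": "John Doe"}]
              }
            }
          }
        }
      }
    }
  }
}
JSON
if command -v jq >/dev/null 2>&1; then
  example=$(jq -c '.paths."/users".get.responses."200".content."application/json".example // empty' "$spec")
  if [ -n "$example" ]; then
    echo "spec example: $example"     # use directly in the mock
  else
    echo "no example; fall back to random data"
  fi
fi
```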
5. Sample-Based Endpoint Analysis (700 token savings)
Pattern: Analyze first 15 endpoints for patterns
- Identify CRUD patterns, auth requirements (500 tokens)
- Apply patterns to remaining endpoints
- Full analysis only if explicitly requested
- Savings: 60% vs analyzing every endpoint
6. Cached Mock Tool Setup (400 token savings)
Pattern: Reuse mock server configuration
- Cache tool installation status (npm ls, etc.)
- Don't regenerate config if tool already setup
- Standard configurations for common tools
- Savings: 80% on tool configuration generation
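One way to sketch the install-status cache; the cache file name is illustrative and the status probe is a stand-in for a real `npm ls` check:

```shell
# Cache the "is the mock tool installed?" answer so repeat runs skip the
# probe (cache name illustrative; probe is a stand-in for `npm ls`).
dir=$(mktemp -d)
cd "$dir"
cache=".mock-tool-status"
tool_status() {
  if [ -f "$cache" ]; then
    cat "$cache"                 # cached: no probe needed
  else
    status="not-installed"       # stand-in for: npm ls msw >/dev/null 2>&1 && status=installed
    echo "$status" > "$cache"
    echo "$status"
  fi
}
first=$(tool_status)
second=$(tool_status)            # served from the cache file
echo "first=$first second=$second"
```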
7. Grep-Based Existing Mock Detection (500 token savings)
Pattern: Find existing mock servers with Grep
- Grep for mock patterns: `json-server`, `setupWorker`, `Prism.setup` (200 tokens)
- Don't read full mock files for detection
- Offer to extend vs recreate
- Savings: 75% vs full mock file analysis
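A sketch of the grep-based probe (the sample project tree is fabricated for the demo):

```shell
# Detect existing mock setups by grepping for tool signatures instead of
# reading whole files (sample tree is fabricated).
proj=$(mktemp -d)
mkdir -p "$proj/mocks"
echo "import { setupWorker } from 'msw';" > "$proj/mocks/browser.js"
existing=$(grep -rlE "json-server|setupWorker|Prism" "$proj" 2>/dev/null)
if [ -n "$existing" ]; then
  echo "Existing mocks found:"
  echo "$existing"               # offer to extend rather than recreate
else
  echo "No existing mocks"
fi
```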
8. Early Exit for Existing Mocks (90% savings)
Pattern: Detect if mocks already generated and current
- Check for mock config files matching spec name (50 tokens)
- Compare spec mtime with mock config mtime
- If mocks current: return mock server start command (150 tokens)
- Distribution: ~35% of runs check existing mocks
- Savings: 150 vs 3,500 tokens for mock regeneration checks
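The early-exit mtime comparison can be sketched as (paths illustrative):

```shell
# Early exit: if the mock config is newer than the spec, just print the
# start command instead of regenerating (paths illustrative).
dir=$(mktemp -d)
cd "$dir"
printf 'openapi: 3.0.0\n' > openapi.yaml
sleep 1
mkdir -p mocks
printf '{}\n' > mocks/db.json    # mock config generated after the spec

if [ mocks/db.json -nt openapi.yaml ]; then
  action="start"
  echo "Mocks current. Start with: npm run mock"
else
  action="regenerate"
  echo "Spec changed; regenerating mocks."
fi
echo "action=$action"
```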
## Real-World Token Usage Distribution
Typical operation patterns:
- Check existing mocks (current): 150 tokens
- Generate mocks (first time): 3,500 tokens
- Update mocks (spec changed): 2,000 tokens
- Change mock tool (cached spec): 1,500 tokens
- Add endpoints (incremental): 800 tokens
- Most common: Initial generation with template-based approach
Expected per-generation: 2,000-3,000 tokens (60% reduction from 5,000-7,000 baseline)
Real-world average: 1,800 tokens (due to cached specs, template-based generation, example reuse)
**Arguments:** `$ARGUMENTS` - OpenAPI spec path, mock tool preference, or resource type
## Phase 1: Detect API Specifications
First, let me find API specifications or analyze existing code:
# Detect API specifications and existing APIs
detect_api_specs() {
echo "=== Detecting API Specifications ==="
echo ""
local spec_found=false
# Check for OpenAPI/Swagger specs
echo "Searching for OpenAPI/Swagger specifications..."
openapi_files=$(find . -type f \( -name "openapi.yaml" -o -name "openapi.yml" -o -name "swagger.yaml" -o -name "swagger.yml" -o -name "openapi.json" -o -name "swagger.json" \) 2>/dev/null)
if [ -n "$openapi_files" ]; then
spec_found=true
echo "✓ OpenAPI/Swagger specifications found:"
echo "$openapi_files" | sed 's/^/ /'
echo ""
SPEC_PATH=$(echo "$openapi_files" | head -1)
SPEC_TYPE="openapi"
fi
# Check for GraphQL schemas
if [ -z "$openapi_files" ]; then
graphql_files=$(find . -type f \( -name "schema.graphql" -o -name "*.graphql" \) 2>/dev/null | head -5)
if [ -n "$graphql_files" ]; then
spec_found=true
echo "✓ GraphQL schemas found:"
echo "$graphql_files" | sed 's/^/ /'
echo ""
SPEC_PATH=$(echo "$graphql_files" | head -1)
SPEC_TYPE="graphql"
fi
fi
# Detect REST API from code
if [ ! "$spec_found" = true ]; then
echo "No API specifications found. Analyzing code..."
echo ""
# Node.js/Express
if [ -f "package.json" ] && grep -q "express" package.json; then
echo "✓ Express API detected"
SPEC_TYPE="express"
# Try to find route definitions
routes=$(grep -r "app\.\(get\|post\|put\|delete\|patch\)" . --include="*.js" --include="*.ts" 2>/dev/null | head -5)
if [ -n "$routes" ]; then
echo " Sample routes found:"
echo "$routes" | sed 's/^/ /'
fi
echo ""
# Python/Flask
elif [ -f "requirements.txt" ] && grep -q "flask" requirements.txt; then
echo "✓ Flask API detected"
SPEC_TYPE="flask"
routes=$(grep -r "@app\.route" . --include="*.py" 2>/dev/null | head -5)
if [ -n "$routes" ]; then
echo " Sample routes found:"
echo "$routes" | sed 's/^/ /'
fi
echo ""
# Python/FastAPI
elif [ -f "requirements.txt" ] && grep -q "fastapi" requirements.txt; then
echo "✓ FastAPI detected"
SPEC_TYPE="fastapi"
routes=$(grep -r "@app\.\(get\|post\|put\|delete\)" . --include="*.py" 2>/dev/null | head -5)
if [ -n "$routes" ]; then
echo " Sample routes found:"
echo "$routes" | sed 's/^/ /'
fi
echo ""
# Go APIs
elif [ -f "go.mod" ]; then
echo "✓ Go API detected"
SPEC_TYPE="go"
routes=$(grep -r "HandleFunc\|Handle" . --include="*.go" 2>/dev/null | head -5)
if [ -n "$routes" ]; then
echo " Sample routes found:"
echo "$routes" | sed 's/^/ /'
fi
echo ""
else
echo "⚠ No API detected"
echo ""
echo "I can help you create mocks for:"
echo " - OpenAPI/Swagger specifications"
echo " - GraphQL schemas"
echo " - Custom REST APIs"
echo ""
SPEC_TYPE="custom"
fi
fi
echo "$SPEC_TYPE|$SPEC_PATH"
}
# The function echoes status messages before the final TYPE|PATH line;
# keep only the machine-readable last line
API_INFO=$(detect_api_specs | tail -1)
SPEC_TYPE=$(echo "$API_INFO" | cut -d'|' -f1)
SPEC_PATH=$(echo "$API_INFO" | cut -d'|' -f2)
## Phase 2: Choose Mock Server Tool
<think> For mock server selection, I need to consider:
- Project type and existing dependencies
- Use case (browser testing, Node.js testing, standalone server)
- OpenAPI spec availability (Prism is best for OpenAPI)
- Complexity needs (WireMock for advanced scenarios)
- Team familiarity and ease of setup

Selection criteria:
- Has OpenAPI spec → Prism (spec-driven, automatic validation)
- Frontend testing → MSW (browser/Node.js, no separate server)
- Simple REST mocking → json-server (quick setup, minimal config)
- Advanced needs → WireMock (stateful, recording, matching) </think>
I'll help you select the appropriate mock server tool:
select_mock_tool() {
local spec_type=$1
echo ""
echo "=== Selecting Mock Server Tool ==="
echo ""
# Check if tool specified in arguments
if [[ "$ARGUMENTS" =~ json-server|msw|prism|wiremock ]]; then
MOCK_TOOL=$(echo "$ARGUMENTS" | grep -oE "json-server|msw|prism|wiremock" | head -1)
echo "Mock tool specified: $MOCK_TOOL"
else
# Suggest based on detected spec type
case $spec_type in
openapi)
echo "Recommended: Prism (OpenAPI-native)"
MOCK_TOOL="prism"
;;
graphql)
echo "Recommended: GraphQL Mock Server"
MOCK_TOOL="graphql-mock"
;;
express|flask|fastapi|go|custom)
echo "Recommended: json-server or MSW"
MOCK_TOOL="json-server"
;;
esac
echo ""
echo "Available mock server tools:"
echo " 1. Prism - OpenAPI/Swagger specification-based"
echo " 2. MSW - Browser and Node.js request interception"
echo " 3. json-server - Simple REST API from JSON"
echo " 4. WireMock - Advanced API simulation"
echo ""
read -p "Select tool (1-4, or Enter for recommended): " choice
case ${choice:-0} in
1) MOCK_TOOL="prism" ;;
2) MOCK_TOOL="msw" ;;
3) MOCK_TOOL="json-server" ;;
4) MOCK_TOOL="wiremock" ;;
*) ;; # Keep recommended
esac
fi
echo ""
echo "Selected mock tool: $MOCK_TOOL"
echo "$MOCK_TOOL"
}
# Keep only the final line (the tool name); earlier echoes are informational
MOCK_TOOL=$(select_mock_tool "$SPEC_TYPE" | tail -1)
## Phase 3: Generate Mock Server
Now I'll generate the mock server based on your selection:
generate_mock_server() {
local tool=$1
local spec_type=$2
local spec_path=$3
echo ""
echo "=== Generating Mock Server ==="
echo ""
mkdir -p mocks
case $tool in
prism)
generate_prism_mock "$spec_path"
;;
msw)
generate_msw_mock "$spec_type"
;;
json-server)
generate_json_server_mock "$spec_type"
;;
wiremock)
generate_wiremock_mock
;;
esac
}
generate_prism_mock() {
local spec_path=$1
echo "Setting up Prism mock server..."
echo ""
# Check if Prism is installed
if ! command -v prism &> /dev/null; then
echo "Installing Prism..."
npm install -g @stoplight/prism-cli
if [ $? -ne 0 ]; then
echo "⚠ Global install failed. Installing locally..."
npm install --save-dev @stoplight/prism-cli
fi
fi
# Validate OpenAPI spec
if [ -n "$spec_path" ] && [ -f "$spec_path" ]; then
echo "Validating OpenAPI specification..."
if command -v prism &> /dev/null; then
prism validate "$spec_path" || {
echo "⚠ Specification has validation errors"
echo " Continue anyway? (yes/no)"
read continue_anyway
[ "$continue_anyway" != "yes" ] && exit 1
}
fi
# Create start script
cat > mocks/start-prism.sh << EOF
#!/bin/bash
# Start Prism mock server
# Run Prism with dynamic response generation
prism mock "$spec_path" \\
--host 0.0.0.0 \\
--port 4010 \\
--dynamic
# Options:
# --dynamic: Generate dynamic realistic responses
# --errors: Include error responses
# --validateRequest: Validate requests against spec
EOF
chmod +x mocks/start-prism.sh
# Add to package.json scripts
if [ -f "package.json" ]; then
echo "Adding mock script to package.json..."
# Check if jq is available for JSON manipulation
if command -v jq &> /dev/null; then
jq '.scripts.mock = "prism mock '"$spec_path"' --port 4010 --dynamic"' package.json > package.json.tmp
mv package.json.tmp package.json
else
echo " Manually add to package.json scripts:"
echo ' "mock": "prism mock '"$spec_path"' --port 4010 --dynamic"'
fi
fi
echo ""
echo "✓ Prism mock server configured"
echo ""
echo "Start mock server:"
echo " ./mocks/start-prism.sh"
echo " # or"
echo " npm run mock"
echo ""
echo "Mock API will be available at: http://localhost:4010"
else
echo "❌ OpenAPI specification not found: $spec_path"
exit 1
fi
}
generate_msw_mock() {
local spec_type=$1
echo "Setting up MSW (Mock Service Worker)..."
echo ""
# Install MSW
if [ -f "package.json" ]; then
echo "Installing MSW..."
npm install --save-dev msw@1  # generated handlers use the MSW v1 'rest' API (v2 renamed it to http/HttpResponse)
else
echo "❌ package.json not found"
echo "Create Node.js project first: npm init -y"
exit 1
fi
# Create mock handlers
mkdir -p mocks/handlers
cat > mocks/handlers/handlers.js << 'EOF'
/**
* MSW Request Handlers
* Define mock API responses here
*/
import { rest } from 'msw';
const baseURL = process.env.API_BASE_URL || 'http://localhost:3000';
export const handlers = [
// Example: GET /api/users
rest.get(`${baseURL}/api/users`, (req, res, ctx) => {
return res(
ctx.status(200),
ctx.json([
{ id: 1, name: 'John Doe', email: 'john@example.com' },
{ id: 2, name: 'Jane Smith', email: 'jane@example.com' }
])
);
}),
// Example: GET /api/users/:id
rest.get(`${baseURL}/api/users/:id`, (req, res, ctx) => {
const { id } = req.params;
return res(
ctx.status(200),
ctx.json({
id: parseInt(id),
name: 'John Doe',
email: 'john@example.com',
createdAt: new Date().toISOString()
})
);
}),
// Example: POST /api/users
rest.post(`${baseURL}/api/users`, async (req, res, ctx) => {
const body = await req.json();
return res(
ctx.status(201),
ctx.json({
id: Date.now(),
...body,
createdAt: new Date().toISOString()
})
);
}),
// Example: Error response
rest.get(`${baseURL}/api/error`, (req, res, ctx) => {
return res(
ctx.status(500),
ctx.json({
error: 'Internal Server Error',
message: 'Something went wrong'
})
);
})
];
EOF
# Create server setup for Node.js
cat > mocks/server.js << 'EOF'
/**
* MSW Server for Node.js (testing)
*/
import { setupServer } from 'msw/node';
import { handlers } from './handlers/handlers.js';
export const server = setupServer(...handlers);
EOF
# Create browser setup
cat > mocks/browser.js << 'EOF'
/**
* MSW Browser Setup
*/
import { setupWorker } from 'msw';
import { handlers } from './handlers/handlers.js';
export const worker = setupWorker(...handlers);
EOF
# Create test setup file
cat > mocks/setup-tests.js << 'EOF'
/**
* Test Setup with MSW
* Import this in your test setup file
*/
import { server } from './server.js';
// Establish API mocking before all tests
beforeAll(() => server.listen());
// Reset handlers after each test
afterEach(() => server.resetHandlers());
// Clean up after all tests
afterAll(() => server.close());
EOF
echo "✓ MSW mock server configured"
echo ""
echo "Setup for Browser:"
echo " 1. npx msw init public/ --save"
echo " 2. Import worker in your app: import { worker } from './mocks/browser.js'"
echo " 3. Start worker: worker.start()"
echo ""
echo "Setup for Tests:"
echo " 1. Import in test setup: import './mocks/setup-tests.js'"
echo " 2. Tests will automatically use mocked responses"
echo ""
echo "Add handlers in: mocks/handlers/handlers.js"
}
generate_json_server_mock() {
local spec_type=$1
echo "Setting up json-server..."
echo ""
# Install json-server
if [ -f "package.json" ]; then
echo "Installing json-server..."
npm install --save-dev json-server@0.17  # v1.x removes the --routes/--middlewares flags used below
fi
# Generate sample database
cat > mocks/db.json << 'EOF'
{
"users": [
{
"id": 1,
"name": "John Doe",
"email": "john@example.com",
"role": "admin",
"createdAt": "2024-01-01T00:00:00.000Z"
},
{
"id": 2,
"name": "Jane Smith",
"email": "jane@example.com",
"role": "user",
"createdAt": "2024-01-02T00:00:00.000Z"
}
],
"posts": [
{
"id": 1,
"title": "First Post",
"content": "This is the content of the first post",
"userId": 1,
"createdAt": "2024-01-15T00:00:00.000Z"
},
{
"id": 2,
"title": "Second Post",
"content": "Content of the second post",
"userId": 2,
"createdAt": "2024-01-16T00:00:00.000Z"
}
],
"comments": [
{
"id": 1,
"postId": 1,
"userId": 2,
"text": "Great post!",
"createdAt": "2024-01-17T00:00:00.000Z"
}
]
}
EOF
# Create custom routes (optional)
cat > mocks/routes.json << 'EOF'
{
"/api/*": "/$1",
"/users/:id/posts": "/posts?userId=:id",
"/posts/:id/comments": "/comments?postId=:id"
}
EOF
# Create middleware for custom logic
cat > mocks/middleware.js << 'EOF'
/**
* json-server middleware
* Add custom logic like authentication, validation, etc.
*/
module.exports = (req, res, next) => {
// Add custom headers
res.header('X-Mock-Server', 'json-server');
// Log requests
console.log(`${req.method} ${req.url}`);
// Add delay to simulate network latency
if (process.env.MOCK_DELAY) {
setTimeout(next, parseInt(process.env.MOCK_DELAY));
} else {
next();
}
};
EOF
# Create start script
cat > mocks/start-json-server.sh << 'EOF'
#!/bin/bash
# Start json-server mock API
npx json-server \
--watch mocks/db.json \
--routes mocks/routes.json \
--middlewares mocks/middleware.js \
--port 3001 \
--host 0.0.0.0
# Options:
# --watch: Auto-reload on file changes
# --routes: Custom route mappings
# --middlewares: Custom middleware
# --delay: Add response delay (ms)
EOF
chmod +x mocks/start-json-server.sh
# Add to package.json
if [ -f "package.json" ] && command -v jq &> /dev/null; then
jq '.scripts.mock = "json-server --watch mocks/db.json --routes mocks/routes.json --port 3001"' package.json > package.json.tmp
mv package.json.tmp package.json
fi
echo "✓ json-server mock configured"
echo ""
echo "Start mock server:"
echo " ./mocks/start-json-server.sh"
echo " # or"
echo " npm run mock"
echo ""
echo "Mock API will be available at: http://localhost:3001"
echo ""
echo "Edit mock data: mocks/db.json"
}
generate_wiremock_mock() {
echo "Setting up WireMock..."
echo ""
mkdir -p mocks/wiremock/{mappings,__files}
# Create example mapping
cat > mocks/wiremock/mappings/users.json << 'EOF'
{
"request": {
"method": "GET",
"urlPattern": "/api/users"
},
"response": {
"status": 200,
"headers": {
"Content-Type": "application/json"
},
"jsonBody": [
{
"id": 1,
"name": "John Doe",
"email": "john@example.com"
},
{
"id": 2,
"name": "Jane Smith",
"email": "jane@example.com"
}
]
}
}
EOF
# Create start script
cat > mocks/start-wiremock.sh << 'EOF'
#!/bin/bash
# Start WireMock server
# Download WireMock if not exists
if [ ! -f "wiremock-standalone.jar" ]; then
echo "Downloading WireMock..."
curl -fL -o wiremock-standalone.jar \
https://repo1.maven.org/maven2/com/github/tomakehurst/wiremock-jre8-standalone/2.35.0/wiremock-jre8-standalone-2.35.0.jar
fi
# Start WireMock
java -jar wiremock-standalone.jar \
--port 8080 \
--root-dir mocks/wiremock \
--verbose
# Options:
# --port: Server port
# --root-dir: Directory with mappings and files
# --verbose: Detailed logging
# --record-mappings: Record mode
EOF
chmod +x mocks/start-wiremock.sh
echo "✓ WireMock configured"
echo ""
echo "Start mock server:"
echo " ./mocks/start-wiremock.sh"
echo ""
echo "Mock API will be available at: http://localhost:8080"
echo ""
echo "Add mappings in: mocks/wiremock/mappings/"
}
generate_mock_server "$MOCK_TOOL" "$SPEC_TYPE" "$SPEC_PATH"
## Phase 4: Generate Realistic Mock Data
I'll create realistic mock data for your API:
generate_mock_data() {
echo ""
echo "=== Generating Realistic Mock Data ==="
echo ""
# Create data generators
mkdir -p mocks/data-generators
cat > mocks/data-generators/generate-data.js << 'EOF'
/**
* Mock Data Generator
* Generates realistic test data for API mocks
*/
const faker = require('@faker-js/faker').faker;
function generateUsers(count = 10) {
const users = [];
for (let i = 1; i <= count; i++) {
users.push({
id: i,
name: faker.person.fullName(),
email: faker.internet.email(),
avatar: faker.image.avatar(),
address: {
street: faker.location.streetAddress(),
city: faker.location.city(),
state: faker.location.state(),
zipCode: faker.location.zipCode()
},
phone: faker.phone.number(),
company: faker.company.name(),
role: faker.helpers.arrayElement(['admin', 'user', 'moderator']),
createdAt: faker.date.past().toISOString()
});
}
return users;
}
function generatePosts(count = 20, userIds = []) {
const posts = [];
for (let i = 1; i <= count; i++) {
posts.push({
id: i,
title: faker.lorem.sentence(),
content: faker.lorem.paragraphs(3),
excerpt: faker.lorem.sentence(),
userId: userIds.length > 0
? faker.helpers.arrayElement(userIds)
: faker.number.int({ min: 1, max: 10 }),
tags: faker.helpers.arrayElements(
['javascript', 'python', 'react', 'nodejs', 'testing'],
faker.number.int({ min: 1, max: 3 })
),
published: faker.datatype.boolean(),
createdAt: faker.date.past().toISOString(),
updatedAt: faker.date.recent().toISOString()
});
}
return posts;
}
function generateComments(count = 50, postIds = [], userIds = []) {
const comments = [];
for (let i = 1; i <= count; i++) {
comments.push({
id: i,
postId: postIds.length > 0
? faker.helpers.arrayElement(postIds)
: faker.number.int({ min: 1, max: 20 }),
userId: userIds.length > 0
? faker.helpers.arrayElement(userIds)
: faker.number.int({ min: 1, max: 10 }),
text: faker.lorem.paragraph(),
createdAt: faker.date.recent().toISOString()
});
}
return comments;
}
// Generate complete database
function generateDatabase() {
const users = generateUsers(10);
const userIds = users.map(u => u.id);
const posts = generatePosts(20, userIds);
const postIds = posts.map(p => p.id);
const comments = generateComments(50, postIds, userIds);
return {
users,
posts,
comments
};
}
// Export for use
if (typeof module !== 'undefined' && module.exports) {
module.exports = {
generateUsers,
generatePosts,
generateComments,
generateDatabase
};
}
// CLI usage
if (require.main === module) {
const fs = require('fs');
const db = generateDatabase();
fs.writeFileSync(
'mocks/db.json',
JSON.stringify(db, null, 2)
);
console.log('✓ Mock database generated: mocks/db.json');
console.log(` Users: ${db.users.length}`);
console.log(` Posts: ${db.posts.length}`);
console.log(` Comments: ${db.comments.length}`);
}
EOF
# Install faker if using Node.js
if [ -f "package.json" ]; then
echo "Installing @faker-js/faker for realistic data..."
npm install --save-dev @faker-js/faker
fi
echo "✓ Mock data generator created: mocks/data-generators/generate-data.js"
echo ""
echo "Generate fresh mock data:"
echo " node mocks/data-generators/generate-data.js"
}
# Generate mock data if json-server or similar
if [[ "$MOCK_TOOL" =~ json-server|msw ]]; then
generate_mock_data
fi
## Phase 5: Integration Documentation
I'll create comprehensive documentation for using the mocks:
generate_mock_documentation() {
cat > mocks/README.md << EOF
# API Mock Server
This directory contains mock server configuration for local development and testing.
## Mock Tool: $MOCK_TOOL
EOF
case $MOCK_TOOL in
prism)
cat >> mocks/README.md << 'EOF'
### Prism Mock Server

Prism generates mock responses based on your OpenAPI specification.

**Start Server:**
```bash
./mocks/start-prism.sh
# or
npm run mock
```

Server URL: http://localhost:4010

**Features:**
- Automatic response generation from OpenAPI spec
- Request validation
- Example responses from spec
- Dynamic response generation

**Configuration:**
- Edit OpenAPI spec to change mock behavior
- Add examples in spec for specific responses
- Use `x-*` extensions for custom behavior

**Testing:**
```bash
# Example requests
curl http://localhost:4010/api/users
curl http://localhost:4010/api/users/1
curl -X POST http://localhost:4010/api/users \
  -H "Content-Type: application/json" \
  -d '{"name":"Test User"}'
```
EOF
;;
msw)
cat >> mocks/README.md << 'EOF'
### MSW (Mock Service Worker)

MSW intercepts requests at the network level for testing.

**Browser Setup:**
```javascript
import { worker } from './mocks/browser.js';

// Start worker
worker.start();
```

**Node.js/Test Setup:**
```javascript
import './mocks/setup-tests.js';
// Tests automatically use mocked responses
```

**Features:**
- No separate server needed
- Works in browser and Node.js
- Request interception
- Network-level mocking

**Adding Handlers:**
Edit mocks/handlers/handlers.js:
```javascript
rest.get('/api/new-endpoint', (req, res, ctx) => {
  return res(
    ctx.status(200),
    ctx.json({ data: 'mock response' })
  );
})
```

**Debugging:**
- Check browser DevTools Network tab
- MSW logs requests to console
- Use the `onUnhandledRequest: 'warn'` option
EOF
;;
json-server)
cat >> mocks/README.md << 'EOF'
### json-server

Simple REST API mock from a JSON file.

**Start Server:**
```bash
./mocks/start-json-server.sh
# or
npm run mock
```

Server URL: http://localhost:3001

**Features:**
- Full REST API (GET, POST, PUT, PATCH, DELETE)
- Filtering, sorting, pagination
- Relationships
- Custom routes

**Database:**
Edit mocks/db.json to change mock data.

**Routes:**
```
GET    /users     - List users
GET    /users/1   - Get user by ID
POST   /users     - Create user
PUT    /users/1   - Update user
PATCH  /users/1   - Partial update
DELETE /users/1   - Delete user
```

**Query Parameters:**
```bash
# Filter
GET /users?role=admin
# Sort
GET /users?_sort=name&_order=asc
# Paginate
GET /users?_page=1&_limit=10
# Full-text search
GET /users?q=john
```

**Relationships:**
```bash
# Get user's posts
GET /users/1/posts
# Embed related data
GET /posts?_embed=comments
# Expand foreign keys
GET /posts?_expand=user
```

**Generate Fresh Data:**
```bash
node mocks/data-generators/generate-data.js
```
EOF
;;
wiremock)
cat >> mocks/README.md << 'EOF'
### WireMock

Advanced API simulation with recording and matching.

**Start Server:**
```bash
./mocks/start-wiremock.sh
```

Server URL: http://localhost:8080

**Features:**
- Request matching
- Response templating
- Stateful scenarios
- Recording mode
- Fault simulation

**Adding Mappings:**
Create a file in mocks/wiremock/mappings/:
```json
{
  "request": {
    "method": "GET",
    "urlPattern": "/api/.*"
  },
  "response": {
    "status": 200,
    "jsonBody": {
      "message": "mock response"
    },
    "headers": {
      "Content-Type": "application/json"
    }
  }
}
```

**Recording Mode:**
```bash
java -jar wiremock-standalone.jar \
  --record-mappings \
  --proxy-all="http://real-api.com"
```
EOF
;;
esac
cat >> mocks/README.md << 'EOF'
## Use Cases

**Local Development:**
- Work without backend dependency
- Test UI against consistent data
- Fast iteration without API calls

**Testing:**
- Predictable test data
- Test error scenarios
- Isolate frontend tests

**Demos:**
- Show features without real backend
- Controlled demonstration data
- No external dependencies

## Configuration

**Environment Variables:**
```bash
API_BASE_URL=http://localhost:3001
MOCK_DELAY=100  # Add response delay (ms)
```

## Integration with Tests

**Jest:**
```javascript
describe('API Tests', () => {
  beforeAll(async () => {
    // Start mock server or configure MSW
  });

  it('fetches users', async () => {
    const response = await fetch('http://localhost:3001/users');
    const users = await response.json();
    expect(users).toHaveLength(10);
  });
});
```

**Cypress:**
```javascript
describe('User Page', () => {
  beforeEach(() => {
    // Use mock server
    cy.intercept('GET', '/api/users', {
      fixture: 'users.json'
    });
  });

  it('displays users', () => {
    cy.visit('/users');
    cy.get('.user').should('have.length', 10);
  });
});
```

## Best Practices

- **Keep mocks simple** - Don't replicate the entire backend
- **Use realistic data** - Generate with faker or similar
- **Version control mocks** - Commit mock configurations
- **Document endpoints** - Keep this README updated
- **Separate concerns** - Different mocks for different test scenarios
- **Reset state** - Clean slate between tests

## Troubleshooting

**Server won't start:**
- Check port availability
- Verify dependencies are installed
- Check file permissions

**Responses not matching:**
- Verify request URL/method
- Check handler/mapping configuration
- Review server logs

**Performance issues:**
- Reduce response delay
- Limit mock data size
- Use the appropriate tool for the use case
EOF

echo ""
echo "✓ Documentation created: mocks/README.md"
}
generate_mock_documentation
## Mock Server Summary
```bash
echo ""
echo "=== Mock Server Setup Complete ==="
echo ""
cat << EOF
✓ Mock server configured successfully
**Mock Tool:** $MOCK_TOOL
**Specification:** $SPEC_TYPE
$([ -n "$SPEC_PATH" ] && echo "**Spec File:** $SPEC_PATH")
**Generated Files:**
- mocks/ (mock server directory)
EOF
case $MOCK_TOOL in
prism)
echo " - mocks/start-prism.sh (start script)"
;;
msw)
echo " - mocks/handlers/handlers.js (request handlers)"
echo " - mocks/server.js (Node.js setup)"
echo " - mocks/browser.js (browser setup)"
;;
json-server)
echo " - mocks/db.json (mock database)"
echo " - mocks/routes.json (custom routes)"
echo " - mocks/middleware.js (custom middleware)"
echo " - mocks/start-json-server.sh (start script)"
echo " - mocks/data-generators/generate-data.js (data generator)"
;;
wiremock)
echo " - mocks/wiremock/mappings/ (request mappings)"
echo " - mocks/start-wiremock.sh (start script)"
;;
esac
cat << 'EOF'
- mocks/README.md (documentation)
**Next Steps:**
1. Review mock configuration
2. Customize mock responses
3. Start mock server
4. Test endpoints
5. Integrate with application/tests
**Quick Start:**
EOF
case $MOCK_TOOL in
prism|json-server|wiremock)
echo " ./mocks/start-*.sh"
echo " # or"
[ -f "package.json" ] && echo " npm run mock"
;;
msw)
echo " # See mocks/README.md for setup instructions"
;;
esac
cat << 'EOF'
**Documentation:**
See mocks/README.md for detailed usage instructions.
EOF
```
## Integration Points
This skill works well with:
- `/api-test-generate` - Generate tests that use mocks
- `/postman-convert` - Convert Postman to tests using mocks
- `/test` - Run tests against mock server
## Best Practices
Mock Server Usage:
- Use for local development and testing
- Keep mocks simple and maintainable
- Version control mock configurations
- Generate realistic data
- Document expected behavior
Maintenance:
- Update mocks when API changes
- Sync with OpenAPI specifications
- Clean up unused mocks
- Review mock data periodically
## Safety Guarantees
Protection Measures:
- Mock servers run locally only
- No production data in mocks
- Separate port from real API
- Clear documentation
- Easy to disable
Important: I will NEVER:
- Use real user data in mocks
- Expose mock endpoints publicly
- Include sensitive information
- Add AI attribution to mocks
## Troubleshooting
Issue: Port already in use
- Solution: Stop other services on that port
- Solution: Change port in configuration
- Solution: Use different mock tool
Issue: Responses don't match API
- Solution: Update mock configuration
- Solution: Regenerate from latest spec
- Solution: Review handler/mapping logic
Issue: Mock server slow
- Solution: Reduce mock data size
- Solution: Remove artificial delays
- Solution: Use simpler mock tool
## Credits
- Prism documentation from Stoplight
- MSW patterns from MSW documentation
- json-server from json-server documentation
- WireMock from WireMock documentation
- Mock server patterns from SKILLS_EXPANSION_PLAN.md Tier 3 API practices