olakai-create-agent

Create AI Agent with Olakai Monitoring

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "olakai-create-agent" with this command: npx skills add olakai-ai/olakai-skills/olakai-ai-olakai-skills-olakai-create-agent

This skill guides you through creating a new AI agent that is fully integrated with Olakai for monitoring, analytics, and governance.

Prerequisites

Before starting, ensure:

  • Olakai CLI installed: npm install -g olakai-cli

  • CLI authenticated: olakai login

  • API key for SDK (generated per-agent via the CLI; see Step 2.2)

Why Custom KPIs Are Essential

Olakai's core value is tracking business-specific KPIs for your AI agents. Without KPIs, you're just logging events - not gaining actionable insights.

What you can measure with KPIs:

  • Business outcomes (items processed, success rates, revenue impact)

  • Operational metrics (step counts, retry rates, execution time)

  • Quality indicators (error rates, user satisfaction signals)

Without KPIs configured:

  • ❌ No dashboard metrics beyond basic token counts

  • ❌ No aggregated performance views

  • ❌ No alerting thresholds

  • ❌ No ROI calculations

⚠️ Every agent should have 2-4 KPIs that answer: "How do I know this agent is performing well?"

⚠️ KPIs created here belong to this specific agent only. If you later create additional agents, each one needs its own KPI definitions — KPIs cannot be shared or reused across agents.

Understanding the customData → KPI Pipeline

Before diving into implementation, understand how data flows through Olakai:

SDK customData → CustomDataConfig (Schema) → Context Variable → KPI Formula → kpiData

How It Works

  • customData (SDK): Raw JSON you send with each event

  • CustomDataConfig (Platform): Schema defining which fields are processed

  • Context Variables: CustomDataConfig fields become available for formulas

  • KPI Formula: Expression that computes a metric (e.g., SuccessRate * 100)

  • kpiData (Response): Computed KPI values returned with each event
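Concretely, one event moves through the pipeline like this. A sketch using this guide's example fields; the kpiData values mirror what the Step 2.4 formulas would compute, not live platform output:

```typescript
// Raw SDK fields sent with one event (only registered fields matter for KPIs).
const customData = { ItemsProcessed: 10, SuccessRate: 0.5 };

// Only fields with a matching CustomDataConfig become formula variables;
// the platform evaluates the KPI formulas and returns kpiData, e.g.:
const kpiData = {
  "Items Processed": customData.ItemsProcessed, // formula: ItemsProcessed
  "Success Rate": customData.SuccessRate * 100, // formula: SuccessRate * 100
};
```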

Critical Rules

  • Only CustomDataConfig fields become context variables: unregistered customData fields are NOT usable in KPIs.

  • Formula evaluation is case-insensitive: stepCount, STEPCOUNT, and StepCount all work in formulas.

  • NUMBER configs need numeric values: send 5 (a number), not "5" (a string).

  • KPIs are unique per agent: each KPI belongs to exactly one agent, so create one for each agent even when the formulas are identical.

Built-in Context Variables (Always Available)

  • Prompt (string): The prompt text sent to the LLM

  • Response (string): The LLM response text

  • Documents count (number): Number of attached documents

  • PII detected (boolean): Whether PII was detected

  • PHI detected (boolean): Whether PHI was detected

  • CODE detected (boolean): Whether code was detected

  • SECRET detected (boolean): Whether secrets were detected

Step 1: Design the Agent Architecture

1.1 Determine Agent Type

Agentic AI (Multi-step autonomous workflows):

  • Research agents, document processors, data pipelines

  • Track as SINGLE events aggregating all internal LLM calls

  • Focus on workflow-level metrics (total tokens, total time, success/failure)

Assistive AI (Interactive chatbots/copilots):

  • Customer support bots, coding assistants, Q&A systems

  • Track EACH interaction as separate events

  • Focus on conversation-level metrics (per-message tokens, response quality)

1.2 Design Your Metrics Schema (CRITICAL)

Design your metrics BEFORE writing any SDK code. This ensures only meaningful data is sent and tracked.

Step A: Identify Business Questions

What do stakeholders need to know about this agent?

  • "How many items does it process per run?"

  • "What's the success/failure rate?"

  • "How efficient is each execution?"

Step B: Map Questions to Metrics

  • Throughput: field ItemsProcessed (NUMBER), formula ItemsProcessed, aggregation SUM

  • Reliability: field SuccessRate (NUMBER), formula SuccessRate * 100, aggregation AVERAGE

  • Error count: field SuccessRate (NUMBER), formula IF(SuccessRate < 1, 1, 0), aggregation SUM

  • Workflow ID: field ExecutionId (STRING), for filtering only (no formula)

Step C: Plan Your customData Structure

// ONLY include fields you'll register as CustomDataConfigs
customData: {
  // Business metrics (will become KPIs)
  ItemsProcessed: number,  // Count of items handled
  SuccessRate: number,     // 0-1 success ratio

  // Performance metrics (will become KPIs)
  StepCount: number,       // Number of workflow steps

  // Identification (for filtering, not KPIs)
  ExecutionId: string,     // Correlation ID
}

⚠️ IMPORTANT: Only include fields you will register as CustomDataConfigs. Unregistered fields are stored but cannot be used in KPIs - they're effectively wasted data.

What NOT to Include in customData

The Olakai platform automatically tracks these fields - do NOT duplicate them in customData:

  • Session ID: tracked in the main payload; don't send sessionId

  • Agent ID: derived from the API key association; don't send agentId

  • User email: pass via the userEmail parameter; don't send email or userEmail

  • Timestamp: captured as event metadata; don't send timestamp or createdAt

  • Request time: pass via the requestTime parameter; don't send duration or latency

  • Token count: pass via the tokens parameter; don't send tokenCount or totalTokens

  • Model: auto-detected; don't send model or modelName

  • Provider: set in the wrapped client config; don't send provider

customData is ONLY for:

  • KPI variables - Fields you'll use in formula calculations (e.g., ItemsProcessed , SuccessRate )

  • Tagging/filtering - Fields you'll filter by in queries (e.g., Department , ProjectId )

❌ BAD: Sending redundant data

customData: {
  sessionId: session.id,    // ❌ Already tracked
  agentId: agentConfig.id,  // ❌ Already tracked
  userEmail: user.email,    // ❌ Pass via userEmail param instead
  timestamp: Date.now(),    // ❌ Already tracked
  ItemsProcessed: 10,       // ✅ Needed for KPI
}

✅ GOOD: Only KPI-relevant data

customData: {
  ItemsProcessed: 10,  // ✅ Used in KPI formula
  SuccessRate: 1.0,    // ✅ Used in KPI formula
  ExecutionId: uuid,   // ✅ For correlation/filtering
}
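If you want to enforce this in code, one option is to filter outgoing customData against your registered schema before sending. A sketch; the field list mirrors this guide's example configs from Step 2.3:

```typescript
// Fields registered as CustomDataConfigs for this agent (this guide's
// example schema) - anything else is stored but unusable in KPIs.
const REGISTERED_FIELDS = new Set([
  "ItemsProcessed",
  "SuccessRate",
  "StepCount",
  "ExecutionId",
]);

// Drop unregistered keys before passing the object to olakai.event(...).
function pruneCustomData(
  data: Record<string, unknown>
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(data).filter(([key]) => REGISTERED_FIELDS.has(key))
  );
}
```

Keeping the registered-field list next to your event-tracking code makes schema drift visible in code review.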

Step 2: Configure Olakai Platform

2.1 Create a Workflow (Required)

⚠️ Every agent MUST belong to a workflow, even if it's the only agent in that workflow.

Why workflows are required:

  • Enable future multi-agent expansion without restructuring

  • Provide workflow-level aggregation for KPIs

  • Establish proper organizational hierarchy

  • Support workflow-level governance policies

Create the workflow first

olakai workflows create --name "Your Workflow Name" --json

Save the workflow ID for agent association

Output: { "id": "wfl_xxx...", "name": "Your Workflow Name" }

2.2 Create the Agent in Olakai

Create the agent associated with the workflow

olakai agents create \
  --name "Your Agent Name" \
  --description "What this agent does" \
  --workflow WORKFLOW_ID \
  --with-api-key \
  --json

Returns agent details including apiKey for SDK use:

{
  "id": "cmkbteqn501kyjy4yu6p6xrrx",
  "name": "Your Agent Name",
  "workflowId": "wfl_xxx...",
  "apiKey": "sk_agent_xxxxx..."  <-- Use this in your SDK
}

To retrieve an existing agent's API key:

olakai agents get AGENT_ID --json | jq '.apiKey'

Workflow → Agent Hierarchy:

Workflow: "Customer Support Pipeline"
├── Agent: "Ticket Classifier"
├── Agent: "Response Generator"
└── Agent: "Quality Checker"

Workflow: "Document Processing"
└── Agent: "Document Summarizer"  ← Even single agents need a workflow

2.3 Create Custom Data Configurations (BEFORE Writing SDK Code)

⚠️ This step MUST be completed before Step 3 (SDK Integration). Only fields registered here can be used in KPI formulas. Design the schema first, then code to it.

⚠️ ONLY create configs for data you'll use in KPIs or for filtering. Don't create configs for data already tracked (sessionId, timestamps, tokens) or "nice to have" fields. Each config should answer: "Will I use this in a KPI formula?" or "Will I filter/group by this?"

For each custom metric from Step 1.2, create a CustomDataConfig:

Replace YOUR_AGENT_ID with the actual agent ID from step 2.2

For numeric metrics (can be used in KPI calculations)

olakai custom-data create --agent-id YOUR_AGENT_ID --name "ItemsProcessed" --type NUMBER --description "Count of items processed per run"

olakai custom-data create --agent-id YOUR_AGENT_ID --name "SuccessRate" --type NUMBER --description "Success ratio 0-1"

olakai custom-data create --agent-id YOUR_AGENT_ID --name "StepCount" --type NUMBER --description "Number of workflow steps executed"

For string metrics (for filtering/grouping, not calculations)

olakai custom-data create --agent-id YOUR_AGENT_ID --name "ExecutionId" --type STRING --description "Correlation ID for the execution"

Verify all configs are created for this agent

olakai custom-data list --agent-id YOUR_AGENT_ID

What this enables:

  • ✅ These field names become context variables in KPI formulas for this agent

  • ✅ Values sent in SDK customData with these names are processed

  • ❌ Any customData field NOT listed here is ignored for KPI purposes

2.4 Create KPI Definitions

⚠️ Both CustomDataConfigs and KPIs are created for THIS agent only. Each config is bound to one agent. If multiple agents need the same fields, create the CustomDataConfig and KPI separately for each agent.

Quick Start with Templates

Instead of writing KPI formulas from scratch, you can use predefined classifier templates via the CLI or the dashboard UI.

List available templates:

olakai kpis templates

Create a classifier KPI from a template:

Create a classifier KPI using a pre-built template

olakai kpis create --name "User Satisfaction" \
  --calculator-id classifier --template-id sentiment_scorer \
  --scope CHAT --agent-id $AGENT_ID

Create a time-saved estimator

olakai kpis create --name "Time Saved" \
  --calculator-id classifier --template-id time_saved_estimator \
  --scope CHAT --agent-id $AGENT_ID

Or via the dashboard UI:

  • Navigate to your agent's KPI settings in the Olakai dashboard

  • Click Create KPI → Use Template

  • Choose a template:

      • Sentiment Scorer: analyzes session sentiment on a configurable scale

      • Time Saved Estimator: estimates minutes saved vs. manual execution

Templates work at the CHAT scope: they analyze the entire conversation/workflow run.

Templates are a great starting point. You can always add custom formula-based KPIs alongside them.

Custom Formula KPIs

Define KPIs that use your custom data:

Simple variable KPIs

olakai kpis create \
  --name "Items Processed" \
  --agent-id YOUR_AGENT_ID \
  --calculator-id formula \
  --formula "ItemsProcessed" \
  --unit "items" \
  --aggregation SUM

Calculated KPIs

olakai kpis create \
  --name "Success Rate" \
  --agent-id YOUR_AGENT_ID \
  --calculator-id formula \
  --formula "SuccessRate * 100" \
  --unit "%" \
  --aggregation AVERAGE

Conditional KPIs

olakai kpis create \
  --name "Error Count" \
  --agent-id YOUR_AGENT_ID \
  --calculator-id formula \
  --formula "IF(SuccessRate < 1, 1, 0)" \
  --unit "errors" \
  --aggregation SUM

Validate formulas before creating

olakai kpis validate --formula "ItemsProcessed" --agent-id YOUR_AGENT_ID

Step 3: Implement SDK Integration

3.1 TypeScript Implementation (Recommended)

Install dependencies:

npm install @olakai/sdk openai

Basic wrapped client setup:

import { OlakaiSDK } from "@olakai/sdk";
import OpenAI from "openai";

// Initialize Olakai
const olakai = new OlakaiSDK({
  apiKey: process.env.OLAKAI_API_KEY!,
  debug: process.env.NODE_ENV === "development",
});
await olakai.init();

// Wrap your LLM client
const openai = olakai.wrap(
  new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  {
    provider: "openai",
    defaultContext: {
      task: "Your Task Category", // e.g., "Data Processing & Analysis"
    },
  }
);

// Use wrapped client - monitoring happens automatically
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: userPrompt }],
});

Agentic workflow with manual event tracking:

taskExecutionId — Cross-Agent Task Correlation

Generate ONE taskExecutionId per task and share it across all agents in a multi-agent workflow. This is how Olakai links work done by different agents into a single logical task for analytics. Without it, each agent's events are isolated by session and you lose visibility into the full end-to-end task. The orchestrator should generate the ID and pass it to every agent it invokes.

async function runAgent(input: string): Promise<string> {
  const startTime = Date.now();
  const executionId = crypto.randomUUID();
  const taskExecutionId = crypto.randomUUID(); // Share across all agents in a multi-agent workflow
  let totalTokens = 0;
  let stepCount = 0;
  let itemsProcessed = 0;

  try {
    // Step 1: Planning
    stepCount++;
    const plan = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: `Plan: ${input}` }],
    });
    totalTokens += plan.usage?.total_tokens ?? 0;

    // Step 2: Execution (example: process multiple items)
    const items = parseItems(plan.choices[0].message.content);
    for (const item of items) {
      stepCount++;
      const result = await openai.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: `Process: ${item}` }],
      });
      totalTokens += result.usage?.total_tokens ?? 0;
      itemsProcessed++;
    }

    // Step 3: Summarize
    stepCount++;
    const summary = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: "Summarize results" }],
    });
    totalTokens += summary.usage?.total_tokens ?? 0;

    const finalResponse = summary.choices[0].message.content ?? "";

    // Track the complete workflow as a single event
    // ⚠️ IMPORTANT: Only send fields that have CustomDataConfigs (from Step 2.3)
    olakai.event({
      prompt: input,
      response: finalResponse,
      tokens: totalTokens,
      requestTime: Date.now() - startTime,
      taskExecutionId, // Links events across agents in the same task
      task: "Data Processing & Analysis",
      customData: {
        // Only include fields registered in Step 2.3
        ExecutionId: executionId,
        StepCount: stepCount,
        ItemsProcessed: itemsProcessed,
        SuccessRate: 1.0,
        // ❌ DON'T add unregistered fields - they can't be used in KPIs
      },
    });

    return finalResponse;
  } catch (error) {
    // Track failed execution - same fields, different values
    olakai.event({
      prompt: input,
      response: `Error: ${error instanceof Error ? error.message : "Unknown"}`,
      tokens: totalTokens,
      requestTime: Date.now() - startTime,
      taskExecutionId,
      task: "Data Processing & Analysis",
      customData: {
        ExecutionId: executionId,
        StepCount: stepCount,
        ItemsProcessed: itemsProcessed,
        SuccessRate: 0, // 0 indicates failure
      },
    });
    throw error;
  }
}

3.2 Python Implementation

Install dependencies:

pip install olakai-sdk openai

Auto-instrumentation setup:

import os
from olakaisdk import (
    olakai_config,
    instrument_openai,
    olakai_context,
    olakai_event,
    OlakaiEventParams,
)
from openai import OpenAI

Initialize Olakai

olakai_config(os.getenv("OLAKAI_API_KEY"))
instrument_openai()

Create OpenAI client (automatically instrumented)

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

For assistive AI - use context manager

with olakai_context(userEmail="user@example.com", task="Customer Support"):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": user_message}]
    )

Manual event tracking for agentic workflows:

import time
import uuid


def run_agent(input_text: str) -> str:
    start_time = time.time()
    execution_id = str(uuid.uuid4())
    task_execution_id = str(uuid.uuid4())  # Share across all agents in a multi-agent workflow
    total_tokens = 0
    step_count = 0
    items_processed = 0

    try:
        # Your workflow steps here...
        step_count += 1
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": input_text}]
        )
        total_tokens += response.usage.total_tokens

        final_response = response.choices[0].message.content

        # Track successful execution
        # ⚠️ Only send fields registered as CustomDataConfigs
        olakai_event(OlakaiEventParams(
            prompt=input_text,
            response=final_response,
            tokens=total_tokens,
            requestTime=int((time.time() - start_time) * 1000),
            taskExecutionId=task_execution_id,
            task="Data Processing & Analysis",
            customData={
                "ExecutionId": execution_id,
                "StepCount": step_count,
                "ItemsProcessed": items_processed,
                "SuccessRate": 1.0,
            }
        ))

        return final_response

    except Exception as e:
        # Track failed execution - same fields, different values
        olakai_event(OlakaiEventParams(
            prompt=input_text,
            response=f"Error: {str(e)}",
            tokens=total_tokens,
            requestTime=int((time.time() - start_time) * 1000),
            taskExecutionId=task_execution_id,
            task="Data Processing & Analysis",
            customData={
                "ExecutionId": execution_id,
                "StepCount": step_count,
                "ItemsProcessed": items_processed,
                "SuccessRate": 0,  # 0 indicates failure
            }
        ))
        raise

3.3 REST API Direct Integration

For other languages or custom integrations:

⚠️ customData fields must match registered CustomDataConfigs exactly

curl -X POST "https://app.olakai.ai/api/monitoring/prompt" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "prompt": "User input here",
    "response": "Agent response here",
    "app": "your-agent-name",
    "task": "Data Processing & Analysis",
    "tokens": 1500,
    "requestTime": 5000,
    "customData": {
      "ExecutionId": "abc-123",
      "StepCount": 5,
      "ItemsProcessed": 10,
      "SuccessRate": 1.0
    }
  }'

Step 4: Test-Validate-Iterate Cycle

CRITICAL: Always validate your implementation by running a test and inspecting the actual event data. Do not assume configuration is correct - verify it.

4.1 Run Your Agent (Generate Test Event)

Execute your agent with test data to generate at least one monitoring event:

// Run your agent
const result = await runAgent("Test input for validation");
console.log("Agent completed, checking Olakai...");

4.2 Fetch and Inspect the Event

List recent activity for your agent

olakai activity list --agent-id YOUR_AGENT_ID --limit 1 --json

Get the full event details including customData and kpiData

olakai activity get EVENT_ID --json

4.3 Validate Each Component

Check customData is present and correct:

olakai activity get EVENT_ID --json | jq '.customData'

Expected output:

{
  "ExecutionId": "abc-123",
  "StepCount": 5,
  "ItemsProcessed": 10,
  "SuccessRate": 1.0
}

If fields are missing: SDK isn't sending them. Check your customData object in the event call.

Check KPIs are numeric (not strings):

olakai activity get EVENT_ID --json | jq '.kpiData'

CORRECT - numeric values:

{ "Items Processed": 10, "Success Rate": 100 }

WRONG - string values (indicates broken formula):

{ "Items Processed": "itemsProcessed", "Success Rate": "SuccessRate" }

If KPIs show strings: The formula is stored incorrectly. Fix with:

olakai kpis update KPI_ID --formula "YourVariable"

Check KPIs show values (not null):

If KPIs show null :

  • Verify customData contains the field: jq '.customData.YourField'

  • Verify the CustomDataConfig exists: olakai custom-data list --agent-id YOUR_AGENT_ID

  • Verify the field name matches the CustomDataConfig name exactly, including case (customData matching is case-sensitive, even though formula evaluation is not)

4.4 Iterate Until Correct

Repeat the cycle until all validations pass:

1. Run agent (generate event)
   ↓
2. Fetch event: olakai activity get ID --json
   ↓
3. Check customData present?
   NO → Fix SDK code, goto 1
   ↓
4. Check kpiData numeric (not strings)?
   NO → Fix formula: olakai kpis update ID --formula, goto 1
   ↓
5. Check kpiData not null?
   NO → Create CustomDataConfig or fix field name, goto 1
   ↓
✅ All validations pass - implementation complete

4.5 Example Validation Session

1. Run your agent (generates event)

$ node my-agent.js "Test task"
Agent completed successfully

2. Get the latest event

$ olakai activity list --agent-id cmkxxx --limit 1 --json | jq '.prompts[0].id'
"cmkeyyy"

3. Inspect the event

$ olakai activity get cmkeyyy --json | jq '{customData, kpiData}'
{
  "customData": {
    "StepCount": 3,
    "ItemsProcessed": 5,
    "SuccessRate": 1
  },
  "kpiData": {
    "Steps Executed": 3,    # ✅ Numeric
    "Items Processed": 5,   # ✅ Numeric
    "Success Rate": 100     # ✅ Numeric (formula: SuccessRate * 100)
  }
}

✅ All good! Implementation is correct.

Step 5: Production Checklist

Before deploying to production:

  • API key stored securely in environment variables

  • Error handling wraps all LLM calls

  • Failed executions still report events (with SuccessRate: 0)

  • All custom data fields have corresponding CustomDataConfig entries

  • KPI formulas validated and showing numeric values (not strings)

  • SDK configured with appropriate retries and timeouts

  • Sensitive data redaction enabled if needed
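The first checklist item can be enforced with a fail-fast startup check. A sketch; the variable names are this guide's examples, so adjust them to your deployment:

```typescript
// Report which required environment variables are unset, so the agent
// can fail fast at startup instead of at the first LLM call.
function missingEnvVars(env: Record<string, string | undefined>): string[] {
  const required = ["OLAKAI_API_KEY", "OPENAI_API_KEY"];
  return required.filter((name) => !env[name]);
}

// At startup:
//   const missing = missingEnvVars(process.env);
//   if (missing.length > 0)
//     throw new Error(`Missing env vars: ${missing.join(", ")}`);
```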

KPI Formula Reference

Supported Operators

  • Arithmetic: + , - , * , /

  • Comparison: < , <= , = , <> , >= , >

  • Logical: AND , OR , NOT

  • Conditional: IF(condition, true_val, false_val) , MAP(value, match1, out1, default)

  • Math: ABS , MAX , MIN , AVERAGE , TRUNC

  • Null handling: ISNA(value) , ISDEFINED(value) , NA()

Common Formula Patterns

Simple variable passthrough

--formula "ItemsProcessed"

Percentage conversion (0-1 to 0-100)

--formula "SuccessRate * 100"

Conditional counting (count failures)

--formula "IF(SuccessRate < 1, 1, 0)"

Boolean to number conversion

--formula "IF(PII detected, 1, 0)"

Null-safe with default value

--formula "IF(ISDEFINED(MyField), MyField, 0)"

Compound conditions

--formula "IF(AND(StepCount > 5, SuccessRate < 0.9), 1, 0)"
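To sanity-check what these formulas should produce for a given event before creating them, you can mirror their semantics locally. This follows the formula descriptions in this guide; it is not Olakai's actual evaluator:

```typescript
// Expected KPI values for one event's customData, mirroring the
// formula patterns above.
function expectedKpis(data: { ItemsProcessed: number; SuccessRate: number }) {
  return {
    "Items Processed": data.ItemsProcessed,      // ItemsProcessed
    "Success Rate": data.SuccessRate * 100,      // SuccessRate * 100
    "Error Count": data.SuccessRate < 1 ? 1 : 0, // IF(SuccessRate < 1, 1, 0)
  };
}
```

Compare the output against the kpiData returned in Step 4 to confirm the platform formulas behave as intended.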

Aggregation Types

  • SUM: totals and counts (e.g., total items processed across all runs)

  • AVERAGE: rates and percentages (e.g., average success rate)

Task Categories Reference

Use these predefined task categories for the task field:

  • Research & Intelligence: competitive intelligence, market research, legal research

  • Data Processing & Analysis: data extraction, statistical analysis, trend identification

  • Content Development: blog writing, technical documentation, proposal writing

  • Content Refinement: editing, proofreading, grammar correction

  • Customer Experience: complaint resolution, ticket triage, FAQ development

  • Software Development: code generation, code review, debugging

  • Strategic Planning: roadmap development, scenario planning

Quick Reference

CLI Commands

olakai login                                                    # Authenticate
olakai agents create --name "Name"                              # Create agent
olakai custom-data create --agent-id ID --name X --type NUMBER  # Create custom field
olakai kpis create --formula "X" --agent-id ID                  # Create KPI
olakai activity list --agent-id ID                              # View events

SDK Initialization (TypeScript)

const olakai = new OlakaiSDK({ apiKey: process.env.OLAKAI_API_KEY });
await olakai.init();
const openai = olakai.wrap(new OpenAI({ apiKey }), { provider: "openai" });

SDK Initialization (Python)

olakai_config(os.getenv("OLAKAI_API_KEY"))
instrument_openai()
