ln-812-optimization-researcher

Competitive benchmarks, solution research, and hypothesis generation for identified bottlenecks

Paths: File paths (shared/, references/, ../ln-*) are relative to skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for repo root.


Type: L3 Worker | Category: 8XX Optimization

Researches competitive benchmarks, industry standards, and solution approaches for bottlenecks identified by the profiler. Generates prioritized hypotheses for the executor.


Overview

| Aspect | Details |
|---|---|
| Input | Performance map from the profiler (real measurements: baseline metrics, per-step timing, bottleneck classification, optimization hints) |
| Output | Industry benchmarks, solution candidates, prioritized hypotheses (H1..H7) |
| Pattern | Research-first: competitors → industry → local codebase → solutions → hypotheses |

Workflow

Phases: Competitive Analysis → Bottleneck-Specific Research → Local Codebase Check → Hypothesis Generation → Research Report


Phase 1: Competitive Analysis

MANDATORY READ: Load shared/references/research_tool_fallback.md for MCP tool priority chain.

Goal

Establish what "good" looks like for this type of operation, and define a target metric if the user did not provide one.

Research Queries

| Goal | Query Template | Tool |
|---|---|---|
| Industry benchmark | "{domain} API response time benchmark {year}" | WebSearch |
| Competitor performance | "{competitor_type} {operation} latency" | WebSearch |
| Standard expectations | "acceptable response time for {operation_type}" | WebSearch |
| Framework-specific guidance | "{framework} {operation} performance best practices" | Context7 / Ref |
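The query templates above are plain placeholder strings. As a minimal sketch, filling one in for a hypothetical translation-API domain might look like this ("translation" and 2025 are assumed values, not part of the skill definition):

```python
# Illustrative instantiation of a query template from the table above.
template = "{domain} API response time benchmark {year}"
query = template.format(domain="translation", year=2025)
print(query)  # translation API response time benchmark 2025
```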

Output

| Field | Description |
|---|---|
| industry_benchmark | Expected performance range for this operation type |
| competitor_approaches | How top systems solve this (2-3 examples) |
| recommended_target | Suggested target metric (if user did not specify) |
| target_metrics | Per-metric quantitative targets (see below) |
| sources | URLs with dates for all findings |

Target Metric Research

For each metric present in performance_map.baseline, research a quantitative target:

| Metric | Query Template | Tool |
|---|---|---|
| wall_time_ms | "{domain} API response time benchmark {year}" | WebSearch |
| cpu_time_ms | "{framework} handler CPU time benchmark" | WebSearch |
| memory_peak_mb | "{domain} API memory usage benchmark {year}" | WebSearch |
| http_round_trips | "{domain} API call count optimization best practice" | WebSearch |
| io_bytes | "{domain} file processing throughput benchmark" | WebSearch |

Output format:

```yaml
target_metrics:
  wall_time_ms:
    value: 500
    source: "industry benchmark: translation APIs p95 200-500ms"
    confidence: HIGH
  http_round_trips:
    value: 2
    source: "best practice: batch API reduces N calls to 1-2"
    confidence: HIGH
  memory_peak_mb:
    value: 128
    source: "similar workload: 64-128MB typical"
    confidence: MEDIUM
```

Confidence levels: HIGH = benchmark found with source, MEDIUM = derived from best practices, LOW = estimated from general guidelines. Only include metrics present in the profiler baseline.
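The "only include metrics present in the profiler baseline" rule can be sketched as a small filter. This is a hedged sketch: the function name, dict shapes, and sample values are illustrative, not part of the skill's actual interface.

```python
def filter_targets(baseline: dict, researched: dict) -> dict:
    """Keep only targets whose metric the profiler actually measured."""
    return {metric: target
            for metric, target in researched.items()
            if metric in baseline}

# Hypothetical profiler baseline and researched targets:
baseline = {"wall_time_ms": 4200, "http_round_trips": 9}
researched = {
    "wall_time_ms": {"value": 500, "confidence": "HIGH"},
    "http_round_trips": {"value": 2, "confidence": "HIGH"},
    "memory_peak_mb": {"value": 128, "confidence": "MEDIUM"},  # not in baseline
}
targets = filter_targets(baseline, researched)
# memory_peak_mb is dropped: the profiler never measured it
```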


Phase 2: Bottleneck-Specific Research

MANDATORY READ: Load research_query_templates.md for per-type query templates.

Research Strategy

Based on the primary bottleneck type from the profiler:

| Bottleneck Type | Research Focus |
|---|---|
| Architecture | Batching, pipelining, parallelism, DataLoader pattern |
| I/O-Network | Connection pooling, HTTP/2, multiplexing, caching |
| I/O-DB | Query optimization, indexes, eager loading, bulk operations |
| I/O-File | Streaming, async I/O, memory-mapped files |
| CPU | Algorithm alternatives, vectorization, caching computation, OSS replacement |
| Cache | Eviction policies, cache key design, invalidation strategies, tiered caching, warm-up |
| External | Caching layer, circuit breaker, fallback strategies, provider alternatives |

Research Protocol

```
FOR each top bottleneck (max 3):
  1. Select query templates from research_query_templates.md
  2. Execute research chain: Context7 → Ref → WebSearch (per research_tool_fallback.md)
  3. Collect solution approaches with expected impact
  4. Note technology prerequisites (libraries, infrastructure)
```

Solution Evaluation

| Field | Description |
|---|---|
| solution | Name/description of the approach |
| source | Where found (URL, docs section) |
| expected_impact | Estimated improvement (e.g., "9x reduction for N=9") |
| complexity | Low / Medium / High |
| prerequisites | What's needed (library, infrastructure, API support) |
| feasibility | HIGH / MEDIUM / LOW — based on prerequisites availability |

Phase 3: Local Codebase Check

Before recommending external solutions, check if the codebase already has the capability:

| Check | How |
|---|---|
| Batch/bulk methods on client classes | Grep for `batch`, `bulk`, `multi` in client/service classes |
| Cache infrastructure | Grep for `redis`, `memcache`, `cache`, `@cached`, `lru_cache` |
| Connection pool configuration | Grep for `pool_size`, `max_connections`, `pool` in config |
| Async variants | Grep for `async_`, `aio`, `Async` prefix/suffix on methods |
| Unused configuration | Read client/service config for `batch_size`, `max_connections` params |
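A minimal sketch of the grep checks above, run against a throwaway fixture (paths and the fixture class are illustrative; a real run would target the project's client/service directories):

```shell
# Create a tiny fixture standing in for a real codebase (illustrative).
mkdir -p /tmp/ln812_demo
cat > /tmp/ln812_demo/alignment_client.py <<'EOF'
class AlignmentClient:
    def batch_align(self, pairs):  # batch API exists but callers use align()
        pass
EOF

# Check: batch/bulk methods on client classes
grep -rniE 'batch|bulk|multi' /tmp/ln812_demo

# Check: cache infrastructure (no hits here -> grep exits 1, hence || true)
grep -rniE 'redis|memcache|lru_cache' /tmp/ln812_demo || true
```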

Impact on Feasibility

| Finding | Effect |
|---|---|
| Batch API exists, not used | Feasibility = HIGH, Complexity = LOW |
| Cache infra exists, not configured for this path | Feasibility = HIGH, Complexity = LOW-MEDIUM |
| No existing capability, requires new library | Feasibility = MEDIUM, Complexity = MEDIUM-HIGH |
| Requires infrastructure change | Feasibility = LOW, Complexity = HIGH |

Phase 4: Generate Hypotheses (3-7)

Hypothesis Sources (Priority Order)

| Priority | Source |
|---|---|
| 1 | Local codebase check (unused existing capabilities — lowest risk) |
| 2 | Research findings (proven patterns from industry) |
| 3 | Optimization hints from profiler |

Hypothesis Format

| Field | Description |
|---|---|
| id | H1, H2, ... H7 |
| description | What to change and how |
| bottleneck_addressed | Which bottleneck from the time map (step reference) |
| expected_impact | Estimated improvement % or multiplier |
| complexity | Low / Medium / High |
| risk | Low / Medium / High |
| files_to_modify | List of files that need changes |
| dependencies | Other hypotheses this depends on (e.g., "H2 requires H1") |
| conflicts_with | Hypotheses that become unnecessary if this one works |
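A filled-in hypothesis might look like the following. All values are illustrative (the batch-API scenario echoes the conflict-detection example in the Ordering Rules); the file path is hypothetical.

```yaml
- id: H1
  description: "Call AlignmentClient.batch_align() once instead of N per-pair calls"
  bottleneck_addressed: "step 3: per-segment alignment (N+1 HTTP calls)"
  expected_impact: "9x fewer round trips for N=9"
  complexity: Low
  risk: Low
  files_to_modify: ["services/alignment_service.py"]   # hypothetical path
  dependencies: []
  conflicts_with: ["H3"]   # parallel-calls hypothesis becomes unnecessary
```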

Anti-Bias Checks (mandatory before finalizing)

| Bias | Check | Example |
|---|---|---|
| Removal bias | For each "remove X" hypothesis, generate a paired "optimize X" alternative | "remove alignment" → also "optimize alignment config" |
| Industry bias | "Industry doesn't use X" ≠ "X not needed for us". Check: does OUR product need it? | "CAT tools skip alignment" but our users need it for quality |
| Premature conclusion | "X is slow" ≠ "X is wrong". Slow may mean a bad implementation, not a wrong approach | 5.9s alignment → maybe wrong algorithm, not wrong feature |

Rule: Every "remove feature" hypothesis MUST have a paired "optimize feature" hypothesis.
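The pairing rule can be checked mechanically. The sketch below assumes hypotheses are dicts with the id and description fields from the format table, and uses a naive prefix heuristic; it is illustrative, not prescriptive.

```python
def unpaired_removals(hypotheses: list[dict]) -> list[str]:
    """Return ids of 'remove X' hypotheses lacking an 'optimize X' pair."""
    descriptions = [h["description"].lower() for h in hypotheses]
    missing = []
    for h in hypotheses:
        desc = h["description"].lower()
        if desc.startswith("remove "):
            target = desc.removeprefix("remove ")
            if not any(d.startswith("optimize ") and target in d
                       for d in descriptions):
                missing.append(h["id"])
    return missing

# Hypothetical hypothesis set:
hyps = [
    {"id": "H4", "description": "Remove alignment step"},
    {"id": "H5", "description": "Optimize alignment step (faster matching method)"},
    {"id": "H6", "description": "Remove retry wrapper"},
]
flagged = unpaired_removals(hyps)
# H6 is flagged: it has no paired "optimize retry wrapper" hypothesis
```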

Fix Hierarchy (mandatory ordering)

Order hypotheses by fix level. Higher levels are ALWAYS tried first:

| Level | Example | Priority |
|---|---|---|
| 1. Configuration | matching_methods="i", pool_size=10 | Highest — try first |
| 2. Infrastructure | Add cache layer, scale service | |
| 3. Framework | Use framework feature (batch API, built-in cache) | |
| 4. Application code | Refactor algorithm, add optimization | |
| 5. Feature removal | Remove functionality | Lowest — last resort only |

Red flag: If the highest-priority hypothesis is at level 4-5, re-examine: was a level 1-3 solution missed? Apply 5 Whys from the root cause to verify.

Ordering Rules

Sort by: fix_level ASC, expected_impact DESC, complexity ASC, risk ASC.

Conflict detection: If H1 (batch API) solves the N+1 problem, H3 (parallel calls) becomes unnecessary. Mark H3.conflicts_with = ["H1"].

Dependency detection: If H2 (cache prefetch) builds on H1 (batch API), mark H2.dependencies = ["H1"].
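The sort key above can be sketched directly. This is a hedged sketch: a numeric fix_level field, a numeric expected_impact, and the Low/Medium/High rank mapping are assumptions; the skill itself only names the sort directions.

```python
# Map categorical complexity/risk to sortable ranks (assumed convention).
RANK = {"Low": 0, "Medium": 1, "High": 2}

def order_hypotheses(hyps):
    """Sort by fix_level ASC, expected_impact DESC, complexity ASC, risk ASC."""
    return sorted(hyps, key=lambda h: (
        h["fix_level"],            # ASC: config fixes before code changes
        -h["expected_impact"],     # DESC: biggest estimated win first
        RANK[h["complexity"]],     # ASC
        RANK[h["risk"]],           # ASC
    ))

# Hypothetical hypotheses:
hyps = [
    {"id": "H2", "fix_level": 4, "expected_impact": 9.0,
     "complexity": "High", "risk": "Medium"},
    {"id": "H1", "fix_level": 1, "expected_impact": 2.0,
     "complexity": "Low", "risk": "Low"},
    {"id": "H3", "fix_level": 1, "expected_impact": 3.0,
     "complexity": "Medium", "risk": "Low"},
]
ordered = [h["id"] for h in order_hypotheses(hyps)]  # ['H3', 'H1', 'H2']
```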


Phase 5: Research Report

Report Structure

```yaml
research_result:
  industry_benchmark:
    metric: "response_time"
    expected_range: "200-500ms"
    source: "..."
  recommended_target: 500          # ms — alias for target_metrics.wall_time_ms.value
  target_metrics:                  # per-metric quantitative targets
    wall_time_ms: { value: 500, source: "...", confidence: HIGH }
    http_round_trips: { value: 2, source: "...", confidence: HIGH }
    memory_peak_mb: { value: 128, source: "...", confidence: MEDIUM }
  competitor_analysis:
    - name, approach, metric, source
  solution_candidates:
    - solution, source, expected_impact, complexity, feasibility
  hypotheses:
    - id, description, bottleneck_addressed, expected_impact, complexity, risk,
      files_to_modify, dependencies, conflicts_with
  local_codebase_findings:
    - "Batch API exists: AlignmentClient.batch_align() — accepts up to 50 pairs"
    - "Redis configured but not used for alignment cache"
  research_sources:
    - url, date, relevance
```

Plan Mode

Read-only worker — all phases use MCP research tools (Ref, Context7, WebSearch) and code analysis only. Executes normally in Plan Mode via Skill().


Error Handling

| Error | Recovery |
|---|---|
| All research tools fail | Use built-in knowledge with disclaimer: "no external sources verified" |
| No competitive benchmarks found | Skip industry benchmark, note "no baseline found — using general guidelines" |
| Cannot generate hypotheses | Return empty list — coordinator decides next step |
| Local codebase check finds nothing | Proceed with external research results only |

References

  • research_query_templates.md — query templates per bottleneck type
  • shared/references/research_tool_fallback.md — MCP research tool priority chain

Definition of Done

  • Competitive analysis completed (industry benchmarks, competitor approaches)
  • Target metrics researched per baseline metric (value, source, confidence)
  • Target metric defined (user-provided or derived from research)
  • Bottleneck-specific solutions researched via MCP chain
  • Local codebase checked for existing unused capabilities
  • 3-7 hypotheses generated, ordered by expected impact
  • Dependencies and conflicts between hypotheses identified
  • Research report returned to coordinator with sources

Version: 2.0.0 Last Updated: 2026-03-14
