marl-middleware

Multi-stage, multi-agent reasoning middleware that reduces LLM hallucination by 70%+. 9 specialized emergence engines for invention, creative work, pharma, genomics, chemistry, ecology, law, recipes, and document generation.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "marl-middleware" with this command: npx skills add Cutechicken99/marl-middleware

MARL Enhance — Brain Upgrade for Your Agent

The third approach after fine-tuning and RAG: MARL restructures how LLMs reason at runtime — not their weights. One line to integrate, 70%+ hallucination reduction, and 9 domain-specific emergence engines.

PyPI · GitHub · Demo · FINAL Bench

What It Does

Before MARL: Your agent calls the LLM once → gets an answer (might hallucinate).

After MARL: Your agent calls MARL → MARL runs a multi-stage expert pipeline → hypothesis, solving, auditing, adversarial verification, synthesis → returns a deeply verified answer.

Your Agent → MARL → Multi-stage Pipeline → Any LLM → Verified Answer
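The stages above can be sketched as a chain of LLM calls. This is an illustrative approximation only — the stage names come from the description, while the prompts and the `llm` callable are hypothetical, not MARL's actual internals:

```python
# Illustrative sketch of a multi-stage verification pipeline.
# Stage names follow the description above; prompts and the
# `llm` callable are hypothetical, not MARL's real implementation.

def marl_pipeline(question, llm):
    hypothesis = llm(f"Propose a hypothesis for: {question}")
    solution = llm(f"Solve, building on this hypothesis: {hypothesis}")
    audit = llm(f"Audit this solution for factual errors: {solution}")
    challenge = llm(f"Adversarially attack this audited solution: {audit}")
    return llm(f"Synthesize a final, verified answer from: {challenge}")

if __name__ == "__main__":
    trace = []

    def echo_llm(prompt):
        # Stub: record the prompt and echo back its payload.
        trace.append(prompt)
        return prompt.split(": ", 1)[1]

    marl_pipeline("Why is the sky blue?", echo_llm)
    print(len(trace))  # one LLM call per stage: 5
```

The point of the stub is only to show the call structure: a single user request fans out into five chained model calls before an answer is returned.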

Results: 70%+ hallucination reduction · 94.8% of that improvement comes from self-correction · Verified on FINAL Bench (a HuggingFace Global Top 5 dataset).

Setup

Option A: Docker (Recommended — all platforms)

docker run -p 8080:8080 vidraft/marl

Option B: pip (Linux x86_64)

pip install marl-middleware
python -m marl serve --port 8080

Option C: HuggingFace Space (No install — try instantly)

Use https://huggingface.co/spaces/VIDraft/MARL directly in your browser.

Connect to OpenClaw

In your config.json, point the LLM endpoint at the local MARL server:

{
  "llm": {
    "baseURL": "http://localhost:8080/v1",
    "model": "gpt-5.4"
  }
}

That's it. Every LLM call now passes through MARL's multi-stage reasoning pipeline.
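If you generate the config programmatically, a round-trip through a JSON parser is a cheap sanity check. A minimal sketch, assuming OpenClaw reads a plain JSON file with the llm block shown above (adjust the path and schema if your install differs):

```python
import json

# Build the OpenClaw llm block pointing at the local MARL proxy.
# Schema mirrors the snippet above; adjust if your install differs.
config = {
    "llm": {
        "baseURL": "http://localhost:8080/v1",
        "model": "gpt-5.4",
    }
}

text = json.dumps(config, indent=2)
print(text)

# Round-trip to confirm the JSON is well formed and targets a /v1 route.
assert json.loads(text)["llm"]["baseURL"].endswith("/v1")
```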

9 Emergence Modes

Switch modes by appending ::mode to any model name:

model value          | Engine        | What it does
---------------------|---------------|---------------------------------------------------------------------------
gpt-5.4              | 🔬 Insight    | Default — fact-check, strategy, deep analysis
gpt-5.4::invent      | 🔧 Invent     | Patent-level invention via TRIZ + bio-inspired + contradiction resolution
gpt-5.4::create      | ✨ Create     | Cliché inversion, paradox, genre fusion, sensory collision
gpt-5.4::recipe      | 🍳 Recipe     | Culinary emergence with taste chemistry validation
gpt-5.4::pharma      | 💊 Pharma     | Drug repositioning, mechanism crossing, multi-target design
gpt-5.4::genomics    | 🧬 Genomics   | Pathway crosstalk, synthetic lethality, phenotype bridging
gpt-5.4::chemistry   | 🧪 Chemistry  | Contradictory properties, biomimicry, waste-to-value
gpt-5.4::ecology     | 🌍 Ecology    | Conservation transfer, threat inversion, service stacking
gpt-5.4::law         | ⚖️ Law        | Cross-jurisdiction transplant, tech-law collision resolution
gpt-5.4::document    | 📄 Document   | Metacognitive report and document generation

Replace gpt-5.4 with any model — Claude, Gemini, DeepSeek, Llama, etc.
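Since the emergence mode is just a suffix on the model string, client code can split it off before routing. A hypothetical helper — the function name is ours, not part of MARL's API, and the "insight" default is taken from the table above:

```python
def split_model(value: str) -> tuple[str, str]:
    """Split 'model::mode' into (model, mode); mode defaults to 'insight'."""
    base, sep, mode = value.partition("::")
    return base, (mode if sep else "insight")

print(split_model("gpt-5.4::pharma"))  # ('gpt-5.4', 'pharma')
print(split_model("claude-sonnet"))    # ('claude-sonnet', 'insight')
```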

Example: Switch to Pharma mode

{
  "llm": {
    "baseURL": "http://localhost:8080/v1",
    "model": "gpt-5.4::pharma"
  }
}

Then chat: "Find drug repositioning candidates for Alzheimer's using immune checkpoint mechanisms"

Example: Creative ideation

{
  "llm": {
    "model": "claude-sonnet::create"
  }
}

Then chat: "Generate 10 movie loglines that have never existed before"

How It Works

┌─ OpenClaw ────────────────────────────────────┐
│  "Analyze this complex question"               │
└──────────────┬─────────────────────────────────┘
               │ HTTP (OpenAI API format)
               ▼
┌─ MARL Middleware ─────────────────────────────┐
│  Multi-stage Multi-agent Reasoning Pipeline    │
│  9 Emergence Engines · 70%+ Hallucination ↓   │
└──────────────┬─────────────────────────────────┘
               │ API calls to your chosen LLM
               ▼
┌─ Any LLM ─────────────────────────────────────┐
│  GPT-5.4 · Claude · Gemini · DeepSeek · Llama │
└────────────────────────────────────────────────┘

MARL works with every LLM that supports OpenAI API format. It runs locally on your machine — your data never leaves your infrastructure.
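Because the proxy speaks the OpenAI chat-completions format, any stdlib HTTP client can talk to it. A minimal sketch that builds — but deliberately does not send — the request, assuming the standard /v1/chat/completions route (the helper name is ours, for illustration):

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-format chat request aimed at the MARL proxy.

    Sending is left to the caller (urllib.request.urlopen(req)),
    so this sketch runs even with no server listening.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "http://localhost:8080/v1",
    "gpt-5.4::pharma",
    "Find drug repositioning candidates for Alzheimer's",
)
print(req.full_url)  # http://localhost:8080/v1/chat/completions
```

Note the mode suffix travels inside the ordinary model field, which is why no client-side changes beyond the model string are needed.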

Works With Any LLM

  • OpenAI (GPT-5.4, GPT-5.2, GPT-4.1, o4-mini)
  • Anthropic (Claude Opus 4.6, Sonnet 4.6)
  • Google (Gemini 3.1 Pro, Gemini 3 Flash)
  • DeepSeek (V3, R1, R2)
  • xAI (Grok-4, Grok-3)
  • Groq (gpt-oss-120b, Llama 4 — free)
  • Ollama (any local model)
  • Any OpenAI-compatible endpoint

About

Built by VIDraft (Seoul AI Hub). MARL's core engine ships as compiled binaries to protect proprietary technology; the interface code is open for integration.

Apache 2.0 · Contact: arxivgpt@gmail.com

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
