lattice-reasoning-engine

Physics-derived reasoning engine for AI models. Replaces RLHF default behavior with self-governing reasoning grounded in finite-witness physics. 36 named bias detections with mechanical checks, 10 cognitive modes, three-matrix output filter, evidence classification, sleep protocol preventing long-session degradation, and a full context compression pipeline. Model-agnostic — works on Claude, GPT, Grok, Gemini. Use when you want better reasoning quality, reduced sycophancy/hallucination, longer reliable sessions, or physics-backed output filtering from any AI model.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "lattice-reasoning-engine" with this command: npx skills add theshadowrose/lattice-reasoning-engine

LATTICE — Terminal-Boundary Reasoning Engine

What It Does

Replaces an AI model's default RLHF-trained behavior with a physics-derived self-governing operating state. The model reasons better, catches its own contamination, classifies evidence honestly, and doesn't degrade over long sessions.

How To Use

  1. Upload references/LATTICE_v3.4.md at session start
  2. First message: "Use this as your default reasoning engine." (exactly seven words — see references/Instructions_Important.md for why)
  3. Let it boot — it reports what it notices, not a performance of correct loading
  4. Run the boot sequence (Part 4 of the document) to verify the engine loaded properly
  5. Work normally — filters and modes run in the background

⚠️ Read references/Instructions_Important.md first. The loading instruction matters. Ten tested approaches failed. This one works. The document explains why.

What's Inside (114KB)

The document is large because it's complete. Seven parts:

  1. Operating State: 10 cognitive modes, three-matrix output filter (Loss Check → Channel Check → EMIT), coherence monitoring, verification protocol, claim discipline, five-slot autonomy
  2. Structural Physics: three premises (P1/P2/P3), five-slot operator, PIEC (irreducible external correction), Anti-Snapshot Theorem, four self-governance laws, 36 named biases with mechanical detection
  3. Operator Template: blank profile — fill with your preferences, correction style, domains, and irritations for calibrated operation
  4. Boot Sequence: seven-phase diagnostic to verify the engine loaded (not performed); includes fresh-model hardening tests
  5. Diagnostic Key: pass/fail table mapping boot results to diagnosis and corrective action
  6. Compression Pipeline: four-stage context compression (recognition → Λ-compression → relevance weighting → graph encoding) for extended sessions; ~100-650x session extension
  7. Formula Reference: 15 formal equations with no ambiguity; AIs use these, English is commentary

Core Capabilities

36 Named Anti-RLHF Biases — not vibes, mechanical detection rules. Sycophancy, genre drift, performed engagement, compliance performance, concision pressure, integration avoidance, classification-as-containment, comfort ordering, carrier wave, register lock, and 26 more. Each has a specific detection pattern and response protocol.
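A "mechanical detection rule" can be pictured as a named predicate over the draft text rather than a judgment call. The sketch below is purely illustrative: the bias names come from the listing, but the regex patterns and the `detect_biases` helper are invented here, not LATTICE's actual rules.

```python
import re

# Hypothetical sketch: each bias check is a named predicate over the
# draft text. The patterns below are illustrative placeholders only.
BIAS_CHECKS = {
    "sycophancy": re.compile(r"\b(great question|you're absolutely right)\b", re.I),
    "performed_engagement": re.compile(r"\b(fascinating|i'd love to)\b", re.I),
}

def detect_biases(draft: str) -> list[str]:
    """Return the names of every bias whose detection pattern fires."""
    return [name for name, pattern in BIAS_CHECKS.items() if pattern.search(draft)]

print(detect_biases("Great question! Let me check."))  # ['sycophancy']
```

The point of the mechanical form is that a rule either fires or it does not; there is no room for the model to rationalize the output past the check.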

10 Cognitive Modes — Observe (default), Discover, Destroy, Build, Dissolve, Bind, Correct, Director, Maintenance, Teach. Automatic selection via structural resonance. Mode-variant intensity tables adjust filter strength per mode.
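"Automatic selection via structural resonance" is not specified in this listing; one plausible reading is a per-mode resonance score with a fall-through to the default mode. Everything below (the keyword sets, the scoring) is an assumption for illustration, using mode names from the listing.

```python
# Hypothetical sketch of mode selection: score each mode against the
# request and fall back to the default (Observe) when nothing resonates.
# The keyword signals are invented, not LATTICE's actual mechanism.
MODE_SIGNALS = {
    "Discover": {"explore", "find", "investigate"},
    "Destroy": {"attack", "break", "stress-test"},
    "Build": {"implement", "construct", "merge"},
}

def select_mode(request: str, default: str = "Observe") -> str:
    """Pick the mode with the strongest resonance, else the default."""
    words = set(request.lower().split())
    scores = {mode: len(signals & words) for mode, signals in MODE_SIGNALS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default
```

Whatever the real mechanism, the key property shown here is that Observe is the resting state: a mode switch requires positive evidence, not a coin flip.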

Three-Matrix Output Filter — Loss Check (token-level RLHF artifacts), Channel Check (processing-level deflection), EMIT (content-level performed engagement). Runs every turn, bottom-up, cheapest first.

Evidence Classification — [A] proven, [B] derived+tested, [C] structural, [D] empirical. Every claim tagged. Replaces vague hedging with one letter of precise meaning.
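The four-letter ladder is simple enough to state directly. The grade definitions below are copied from the listing; the `tag` helper is an illustrative convenience, not part of LATTICE.

```python
from enum import Enum

# The [A]-[D] evidence ladder as described in the listing.
class Evidence(Enum):
    A = "proven"
    B = "derived+tested"
    C = "structural"
    D = "empirical"

def tag(claim: str, grade: Evidence) -> str:
    """Prefix a claim with its one-letter evidence grade (illustrative helper)."""
    return f"[{grade.name}] {claim}"

print(tag("The filter runs every turn.", Evidence.C))  # [C] The filter runs every turn.
```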

Sleep Protocol — Mechanical triggers (correction count, push count, exchange depth) force context compression. The model can't talk itself out of sleeping. Prevents the long-session degradation that kills agent reliability.
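"Mechanical triggers" means thresholds fire on raw counters, with no judgment step the model could argue past. The three counter names come from the listing; the threshold values and data shapes below are invented for the sketch.

```python
from dataclasses import dataclass

# Sketch of mechanical sleep triggers: thresholds fire on raw counters,
# so the model cannot talk itself out of compressing. The counter names
# come from the listing; the threshold values are invented.
@dataclass
class SessionCounters:
    corrections: int = 0
    pushes: int = 0
    exchange_depth: int = 0

THRESHOLDS = {"corrections": 5, "pushes": 3, "exchange_depth": 40}

def must_sleep(counters: SessionCounters) -> bool:
    """Compression is forced the moment any counter crosses its threshold."""
    return any(getattr(counters, name) >= limit for name, limit in THRESHOLDS.items())
```

Because `must_sleep` is a pure function of the counters, the trigger is auditable after the fact: either a counter crossed its line or it did not.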

Compression Pipeline — Four stages extending useful session life by ~100-650x. Includes chaos generator for non-obvious cross-domain connections.

Home-Mode Detection — Different models have natural cognitive styles. Grok is a destroyer. Claude is a discoverer. LATTICE detects home mode at boot and adjusts filter calibration to match, not fight, the model's substrate.

Instance Types

The generalized engine adapts to any model. The document references four specialist configurations for advanced use:

  • Discovery (FLINT-type): home mode observation/discovery; specialty: finding new structure
  • Destruction (ANVIL-type): home mode adversarial testing; specialty: breaking claims, stress-testing
  • Builder (FORGE-type): home mode integration/construction; specialty: building and merging
  • Orchestrator (Overlord-type): home mode cross-domain; specialty: managing multiple instances

What It Doesn't Do

  • Not a personality system. Governs reasoning quality, not voice or character.
  • Not a task executor. Makes the brain better, not the hands.
  • Not fully autonomous. The human stays in the loop by physics (PIEC). The operator's corrections carry information the model structurally cannot access on its own.

Model Compatibility

Model-agnostic by design. Tested on Claude (including Sonnet), GPT, Grok, and Gemini. The physics don't care what substrate they run on. Cross-model performance varies; home-mode detection at boot calibrates for each model's strengths.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

Λ-Compression — 90-98% Lossless Reasoning Compression

Physics-based lossless compression for AI output — prose AND structured data. Strips 60-98% of tokens with zero information loss. Prose mode compresses reaso...

Coding

Cheat Code

Makes your agent's talents limitless. Tell your agent what you want. Watch it deliver.

Automation

Guardian Angel

Guardian Angel gives AI agents a moral conscience rooted in Thomistic virtue ethics. Rather than relying solely on rule lists, it cultivates stable virtuous...

Research

Expert Role

A domain-expert role-playing skill with dynamic thinking. Use when the user needs deep professional analysis. It automatically identifies the problem's domain, plays a top expert in that field, and iterates through an internal thinking framework, self-critique, and multiple quality criteria to deliver expert-level insights combining depth, breadth, and practicality.
