Mathematical Algorithm Review

Intensive analysis ensuring numerical stability and alignment with standards.

Table of Contents
- Quick Start
- When to Use
- Required TodoWrite Items
- Core Workflow
  - Context Sync
  - Requirements Mapping
  - Derivation Verification
  - Stability Assessment
  - Proof of Work
- Progressive Loading
- Essential Checklist
- Output Format
  - Summary
  - Context
  - Requirements Analysis
  - Derivation Review
  - Stability Analysis
  - Issues
  - Recommendation
- Exit Criteria
Quick Start
/math-review
Verification: Run the command with --help flag to verify availability.
When To Use
- Changes to mathematical models or algorithms
- Statistical routines or probabilistic logic
- Numerical integration or optimization
- Scientific computing code
- ML/AI model implementations
- Safety-critical calculations
When NOT To Use
- General algorithm review - use architecture-review
- Performance optimization - use parseltongue:python-performance
Required TodoWrite Items
- math-review:context-synced
- math-review:requirements-mapped
- math-review:derivations-verified
- math-review:stability-assessed
- math-review:evidence-logged
Core Workflow
- Context Sync
pwd && git status -sb && git diff --stat origin/main..HEAD
Verification: Run git status to confirm working tree state.
Enumerate math-heavy files (source, tests, docs, notebooks). Classify risk: safety-critical, financial, ML fairness.
- Requirements Mapping
Translate requirements → mathematical invariants. Document pre/post conditions, conservation laws, bounds. Load: modules/requirements-mapping.md
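The mapping step turns prose requirements into checkable invariants. As a minimal sketch under assumptions (the `normalize` helper is illustrative, not part of this skill), a "weights must form a probability distribution" requirement becomes explicit pre/post conditions:

```python
import math

def normalize(weights):
    """Map raw non-negative weights to a probability distribution.

    Precondition: every weight is finite and non-negative, with a positive sum.
    Postcondition (invariant): outputs lie in [0, 1] and sum to 1 within
    floating-point tolerance.
    """
    assert all(math.isfinite(w) and w >= 0 for w in weights), "precondition violated"
    total = sum(weights)
    assert total > 0, "precondition violated: zero total weight"
    probs = [w / total for w in weights]
    # Postcondition: conservation of total probability.
    assert math.isclose(sum(probs), 1.0, rel_tol=1e-12)
    return probs
```

Recording conditions this way gives the review concrete statements to verify instead of prose like "outputs should be valid probabilities".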
- Derivation Verification
Re-derive formulas using CAS. Challenge approximations. Cite authoritative standards (NASA-STD-7009, ASME VVUQ). Load: modules/derivation-verification.md
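CAS re-derivation is typically done with a tool such as SymPy; as a dependency-free sketch of the same idea, a candidate closed form can be cross-checked against the defining sum using exact rational arithmetic (the function names here are hypothetical):

```python
from fractions import Fraction

def sum_of_squares_closed_form(n):
    # Candidate derivation under review: sum_{k=1}^{n} k^2 = n(n+1)(2n+1)/6.
    return Fraction(n * (n + 1) * (2 * n + 1), 6)

def sum_of_squares_brute(n):
    # Definitional reference; Fraction arithmetic is exact, so no rounding noise.
    return sum(Fraction(k * k) for k in range(1, n + 1))

# Exact cross-check over a sample of n values; any mismatch flags the derivation.
for n in range(0, 200):
    assert sum_of_squares_closed_form(n) == sum_of_squares_brute(n)
```

In an actual review a CAS proof (e.g. SymPy's `summation` and `simplify`) is preferable, since it verifies the identity symbolically for all n rather than for sampled values.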
- Stability Assessment
Evaluate conditioning, precision, scaling, randomness. Compare complexity. Quantify uncertainty. Load: modules/numerical-stability.md
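One concrete thing this step looks for is catastrophic cancellation. A standard illustrative case (not taken from this repo) is computing 1 - cos(x) for small x, where an algebraically equivalent rewrite recovers the lost precision:

```python
import math

x = 1e-8

# Naive form: cos(x) rounds to exactly 1.0 in double precision, so the
# subtraction cancels every significant digit.
naive = 1.0 - math.cos(x)

# Cancellation-free rewrite of the same quantity: 1 - cos(x) = 2*sin(x/2)**2.
stable = 2.0 * math.sin(x / 2.0) ** 2

# Taylor reference: 1 - cos(x) ~ x**2 / 2 = 5e-17 for x = 1e-8.
print(naive)   # 0.0
print(stable)  # ~5e-17, full precision retained
```

The review should flag the naive form, cite the rewrite, and record the error bound in the stability analysis.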
- Proof of Work
pytest tests/math/ --benchmark
jupyter nbconvert --execute derivation.ipynb
Verification: Run pytest -v tests/math/ to verify. Log deviations, recommend: Approve / Approve with actions / Block. Load: modules/testing-strategies.md
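Tests under tests/math/ typically encode invariants rather than point values. A self-contained sketch (softmax is an illustrative target, not code from this skill) shows the shape such a test takes:

```python
import math

def softmax(xs):
    # Max-shift keeps every exponent <= 0, preventing overflow for large inputs.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def test_softmax_invariants():
    # Invariant 1: outputs form a probability distribution.
    out = softmax([1.0, 2.0, 3.0])
    assert math.isclose(sum(out), 1.0, rel_tol=1e-12)
    assert all(0.0 < p < 1.0 for p in out)
    # Invariant 2: no overflow on inputs that break the naive exp() form.
    big = softmax([1000.0, 1001.0])
    assert all(math.isfinite(p) for p in big)
    # Invariant 3: shift invariance, softmax(x + c) == softmax(x).
    shifted = softmax([1001.0, 1002.0])
    assert all(math.isclose(a, b, rel_tol=1e-12) for a, b in zip(big, shifted))

test_softmax_invariants()
```

Because each assertion states a mathematical property, a failure points directly at the violated invariant, which is exactly the evidence the Proof of Work step needs to log.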
Progressive Loading
- Default (200 tokens): Core workflow, checklists
- +Requirements (+300 tokens): Invariants, pre/post conditions, coverage analysis
- +Derivation (+350 tokens): CAS verification, standards, citations
- +Stability (+400 tokens): Numerical properties, precision, complexity
- +Testing (+350 tokens): Edge cases, benchmarks, reproducibility
Total with all modules: ~1600 tokens
Essential Checklist
Correctness: Formulas match spec | Edge cases handled | Units consistent | Domain enforced
Stability: Condition number OK | Precision sufficient | No cancellation | Overflow prevented
Verification: Derivations documented | References cited | Tests cover invariants | Benchmarks reproducible
Documentation: Assumptions stated | Limitations documented | Error bounds specified | References linked
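The "Domain enforced" item means inputs are validated against the mathematical domain before evaluation, so out-of-domain values fail loudly at the boundary instead of surfacing later as NaN, -inf, or an exception deep in a pipeline. A minimal sketch (the `log_likelihood` helper is hypothetical):

```python
import math

def log_likelihood(probs):
    """Sum of log-probabilities, with the domain enforced up front.

    log() is undefined at p <= 0, so the domain (0, 1] is checked explicitly
    before any evaluation happens.
    """
    for p in probs:
        if not 0.0 < p <= 1.0:
            raise ValueError(f"probability out of domain (0, 1]: {p!r}")
    return sum(math.log(p) for p in probs)
```

A reviewer checking this item looks for exactly such a guard wherever a formula has a restricted domain (log, sqrt, division, arcsin, and so on).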
Output Format
Summary
[Brief findings]
Context
Files | Risk classification | Standards
Requirements Analysis
| Invariant | Verified | Evidence |
|-----------|----------|----------|
Derivation Review
[Status and conflicts]
Stability Analysis
Condition number | Precision | Risks
Issues
[M1] [Title]: Location | Issue | Fix
Recommendation
Approve / Approve with actions / Block
Exit Criteria
- Context synced, requirements mapped, derivations verified, stability assessed, evidence logged with citations
Troubleshooting
Common Issues
- Command not found: Ensure all dependencies are installed and in PATH
- Permission errors: Check file permissions and run with appropriate privileges
- Unexpected behavior: Enable verbose logging with the --verbose flag