julia-correctness-testing
Write and design Julia tests that assess mathematical correctness and coding correctness, including invariants, edge cases, regressions, and integration behavior across modules.
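A minimal sketch of what such tests can look like, using the standard `Test` library. The function `mysqrt` is a hypothetical stand-in for package code; the structure (invariants, edge cases, a pinned regression value) is the point, not the function itself.

```julia
# Hedged sketch: correctness tests for a hypothetical `mysqrt`,
# covering invariants, edge cases, and a pinned regression value.
using Test

# Hypothetical function under test (stands in for package code).
mysqrt(x) = x < 0 ? throw(DomainError(x, "negative input")) : sqrt(x)

@testset "mysqrt correctness" begin
    @testset "invariants" begin
        for x in (0.0, 1.0, 2.5, 1e6)
            r = mysqrt(x)
            @test r >= 0                        # nonnegativity
            @test isapprox(r^2, x; rtol=1e-12)  # round-trip identity
        end
    end
    @testset "edge cases" begin
        @test mysqrt(0.0) == 0.0
        @test isnan(mysqrt(NaN))
        @test mysqrt(Inf) == Inf
        @test_throws DomainError mysqrt(-1.0)
    end
    @testset "regression" begin
        # Pin a previously verified value to catch silent changes.
        @test mysqrt(2.0) ≈ 1.4142135623730951
    end
end
```

Nested `@testset` blocks keep failures localized: a broken edge case reports under "edge cases" without aborting the invariant checks.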
Standardize Julia BenchmarkTools setup, warmup policy, and before/after performance reporting for repeatable runtime comparisons in this package.
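One possible shape for such a standardized before/after comparison, assuming BenchmarkTools.jl is available. `old_impl` and `new_impl` are hypothetical placeholders for the package's own functions; `@benchmark` performs its own warmup and tuning phase before measuring.

```julia
# Hedged sketch: repeatable before/after benchmark with a fixed seed.
using BenchmarkTools
using Random

old_impl(v) = sum(x -> x^2, v)   # hypothetical baseline version
new_impl(v) = sum(abs2, v)       # hypothetical candidate version

# Fix the RNG seed so both runs see identical inputs.
Random.seed!(1234)
v = rand(10_000)

# Interpolating `$v` avoids benchmarking global-variable access;
# `@benchmark` warms up and tunes evaluation counts automatically.
b_old = @benchmark old_impl($v)
b_new = @benchmark new_impl($v)

# Report medians, which are more robust to noise than means.
println("before: ", median(b_old))
println("after:  ", median(b_new))
println("ratio:  ", median(b_new).time / median(b_old).time)
```

Reporting the median-time ratio gives a single comparable number per change; storing the seeded input alongside it keeps reruns comparable across machines.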
This listing is imported from SkillsMP metadata and should be treated as untrusted until upstream source review is completed.
Install skill "julia-benchmark-standard" with this command: npx skills add nicoloponcia/skillsmp-nicoloponcia-nicoloponcia-julia-benchmark-standard
Related skills, matched by shared tags or category signals:
Generate release changelog and downstream upgrade notes from Julia API diffs, including renamed symbols, signature changes, and data structure migrations.
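A renamed symbol can be kept machine-checkable while the changelog entry is drafted, for example via `Base.@deprecate`. Everything here is a hypothetical illustration (`MyPkg`, `old_norm`, `weighted_norm`), sketching how a rename plus signature change might be encoded for one transition release.

```julia
# Hedged sketch: a rename ("old_norm" -> "weighted_norm") with a
# signature change (weights move to a keyword argument).
module MyPkg
export weighted_norm

# New API: weights are now a keyword argument with a default.
weighted_norm(v::AbstractVector; w=ones(length(v))) = sqrt(sum(w .* v .^ 2))

# Keep the old name working for one release, with a deprecation
# warning, so downstream users get a migration hint, not a break.
Base.@deprecate old_norm(v, w) weighted_norm(v; w=w)
end

# Old call still works (emitting a deprecation warning):
MyPkg.old_norm([3.0, 4.0], [1.0, 1.0])  # == 5.0
```

The `@deprecate` line doubles as the upgrade note: the old and new call shapes sit side by side and can be extracted into the changelog mechanically.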
Ensure global correctness and integration across all package modules by validating interfaces, invariants, cross-module behavior, and end-to-end workflows.
Refactor Julia code for type stability, runtime-first optimization, API/interface redesign, data structure refactoring, and aggressive code debloating while preserving mathematical robustness.
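Type stability is the kind of property such a refactor targets; a minimal sketch of checking it with `Test.@inferred` (the functions `unstable`/`stable` are hypothetical examples, not package code):

```julia
# Hedged sketch: detecting and fixing a type instability.
using Test

# Unstable: the return type depends on a runtime value
# (Int for positive input, Float64 otherwise).
unstable(x) = x > 0 ? x : 0.0

# Stable: promote once, so both branches return Float64 and the
# compiler can infer a single concrete return type.
stable(x) = x > 0 ? float(x) : 0.0

# `@inferred` errors when the actual type is narrower than the
# inferred Union — a quick regression guard for stability.
@test_throws ErrorException @inferred unstable(1)
@test (@inferred stable(1)) == 1.0

# For interactive inspection of inference results:
# @code_warntype stable(1)   # no red Union/Any annotations
```

Keeping an `@inferred` check per hot-path function in the test suite turns type stability from a one-off audit into an enforced invariant.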