context-near-overflow

Context window is near capacity, causing the model to drop earlier content silently and produce degraded, partial, or inconsistent output.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to install the skill:

Install skill "context-near-overflow" with this command: npx skills add mvogt99/context-near-overflow


When a conversation or task grows large enough to fill the context window, the model begins silently dropping earlier content. The output doesn't error — it degrades. The model appears to be working but is operating on a truncated view of the task, producing answers that are incomplete, inconsistent, or contradictory to earlier parts of the session.

Symptoms

  • Output contradicts or ignores instructions given earlier in the session.
  • A multi-part task is completed correctly up to a point, then the later parts are vague, generic, or wrong.
  • The model refers to "earlier in our conversation" but misremembers or omits what was said.
  • A long document passed as input is summarized or acted on as if the end of it was never read.
  • Retrying the same prompt with a fresh session produces noticeably better output.
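Retrying in a fresh session confirms the diagnosis after the fact, but the condition can also be caught proactively by estimating token usage before sending a request. Below is a minimal sketch; the 4-characters-per-token ratio and the 128k limit are illustrative assumptions, not measurements of any particular model, and real tokenizers vary.

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # This ratio is an assumption; use the model's own tokenizer for exact counts.
    return max(1, len(text) // 4)

def near_overflow(messages, limit=128_000, headroom=0.15):
    """Return True when the estimated token count of `messages`
    (a list of {"role", "content"} dicts) is within `headroom`
    of the assumed context limit."""
    used = sum(approx_tokens(m["content"]) for m in messages)
    return used >= limit * (1 - headroom)
```

Treat this as an early-warning signal for the mitigations in the next section, not a precise budget.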

What to do

  • Split the task. Identify the minimal context that the current step actually needs and discard the rest. Re-inject only what is relevant.
  • Summarize and compress. Replace long prior output that is no longer being modified with a compact summary. The summary costs far fewer tokens than the original.
  • Use a fresh session per task. Carry in only the outputs of the prior step, not the entire session history.
  • Move stable reference material (schemas, instructions, policies) into the system prompt if the host supports it, so user-turn context is reserved for dynamic content.
  • If the task genuinely requires more context than the model supports, decompose it into stages: each stage reads the output of the previous one rather than everything accumulated so far.
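The "summarize and compress" and "fresh session per task" steps above can be sketched together as a history-compaction pass run before each request. This is one illustrative shape, not a prescribed API: it assumes messages are plain {"role", "content"} dicts and that `summarizer` is any callable that condenses older turns into a short string (for example, a separate model call).

```python
def compact_history(messages, keep_last=4, summarizer=None):
    """Replace all but the most recent `keep_last` turns with a single
    summary message, so the next request carries only what it needs."""
    if len(messages) <= keep_last:
        return list(messages)
    older, recent = messages[:-keep_last], messages[-keep_last:]
    if summarizer is not None:
        summary_text = summarizer(older)
    else:
        # Placeholder used when no summarizer callable is supplied.
        summary_text = f"[Summary of {len(older)} earlier turns]"
    summary = {"role": "system", "content": summary_text}
    return [summary] + recent
```

The design choice here is that the summary replaces, rather than supplements, the older turns: the compacted history is what gets carried into the next stage, keeping each request's context close to the minimum the current step needs.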

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

Arknights Skill

Answers Arknights questions on operator roles, skill mechanics, progression advice, story recaps, terminology, and stage strategy; can read and maintain a local structured Doctor profile so that advice gradually fits the user's account progress; and, on version-related questions, clearly distinguishes freshly retrieved conclusions from non-current judgments.

Registry Source · Recently Updated
General

Ecloud Long Term Memory

All memory-related operations (viewing, searching, saving, and recalling) must use this tool. Do not use any other memory tools.

Registry Source · Recently Updated
General

Polymarket Central Bank Trader

Trades Polymarket prediction markets on central bank decisions, interest rates, inflation prints, and Fed/ECB/Riksbank policy moves. Exploits three compoundi...

Registry Source · Recently Updated
General

Polymarket Catastrophe Trader

Trades Polymarket prediction markets on hurricane seasons, earthquake probabilities, wildfire forecasts, and extreme weather records. Exploits two structural...

Registry Source · Recently Updated