context-optimization
sickn33/antigravity-awesome-skills
This skill details advanced techniques for extending the effective context capacity of LLMs. It outlines four core strategies (Compaction, Observation Masking, KV-Cache Optimization, and Context Partitioning) that keep context limits from constraining task complexity. By managing token usage strategically, users can reduce operational cost, cut latency, and build robust, long-running agent systems that handle large documents or conversations.
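Of the four strategies, Compaction is the most self-contained to illustrate: when the running message history exceeds a token budget, the oldest turns are collapsed into a single summary message while the most recent turns are kept verbatim. The sketch below is a minimal, hypothetical example, not the skill's actual implementation; the function names, the whitespace-based token estimator, and the placeholder summarizer (a real agent would call an LLM to summarize) are all illustrative assumptions.

```python
def estimate_tokens(text: str) -> int:
    # Crude whitespace proxy for a real tokenizer; swap in an actual
    # tokenizer (e.g. the model's own) in a real system.
    return len(text.split())

def compact_history(messages: list[dict], max_tokens: int, keep_recent: int = 2) -> list[dict]:
    """Collapse all but the most recent turns into one summary message
    whenever the estimated history size exceeds max_tokens."""
    total = sum(estimate_tokens(m["content"]) for m in messages)
    if total <= max_tokens or len(messages) <= keep_recent:
        return messages  # within budget: nothing to compact
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Placeholder summarizer: truncate the joined old turns. A real
    # system would ask an LLM to produce a faithful summary here.
    summary = " ".join(m["content"] for m in old)[:200]
    return [{"role": "system", "content": f"[summary of earlier turns] {summary}"}] + recent
```

The key design choice is that recent turns stay verbatim (they carry the active task state) while older turns are traded for a cheap summary, bounding growth without discarding the thread entirely.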