context-optimization
sickn33/antigravity-awesome-skills
Explains how to stretch limited LLM context windows by compressing, masking, caching, and partitioning information so agents can process longer conversations or documents without enlarging the model; covers when to trigger each optimization and concrete patterns for summaries, masking, KV-cache reuse, and context-budget tracking.