"Context rot" is a catchy phrase, but the problem was identified more than two years ago under the name attention decay.
Lost in the Middle: How Language Models Use Long Contexts (2307.03172)
I spotted the same problem in coding tasks and documented it in my book (https://www.amazon.com/dp/9999331130).
Why did this problem become hot again? Because many of us assumed it had been solved by long-context models, which is not true.
We were misled by benchmarks. Most long-context benchmarks are built around the QA scenario, i.e. "finding a needle in a haystack". But in agentic scenarios, the model needs to find EVERYTHING in the haystack, and it simply can't allocate enough attention to that challenge.
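A minimal sketch of that distinction, with hypothetical filler text and fact names chosen for illustration: a QA-style benchmark plants one needle and asks one question, while an agentic task plants many needles that must ALL be recovered (e.g. tracing a dependency chain).

```python
import random

# Illustrative filler sentence and synthetic "needle" facts (hypothetical names).
FILLER = "The sky was clear over the harbor that morning."
facts = [f"Fact #{i}: module_{i} depends on module_{i + 1}." for i in range(10)]

def build_context(needles, haystack_len=200):
    """Scatter the given facts at random positions among filler sentences."""
    sentences = [FILLER] * haystack_len
    for fact, pos in zip(needles, random.sample(range(haystack_len), len(needles))):
        sentences[pos] = fact
    return " ".join(sentences)

# QA-style benchmark: one needle, one question -- locating a single span suffices.
qa_context = build_context(needles=[facts[0]])
qa_question = "What does module_0 depend on?"

# Agentic scenario: every planted fact matters, so the model must attend to
# all of them at once rather than retrieve just one.
agent_context = build_context(needles=facts)
agent_task = "List the full dependency chain from module_0 to module_10."
```

A model can ace the first setup while failing the second, which is why high scores on single-needle retrieval benchmarks told us little about agentic long-context use.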