Efficient Streaming Language Models with Attention Sinks (Paper Explained)
#llm #ai #chatgpt
How does one run inference for a generative autoregressive language model that has been trained with a fixed context size? StreamingLLM combines the efficiency of windowed attention with the stability of full attention, avoiding the usual drop in performance by using attention sinks - an interesting phenomenon where the token at position 0 acts as an absorber of "extra" attention.
OUTLINE:
0:00 - Introduction
1:20 - What is the problem?
10:30 - The hypothesis: Attention Sinks
15:10 - Experimental evidence
18:45 - Streaming LLMs
20:45 - Semantics or position?
22:30 - Can attention sinks be learned?
27:45 - More experiments
30:10 - Comparison to Big Bird
Paper: https://arxiv.org/abs/2309.17453
Abstract:
Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs cannot generalize to longer texts than the training sequence length. Window attention, where only the most recent KVs are cached, is a natural approach -- but we show that it fails when the text length surpasses the cache size. We observe an interesting phenomenon, namely attention sink, that keeping the KV of initial tokens will largely recover the performance of window attention. In this paper, we first demonstrate that the emergence of attention sink is due to the strong attention scores towards initial tokens as a "sink" even if they are not semantically important. Based on the above analysis, we introduce StreamingLLM, an efficient framework that enables LLMs trained with a finite length attention window to generalize to infinite sequence lengths without any fine-tuning. We show that StreamingLLM can enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language modeling with up to 4 million tokens and more. In addition, we discover that adding a placeholder token as a dedicated attention sink during pre-training can further improve streaming deployment. In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2x speedup. Code and datasets are provided at this https URL.
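The cache policy described in the abstract -- retain the KV states of the first few "attention sink" tokens plus a sliding window of the most recent tokens, evicting everything in between -- can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation; the names `n_sink` and `window_size` are assumptions chosen for clarity.

```python
def streaming_cache_positions(seq_len, n_sink=4, window_size=1020):
    """Return the token positions whose KV states stay in the cache.

    Illustrative sketch of the StreamingLLM eviction policy:
    keep the first `n_sink` tokens (attention sinks) plus the
    `window_size` most recent tokens; drop everything in between.
    """
    if seq_len <= n_sink + window_size:
        # Everything still fits; no eviction needed yet.
        return list(range(seq_len))
    sinks = list(range(n_sink))                            # initial sink tokens
    recent = list(range(seq_len - window_size, seq_len))   # sliding window
    return sinks + recent

# Even after millions of generated tokens, the cache stays a fixed size.
print(len(streaming_cache_positions(4_000_000)))
```

The point of the sketch is that memory stays constant regardless of sequence length, while the model keeps attending to position 0 and its neighbors, which (per the paper) is what prevents the perplexity blow-up seen with a plain sliding window.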
Authors: Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis
Links:
Homepage: https://ykilcher.com
Merch: https://ykilcher.com/merch
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ykilcher.com/discord
LinkedIn: https://www.linkedin.com/in/ykilcher
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n