KV Cache

Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs

In this study, we introduce adaptive KV cache compression, a plug-and-play method that reduces the memory footprint of generative inference for Large Language Models (LLMs). Unlike the conventional KV cache, which retains key and value vectors …
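
A minimal sketch of the general idea, not the paper's exact algorithm (which profiles each attention head and picks among several eviction policies): per head, if most of the recent query's attention mass falls inside a recency window, evict everything outside that window; otherwise keep the full cache. The function name, thresholds, and shapes below are illustrative assumptions.

def compress_kv_cache(keys, values, attn_weights, window=64, local_mass=0.95):
    """Hypothetical per-head adaptive KV cache compression (illustrative only).

    keys/values: [num_heads, seq_len, head_dim]
    attn_weights: [num_heads, seq_len], attention from the latest query
    to all cached positions, per head.
    """
    import numpy as np  # assumed dependency for this sketch
    num_heads, seq_len, _ = keys.shape
    kept_keys, kept_values = [], []
    for h in range(num_heads):
        w = attn_weights[h]
        # Fraction of attention mass landing inside the recency window.
        recent_mass = w[-window:].sum() / max(w.sum(), 1e-9)
        if recent_mass >= local_mass and seq_len > window:
            # "Local" head: keep only the most recent tokens.
            kept_keys.append(keys[h, -window:])
            kept_values.append(values[h, -window:])
        else:
            # Non-local head: retain the full cache.
            kept_keys.append(keys[h])
            kept_values.append(values[h])
    # Returns ragged per-head caches; heads judged local use far less memory.
    return kept_keys, kept_values

Usage-wise, such a policy would be applied once per generation step after attention is computed, so the cache shrinks for heads whose attention is concentrated on recent context while heads with broad attention keep everything.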