Google’s new technique gives LLMs infinite context
VentureBeat

Experiments reported by the Google research team indicate that models using Infini-attention maintain their quality on inputs of one million tokens and beyond without requiring additional memory.
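The article does not walk through the mechanics, but the underlying paper ("Leave No Context Behind," Munkhdalai et al., 2024) attributes the constant-memory behavior to a compressive memory: each attention head keeps a fixed-size associative matrix that accumulates key-value bindings from past segments and is read back alongside ordinary local attention. The sketch below illustrates that idea only; the single-head setup, dimensions, helper names, and toy data are assumptions for demonstration, not the authors' code.

```python
# Rough sketch of the compressive-memory mechanism described in the
# Infini-attention paper. Illustrative only: shapes and the single-head
# setup are assumptions. The point is that the memory footprint is a
# fixed (d_key x d_value) matrix plus a (d_key,) normalizer, no matter
# how many segments (and therefore tokens) are processed.
import numpy as np

d_key, d_value, seg_len = 64, 64, 128

def elu_plus_one(x):
    # sigma(x) = ELU(x) + 1, the nonlinearity used for memory reads/writes
    return np.where(x > 0, x + 1.0, np.exp(x))

def update_memory(M, z, K, V):
    # Fold the segment's key-value bindings into the fixed-size memory.
    # M: (d_key, d_value) associative memory, z: (d_key,) normalizer.
    sK = elu_plus_one(K)                          # (seg_len, d_key)
    M = M + sK.T @ V                              # linear update rule
    z = z + sK.sum(axis=0)
    return M, z

def retrieve_from_memory(M, z, Q):
    # Read long-term context for the current segment's queries.
    sQ = elu_plus_one(Q)                          # (seg_len, d_key)
    return (sQ @ M) / (sQ @ z[:, None] + 1e-6)    # (seg_len, d_value)

rng = np.random.default_rng(0)
M = np.zeros((d_key, d_value))
z = np.zeros(d_key)

# Processing ~8,000 segments of 128 tokens would cover ~1M tokens;
# only a few steps are run here, and the memory size never grows.
for _ in range(5):
    Q, K, V = (rng.standard_normal((seg_len, d)) * 0.1
               for d in (d_key, d_key, d_value))
    A_mem = retrieve_from_memory(M, z, Q)   # long-term read for this segment
    # In the full model, local dot-product attention over the segment is
    # computed as well, and a learned gate mixes it with A_mem.
    M, z = update_memory(M, z, K, V)

print(M.shape, z.shape)   # stays (64, 64) and (64,) regardless of input length
```

Because the state carried across segments is just this fixed-size matrix and normalizer, memory use stays flat as context length grows, which is what the reported million-token experiments rely on.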
