- InfiniteICL breaks through context window limitations for large language models (LLMs)
- Transforms temporary context knowledge into permanent parameter updates
- Reduces memory usage by up to 90% while maintaining performance
- Achieves 103% of full-context performance on various tasks