Daily Paper Cast

General Agentic Memory Via Deep Research

26 November 2025 25:36 🎙️ Jingwen Liang, Gengyu Wang

About this episode

🤗 Upvotes: 121 | cs.CL, cs.AI, cs.IR, cs.LG

Authors:
B. Y. Yan, Chaofan Li, Hongjin Qian, Shuqi Lu, Zheng Liu

Title:
General Agentic Memory Via Deep Research

Arxiv:
http://arxiv.org/abs/2511.18423v1

Abstract:
Memory is critical for AI agents, yet the widely adopted static memory, which aims to create readily available memory in advance, is inevitably subject to severe information loss. To address this limitation, we propose a novel framework called general agentic memory (GAM). GAM follows the principle of "just-in-time (JIT) compilation": it focuses on creating optimized contexts for its client at runtime, while keeping only simple but useful memory during the offline stage. To this end, GAM employs a duo design with the following components. 1) Memorizer, which highlights key historical information using a lightweight memory while maintaining complete historical information within a universal page-store. 2) Researcher, which retrieves and integrates useful information from the page-store for the online request, guided by the pre-constructed memory. This design allows GAM to effectively leverage the agentic capabilities and test-time scalability of frontier large language models (LLMs), while also facilitating end-to-end performance optimization through reinforcement learning. In our experimental study, we demonstrate that GAM achieves substantial improvements over existing memory systems across various memory-grounded task-completion scenarios.
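The abstract only sketches GAM's duo design at a high level. As a rough illustration of the division of labor it describes, here is a minimal Python sketch; the class names (PageStore, Memorizer, Researcher) follow the abstract's terminology, but the highlighting and retrieval logic are simple stand-ins for what the paper describes as LLM-driven, RL-optimized components, not the authors' implementation.

```python
from dataclasses import dataclass, field


@dataclass
class PageStore:
    """Universal page-store: keeps the complete history, page by page."""
    pages: list[str] = field(default_factory=list)

    def append(self, page: str) -> int:
        self.pages.append(page)
        return len(self.pages) - 1

    def fetch(self, page_ids: list[int]) -> list[str]:
        return [self.pages[i] for i in page_ids]


@dataclass
class Memorizer:
    """Offline component: stores every page verbatim and keeps a
    lightweight memory of highlights pointing back into the store."""
    store: PageStore
    memory: list[tuple[int, str]] = field(default_factory=list)

    def memorize(self, page: str) -> None:
        page_id = self.store.append(page)
        # Toy "highlight": the first sentence stands in for an LLM summary.
        highlight = page.split(".")[0]
        self.memory.append((page_id, highlight))


@dataclass
class Researcher:
    """Online component: uses the lightweight memory to locate relevant
    pages in the store, then assembles a request context just in time."""
    memorizer: Memorizer

    def research(self, request: str) -> str:
        # Toy relevance test: keyword overlap stands in for agentic retrieval.
        terms = set(request.lower().split())
        hits = [pid for pid, hl in self.memorizer.memory
                if terms & set(hl.lower().split())]
        pages = self.memorizer.store.fetch(hits)
        return "\n".join(pages)  # optimized context built at runtime


# Usage: memorize offline, research at request time.
mem = Memorizer(PageStore())
mem.memorize("Budget meeting moved to Friday. Alice will send the agenda.")
mem.memorize("Server migration finished. Downtime was under five minutes.")
print(Researcher(mem).research("When is the budget meeting?"))
```

The sketch captures the JIT principle the abstract emphasizes: the offline step stores everything and keeps only cheap highlights, and all context assembly is deferred to request time, so no information is irreversibly discarded in advance.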
