INTELLIGENCE BEGINS WITH MEMORY

MemOS is devoted to creating a unified memory-management substrate for AGI, so that intelligent systems can, like the human brain, hold long-term and immediate memories that are adaptable, transferable, and shareable.
GitHub
Partners: MemTensor (Shanghai) Technology Co., Ltd., Shanghai Jiao Tong University, Tongji University, Zhejiang University, University of Science and Technology of China, Renmin University of China, Beihang University, Nankai University, Peking University, China Telecom, China Haisum, China Merchants Securities, Sirio Pharma, and Shanghai Kupas.

SOTA Performance of MemOS

Evaluated on the LoCoMo Benchmark with LLM-as-a-Judge Metrics, reporting average scores across Temporal Reasoning, Multi-Hop, Open-Domain, and Single-Hop tasks. Comprehensive evaluation details can be found in our accompanying paper.
[Benchmark chart: MemOS vs. OpenAI memory, LLM-as-a-Judge scores on Multi-Hop, Open-Domain, Single-Hop, Temporal Reasoning, and Overall tasks]

Unlock Custom Intelligence with MemOS

A Memory-Native Framework for Building Intelligent Systems that Remember, Adapt, and Evolve.
Structured Memory Architecture
MemOS unifies parametric, activation, and plaintext memory into a structured, multi-tiered architecture. This layered framework enables intelligent systems to retrieve, update, and compose memory dynamically—supporting more accurate reasoning, adaptive behaviors, and lifelong learning across diverse tasks and environments.
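
To make the tiering concrete, here is a minimal sketch of how a multi-tiered memory record and store could be modeled. The MemoryTier, MemoryItem, and MemoryStore names are illustrative assumptions, not the MemOS API.

    # Illustrative sketch of a multi-tiered memory record and store.
    # Names (MemoryTier, MemoryItem, MemoryStore) are hypothetical, not the MemOS API.
    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Any


    class MemoryTier(Enum):
        PARAMETRIC = "parametric"    # knowledge baked into model weights or adapters
        ACTIVATION = "activation"    # reusable KV-cache or hidden-state snapshots
        PLAINTEXT = "plaintext"      # retrievable text, facts, and documents


    @dataclass
    class MemoryItem:
        tier: MemoryTier
        topic: str
        payload: Any                 # text, tensor handle, or adapter reference
        metadata: dict = field(default_factory=dict)


    class MemoryStore:
        """Keeps items per tier and composes a context for the next model call."""

        def __init__(self) -> None:
            self._items: dict[MemoryTier, list[MemoryItem]] = {t: [] for t in MemoryTier}

        def update(self, item: MemoryItem) -> None:
            self._items[item.tier].append(item)

        def retrieve(self, topic: str, tier: MemoryTier) -> list[MemoryItem]:
            return [m for m in self._items[tier] if m.topic == topic]

        def compose(self, topic: str) -> list[MemoryItem]:
            # Cheap plaintext first, then activation snapshots, then parametric refs.
            order = [MemoryTier.PLAINTEXT, MemoryTier.ACTIVATION, MemoryTier.PARAMETRIC]
            return [m for tier in order for m in self.retrieve(topic, tier)]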
Predictive & Asynchronous Scheduling
MemOS employs predictive, intent-aware scheduling to preload relevant memory before it is needed—based on dialogue history, task semantics, or environmental cues.
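
As a rough illustration of intent-aware preloading, the sketch below guesses likely next topics from the latest dialogue turn and fetches their memory concurrently. predict_topics and fetch_memory are hypothetical stand-ins for a real intent model and memory backend.

    # Minimal sketch of predictive preloading: guess the next topics from recent
    # dialogue and fetch their memory in the background before the model needs it.
    import asyncio


    def predict_topics(dialogue: list[str]) -> list[str]:
        # Toy intent model: reuse the longer keywords of the last turn as likely topics.
        return [w.strip("?.,!").lower() for w in dialogue[-1].split() if len(w) > 4]


    async def fetch_memory(topic: str) -> str:
        await asyncio.sleep(0.05)          # stands in for a vector-store or disk lookup
        return f"<memory about {topic}>"


    async def preload(dialogue: list[str]) -> dict[str, str]:
        topics = predict_topics(dialogue)
        # Launch all lookups concurrently so they overlap with ongoing generation.
        results = await asyncio.gather(*(fetch_memory(t) for t in topics))
        return dict(zip(topics, results))


    if __name__ == "__main__":
        cache = asyncio.run(preload(["Can you compare the Milan and Tokyo itineraries?"]))
        print(sorted(cache))               # topics whose memory is now preloaded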
Cross-Model & Cross-Device Interoperability
MemOS enables intelligent systems to share and transfer memory across models, devices, sessions, and applications through a unified memory interchange protocol (MIP). Whether deployed in the cloud, on edge devices, or across heterogeneous AI agents, memory becomes a persistent and portable resource—enabling collaborative intelligence, context continuity, and long-term adaptability across the AI stack.
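
To suggest what a portable memory unit might look like in transit, the sketch below packs content, provenance, and a checksum into a JSON envelope. The field layout is an assumption for illustration only and is not the published MIP format.

    # Sketch of a portable memory envelope of the kind a memory interchange
    # protocol could carry between models and devices. Illustrative, not MIP itself.
    import hashlib
    import json
    from datetime import datetime, timezone


    def pack_memory(content: str, source_model: str, owner: str) -> str:
        """Serialize one memory unit with provenance so any runtime can import it."""
        envelope = {
            "schema": "memory-envelope/0.1",          # hypothetical version tag
            "owner": owner,
            "source_model": source_model,
            "created_at": datetime.now(timezone.utc).isoformat(),
            "content": content,
            "checksum": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        }
        return json.dumps(envelope, ensure_ascii=False)


    def unpack_memory(blob: str) -> dict:
        envelope = json.loads(blob)
        digest = hashlib.sha256(envelope["content"].encode("utf-8")).hexdigest()
        if digest != envelope["checksum"]:
            raise ValueError("memory envelope failed integrity check")
        return envelope


    if __name__ == "__main__":
        blob = pack_memory("User prefers metric units.", "gpt-4o-mini", "user-123")
        print(unpack_memory(blob)["content"])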

MemOS Framework

  • Application & API Layer
    Provides a unified API for memory operations such as preservation, update, transfer, and rollback—enabling models and agents to integrate structured memory seamlessly into intelligent workflows. A minimal interface sketch follows the diagram below.
  • Memory Scheduling Layer
    Encodes and orchestrates parametric, activation, and plaintext memory with predictive, asynchronous scheduling—ensuring fast, context-aware access across memory types.
  • Storage & Substrate Layer
    Serves as the foundation for memory storage and exchange, supporting containerized user, expert, and domain memory—portable across models, sessions, and devices.
[Architecture diagram]
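
As a sketch of the kind of interface the Application & API Layer could expose, the code below defines preserve, update, transfer, and rollback operations against a toy versioned backend. The MemoryAPI and InMemoryBackend names are hypothetical, not the actual MemOS API.

    # Illustrative sketch of a unified memory API surface with version history,
    # so rollback and transfer stay cheap. Not the actual MemOS interface.
    from typing import Protocol


    class MemoryAPI(Protocol):
        def preserve(self, key: str, content: str) -> int:
            """Store content under key and return its new version number."""

        def update(self, key: str, content: str) -> int:
            """Append a newer version and return its version number."""

        def transfer(self, key: str, target: "MemoryAPI") -> None:
            """Copy the latest version of key into another memory backend."""

        def rollback(self, key: str, version: int) -> str:
            """Restore and return an earlier version of key."""


    class InMemoryBackend:
        """Tiny reference backend: keeps every version so rollback is possible."""

        def __init__(self) -> None:
            self._versions: dict[str, list[str]] = {}

        def preserve(self, key: str, content: str) -> int:
            self._versions.setdefault(key, []).append(content)
            return len(self._versions[key]) - 1

        def update(self, key: str, content: str) -> int:
            return self.preserve(key, content)

        def transfer(self, key: str, target: "MemoryAPI") -> None:
            target.preserve(key, self._versions[key][-1])

        def rollback(self, key: str, version: int) -> str:
            return self._versions[key][version]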

Memory Scheduling

  • Predictive Loading
    Intelligently forecasts future memory needs based on context and task intent, enabling preloading to eliminate latency.
  • Multi-tier Memory Routing
    Supports hierarchical scheduling across parametric, activation, and plaintext memory to optimize cost-performance trade-offs. A routing sketch follows the diagram below.
  • Asynchronous Fusion
    Decouples retrieval, reasoning, and generation through asynchronous scheduling to improve GPU utilization and system responsiveness.
[Memory scheduling diagram]
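
A minimal routing sketch, assuming made-up latency and quality figures for each tier: pick the fastest tier that satisfies the caller's quality floor and latency budget, otherwise fall back to the best affordable option. Real deployments would learn such figures from telemetry rather than hard-coding them.

    # Sketch of multi-tier memory routing: serve a request from the cheapest tier
    # whose expected latency fits the caller's budget. Numbers are illustrative.
    from dataclasses import dataclass


    @dataclass(frozen=True)
    class Tier:
        name: str
        expected_latency_ms: float   # typical time to materialize this memory
        quality: float               # rough usefulness score, 0..1


    TIERS = [
        Tier("activation (KV-cache reuse)", expected_latency_ms=5.0, quality=0.6),
        Tier("plaintext (vector retrieval)", expected_latency_ms=40.0, quality=0.8),
        Tier("parametric (adapter load)", expected_latency_ms=400.0, quality=0.95),
    ]


    def route(latency_budget_ms: float, min_quality: float) -> Tier:
        """Pick the fastest tier that is good enough; else the best affordable one."""
        for tier in TIERS:  # ordered fastest-first
            if tier.quality >= min_quality and tier.expected_latency_ms <= latency_budget_ms:
                return tier
        affordable = [t for t in TIERS if t.expected_latency_ms <= latency_budget_ms]
        return max(affordable or TIERS, key=lambda t: t.quality)


    if __name__ == "__main__":
        print(route(latency_budget_ms=50.0, min_quality=0.7).name)   # plaintext tier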

Dynamic Knowledge Graph

  • Tree-Based Hierarchy
    Organizes memory as modular, multi-level branches, each representing a distinct topic or function. This layered design ensures clarity, interpretability, and scalable growth as knowledge expands.
  • Graph-Style Linking
    Enables cross-tree connections between memory units, supporting semantic reasoning, context bridging, and multi-hop retrieval beyond strict hierarchies. A traversal sketch follows the diagram below.
  • Composable & Evolving
    Supports flexible insertion, merging, and restructuring of memory units. This allows the system to adapt over time to new tasks, learning signals, and shifting contexts—much like human memory.
[Knowledge graph diagram]
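
To illustrate tree-plus-link retrieval, the sketch below stores memory nodes in a topic tree, adds one cross-tree association, and runs a bounded breadth-first search over both edge types. The node layout and traversal are illustrative, not the MemOS data model.

    # Sketch of a memory graph that is primarily a topic tree but allows
    # cross-tree links, so retrieval can hop across branches.
    from collections import deque


    class MemoryNode:
        def __init__(self, name: str, content: str = "") -> None:
            self.name = name
            self.content = content
            self.children: list["MemoryNode"] = []   # tree edges (topic hierarchy)
            self.links: list["MemoryNode"] = []      # graph edges (cross-tree associations)

        def add_child(self, child: "MemoryNode") -> "MemoryNode":
            self.children.append(child)
            return child


    def multi_hop(start: MemoryNode, target: str, max_hops: int = 3) -> list[str]:
        """Breadth-first search over tree and cross-links, bounded by hop count."""
        queue = deque([(start, [start.name])])
        seen = {id(start)}
        while queue:
            node, path = queue.popleft()
            if node.name == target:
                return path
            if len(path) > max_hops:
                continue
            for nxt in node.children + node.links:
                if id(nxt) not in seen:
                    seen.add(id(nxt))
                    queue.append((nxt, path + [nxt.name]))
        return []


    if __name__ == "__main__":
        root = MemoryNode("work")
        meetings = root.add_child(MemoryNode("meetings"))
        travel = MemoryNode("travel")
        travel.add_child(MemoryNode("hotels"))
        meetings.links.append(travel)                 # cross-tree association
        print(multi_hop(root, "hotels"))              # ['work', 'meetings', 'travel', 'hotels']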

Empower Your AI with MemOS | Bring Memory to Life—Your Way | From Zero to Expert

Whether you're building fast or going deep, MemOS adapts to your development style.
Use the MemOS Platform
Build with speed and confidence using the MemOS platform—a complete solution designed for teams who want to quickly integrate memory into their LLM apps. Get instant access to memory cubes, seamless API integration, and full support for plaintext, activation, and parametric memory. Best for: startups, product teams, and fast prototyping.
Go Deep with Open Source
Prefer full control? Use our open-source version to deeply customize how memory works in your LLM pipelines. Host it your way, tweak memory behaviors, or extend capabilities with your own logic. Explore our code on GitHub and make MemOS your own. Best for: advanced developers, research teams, and self-hosted solutions.

Milestone

Let every milestone become a memory for collective intelligence.

Frequently Asked Questions

Still have questions?