
# Mem0 - The Memory Layer for Personalized AI


Learn more · Join Discord · Demo


📄 Building Production-Ready AI Agents with Scalable Long-Term Memory →

## New Memory Algorithm (April 2026)

| Benchmark | Old | New | Tokens | Latency p50 |
| --- | --- | --- | --- | --- |
| **LoCoMo** | 71.4 | **91.6** | 7.0K | 0.88s |
| **LongMemEval** | 67.8 | **93.4** | 6.8K | 1.09s |
| **BEAM (1M)** | — | **64.1** | 6.7K | 1.00s |
| **BEAM (10M)** | — | **48.6** | 6.9K | 1.05s |

All benchmarks were run on the same production-representative model stack, with single-pass retrieval (one call, no agentic loops).

**What changed:**

- **Single-pass ADD-only extraction** -- one LLM call, no UPDATE/DELETE. Memories accumulate; nothing is overwritten.
- **Agent-generated facts are first-class** -- when an agent confirms an action, that information is now stored with equal weight.
- **Entity linking** -- entities are extracted, embedded, and linked across memories for retrieval boosting.
- **Multi-signal retrieval** -- semantic, BM25 keyword, and entity matching scored in parallel and fused.

See the [migration guide](https://docs.mem0.ai/migration/oss-v2-to-v3) for upgrade instructions. The [evaluation framework](https://github.com/mem0ai/memory-benchmarks) is open-sourced so anyone can reproduce the numbers.

## Research Highlights

- **91.6 on LoCoMo** -- +20 points over the previous algorithm
- **93.4 on LongMemEval** -- +26 points, with +53.6 on assistant memory recall
- **64.1 on BEAM (1M)** -- production-scale memory evaluation at 1M tokens
- [Read the full paper](https://mem0.ai/research)

# Introduction

[Mem0](https://mem0.ai) ("mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. It remembers user preferences, adapts to individual needs, and continuously learns over time, making it ideal for customer support chatbots, AI assistants, and autonomous systems.
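The multi-signal retrieval described in the algorithm notes above (semantic, BM25 keyword, and entity scores computed in parallel and fused) can be illustrated with a small sketch. This is not Mem0's actual implementation: the `fuse_scores` helper, the min-max normalization, and the `(0.6, 0.3, 0.1)` weights are illustrative assumptions.

```python
def fuse_scores(semantic, bm25, entity, weights=(0.6, 0.3, 0.1)):
    """Fuse per-memory scores from three retrieval signals.

    Each argument maps memory_id -> raw score for that signal;
    memories missing from a signal contribute 0 for it.
    """
    def normalize(scores):
        # Min-max normalize each signal so raw scales are comparable.
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {k: (v - lo) / span for k, v in scores.items()}

    sem, kw, ent = normalize(semantic), normalize(bm25), normalize(entity)
    ids = set(sem) | set(kw) | set(ent)
    w_sem, w_kw, w_ent = weights
    fused = {
        i: w_sem * sem.get(i, 0.0) + w_kw * kw.get(i, 0.0) + w_ent * ent.get(i, 0.0)
        for i in ids
    }
    # Highest fused score first.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

A memory that is mediocre on semantic similarity alone can still rank first if keyword and entity signals agree, which is the point of fusing signals rather than relying on one.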
### Key Features & Use Cases

**Core Capabilities:**

- **Multi-Level Memory**: Seamlessly retains User, Session, and Agent state with adaptive personalization
- **Developer-Friendly**: Intuitive API, cross-platform SDKs, and a fully managed service option

**Applications:**

- **AI Assistants**: Consistent, context-rich conversations
- **Customer Support**: Recall past tickets and user history for tailored help
- **Healthcare**: Track patient preferences and history for personalized care
- **Productivity & Gaming**: Adaptive workflows and environments based on user behavior

## 🚀 Quickstart Guide

| | Library | Self-Hosted Server | Cloud Platform |
|---|---------|-------------------|----------------|
| **Best for** | Testing, prototyping | Teams running on their own infrastructure | Zero-ops production use |
| **Setup** | `pip install mem0ai` | `docker compose up` | Sign up at [app.mem0.ai](https://app.mem0.ai?utm_source=oss&utm_medium=readme) |
| **Dashboard** | -- | [Yes](https://docs.mem0.ai/open-source/setup) | Yes |
| **Auth & API Keys** | -- | Yes | Yes |
| **Advanced Features** | -- | Teasers | All included |

Just testing? Use the library. Building for a team? Self-host. Want zero ops? Use the cloud.

### Library (pip / npm)

```bash
pip install mem0ai
```

For enhanced hybrid search with BM25 keyword matching and entity extraction, install with NLP support:

```bash
pip install mem0ai[nlp]
python -m spacy download en_core_web_sm
```

Install the SDK via npm:

```bash
npm install mem0ai
```

### Self-Hosted Server

> **Note:** Self-hosted auth is on by default. Upgrading from a pre-auth build? Set `ADMIN_API_KEY`, register an admin through the wizard, or set `AUTH_DISABLED=true` for local development only. See the [upgrade notes](https://docs.mem0.ai/open-source/setup#upgrade-notes).

```bash
# Recommended: one command to start the stack, create an admin, and issue the first API key.
cd server && make bootstrap

# Manual: start the stack and finish setup via the browser wizard.
cd server && docker compose up -d  # http://localhost:3000
```

See the [self-hosted docs](https://docs.mem0.ai/open-source/overview) for configuration.

### Cloud Platform

1. Sign up on [Mem0 Platform](https://app.mem0.ai?utm_source=oss&utm_medium=readme)
2. Embed the memory layer via the SDK or API keys

### CLI

Manage memories from your terminal:

```bash
npm install -g @mem0/cli
# or: pip install mem0-cli

mem0 init
mem0 add "Prefers dark mode and vim keybindings" --user-id alice
mem0 search "What does Alice prefer?" --user-id alice
```

See the [CLI documentation](https://docs.mem0.ai/platform/cli) for the full command reference.

### Basic Usage

Mem0 requires an LLM to function; the default is OpenAI's `gpt-5-mini`. It supports a variety of other LLMs as well; for details, refer to our [Supported LLMs documentation](https://docs.mem0.ai/components/llms/overview).

Mem0 uses OpenAI's `text-embedding-3-small` as the default embedding model. For best results with hybrid search (semantic + keyword + entity boosting), we recommend at least [Qwen 600M](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) or a comparable embedding model. See [Supported Embeddings](https://docs.mem0.ai/components/embedders/overview) for configuration details.

The first step is to instantiate the memory:

```python
from openai import OpenAI
from mem0 import Memory

openai_client = OpenAI()
memory = Memory()

def chat_with_memories(message: str, user_id: str = "default_user") -> str:
    # Retrieve relevant memories
    relevant_memories = memory.search(query=message, filters={"user_id": user_id}, top_k=3)
    memories_str = "\n".join(f"- {entry['memory']}" for entry in relevant_memories["results"])

    # Generate Assistant response
    system_prompt = f"You are a helpful AI. Answer the question based on query and memories.\nUser Memories:\n{memories_str}"
    messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": message}]
    response = openai_client.chat.completions.create(model="gpt-5-mini", messages=messages)
    assistant_response = response.choices[0].message.content

    # Create new memories from the conversation
    messages.append({"role": "assistant", "content": assistant_response})
    memory.add(messages, user_id=user_id)

    return assistant_response

def main():
    print("Chat with AI (type 'exit' to quit)")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == "exit":
            print("Goodbye!")
            break
        print(f"AI: {chat_with_memories(user_input)}")

if __name__ == "__main__":
    main()
```

For detailed integration steps, see the [Quickstart](https://docs.mem0.ai/quickstart) and [API Reference](https://docs.mem0.ai/api-reference).

## 🔗 Integrations & Demos

- **ChatGPT with Memory**: Personalized chat powered by Mem0 ([Live Demo](https://mem0.dev/demo))
- **Browser Extension**: Store memories across ChatGPT, Perplexity, and Claude ([Chrome Extension](https://chromewebstore.google.com/detail/onihkkbipkfeijkadecaafbgagkhglop?utm_source=item-share-cb))
- **LangGraph Support**: Build a customer bot with LangGraph + Mem0 ([Guide](https://docs.mem0.ai/integrations/langgraph))
- **CrewAI Integration**: Tailor CrewAI outputs with Mem0 ([Example](https://docs.mem0.ai/integrations/crewai))

## 📚 Documentation & Support

- Full docs: https://docs.mem0.ai
- Community: [Discord](https://mem0.dev/DiG) · [X (formerly Twitter)](https://x.com/mem0ai)
- Contact: founders@mem0.ai

## Citation

We now have a paper you can cite:

```bibtex
@article{mem0,
  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},
  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},
  journal={arXiv preprint arXiv:2504.19413},
  year={2025}
}
```

## ⚖️ License

Apache 2.0. See the [LICENSE](https://github.com/mem0ai/mem0/blob/main/LICENSE) file for details.