# EverOS
**Repository Path**: study-ai_1/EverOS
## Basic Information
- **Project Name**: EverOS
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-04-15
- **Last Updated**: 2026-04-15
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README

[Website](https://evermind.ai) · [Documentation](https://docs.evermind.ai) · [Blog](https://evermind.ai/blogs)
> [!IMPORTANT]
>
> ### Project Structure Update
>
> We've unified [EverCore](methods/evermemos/), [HyperMem](methods/HyperMem/), [EverMemBench](benchmarks/EverMemBench/), and [EvoAgentBench](benchmarks/EvoAgentBench/), together with example use cases, into a single repository.
>
> EverOS gives developers one place to build, evaluate, and integrate long-term memory into their self-evolving agents. 🎉
## Project Overview
**EverOS** is a collection of long-term memory **methods**, **benchmarks**, and **usecases** for building self-evolving agents.
```
EverOS/
├── methods/
│   ├── EverCore/        # Long-term memory operating system
│   └── HyperMem/        # Hypergraph memory architecture
├── benchmarks/
│   ├── EverMemBench/    # Memory quality evaluation
│   └── EvoAgentBench/   # Agent self-evolution evaluation
└── usecases/            # Example applications
```
## Methods
Methods are production-ready memory architectures that give agents persistent, structured long-term memory. Each can be used standalone or composed together depending on your use case.

#### EverCore
A self-organizing memory operating system inspired by biological imprinting. Extracts, structures, and retrieves long-term knowledge from conversations — enabling agents to remember, understand, and continuously evolve.
[Paper](https://arxiv.org/abs/2601.02163) · [Docs](methods/evermemos/)

#### HyperMem
A hypergraph-based hierarchical memory architecture that captures high-order associations through hyperedges. It organizes memory into topic, event, and fact layers for coarse-to-fine long-term conversation retrieval, reaching 92.73% on LoCoMo.
[Paper](https://arxiv.org/abs/2604.08256) · [Docs](methods/HyperMem/)
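The layered, coarse-to-fine retrieval idea can be illustrated with a toy hypergraph, where each hyperedge links one topic, one event, and several facts. The class and method names below are illustrative assumptions for exposition, not the HyperMem API:

```python
from collections import defaultdict

class ToyHypergraphMemory:
    """Toy hypergraph: each hyperedge connects a topic, an event, and N facts."""

    def __init__(self):
        self.edges = []                    # list of (topic, event, facts) hyperedges
        self.by_topic = defaultdict(list)  # coarse topic-layer index: topic -> edge ids

    def add(self, topic, event, facts):
        self.by_topic[topic].append(len(self.edges))
        self.edges.append((topic, event, facts))

    def retrieve(self, topic, keyword):
        # Coarse-to-fine: narrow by the topic layer first, then scan fact nodes.
        hits = []
        for i in self.by_topic.get(topic, []):
            _, _event, facts = self.edges[i]
            hits.extend(f for f in facts if keyword in f)
        return hits

mem = ToyHypergraphMemory()
mem.add("sports", "weekend chat", ["user plays soccer", "user dislikes running"])
mem.add("food", "dinner chat", ["user likes ramen"])
print(mem.retrieve("sports", "soccer"))  # ['user plays soccer']
```

The point of the hyperedge (rather than a pairwise graph edge) is that one relation can bind an arbitrary number of nodes at once, which is what "high-order associations" refers to.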
## Benchmarks
Benchmarks are designed as **open public standards**: any memory architecture or agent framework can be evaluated with the same yardstick.

#### EverMemBench
Three-layer memory quality evaluation: factual recall, applied reasoning, and personalized generalization. Evaluates memory systems and LLMs under a unified standard.
[Paper](https://arxiv.org/abs/2602.01313) · [Dataset](https://huggingface.co/datasets/EverMind-AI/EverMemBench-Dynamic) · [Docs](benchmarks/EverMemBench/)
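A three-layer evaluation ultimately has to be combined into one headline number. The sketch below shows one way such an aggregation could look; the equal weights and the exact layer keys are illustrative assumptions, not the benchmark's actual scoring rule:

```python
def overall_score(layer_scores, weights=None):
    """Combine per-layer accuracies (each in [0, 1]) into a single score."""
    layers = ("factual_recall", "applied_reasoning", "personalized_generalization")
    # Default to equal weighting across the three layers.
    weights = weights or {layer: 1 / len(layers) for layer in layers}
    return sum(layer_scores[layer] * weights[layer] for layer in layers)

scores = {
    "factual_recall": 0.90,
    "applied_reasoning": 0.75,
    "personalized_generalization": 0.60,
}
print(round(overall_score(scores), 4))  # 0.75
```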

#### EvoAgentBench
Agent self-evolution evaluation — not static snapshots, but longitudinal growth curves. Measures transfer efficiency, error avoidance, and skill-hit quality through controlled experiments with and without evolution.
[Docs](benchmarks/EvoAgentBench/)
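The "growth curve" framing amounts to comparing per-session success rates with and without evolution. The metric below (mean per-session lift over a frozen baseline) is an illustrative assumption for exposition, not EvoAgentBench's actual formula:

```python
def growth_gain(with_evo, without_evo):
    """Mean per-session lift of the evolving agent over the frozen baseline."""
    assert len(with_evo) == len(without_evo), "curves must cover the same sessions"
    diffs = [a - b for a, b in zip(with_evo, without_evo)]
    return sum(diffs) / len(diffs)

# Success rate per session: the evolving agent climbs, the frozen baseline stays flat.
with_evo = [0.50, 0.60, 0.70, 0.80]
without_evo = [0.50, 0.52, 0.51, 0.50]
print(round(growth_gain(with_evo, without_evo), 4))
```

A longitudinal metric like this rewards sustained improvement across sessions, which a single static snapshot cannot capture.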
## Use Cases

#### Earth Online Memory Game
Earth Online is a memory-aware productivity game that turns everyday planning into a living quest log.

#### Multi‑Agent Orchestration Platform
Golutra is pitched as “beyond the IDE,” a multi-agent workforce rather than a single assistant for engineering teams.

#### Mobi Is a Companion
An iOS app that lets users create, nurture, and live with a personalized AI “lifeform” companion called Mobi.

#### LAI Wearable with Memory
A context-native, empathic AI wearable that listens to everyday life and converts conversations into memory.

#### OpenClaw Agent Memory
A 24/7 agent with continuous learning memory that you can carry with you wherever you go.
[Agent Memory](https://github.com/EverMind-AI/everos/tree/agent_memory) · [Plugin](https://github.com/EverMind-AI/everos/tree/agent_memory/everos-openclaw-plugin)

#### Live2D Character with Memory
Add long-term memory to your anime character that can talk to you in real-time, powered by [TEN Framework](https://github.com/TEN-framework/ten-framework).
[Code](https://github.com/TEN-framework/ten-framework/tree/main/ai_agents/agents/examples/voice-assistant-with-everos)

#### Computer-Use with Memory
A computer-use agent that runs screenshot-based analysis, with every result stored in memory.
[Live Demo](https://screenshot-analysis-vercel.vercel.app/)

#### Game of Thrones Memories
A demonstration of AI memory infrastructure through an interactive Q&A experience with "A Game of Thrones".
[Code](https://github.com/EverMind-AI/evermem_got_demo)

#### Claude Code Plugin
Persistent memory for Claude Code. Automatically saves and recalls context from past coding sessions.
[Code](https://github.com/EverMind-AI/evermem-claude-code)

#### Memory Graph Visualization
Visualize your stored entities and how they relate. Pure frontend demo — backend integration in progress.
[Live Demo](https://main.d2j21qxnymu6wl.amplifyapp.com/graph.html)
## Quick Start
```bash
git clone https://github.com/EverMind-AI/EverOS.git
cd EverOS
```
Then navigate to the component you need:
| Component | Purpose | Entry Point |
| :--- | :--- | :--- |
| **EverCore** | Build agents with long-term memory | [methods/evermemos/](methods/evermemos/) |
| **HyperMem** | Use the hypergraph memory architecture | [methods/HyperMem/](methods/HyperMem/) |
| **EverMemBench** | Evaluate memory system quality | [benchmarks/EverMemBench/](benchmarks/EverMemBench/) |
| **EvoAgentBench** | Measure agent self-evolution | [benchmarks/EvoAgentBench/](benchmarks/EvoAgentBench/) |
> Each component has its own installation guide, dependency configuration, and usage examples.
### EverCore Quick Start
```bash
cd methods/evermemos
# Start Docker services
docker compose up -d
# Install dependencies
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync
# Configure API keys
cp env.template .env
# Edit .env and set:
# - LLM_API_KEY (for memory extraction)
# - VECTORIZE_API_KEY (for embedding/rerank)
# Start server
uv run python src/run.py
# Verify installation
curl http://localhost:1995/health
# Expected response: {"status": "healthy", ...}
```
Server runs at `http://localhost:1995` · [Full Setup Guide](docs/installation/SETUP.md)
### Basic Usage
Store and retrieve memories with simple Python code:
```python
import requests

API_BASE = "http://localhost:1995/api/v1"

# 1. Store a conversation memory
requests.post(f"{API_BASE}/memories", json={
    "message_id": "msg_001",
    "create_time": "2025-02-01T10:00:00+00:00",
    "sender": "user_001",
    "content": "I love playing soccer on weekends"
})

# 2. Search for relevant memories
response = requests.get(f"{API_BASE}/memories/search", json={
    "query": "What sports does the user like?",
    "user_id": "user_001",
    "memory_types": ["episodic_memory"],
    "retrieve_method": "hybrid"
})

result = response.json().get("result", {})
for memory_group in result.get("memories", []):
    print(f"Memory: {memory_group}")
```
[More Examples](docs/usage/USAGE_EXAMPLES.md) · [API Reference](https://docs.evermind.ai/api-reference/introduction) · [Interactive Demos](docs/usage/DEMOS.md)
## Evaluation & Benchmarking
EverCore achieves **93% overall accuracy** on the LoCoMo benchmark, outperforming comparable memory systems.
### Benchmark Results

### Supported Benchmarks
- **[LoCoMo](https://github.com/snap-research/locomo)** — Long-context memory benchmark with single/multi-hop reasoning
- **[LongMemEval](https://huggingface.co/datasets/xiaowu0162/longmemeval-cleaned)** — Multi-session conversation evaluation
- **[PersonaMem](https://huggingface.co/datasets/bowen-upenn/PersonaMem)** — Persona-based memory evaluation
### Run Evaluations
```bash
# Install evaluation dependencies
uv sync --group evaluation
# Run smoke test (quick verification)
uv run python -m evaluation.cli --dataset locomo --system everos --smoke
# Run full evaluation
uv run python -m evaluation.cli --dataset locomo --system everos
# View results
cat evaluation/results/locomo-everos/report.txt
```
[Full Evaluation Guide](evaluation/README.md) · [Complete Results](https://huggingface.co/datasets/EverMind-AI/everos_Eval_Results)
## Citation
If EverOS helps your research, please cite:
```bibtex
@article{hu2026evermemos,
  title   = {EverMemOS: A Self-Organizing Memory Operating System for Structured Long-Horizon Reasoning},
  author  = {Chuanrui Hu and Xingze Gao and Zuyi Zhou and Dannong Xu and Yi Bai and Xintong Li and Hui Zhang and Tong Li and Chong Zhang and Lidong Bing and Yafeng Deng},
  journal = {arXiv preprint arXiv:2601.02163},
  year    = {2026}
}
@article{yue2026hypermem,
  title   = {HyperMem: Hypergraph Memory for Long-Term Conversations},
  author  = {Juwei Yue and Chuanrui Hu and Jiawei Sheng and Zuyi Zhou and Wenyuan Zhang and Tingwen Liu and Li Guo and Yafeng Deng},
  journal = {arXiv preprint arXiv:2604.08256},
  year    = {2026}
}
@article{hu2026evaluating,
  title   = {Evaluating Long-Horizon Memory for Multi-Party Collaborative Dialogues},
  author  = {Chuanrui Hu and Tong Li and Xingze Gao and Hongda Chen and Yi Bai and Dannong Xu and Tianwei Lin and Xiaohong Li and Yunyun Han and Jian Pei and Yafeng Deng},
  journal = {arXiv preprint arXiv:2602.01313},
  year    = {2026}
}
```
## 🌟 Stay Tuned

## Contributing
We love open-source energy! Whether you're squashing bugs, shipping features, sharpening docs, or tossing in wild ideas, every PR moves EverOS forward. Browse [Issues](https://github.com/EverMind-AI/EverOS/issues) to find your entry point, then show us what you've got. Let's build the future of memory together.
> [!TIP]
>
> **Welcome all kinds of contributions** 🎉
>
> Join us in building EverOS better! Every contribution makes a difference, from code to documentation. Share your projects on social media to inspire others!
>
> Connect with one of the EverOS maintainers [@elliotchen200](https://x.com/elliotchen200) on 𝕏 or [@cyfyifanchen](https://github.com/cyfyifanchen) on GitHub for project updates, discussions, and collaboration opportunities.


### Code Contributors
[Contributors](https://github.com/EverMind-AI/EverOS/graphs/contributors)


### Contribution Guidelines
Read our [Contribution Guidelines](methods/evermemos/CONTRIBUTING.md) for code standards and Git workflow.


### License & Acknowledgments
[Apache 2.0](https://github.com/EverMind-AI/EverOS/blob/main/LICENSE) • [Acknowledgments](methods/evermemos/docs/ACKNOWLEDGMENTS.md)