The database for AI Agent Memory
CortexaDB is a simple, fast, and hard-durable embedded database designed specifically for AI agent memory. Single-file, zero-dependency, no server required.
```python
from cortexadb import CortexaDB
from cortexadb.providers.openai import OpenAIEmbedder

db = CortexaDB.open("agent.mem", embedder=OpenAIEmbedder())

# Store memories
db.add("User prefers dark mode")
db.add("User works at Stripe")

# Semantic search
hits = db.search("What does the user like?")
# => [Hit(id=1, score=0.87), Hit(id=2, score=0.72)]
```

Performance Benchmarks
Benchmarks on M-series Mac · 10,000 embeddings × 384 dimensions · Debug build
Everything you need for agent memory
Built from the ground up for AI agents with hybrid retrieval, knowledge graphs, and rock-solid durability.
Hybrid Retrieval
Combine vector similarity, graph relations, and recency in a single query
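As a mental model (not CortexaDB's actual scoring code), a hybrid query can be pictured as a weighted blend of the three signals; the weights and decay constant below are illustrative:

```python
import math

def hybrid_score(vec_sim, graph_hops, age_seconds,
                 w_vec=0.6, w_graph=0.25, w_recency=0.15):
    """Blend three retrieval signals into one ranking score.

    vec_sim:     cosine similarity in [0, 1]
    graph_hops:  BFS distance from a seed memory (0 = the seed itself)
    age_seconds: how long ago the memory was stored
    """
    graph_score = 1.0 / (1.0 + graph_hops)        # closer in the graph -> higher
    recency = math.exp(-age_seconds / 86_400.0)   # decays over roughly a day
    return w_vec * vec_sim + w_graph * graph_score + w_recency * recency
```

A memory that is semantically similar, graph-adjacent to the seed, and recent scores near 1.0; weakening any one signal lowers the rank rather than excluding the hit outright.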
Smart Chunking
5 strategies for document ingestion: fixed, recursive, semantic, markdown, json
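The simplest of those strategies, fixed-size chunking with overlap, can be sketched in a few lines (the sizes here are arbitrary, not CortexaDB defaults):

```python
def fixed_chunks(text, size=200, overlap=40):
    """Split text into fixed-size character windows that overlap,
    so a sentence cut at a boundary still appears whole in one chunk."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

The recursive, semantic, markdown, and JSON strategies refine where the boundaries fall, but all produce the same shape of output: a list of chunks ready to embed.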
HNSW Indexing
Ultra-fast approximate nearest-neighbor search via USearch at ~95% recall
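For intuition, HNSW approximates the exhaustive cosine search below while visiting only a small fraction of the vectors; this brute-force version is the ground truth that a recall figure is measured against:

```python
import numpy as np

def exact_topk(query, vectors, k=5):
    """Exhaustive cosine search over an (n, d) float array.

    Returns the indices of the k most similar rows — the exact answer
    that an approximate index like HNSW tries to reproduce cheaply.
    """
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = v @ q
    return np.argsort(-sims)[:k]
```

Recall at 95% means the approximate index returns, on average, 95% of the ids this exact scan would return for the same k.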
Knowledge Graphs
Connect memories with directed edges and traverse them with BFS
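A bounded traversal of that kind can be sketched as a plain BFS over an adjacency map (the `edges` dict here stands in for CortexaDB's edge storage, which this sketch does not depict):

```python
from collections import deque

def bfs_related(edges, start, max_hops=2):
    """Collect node ids reachable from `start` within `max_hops`
    directed edges. Returns {node_id: hop_distance}."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # don't expand past the hop budget
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return seen
```

The hop distances this returns are exactly the `graph_hops` signal a hybrid query can fold into its ranking.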
Hard Durability
WAL and segmented storage ensure crash safety and data integrity
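The standard WAL recipe behind this kind of guarantee — frame each record with its length and a checksum, then fsync before acknowledging the write — looks roughly like this (a generic sketch, not CortexaDB's on-disk format):

```python
import os
import struct
import zlib

def wal_append(path, payload: bytes):
    """Append one length + CRC framed record and force it to disk.

    On recovery, a reader re-checks each CRC; a torn write at the
    tail fails the check and is discarded instead of corrupting state.
    """
    frame = struct.pack("<II", len(payload), zlib.crc32(payload)) + payload
    with open(path, "ab") as f:
        f.write(frame)
        f.flush()
        os.fsync(f.fileno())  # data is durable before the write is acknowledged
```

The fsync is what turns "probably saved" into "survives a crash or power loss": without it, the record may sit only in the OS page cache.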
Multi-Agent Collections
Isolate memories between agents within a single database file
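The isolation model can be pictured as namespaced sub-stores inside one file; this toy in-memory version (not CortexaDB's API) shows why one agent's search can never see another's memories:

```python
class Collections:
    """Toy model of per-agent isolation inside a single store."""

    def __init__(self):
        self._store = {}  # agent name -> that agent's memories

    def add(self, agent, memory):
        self._store.setdefault(agent, []).append(memory)

    def search(self, agent, predicate):
        # Only this agent's partition is ever scanned,
        # so cross-agent leakage is impossible by construction.
        return [m for m in self._store.get(agent, []) if predicate(m)]
```

Isolation is structural rather than filter-based: there is no query that can reach across partitions by accident.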
Why CortexaDB is the best choice
vs ChromaDB
In local mode, Chroma layers Python over external embedded databases, adding multi-millisecond overhead per query and slowing batch operations.
vs LanceDB
LanceDB excels at massive datasets, but its columnar storage adds fixed overhead to single-item reads and frequent updates.
vs FAISS / sqlite-vec
Raw C++ FAISS leaves persistence entirely to you, while SQLite vector extensions typically spend 1-5 ms per exact search.