v1.0.1 Stable

The database for AI Agent Memory

CortexaDB is a simple, fast, and durable embedded database designed specifically for AI agent memory. Single file, zero dependencies, no server required.

agent.py
from cortexadb import CortexaDB
from cortexadb.providers.openai import OpenAIEmbedder

db = CortexaDB.open("agent.mem", embedder=OpenAIEmbedder())

# Store memories
db.add("User prefers dark mode")
db.add("User works at Stripe")

# Semantic search
hits = db.search("What does the user like?")
# => [Hit(id=1, score=0.87), Hit(id=2, score=0.72)]

Performance Benchmarks

HNSW Mode (10K vectors): 1.03 ms p50 · 952 QPS · 95% recall
Exact Mode (10K vectors): 16.38 ms p50 · 56 QPS · 100% recall

Benchmarks on M-series Mac · 10,000 embeddings × 384 dimensions · Debug build

Everything you need for agent memory

Built from the ground up for AI agents with hybrid retrieval, knowledge graphs, and rock-solid durability.

Hybrid Retrieval

Combine vector similarity, graph relations, and recency in a single query
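One way such a blended score can work is sketched below in plain Python. The weights, half-life, and `hybrid_score` function are illustrative assumptions, not CortexaDB's actual formula or API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_score(query_vec, mem_vec, graph_hops, age_seconds,
                 w_vec=0.6, w_graph=0.25, w_recency=0.15, half_life=3600.0):
    """Blend vector similarity, graph proximity, and recency into one score.

    graph_hops: BFS distance from a seed memory (0 = the seed itself).
    age_seconds: time since the memory was written.
    Weights are illustrative, not CortexaDB's defaults.
    """
    sim = cosine(query_vec, mem_vec)                   # semantic match
    proximity = 1.0 / (1.0 + graph_hops)               # closer in graph -> higher
    recency = math.exp(-math.log(2) * age_seconds / half_life)  # halves each hour
    return w_vec * sim + w_graph * proximity + w_recency * recency

# At identical vector similarity, a recent, well-connected memory
# outranks an older, more distant one:
recent = hybrid_score([1, 0], [1, 0], graph_hops=1, age_seconds=60)
stale = hybrid_score([1, 0], [1, 0], graph_hops=4, age_seconds=86400)
```

The design point is that all three signals collapse into a single ranking, so one query pass returns results ordered by combined relevance.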

Smart Chunking

Five strategies for document ingestion: fixed, recursive, semantic, Markdown, and JSON
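The simplest of these, fixed-size chunking with a sliding overlap, looks roughly like this. This is an illustrative sketch, not CortexaDB's implementation:

```python
def fixed_chunks(text, size=200, overlap=40):
    """Split text into fixed-size chunks with a sliding overlap.

    Each chunk starts (size - overlap) characters after the previous one,
    so neighboring chunks share `overlap` characters of context.
    """
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "".join(str(i % 10) for i in range(500))
chunks = fixed_chunks(doc, size=200, overlap=40)
# 500 chars with step 160 -> chunks starting at 0, 160, 320
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from either side; the other strategies (recursive, semantic, Markdown, JSON) pick split points more intelligently.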

HNSW Indexing

Ultra-fast approximate nearest neighbor search via USearch with 95% recall

Knowledge Graphs

Connect memories with directed edges and traverse them with BFS
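The traversal itself is ordinary breadth-first search over an adjacency list. A standalone sketch (CortexaDB's actual edge API may differ):

```python
from collections import deque

def bfs_related(edges, start, max_hops=2):
    """Return memory ids reachable from `start` within max_hops directed edges.

    edges: dict mapping memory id -> list of target ids.
    Returns {id: hop_distance}, excluding the start node itself.
    """
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # at the hop limit, don't expand further
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return {n: d for n, d in seen.items() if n != start}

graph = {1: [2, 3], 2: [4], 4: [5]}
# From memory 1, within 2 hops: 2 and 3 (1 hop), 4 (2 hops); 5 is 3 hops away.
reachable = bfs_related(graph, start=1, max_hops=2)
```

BFS guarantees each memory is reached at its minimum hop distance, which is what makes a hop-based proximity signal well-defined.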

Hard Durability

A write-ahead log (WAL) and segmented storage ensure crash safety and data integrity
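The core durability idea, append each record to a log and fsync before acknowledging the write, can be sketched in a few lines. This is illustrative only; CortexaDB's on-disk format and record layout are its own:

```python
import json
import os
import tempfile

def wal_append(path, record):
    """Append a JSON record to a write-ahead log and force it to disk.

    os.fsync ensures the bytes survive a crash before the write is
    acknowledged to the caller.
    """
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
        f.flush()
        os.fsync(f.fileno())

def wal_replay(path):
    """Rebuild in-memory state by replaying every logged operation in order."""
    state = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            op = json.loads(line)
            state[op["id"]] = op["text"]
    return state

path = os.path.join(tempfile.mkdtemp(), "agent.wal")
wal_append(path, {"id": 1, "text": "User prefers dark mode"})
wal_append(path, {"id": 2, "text": "User works at Stripe"})
state = wal_replay(path)
```

On restart after a crash, replaying the log reconstructs every acknowledged write; segmented storage then lets old log segments be compacted away once their contents are checkpointed.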

Multi-Agent Collections

Isolate memories between agents within a single database file

Why CortexaDB is the best choice

vs ChromaDB

Chroma uses Python plus external embedded databases in local mode, resulting in multi-millisecond overhead per query and slow batching.

Our Advantage: CortexaDB uses a single, unified Rust engine. Each call crosses the FFI boundary exactly once, yielding 10x faster ingestion and ~0.3ms query latency.

vs LanceDB

LanceDB is incredible for massive datasets, but its columnar nature creates fixed overhead for single-item reads and frequent updates.

Our Advantage: CortexaDB is tuned for OLTP agent workloads—fast, frequent reads and writes. Keeping the HNSW index in memory prevents the disk from becoming a bottleneck.

vs FAISS / sqlite-vec

Raw C++ FAISS requires manual persistence, while SQLite vector extensions can take 1–5 ms per exact search.

Our Advantage: We use USearch (state-of-the-art C++ SIMD) wrapped in a Rust storage engine with WAL. You get FAISS-level speeds with real database durability.