ARC LABS · VOL. 01 · ISSUE 01 · 2026.04.17 · BANGALORE
OPEN CORE · APACHE 2.0 · RUST

Infrastructure
for agent
cognition.

Three composable primitives for autonomous AI — memory, planning, reasoning. Embedded-first. Rust core. Polyglot SDKs. The stack you would build yourself, if you had the time.

“Agents forget between sessions, plan with no grounding, and reason without evidence. We are fixing that — one layer at a time.”

RUST CORE · TYPED MEMORY · PRE-WRITE FILTER · HYBRID RETRIEVAL · EMBEDDED / SELF-HOST / CLOUD · OPEN CORE · TS SDK · PY SDK · MCP · LONGMEMEVAL TARGET 85%+
01 / Why we exist

Agents forget everything.

AI agents are stateless by default. Every conversation starts from zero. The memory solutions that exist today dump unstructured text into a vector store and call it done — the result is junk extraction, temporal blindness, and no way to audit what an agent actually remembers.

Arc Labs builds the cognitive infrastructure that agents are missing. We started with memory because it is the foundation — an agent that cannot remember cannot plan, cannot reason, cannot improve. Our first product, Recall, is a typed, structured memory layer with a write pipeline that rejects noise before it ever reaches storage.

We build in Rust for correctness and performance. We ship open source under Apache 2.0 because infrastructure this foundational should be inspectable. We design embedded-first so you can run locally before you ever need a server. The goal is not another SaaS — it is a stack of primitives you own.

02 / The Cognitive Stack

Three primitives.
One stack.

01  RECALL   MEMORY LAYER     SHIPPING
02  PLAN     PLANNING LAYER   Q3 2026
03  REASON   REASONING LAYER  Q1 2027
01 / ACTIVE
recall.
MEMORY LAYER · v0.1
01 prefilter
02 extract
03 classify
04 resolve_refs
05 dedupe
06 conflict_check
07 persist

Rust-core memory layer that fixes junk-heavy extraction, temporal blindness, and operational friction. Seven-stage write pipeline rejects noise before storage. Typed schema with facts, preferences, events, entities, and relations. Drops into any agent framework.

<10% JUNK · <200ms P99 READ · 85%+ LONGMEMEVAL
Available Now · Explore
02 / SOON
plan.
PLANNING LAYER · Q3 2026

Turns goals into executable plans atop Recall context. Plans are DAGs of steps with subgoals, expected tools, and risk types. Maps steps to MCP tool calls, infers parameters from memories, replans on failure.

Q3 2026
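One way to picture a plan-as-DAG, sketched with Python's standard-library `graphlib`. The `Step` fields, tool names, and risk labels here are assumptions for illustration, not Plan's actual schema.

```python
from dataclasses import dataclass
from graphlib import TopologicalSorter

@dataclass(frozen=True)
class Step:
    """One node of a plan DAG. Field names are illustrative, not the schema."""
    id: str
    subgoal: str
    tool: str            # expected MCP tool to call
    risk: str = "low"    # e.g. "low" | "external-side-effect"

steps = [
    Step("fetch", "load relevant user context", tool="recall.search"),
    Step("draft", "write the reply",            tool="llm.generate"),
    Step("send",  "deliver the reply",          tool="email.send",
         risk="external-side-effect"),
]

# Dependencies map each step id to the ids it waits on; a topological
# sort yields a valid execution order and rejects cycles outright.
deps = {"draft": {"fetch"}, "send": {"draft"}}
order = list(TopologicalSorter(deps).static_order())

# Risk types are declared up front, so a runtime can gate dangerous steps.
risky = [s.id for s in steps if s.risk != "low"]
```

Declaring dependencies and risk as data, rather than burying them in prompt text, is what makes replanning on failure tractable: the runtime knows exactly which downstream steps a failed node invalidates.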
03 / SOON
reason.
REASONING LAYER · Q1 2027
query → retrieve → analyze → verify → decide

Structured, policy-aware reasoning for multi-step workflows. Combines chain-of-thought, self-consistency, Recall retrieval, and MCP tool coordination. Grounds answers in memories and tool outputs — not vibes.

Q1 2027
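The self-consistency-plus-grounding idea can be approximated in a few lines. The `Trace` shape and the voting rule below are illustrative assumptions, not Reason's design: sample several reasoning passes, discard answers that cite no evidence, and majority-vote over the rest.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Trace:
    """One sampled reasoning pass: a final answer plus the evidence it cited."""
    answer: str
    evidence: tuple      # ids of memories / tool outputs backing the answer

def self_consistent(traces):
    """Majority-vote across samples, counting only evidence-backed answers."""
    grounded = [t for t in traces if t.evidence]   # no citations, no vote
    if not grounded:
        return None
    winner, _ = Counter(t.answer for t in grounded).most_common(1)[0]
    cited = sorted({mid for t in grounded if t.answer == winner
                    for mid in t.evidence})
    return winner, cited
```

The returned citation set is what "not vibes" means in practice: the answer ships with the ids of the memories and tool outputs that support it.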
03 / Thesis
The next decade of AI isn't bigger models —
it's the infrastructure around them.
Memory. Planning. Reasoning.
Primitives you can own, audit, and run locally.
Arc Labs · Founding Thesis · 2026
04 / Principles

What we refuse to build.

Constraints are design decisions. Here are the ones we won't compromise on — even when it would be easier to.

01

Open core, Apache 2.0.

The engine is free, forever. We make money on hosting, not by paywalling basic features. Nothing that ships free today will ever become a paid tier.

02

Local-first is a feature.

Every primitive runs on your laptop — single binary, SQLite storage, zero dependencies. The cloud tier is a convenience, never a requirement.
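What embedded-first looks like in practice, sketched with Python's standard-library `sqlite3`. The table layout is an assumption for illustration, not Recall's actual schema; the point is that the whole store is one local file with zero server processes.

```python
import sqlite3

# Local-first sketch: the entire store is one SQLite database.
conn = sqlite3.connect(":memory:")   # swap in a file path ("agent.db") to persist
conn.execute("""
    CREATE TABLE memories (
        id   INTEGER PRIMARY KEY,
        kind TEXT NOT NULL,   -- fact | preference | event | entity | relation
        body TEXT NOT NULL
    )""")
conn.execute("INSERT INTO memories (kind, body) VALUES (?, ?)",
             ("preference", "prefers dark mode"))
rows = conn.execute(
    "SELECT body FROM memories WHERE kind = 'preference'").fetchall()
```

Everything a cloud tier adds (sync, sharing, hosted retrieval) layers on top of the same file format, so nothing about the local path is a degraded mode.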

03

Typed over text.

Flat text memories lose temporal and relational signal. Everything we store has a type, a provenance chain, a valid-from timestamp. Structure is the whole game.

04

Quality over quantity.

200 high-signal memories beat 10,000 noisy ones. Every pipeline has a pre-write filter that rejects more than it keeps. Fewer, better.

05

Provenance is non-negotiable.

Every memory links back to its source turn. Every retrieval is auditable. Every reasoning step cites its evidence. No black boxes.

06

Transparent pricing.

No '$249/mo for a basic feature' trap. LLM costs pass through at zero markup. If you can read a spreadsheet, you can predict your bill.

05 / Research & writing

What we're learning in public.

All writing
06 / Build with Recall

Everything you need to ship.

A quickstart for Tuesday afternoon, a Learn track for the deep dive, a glossary for quick lookups, and a playground when you want to see the system in motion.

Memory first.
Everything else follows.

Updates from the lab.

Engineering notes, research drops, occasional product updates. Roughly monthly.