Memory for Sales & CRM Agents
The problem
Sales is information-dense. A single deal touches many people, conversations, emails, and stages. A rep walking into a call with a buyer they've talked to twice before should have, ready in seconds: who else from that company has objected to what, what stage the deal is at, what was committed in the last call, and what's still open.
CRMs hold most of this — but as fields and notes, not as searchable, summarized, queryable memory. The CRM is the system of record; agent memory is the layer that makes it usable in real time.
What agent memory gives you
Sales agent memory is best modeled as a complement to the CRM, not a replacement. Three layers work together:
- Account-level memory — facts and recent events about the account, shared across reps. Aggregates objections, decisions, blockers from any interaction.
- Buyer-level memory — preferences and entity context about individual contacts. Communication style, decision authority, prior roles.
- Deal stage memory — events with temporal anchors. "Buyer asked about pricing in last call", "decision deadline mentioned for end of Q2".
The agent's job is to compose these at conversation start, surfacing the right facts for the right rep talking to the right buyer.
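The composition step can be sketched as a pure function over the three layers. The `Memory` and `CallBrief` shapes below are illustrative assumptions for this sketch, not the Recall schema:

```typescript
// Illustrative shapes -- assumptions for this sketch, not the Recall schema.
interface Memory { content: string; }

interface CallBrief {
  accountContext: string[];   // shared facts and events across reps
  buyerContext: string[];     // this contact's preferences and entity context
  recentDealEvents: string[]; // temporally anchored deal-stage events
}

// Compose the three layers into one brief at conversation start.
function composeBrief(account: Memory[], buyer: Memory[], dealStage: Memory[]): CallBrief {
  return {
    accountContext: account.map((m) => m.content),
    buyerContext: buyer.map((m) => m.content),
    recentDealEvents: dealStage.map((m) => m.content),
  };
}
```

The point of keeping the layers separate until this step is that each can be retrieved, scoped, and decayed independently.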
The account entity graph
Sales memory is organized around an entity graph with four node types: Accounts, Contacts (buyers), Deals, and Representatives. The edges between them carry typed relations:
- Account → [has_contact] → Contact
- Deal → [is_for] → Account
- Deal → [primary_contact] → Contact
- Deal → [assigned_to] → Rep
- Contact → [reports_to] → Contact (org hierarchy within the account)
- Contact → [influences] → Deal (economic vs. technical vs. champion)
This graph enables multi-hop queries that would be impossible with flat CRM data:
// "Who else from this account has talked to our team before?"
const accountContacts = await recall.search({
query: "prior contact conversations",
scope: { account_id: deal.account_id },
hints: { entities: [deal.account_id], hop: 2 },
types: ["event"],
limit: 20,
});
// "What objections have we seen at stage 4 from similar-sized accounts?"
const patterns = await recall.search({
query: "objection pricing migration concern stage 4",
scope: { org_id: "sales_org" }, // cross-account, anonymized
types: ["fact"],
filters: { tags: ["objection-pattern", "stage:4"] },
limit: 15,
});
The second query uses an aggregate scope (no account_id or contact_id) to surface anonymized patterns across the full deal history — insight without leaking customer specifics into any rep's prompt.
How the write pipeline behaves for sales agents
The 7-stage Recall write pipeline handles sales data with specific behavior at each stage.
pre_filter: Reject pleasantries, filler ("sounds good", "will do"), out-of-office autoreplies, and meeting invitations that contain no substantive content. Target relevance_threshold: 0.55 for call transcripts (they're verbose); relevance_threshold: 0.45 for email threads (denser content-to-noise ratio).
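A minimal pre_filter sketch using the thresholds above. The filler patterns and the scoring interface are illustrative assumptions, not the Recall pipeline API:

```typescript
// Filler patterns to reject outright -- an illustrative, non-exhaustive list.
const FILLER = [/^sounds good\b/i, /^will do\b/i, /out of office/i];

// Per-source relevance cutoffs from the text above.
const RELEVANCE_THRESHOLD: Record<string, number> = {
  call_transcript: 0.55, // verbose source, stricter cutoff
  email_thread: 0.45,    // denser content-to-noise ratio
};

function passesPreFilter(
  text: string,
  source: "call_transcript" | "email_thread",
  relevance: number,
): boolean {
  if (FILLER.some((re) => re.test(text))) return false; // pleasantries, autoreplies
  return relevance >= RELEVANCE_THRESHOLD[source];      // per-source cutoff
}
```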
extract creates:
- Objections → event with tags: ["objection", "type:pricing"] or ["type:migration"] or ["type:security"]
- Commitments → event with tags: ["commitment", "owner:buyer", "deadline:2026-Q2"]
- Decision-maker signals → fact ("CFO has final sign-off authority") + entity update
- Account context → fact ("using Salesforce CRM, 350 seat Enterprise plan")
- Stakeholder preferences → preference ("prefers async email over calls", "wants 1-pager not deck")
resolve_refs: "the CFO", "their technical lead", "the Salesforce admin" all need to resolve to entity IDs within the account graph. This requires building your entity graph proactively — import contact records from the CRM into the entity layer at onboarding. Without pre-seeded entities, the resolver falls back to creating new entity nodes for each pronoun, fragmenting the graph and breaking cross-call coherence.
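The fragmentation failure mode can be shown with a toy resolver. The shapes and matching logic here are illustrative, not the Recall resolver API:

```typescript
// Illustrative entity shape -- an assumption for this sketch.
interface EntityNode { id: string; name: string; role?: string; }

function resolveRef(phrase: string, graph: EntityNode[]): EntityNode {
  const role = phrase.replace(/^the\s+/i, "").toLowerCase();
  const match = graph.find((e) => e.role?.toLowerCase() === role);
  if (match) return match; // pre-seeded contact: resolves to the existing node
  // Fallback: a fresh node per unresolved phrase. This is exactly the
  // graph fragmentation the text warns about when CRM contacts were
  // never imported into the entity layer.
  return { id: `entity_${role.replace(/\s+/g, "_")}`, name: phrase, role };
}
```

With a pre-seeded graph, "the CFO" resolves to the imported contact; without one, every mention spawns a new disconnected node.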
conflict: When the deal stage changes, the old stage fact gets superseded. When a contact's decision authority changes (technical lead leaves, new champion emerges), the relation changes. The conflict stage detects these; configure it to sync the update immediately rather than holding for review.
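A toy supersede step for the conflict stage: a new deal-stage fact replaces the live one immediately rather than coexisting with it. The `Fact` shape is an illustrative assumption:

```typescript
// Illustrative fact shape -- an assumption for this sketch.
interface Fact { key: string; value: string; supersededBy?: string; }

function applyStageUpdate(store: Fact[], incoming: Fact): Fact[] {
  return [
    // Mark any live fact with the same key as superseded.
    ...store.map((f) =>
      f.key === incoming.key && !f.supersededBy ? { ...f, supersededBy: incoming.value } : f
    ),
    incoming, // sync immediately; no review hold for stage changes
  ];
}
```

The old fact is retained (superseded, not deleted) so retrospective queries can still see the stage history.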
synthesize: for call transcripts, the synthesis step compresses verbose back-and-forth into the durable signal: the objection type, the commitment structure, the decision criteria. A 45-minute discovery call should produce 6–10 memories, not 60. Tune the synthesis prompt to prioritize named commitments with owners and deadlines, explicit objections with stated reasons, and any mention of decision criteria or evaluation timeline.
score: confidence scoring in a sales context weighs two factors heavily — explicitness (was the claim stated directly, or inferred?) and source reliability (transcript of a call with the buyer vs. rep's post-call notes vs. email forwarded from a third party). A directly stated commitment from the buyer in a recorded call should score ≥ 0.85; an inferred sentiment from a rep's summary note should score ≤ 0.60. Use these thresholds to gate what syncs back to the CRM (high-confidence only) vs. what stays in Recall as working context (full range).
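The gating rule can be sketched as a partition over scored memories. The threshold follows the text above; the `ScoredMemory` shape is an illustrative assumption:

```typescript
// Illustrative shape -- an assumption for this sketch.
interface ScoredMemory { content: string; confidence: number; }

// Directly stated, recorded-call claims score >= 0.85 and sync to the CRM.
const CRM_SYNC_THRESHOLD = 0.85;

function partitionForSync(memories: ScoredMemory[]) {
  return {
    syncToCrm: memories.filter((m) => m.confidence >= CRM_SYNC_THRESHOLD),
    recallOnly: memories, // the full range stays queryable as working context
  };
}
```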
CRM as system of record — the integration boundary
The mental model: Recall is the working surface; the CRM is the durable record. They serve different functions and must not be conflated.
What lives in the CRM only (source of truth, never in Recall as primary):
- Contract terms and pricing
- Legal agreements
- Revenue metrics and forecasting
- Ownership and territory assignment
What lives in Recall only (working surface, secondary sync to CRM):
- Verbatim conversation context
- Rep observations that didn't fit a CRM field
- Inferred buyer sentiment
- Informal commitments made in conversation
What syncs bidirectionally (CRM as primary, Recall as secondary for querying):
- Deal stage (CRM → Recall at session start, refreshed each call)
- Contact records (CRM → Recall entity layer on import)
- Activity log (Recall events → CRM notes on confirmed high-confidence events)
async function prepCallBrief(dealId: string, contactId: string): Promise<Brief> {
// Step 1: Sync CRM state into working context (not into Recall store)
const [crmDeal, crmContact] = await Promise.all([
crm.getDeal(dealId),
crm.getContact(contactId),
]);
// Step 2: Pull conversation memory from Recall
const [objections, commitments, preferences] = await Promise.all([
recall.search({
query: "objections concerns blockers",
scope: { account_id: crmDeal.accountId },
filters: { tags: ["objection"] },
types: ["event"],
limit: 10,
}),
recall.search({
query: "commitments next steps follow-up",
scope: { account_id: crmDeal.accountId, contact_id: contactId },
filters: { tags: ["commitment"] },
types: ["event"],
limit: 10,
}),
recall.search({
query: "communication preference meeting style technical level",
scope: { contact_id: contactId },
types: ["preference"],
limit: 10,
}),
]);
// CRM data and Recall memories are assembled into the brief — not merged into each other
return assembleBrief({ crmDeal, crmContact, objections, commitments, preferences });
}
Objection pattern mining across deals
The highest-value cross-account query in sales memory is objection pattern mining: which objection types appear at which deal stages, and what intervention resolves them at what rate?
This requires a two-layer approach:
- Store individual objections as events scoped to the account (customer-specific, private)
- Periodically extract anonymized patterns from across deals, tagged as fact type in an aggregate scope
// Step 1: Store individual objection (account-scoped, private)
await recall.write({
scope: { account_id: deal.accountId },
candidates: [{
type: "event",
content: "CFO raised migration cost concern; referenced prior failed ERP migration",
tags: ["objection", "type:migration-cost", `stage:${deal.stage}`],
entities: [deal.id, contact.id],
}],
});
// Step 2: Periodic batch job aggregates anonymized patterns (org-scoped, no customer data)
// This runs server-side in your data pipeline, not per-call
await recall.write({
scope: { org_id: "sales_org" },
candidates: [{
type: "fact",
content: "Migration-cost objections at stage 4 resolve at 68% rate when addressed with ROI case study + IT meeting",
tags: ["objection-pattern", "type:migration-cost", "stage:4"],
confidence: 0.78, // computed from sample size and variance
}],
});
The pre-call brief can then surface both: "This CFO specifically raised migration concerns (private)" and "Migration-cost objections at stage 4 close 68% of the time with the ROI play (pattern)."
Retriever strategy for sales agents
Retriever weights for the pre-call brief query:
- Entity-graph (weight 0.40): highest priority. The call brief is fundamentally an entity composition task — who is this person, what account are they from, what's their relationship to the deal, who else is involved.
- Temporal (weight 0.25): second highest. "What happened in the last 30 days with this deal?" is almost always the most relevant query. Temporal retrieval pulls events ordered by recency.
- Semantic (weight 0.20): for soft-matching objections, sentiment signals, preferences that aren't exact keywords.
- BM25 (weight 0.15): for exact product names, competitor mentions, specific technical objections ("SSO", "SOC 2 Type II", "Salesforce API rate limits").
For post-call memory extraction, there's no retrieval — the pipeline writes. For competitive analysis queries ("what have we heard about Competitor X across deals?"), flip to BM25-dominant (0.50) since competitor names need exact-match precision.
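The two weight profiles above can be captured as named configurations. The profile shape is an assumption about how a hybrid retriever might be configured, not a fixed Recall API:

```typescript
type RetrieverWeights = { entityGraph: number; temporal: number; semantic: number; bm25: number };

const PROFILES: Record<string, RetrieverWeights> = {
  // Pre-call brief: entity composition first, recency second.
  preCallBrief: { entityGraph: 0.40, temporal: 0.25, semantic: 0.20, bm25: 0.15 },
  // Competitive analysis: exact-match precision on competitor names.
  competitive:  { entityGraph: 0.15, temporal: 0.15, semantic: 0.20, bm25: 0.50 },
};

function weightsFor(queryKind: keyof typeof PROFILES): RetrieverWeights {
  const w = PROFILES[queryKind];
  const sum = w.entityGraph + w.temporal + w.semantic + w.bm25;
  if (Math.abs(sum - 1) > 1e-9) throw new Error(`weights for ${String(queryKind)} must sum to 1`);
  return w;
}
```

Validating that each profile sums to 1 catches a common tuning mistake: adjusting one weight without rebalancing the others.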
Stage-aware decay and deal lifecycle management
Deals have a lifecycle: discovery → qualification → evaluation → negotiation → closed (won/lost). Memory relevance changes sharply at lifecycle boundaries.
Closed-lost deals: events and facts from the deal remain valuable for pattern analysis (what went wrong, what objections appeared) but are not relevant to a new deal with the same account. Tag memories with deal_id; scope future queries to exclude closed-lost deal memories unless specifically doing retrospective analysis.
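The exclusion rule can be sketched as a filter over deal-tagged memories. The shapes are illustrative; how closed-lost status is tracked is an assumption:

```typescript
// Illustrative shape -- memories carry the deal they came from.
interface DealMemory { content: string; dealId: string; }

// Context for a new deal with the same account: keep closed-lost memories in
// the store for retrospectives, but drop them from the working context.
function contextForNewDeal(memories: DealMemory[], closedLostDealIds: Set<string>): DealMemory[] {
  return memories.filter((m) => !closedLostDealIds.has(m.dealId));
}
```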
Closed-won deals: switch to customer success mode. The knowledge gained during the sales process (technical requirements, stakeholder map, decisions) becomes onboarding context. Persist explicitly with tag ["post-sale", "onboarding-relevant"].
Active deal events: standard event half-life (30 days) is too aggressive for multi-month enterprise deals. Override to 90 days: "objection raised 8 weeks ago" is still relevant context for a deal in evaluation.
// Deal-stage-aware write with custom decay
await recall.write({
scope: { account_id, deal_id: deal.id },
candidates: [{
type: "event",
content: "Technical lead reviewed security architecture; no blockers found",
tags: ["technical-eval", `stage:${deal.stage}`, "deal-event"],
valid_from: new Date().toISOString(),
half_life_days: 90, // override default 30d for enterprise deal cycle
}],
});
Background freshness decay jobs apply the half-life formula freshness(t) = 2^(-t/τ), where τ is the configured half-life. After 90 days, the event above scores at 0.5 freshness; after 180 days, at 0.25 — deprioritized but not deleted.
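The decay formula is small enough to state directly:

```typescript
// freshness(t) = 2^(-t / tau), where tau is the configured half-life in days.
function freshness(ageDays: number, halfLifeDays: number): number {
  return Math.pow(2, -ageDays / halfLifeDays);
}
```

With the 90-day override above, an event is at half weight after one deal cycle and a quarter weight after two, which matches multi-month enterprise timelines far better than the 30-day default.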
Hallucination defense for customer-facing memory
In sales, a hallucinated "fact" surfaced in front of a customer is a trust-ending event. Three-layer defense is non-negotiable:
- Write-time grounding (escape rate ε₁ = 0.20): the extraction LLM cites specific conversation spans for every claim. Claims not grounded in the source turn are rejected. This catches hallucinated profiles — claims the model invented about the buyer that weren't in the conversation.
- Store-time consistency (ε₂ = 0.50): the consistency scanner checks new facts against existing memories for contradiction. If the pipeline is about to store "CFO has sign-off authority" but the store already has "VP of Engineering is the final decision-maker (confidence 0.85)", it flags the conflict rather than silently accepting the new fact.
- Read-time faithfulness (ε₃ = 0.30): the faithfulness checker validates that the agent's pre-call brief is supported by the memories in context. If the brief says "buyer is enthusiastic about the migration timeline" but no memory supports that claim, it's flagged before the brief reaches the rep.
Multiplicative escape rate: 0.20 × 0.50 × 0.30 = 0.03. Three percent of genuine junk passes through all three layers. In a customer-facing context, additional human review of high-stakes briefs (enterprise deals > $500K) adds a fourth check.
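The compounding is simple multiplication over the layer escape rates:

```typescript
// Each layer lets through a fraction eps of junk; independent layers compound.
function combinedEscapeRate(layerRates: number[]): number {
  return layerRates.reduce((acc, eps) => acc * eps, 1);
}
```

For the rates above, combinedEscapeRate([0.20, 0.50, 0.30]) ≈ 0.03, i.e. the three layers together let through about 3% of what any single layer would miss. This assumes the layers fail independently, which is the usual simplification.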
Measuring sales memory quality
Pre-call brief accuracy: after each call, have the rep flag whether the brief was accurate (correct information) or misleading (wrong or stale information). Target: > 90% accurate. Misleading briefs are worse than no brief — they prime the rep with wrong context.
Objection coverage: after a deal closes (won or lost), review the objections the buyer raised. What fraction of them appeared in the Recall memory before the call where they were raised? If objections are appearing that the memory didn't surface, your retrieval precision is low for this query type.
CRM sync fidelity: of the high-confidence memories that should sync back to the CRM (commitments, decision authority updates, stage progressions), what fraction actually land in the CRM? If the sync is lossy, your CRM data quality degrades over time.
Retrieval latency: the pre-call brief must be ready before the rep dials. Target p95 latency under 800ms for the full brief assembly (three parallel Recall queries + CRM fetch). If you're consistently above that, profile which query is slow: large limit values on entity-graph hops are the most common culprit — reduce hop: 2 to hop: 1 for the contact-discovery query and paginate the second hop lazily.
Handling rep transitions and account handoffs
When a rep leaves or an account is reassigned, the memory should transfer cleanly. This is one of the strongest arguments for account-scoped memory over rep-scoped memory: everything stored at scope: { account_id } is automatically available to the incoming rep without any migration step.
What does require explicit handling on a handoff:
- Rep-specific preferences: any memory scoped to { account_id, rep_id } (e.g., "this rep prefers to lead with the technical integration story") is stale and should be retired. The incoming rep will build their own interaction style.
- Open commitments: commitments from a departing rep that were never completed need to be surfaced explicitly in the first handoff brief, not just present in the memory store. Write a dedicated handoff event: { type: "event", tags: ["handoff", "open-commitment"], content: "Prior rep committed to providing reference customer intro by end of month" }.
- Stakeholder trust signals: buyer preferences and rapport signals ("responds well to technical depth", "prefers data over anecdotes") should be explicitly flagged as still-valid in the handoff brief. They were built with the prior rep but reflect the buyer's actual preferences, not the rep relationship.
// Generate handoff brief for incoming rep
const handoffMemories = await recall.search({
query: "open commitments pending follow-up unresolved objections stakeholder preferences",
scope: { account_id: deal.accountId },
filters: { tags: ["-handoff-complete"] }, // exclude already-resolved handoff items
types: ["event", "preference", "fact"],
sort: { field: "recency", direction: "desc" },
limit: 25,
});
Scoping decisions and privacy boundaries
Sales memory touches personal information about buyers — communication preferences, inferred seniority and authority, relationship dynamics. Scope decisions are also privacy decisions.
Per-contact scoping rule: preferences and personal-style observations ("prefers not to be CCed on internal emails", "responds slowly on Fridays") must be scoped to { contact_id }, never to { account_id }. Scoping them at the account level means any rep querying the account gets personal observations about a specific person — a privacy and data-minimization issue.
Cross-account pattern isolation: when the objection-pattern mining batch job runs, it must operate on pre-anonymized data. The job reads account-scoped objection events, strips all account and contact identifiers, and writes patterns to the org_id scope. The pattern fact must not contain any information that would allow a reader to reverse-engineer which customer it came from.
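The stripping step can be sketched as a projection that keeps only the aggregatable dimensions. The shapes and tag conventions follow the earlier examples; the function itself is an illustrative assumption, not the Recall batch API:

```typescript
// Illustrative shapes -- assumptions for this sketch.
interface ObjectionEvent { content: string; accountId: string; contactId: string; tags: string[]; }
interface PatternInput { objectionType: string; stage: string; }

function anonymize(e: ObjectionEvent): PatternInput {
  // Keep only the aggregatable dimensions. Drop identifiers AND free text,
  // since the content string can embed customer specifics.
  const type = e.tags.find((t) => t.startsWith("type:")) ?? "type:unknown";
  const stage = e.tags.find((t) => t.startsWith("stage:")) ?? "stage:unknown";
  return { objectionType: type.slice("type:".length), stage: stage.slice("stage:".length) };
}
```

Projecting to typed tags rather than redacting the content string is the safer design: there is nothing left to leak, by construction.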
Data retention: closed deals older than your retention policy should have their account-scoped memories purged or downsampled. The pattern layer (org-scoped) is the residual value after the raw conversation data ages out. Design the pipeline to extract patterns continuously rather than holding raw events indefinitely in hopes of running analysis later.
These constraints interact with your retrieval scope configuration in a useful way: if you've consistently scoped data correctly, the retrieve call itself becomes the privacy enforcer. A query scoped to { account_id: X } cannot surface contact-scoped data from account Y by construction — the scope is a hard boundary, not a filter hint.
Example flow
1. Rep opens call dialer. Agent retrieves account memory + this buyer's memory + recent deal events.
2. Agent surfaces pre-call brief. "Buyer raised pricing concern in last call. Account is at stage 4; CFO's approval still pending. Last touch was Tuesday — they emailed asking about integration with Salesforce."
3. Call happens; rep takes notes. Notes go through the write pipeline. Pre-filter rejects pleasantries. Extraction picks out commitments, objections, next steps.
4. New events appear in the deal graph. Objection: "concerned about migration cost". Commitment: "will introduce CFO next week". Both link to the deal entity.
5. CRM sync (write-through). High-confidence memories sync back to the CRM as structured fields. Memory is the working surface; CRM is the durable record.
6. Next rep on the deal benefits. Account-level memory persists across reps. The CFO meeting commitment is visible to whoever picks up the deal next.
Patterns that work
- CRM as system of record: memory is the working surface; the CRM is the source of truth for revenue ops. Sync high-confidence memories back; never delete the CRM in favor of memory.
- Account graph: build an entity graph linking buyers within an account, deals to accounts, and reps to deals. Multi-hop queries surface "similar deals at similar stage" insights.
- Objection patterns as relations: track objection types and their resolution patterns across deals. "Pricing-concern" objections at stage 4 resolve X% of the time with Y intervention — actionable signal.
- Stage-aware decay: closed-won and closed-lost deals decay aggressively (events were valuable; they're done). Active deal events stay fresh.
Pitfalls to avoid
- Replacing the CRM: memory is fast and queryable but not authoritative. Compliance, reporting, and forecasting need the CRM's durability and audit trail.
- Fabricated memory in pitches: if the agent surfaces a "fact" the buyer never said, that's a hallucination in front of a customer. Three-layer hallucination defense is non-negotiable here.
- Cross-account leak in the graph: "similar deals" patterns must aggregate without surfacing other accounts' specifics in any rep-facing prompt. Anonymize at the prompt boundary.
- Stale deal stage: the stage updates in the CRM; memory doesn't follow. Result: the agent thinks the deal is at stage 3 when it's actually at stage 5. Sync stage from the CRM at session start, not from memory.
Code sketch
// Pre-call brief
const brief = await recall.search({
query: `buyer:${buyer.id} objections OR commitments OR next-steps`,
scope: { account_id: deal.account_id },
// Pull account-wide context, not just this rep's
filters: { entities: [buyer.id, deal.id] },
types: ["event", "fact"],
limit: 30,
});
// After the call
await recall.write({
scope: { account_id: deal.account_id },
source: { call: callId, rep: rep.id },
// extracted commitments and objections from the pipeline become events
candidates: extractedCandidates,
});
Build this with Recall
Recall is open source and ships with the architecture above out of the box.