Code Plugin · source linked

Mem0 Plugin v1.0.5

Mem0 memory backend for OpenClaw — platform or self-hosted open-source

@mem0/openclaw-mem0 · runtime: openclaw-mem0 · by @kartik-mem0
Community code plugin. Review compatibility and verification before install.
openclaw plugins install clawhub:@mem0/openclaw-mem0
Latest release: v1.0.5 · Download zip

Capabilities

configSchema
Yes
Executes code
Yes
HTTP routes
0
Plugin kind
memory
Runtime ID
openclaw-mem0

Compatibility

Built With OpenClaw Version
2026.4.1
Min Gateway Version
>=2026.3.28
Plugin Api Range
>=2026.3.28
Plugin Sdk Version
2026.4.1
Security Scan
VirusTotal
Pending
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name and description match the included code: this is a Mem0 memory backend that auto-recalls and auto-captures conversation data. However, the registry metadata claims 'Required env vars: none' and 'Primary credential: none', while the SKILL.md and code clearly describe (a) a platform mode requiring a Mem0 API key and (b) an open-source mode that uses OPENAI_API_KEY by default for embeddings/LLM. This metadata mismatch is an inconsistency the developers should fix.
Instruction Scope
The SKILL.md instructs the agent to automatically inject recalled memories before a response and to auto-capture/submit extracted facts after responses. That's consistent with a memory plugin, but has privacy/flow implications: captures may be sent to external services (Mem0 cloud or configured embedder/LLM) and the plugin runs these actions 'silently' (fire-and-forget). The SKILL.md also defines extraction prompts and system preambles — the prompt-injection scanner flagged 'system-prompt-override', which in this context is expected because the plugin supplies system prompt text to the agent, but it’s worth noting.
Install Mechanism
There is no remote 'download and extract' install spec; package contents include source and lockfiles (pnpm/package-lock.json). Dependencies are from public registries (mem0ai and many common packages). No arbitrary external URL installs were found in the manifest provided.
Credentials
Although registry metadata declares no required env vars, the SKILL.md documents that platform mode requires MEM0 API keys and OSS mode uses OPENAI_API_KEY by default (or other provider apiKeys in oss.embedder.config). The plugin therefore expects credentials for third-party services; the manifest should list these. Additional concern: the package contains a file named memory.db and a .claude/settings.local.json — shipping a DB or local settings in the package can accidentally include sensitive data and is unexpected for a plugin release.
Persistence & Privilege
The skill writes local state and vector store files (default path ~/.mem0/vector_store.db and plugin stateDir) and registers tools that can list, add, update, and delete memories (including a bulk delete requiring explicit confirm). 'always' is false and model invocation is allowed (default), which is expected. No evidence the plugin modifies other skills or requests 'always: true'.
Scan Findings in Context
[system-prompt-override] expected: The SKILL.md and skill loader intentionally provide system prompts and extraction preambles used to drive memory extraction/recall. The scanner flagged potential system-prompt override text — this is expected for a memory backend that injects prompt material, but you should review those prompt strings to ensure they don't grant the plugin unconstrained control over the agent.
What to consider before installing
This plugin appears to implement the advertised Mem0 memory backend, but take these steps before installing:

1. Confirm the registry metadata is updated to declare required credentials (MEM0_API_KEY for platform use; OPENAI_API_KEY or other provider keys for OSS defaults).
2. Inspect the packaged files (memory.db and .claude/settings.local.json) locally; ensure they contain no real user secrets or data, and treat shipped DBs as potentially sensitive.
3. Decide whether you trust sending conversation content to Mem0 Cloud or configured embedders/LLMs (SKILL.md describes both modes). If you will use cloud services, limit scope and rotate keys as needed.
4. Review the SKILL.md extraction/system prompts and the code paths that perform auto-capture to verify they won't capture or transmit sensitive tokens/credentials; the changelog notes an attempt to detect and avoid storing credentials, but verify for your use case.
5. Prefer running in self-hosted OSS mode with a locally controlled vector store if you need stronger data-locality guarantees.

If any of these points are unacceptable or the metadata remains inconsistent, treat the package as untrusted and do not install. If you proceed, monitor the plugin's network activity and the files it writes (e.g., ~/.mem0/), and restrict its access where possible.

Verification

Tier
source linked
Scope
artifact only
Summary
Validated package structure and linked the release to source metadata.
Commit
78ca85a260b7
Tag
78ca85a260b76da5f665d05ecd42e1a9b28a5342
Provenance
No
Scan status
pending

Tags

latest
1.0.5

@mem0/openclaw-mem0

Long-term memory for OpenClaw agents, powered by Mem0.

Your agent forgets everything between sessions. This plugin fixes that — it watches conversations, extracts what matters, and brings it back when relevant. Automatically.

Quick Start

openclaw plugins install @mem0/openclaw-mem0

Platform (Mem0 Cloud)

Get an API key from app.mem0.ai:

openclaw mem0 init --api-key <your-key> --user-id <your-user-id>

Or configure manually in openclaw.json:

"openclaw-mem0": {
  "enabled": true,
  "config": {
    "apiKey": "${MEM0_API_KEY}",
    "userId": "alice"
  }
}

Open-Source (Self-hosted)

No Mem0 key needed. Requires OPENAI_API_KEY for default embeddings and LLM. Vectors are stored locally in SQLite at ~/.mem0/vector_store.db — no external database required.

Defaults: text-embedding-3-small for embeddings, gpt-5.4 for fact extraction.

"openclaw-mem0": {
  "enabled": true,
  "config": {
    "mode": "open-source",
    "userId": "alice"
  }
}

Customize the embedder, vector store, or LLM via the oss block:

"config": {
  "mode": "open-source",
  "userId": "alice",
  "oss": {
    "embedder": { "provider": "openai", "config": { "model": "text-embedding-3-small" } },
    "vectorStore": { "provider": "qdrant", "config": { "host": "localhost", "port": 6333 } },
    "llm": { "provider": "openai", "config": { "model": "gpt-5.4" } }
  }
}

All oss fields are optional. See the Mem0 OSS docs for supported providers.

How It Works

<p align="center"> <img src="https://raw.githubusercontent.com/mem0ai/mem0/main/docs/images/openclaw-architecture.png" alt="Architecture" width="800" /> </p>

Auto-Recall — Before the agent responds, the plugin searches Mem0 for relevant memories and injects them into context.

Auto-Capture — After the agent responds, the conversation is filtered through a noise-removal pipeline and sent to Mem0. New facts get stored, stale ones updated, duplicates merged.

Both run silently. No prompting, no manual calls required.
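As a hedged sketch of the recall half, assuming a hypothetical `Memory` shape (the plugin's real types are not shown here): search hits below the configured `searchThreshold` are dropped, the best `topK` survivors are kept, and the result is rendered as a context preamble.

```typescript
// Illustrative sketch of auto-recall ranking; the type and function names
// are assumptions, not the plugin's actual code.
type Memory = { id: string; text: string; score: number };

// Keep hits at or above the similarity threshold, best-first, capped at topK,
// then render them as a context block for injection before the agent responds.
function buildRecallContext(hits: Memory[], topK: number, threshold: number): string {
  const relevant = [...hits]
    .filter((m) => m.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
  return relevant.length === 0
    ? ""
    : "Relevant memories:\n" + relevant.map((m) => `- ${m.text}`).join("\n");
}
```

The capture half runs the mirror image after the response: filter noise, extract facts, and submit them fire-and-forget.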

Memory Scopes

  • Session (short-term) — Scoped to the current conversation via run_id. Recalled alongside long-term memories.
  • User (long-term) — Persistent across all sessions. Default for memory_add.

Multi-Agent Isolation

Each agent gets its own memory namespace automatically via session key routing (agent:<name>:<uuid> maps to userId:agent:<name>). Single-agent setups are unaffected.
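The routing above can be sketched as follows; the function name and regex are illustrative, not the plugin's actual implementation:

```typescript
// Illustrative sketch of session-key routing: a session key of the form
// "agent:<name>:<uuid>" is namespaced to "<userId>:agent:<name>", while any
// other session key leaves the base userId untouched (single-agent setups).
function routeUserId(baseUserId: string, sessionKey: string): string {
  const match = /^agent:([^:]+):/.exec(sessionKey);
  return match ? `${baseUserId}:agent:${match[1]}` : baseUserId;
}
```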

Agent Tools

Eight tools are registered for agent use:

| Tool | Description |
| --- | --- |
| `memory_search` | Search by natural language query. Supports `scope` (`session`, `long-term`, `all`), `categories`, `filters`, and `agentId`. |
| `memory_add` | Store facts. Accepts `text` or `facts` array, `category`, `importance`, `longTerm`, `metadata`. |
| `memory_get` | Retrieve a single memory by ID. |
| `memory_list` | List all memories. Filter by `userId`, `agentId`, `scope`. |
| `memory_update` | Update a memory's text in place. Preserves history. |
| `memory_delete` | Delete by `memoryId`, `query` (search-and-delete), or `all: true` (requires `confirm: true`). |
| `memory_event_list` | List recent background processing events. Platform mode only. |
| `memory_event_status` | Get status of a specific event by ID. Platform mode only. |
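As an illustration, a `memory_search` call might carry arguments like the following; the exact tool-call envelope depends on OpenClaw's format, so treat this shape as an assumption:

```json
{
  "tool": "memory_search",
  "arguments": {
    "query": "what languages does the user prefer",
    "scope": "all",
    "categories": ["preferences"],
    "agentId": "coder"
  }
}
```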

CLI

All commands: openclaw mem0 <command>.

# Memory operations
openclaw mem0 add "User prefers TypeScript over JavaScript"
openclaw mem0 search "what languages does the user know"
openclaw mem0 search "preferences" --scope long-term
openclaw mem0 get <memory_id>
openclaw mem0 list --user-id alice --top-k 20
openclaw mem0 update <memory_id> "Updated preference text"
openclaw mem0 delete <memory_id>
openclaw mem0 delete --all --user-id alice --confirm
openclaw mem0 import memories.json

# Management
openclaw mem0 init
openclaw mem0 init --api-key <key> --user-id alice
openclaw mem0 status
openclaw mem0 config show
openclaw mem0 config get api_key
openclaw mem0 config set user_id alice

# Events (platform only)
openclaw mem0 event list
openclaw mem0 event status <event_id>

# Memory consolidation
openclaw mem0 dream
openclaw mem0 dream --dry-run

Configuration Reference

General

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `mode` | `"platform" \| "open-source"` | `"platform"` | Backend mode |
| `userId` | `string` | OS username | User identifier. All memories scoped to this value. |
| `autoRecall` | `boolean` | `true` | Inject relevant memories before each turn |
| `autoCapture` | `boolean` | `true` | Extract and store facts after each turn |
| `topK` | `number` | `5` | Max memories returned per recall |
| `searchThreshold` | `number` | `0.5` | Minimum similarity score (0-1) |
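Putting the general keys together, a config that overrides the recall defaults might look like this (values are illustrative):

```json
"openclaw-mem0": {
  "enabled": true,
  "config": {
    "mode": "platform",
    "userId": "alice",
    "autoRecall": true,
    "autoCapture": true,
    "topK": 10,
    "searchThreshold": 0.6
  }
}
```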

Platform Mode

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | | Required. Mem0 API key (supports `${MEM0_API_KEY}`) |
| `customInstructions` | `string` | (built-in) | Custom extraction rules |
| `customCategories` | `object` | (12 defaults) | Category name to description map |

Open-Source Mode

All fields optional. Defaults: text-embedding-3-small embeddings, local SQLite vector store (~/.mem0/vector_store.db), gpt-5.4 LLM.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `customPrompt` | `string` | (built-in) | Extraction prompt |
| `oss.embedder.provider` | `string` | `"openai"` | Embedding provider |
| `oss.embedder.config` | `object` | | Provider config (`apiKey`, `model`, `baseURL`) |
| `oss.vectorStore.provider` | `string` | `"memory"` | Vector store provider (see the Mem0 OSS docs) |
| `oss.vectorStore.config` | `object` | | Provider config (`host`, `port`, `collectionName`, `dbPath`) |
| `oss.llm.provider` | `string` | `"openai"` | LLM provider |
| `oss.llm.config` | `object` | | Provider config (`apiKey`, `model`, `baseURL`) |
| `oss.historyDbPath` | `string` | | SQLite path for edit history |

License

Apache 2.0