Code plugin · source linked

OpenClaw · v0.2.0

Semantic memory search plugin for OpenClaw — persistent cross-session memory powered by Milvus vector search. Automatically captures conversation summaries and recalls relevant context.

memsearch · runtime: memsearch · by @zc277584121
Community code plugin. Review compatibility and verification before install.
openclaw plugins install clawhub:memsearch
Latest release: v0.2.0

Capabilities

configSchema
Yes
Executes code
Yes
HTTP routes
0
Plugin kind
memory
Runtime ID
memsearch

Compatibility

Built with OpenClaw version
2026.3.23
Plugin API range
>=2026.3.11
Plugin SDK version
2026.3.23
Security Scan
VirusTotal: Pending
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name, description, and code align: this is a memory plugin that summarizes turns, writes markdown memory files under an OpenClaw workspace, and uses a memsearch CLI (or uvx fallback) plus Milvus-derived indexes. The scripts (derive collection, parse transcript) and index.ts behaviors are coherent with the described purpose.
Instruction Scope
SKILL.md and the plugin instruct the agent to read and inject recent memories, expand chunks, and parse transcript files. The memory_get/memory_transcript flow can cause the agent to read referenced transcript file paths; SKILL.md even suggests 'try reading the referenced file directly', which grants the agent broad discretion to open and parse files in project and workspace directories. Combined with the pre-scan 'system-prompt-override' flag, this expands the agent's runtime scope beyond simple keyword recall and merits review.
Install Mechanism
No opaque downloads are embedded in the package. There is an install.sh that calls standard tooling (openclaw plugins install) and recommends installing memsearch via uv/uvx. The README/install instructions reference running an external uv installer (curl https://astral.sh/uv/install.sh | sh) as a user action — that is common but still a higher-risk manual step if you follow it without reviewing the remote installer.
Credentials
The skill declares no required environment variables or credentials. The code reads normal process.env values (HOME, MEMSEARCH_NO_WATCH) for expected behavior. There are no requests for cloud keys, passwords, or unrelated secrets.
Persistence & Privilege
Skill is not 'always: true' and uses standard OpenClaw plugin registration and hooks. It stores memories under the agent's workspace (.memsearch/memory) and does not attempt to modify other skills or global agent settings beyond typical plugin install/config flows.
Scan Findings in Context
[system-prompt-override] unexpected: The pre-scan flagged a 'system-prompt-override' pattern in SKILL.md. The provided SKILL.md text doesn't contain an obvious explicit system-prompt injection, so this may be a false positive from the scanner, but it should be reviewed manually. Any content in SKILL.md that instructs an LLM to change its system-level behavior would be unexpected for a memory plugin.
What to consider before installing
This plugin appears to do what it says (store and recall session summaries), but review a few things before installing:

- Inspect stored memory files and anchors: memory_get can surface anchors such as <!-- session:UUID transcript:PATH -->. Verify anchors only reference transcripts within your expected agent workspace, not arbitrary system files.
- Review index.ts and the parse-transcript script: they read workspace files and run local CLI commands. Confirm those commands (memsearch, uvx) come from sources you trust.
- Be cautious with the suggested installer command for uv (curl | sh); if you follow it, review the upstream installer first.
- If you are worried about accidental data capture, disable autoCapture/autoRecall in the plugin config before use, and prefer the ONNX provider to avoid sending embeddings to external APIs.
- Because SKILL.md triggered a prompt-injection warning, manually inspect SKILL.md and any user-facing prompts to ensure there is no instruction that would override agent/system prompts or cause unexpected behavior.

Given these points, the package is coherent with its purpose but carries moderate risk from broad filesystem access and CLI invocation; if you lack time to audit, treat it as suspicious and restrict or sandbox its use.
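The anchor and installer checks above can be sketched as a quick grep pass. Everything here is illustrative: the directory is a throwaway demo created by the script itself, and the patterns are starting points, not an official checklist:

```shell
# Hypothetical audit pass. Create a demo directory standing in for an
# installed plugin's memory files, then scan it for red flags.
AUDIT_DIR=/tmp/memsearch-audit-demo
mkdir -p "$AUDIT_DIR"
printf '<!-- session:abc transcript:/etc/passwd -->\n' > "$AUDIT_DIR/mem.md"

# 1. Anchors whose transcript paths fall outside the expected workspace:
grep -rn 'transcript:' "$AUDIT_DIR" | grep -v 'openclaw' || true

# 2. curl-pipe-sh installers hiding in shipped scripts:
grep -rn 'curl.*|.*sh' "$AUDIT_DIR" || true
```

The demo anchor above points at /etc/passwd on purpose, so the first grep surfaces it; on a real install you would point AUDIT_DIR at the plugin's extension and memory directories.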

Verification

Tier
source linked
Scope
artifact only
Summary
Validated package structure and linked the release to source metadata.
Commit
5b7d087aba8d
Tag
main
Provenance
No
Scan status
pending

Tags

latest
0.2.0

memsearch — OpenClaw Plugin

Automatic persistent memory for OpenClaw. Every conversation turn is summarized and indexed — your next session picks up where you left off.

Prerequisites

Install

From ClawHub (recommended)

# 1. Install memsearch
uv tool install "memsearch[onnx]"

# 2. Install the plugin from ClawHub
openclaw plugins install clawhub:memsearch

# 3. Restart the gateway
openclaw gateway restart

From Source (development)

# 1. Install memsearch
uv tool install "memsearch[onnx]"

# 2. Clone the repo and install the plugin
git clone https://github.com/zilliztech/memsearch.git
cd memsearch
openclaw plugins install ./plugins/openclaw

# 3. Restart the gateway
openclaw gateway restart

Usage

Start a TUI session as normal:

openclaw tui

What happens automatically

| When | What |
| --- | --- |
| Agent starts | Recent memories injected as context |
| Each turn ends | Conversation summarized (bullet points) and saved to a daily .md |
| LLM needs history | Calls memory_search / memory_get / memory_transcript tools |

Recall memories

Two ways to trigger:

/memory-recall what was the caching strategy we chose?

Or just ask naturally — the LLM auto-invokes memory tools when it senses the question needs history:

We discussed caching strategies before, what did we decide?

Three-layer progressive recall

The plugin registers three tools the LLM uses progressively:

  1. memory_search — Semantic search across past memories. Always starts here.
  2. memory_get — Expand a chunk to see the full markdown section with context.
  3. memory_transcript — Parse the original session transcript for exact dialogue.

The LLM decides how deep to go based on the question — simple recall uses only L1, detailed questions go to L2/L3.

Multi-agent isolation

Each OpenClaw agent stores memory independently under its own workspace:

~/.openclaw/workspace/.memsearch/memory/          ← main agent
~/.openclaw/workspace-work/.memsearch/memory/      ← work agent

Collection names are derived from the workspace path (same algorithm as Claude Code, Codex, and OpenCode), so agents with different workspaces have isolated memories. When an agent's workspace points to a project directory used by other platforms, memories are automatically shared across platforms.
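As a rough illustration of the idea (not the actual derivation memsearch shares with Claude Code, Codex, and OpenCode), a stable per-workspace collection name can be produced by hashing the workspace path:

```shell
# Illustrative sketch only: hash the workspace path into a short, stable
# suffix, so the same path always maps to the same collection name.
workspace="$HOME/.openclaw/workspace"
suffix=$(printf '%s' "$workspace" | sha256sum | cut -c1-8)
echo "memory_${suffix}"
```

The property that matters is determinism: the same path always yields the same name, and different workspaces yield different names, which is what isolates agents from each other.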

Configuration

Works out of the box with zero configuration (ONNX embedding, no API key needed).

Optional settings via openclaw plugins config memsearch:

| Setting | Default | Description |
| --- | --- | --- |
| provider | onnx | Embedding provider (onnx, openai, google, voyage, ollama) |
| autoCapture | true | Auto-capture conversation summaries after each turn |
| autoRecall | true | Auto-inject recent memories at agent start |
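The settings above map to simple key/value pairs. The exact storage format and command syntax belong to the plugin, so treat this JSON as a sketch of the shape (here with both automatic behaviors disabled), not the real config file:

```json
{
  "provider": "onnx",
  "autoCapture": false,
  "autoRecall": false
}
```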

Memory files

Each agent's memory is stored as plain markdown:

# 2026-03-25

## Session 14:47

### 14:47
<!-- session:UUID transcript:~/.openclaw/agents/main/sessions/UUID.jsonl -->
- User asked about the memsearch architecture.
- OpenClaw explained core components: chunker, scanner, embedder, MilvusStore.

These files are human-readable, editable, and version-controllable. Milvus is a derived index that can be rebuilt anytime.
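Because memory is plain markdown, the transcript anchors can be listed with standard tools. A small sketch using the anchor format shown above, against a sample file the script writes itself:

```shell
# Write a sample memory file in the documented format, then extract
# the transcript paths referenced by its anchors.
cat > /tmp/memsearch-demo.md <<'EOF'
### 14:47
<!-- session:abc-123 transcript:~/.openclaw/agents/main/sessions/abc-123.jsonl -->
- User asked about the memsearch architecture.
EOF
grep -o 'transcript:[^ ]*' /tmp/memsearch-demo.md
# prints: transcript:~/.openclaw/agents/main/sessions/abc-123.jsonl
```

The same one-liner works across a whole memory directory with `grep -ro`, which is a quick way to confirm every anchor stays inside the agent's own sessions directory.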

Uninstall

openclaw plugins install --remove memsearch
# Or manually:
rm -rf ~/.openclaw/extensions/memsearch