Integration Guides
Step-by-step guides for every supported framework. Each guide is self-contained — pick your stack and follow the steps. Full example files are included in the package under examples/.
LangChain
Add persistent memory to any LangChain agent. Vektor handles storage and recall — you keep your existing agent logic unchanged.
Install
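Assuming the packages are published to npm under the names used in the example's `require` calls (this matches the imports in the code below):

```shell
npm install vektor-slipstream langchain @langchain/openai @langchain/core @langchain/community
```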
Pattern
Recall relevant memories before the agent runs, store its output afterwards. Two lines of integration in your existing agent loop.
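The shape of the pattern can be sketched in isolation. Here `stubMemory` is a stand-in for the object returned by `createMemory()` so the snippet runs standalone; in real use you would pass the Vektor memory instance instead.

```javascript
// Stand-in for the Vektor memory object (not the real implementation):
// recall() just returns the newest entries, with no semantic search.
const stubMemory = {
  store: [],
  async recall(query, limit) {
    return this.store.slice(-limit);
  },
  async remember(content, opts = {}) {
    this.store.push({ content, ...opts });
    return { id: this.store.length };
  },
};

// The two integration lines wrapped around your existing agent logic.
async function runWithMemory(memory, input, agentFn) {
  const prior = await memory.recall(input, 5);        // line 1: recall before the run
  const context = prior.map(m => m.content).join('\n');
  const output = await agentFn(input, context);       // your unchanged agent call
  await memory.remember(output, { importance: 2 });   // line 2: store after the run
  return output;
}
```

On a second run, `prior` contains the first run's output, so the agent starts with context instead of a blank slate.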
```javascript
const { createMemory } = require('vektor-slipstream');
const { ChatOpenAI } = require('@langchain/openai');
const { AgentExecutor, createOpenAIFunctionsAgent } = require('langchain/agents');
const { TavilySearchResults } = require('@langchain/community/tools/tavily_search');
const { ChatPromptTemplate, MessagesPlaceholder } = require('@langchain/core/prompts');

async function main() {
  // 1. Init Vektor memory
  const memory = await createMemory({
    agentId: 'langchain-agent',
    licenceKey: process.env.VEKTOR_LICENCE_KEY,
  });

  // 2. Recall what we already know
  const prior = await memory.recall('user research topic', 5);
  const priorContext = prior.map(m => m.content).join('\n') || 'No prior research found.';

  // 3. Build LangChain agent with memory context in the system prompt
  const llm = new ChatOpenAI({ modelName: 'gpt-4o-mini', temperature: 0.1 });
  const tools = [new TavilySearchResults({ maxResults: 5 })];
  const prompt = ChatPromptTemplate.fromMessages([
    ['system', `You are a research agent with persistent memory.\n\nPRIOR KNOWLEDGE:\n${priorContext}`],
    ['human', '{input}'],
    new MessagesPlaceholder('agent_scratchpad'),
  ]);
  const agent = await createOpenAIFunctionsAgent({ llm, tools, prompt });
  const executor = new AgentExecutor({ agent, tools });

  // 4. Run the agent
  const result = await executor.invoke({ input: 'Research agentic AI memory systems' });

  // 5. Store the findings
  await memory.remember(result.output, { importance: 2 });
  console.log('Done. Memory stored for next session.');
}

main().catch(console.error);
```
examples/example-langchain-researcher.js
OpenAI Agents SDK
Add persistent memory to an OpenAI tool-use agent loop. The agent gets two memory tools — remember and recall — plus a session briefing, and manages its own memory automatically.
Install
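Assuming the packages are published to npm under the names used in the example's `require` calls:

```shell
npm install vektor-slipstream openai
```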
```javascript
const { createMemory } = require('vektor-slipstream');
const OpenAI = require('openai');

const memory = await createMemory({
  agentId: 'openai-assistant',
  licenceKey: process.env.VEKTOR_LICENCE_KEY,
});
const client = new OpenAI();
const brief = await memory.briefing(); // last 24h summary

// Memory tools for the agent
const tools = [
  {
    type: 'function',
    function: {
      name: 'remember',
      description: 'Store an important fact in long-term memory.',
      parameters: {
        type: 'object',
        properties: {
          content: { type: 'string' },
          importance: { type: 'number' },
        },
        required: ['content'],
      },
    },
  },
  {
    type: 'function',
    function: {
      name: 'recall',
      description: 'Search long-term memory for relevant context.',
      parameters: {
        type: 'object',
        properties: { query: { type: 'string' } },
        required: ['query'],
      },
    },
  },
];

// Tool executor
async function runTool(name, args) {
  if (name === 'remember') {
    const { id } = await memory.remember(args.content, { importance: args.importance || 2 });
    return `Stored memory #${id}`;
  }
  if (name === 'recall') {
    const results = await memory.recall(args.query, 5);
    return results.map(r => r.content).join('\n');
  }
}

// Agent loop with tool handling
const messages = [
  { role: 'system', content: `You are a helpful assistant with memory.\n\n${brief}` },
  { role: 'user', content: 'What do you know about my coding preferences?' },
];

while (true) {
  const res = await client.chat.completions.create({ model: 'gpt-4o-mini', messages, tools });
  const msg = res.choices[0].message;
  messages.push(msg);

  // No tool calls means the model has produced its final answer.
  if (!msg.tool_calls?.length) {
    console.log(msg.content);
    break;
  }

  for (const tc of msg.tool_calls) {
    const result = await runTool(tc.function.name, JSON.parse(tc.function.arguments));
    messages.push({ role: 'tool', tool_call_id: tc.id, content: result });
  }
}
```
examples/example-openai-assistant.js
Claude MCP
Connect Claude Desktop to Vektor persistent memory via the Model Context Protocol. Claude gets four native memory tools and can recall, store, and traverse the memory graph directly.
Install
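Assuming the package is published to npm under the name used throughout these guides:

```shell
npm install vektor-slipstream
```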
Option A — Claude Desktop (MCP server)
Add to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "vektor-slipstream": {
      "command": "node",
      "args": ["/absolute/path/to/example-claude-mcp.js", "--mcp"],
      "env": {
        "VEKTOR_LICENCE_KEY": "VEKTOR-XXXX-XXXX-XXXX",
        "SLIPSTREAM_AGENT_ID": "claude-desktop"
      }
    }
  }
}
```
Restart Claude Desktop. The following tools become available natively:
- `vektor_recall` — semantic search across all memories
- `vektor_store` — save important information
- `vektor_graph` — traverse connected memories
- `vektor_delta` — see what changed recently on a topic
Option B — Direct chat mode
```shell
ANTHROPIC_API_KEY=sk-ant-... VEKTOR_LICENCE_KEY=VEKTOR-... node example-claude-mcp.js
```
examples/example-claude-mcp.js
Mistral
Connect Mistral agents (Le Chat, Mistral API, La Plateforme) to Vektor via a local HTTP bridge. Your memory runs entirely on your machine — nothing leaves it.
Setup
Run the setup script once after installing the package. It validates your licence key and starts a local bridge on port 3847.
```shell
node node_modules/vektor-slipstream/mistral/mistral-setup.js
```
The bridge runs at http://localhost:3847/api/v1/mistral/vektor_memoire. Add mistral/vektor-tool-manifest.json from the package to your Mistral agent or La Plateforme project.
Tool calls
```json
{ "action": "recall", "query": "user preferences", "key": "YOUR_LICENCE_KEY" }
```

```json
{ "action": "remember", "content": "User prefers TypeScript", "key": "YOUR_LICENCE_KEY" }
```
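For reference, the same calls can be made directly from Node against the local bridge. The endpoint URL and body shapes come from this guide; the helper names and the assumption that the bridge accepts POSTed JSON are ours, not documented by the package.

```javascript
// Hypothetical helper for calling the local Vektor bridge (assumes
// Node 18+, where fetch is a global).
const BRIDGE_URL = 'http://localhost:3847/api/v1/mistral/vektor_memoire';

function buildPayload(action, fields) {
  // Flat JSON body: the action, its action-specific fields, and the
  // licence key, matching the tool-call examples above.
  return { action, ...fields, key: process.env.VEKTOR_LICENCE_KEY };
}

async function callBridge(action, fields) {
  const res = await fetch(BRIDGE_URL, {
    method: 'POST', // assumption: the bridge takes POSTed JSON
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildPayload(action, fields)),
  });
  return res.json();
}

// e.g. await callBridge('recall', { query: 'user preferences' });
// e.g. await callBridge('remember', { content: 'User prefers TypeScript' });
```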