## Prerequisites
Before you begin, make sure you have:

- Node.js 18+ installed
- An OpenAI API key (get one here)
- A Satori API key (sign up at satori.dev)
## Installation
### 1. Install dependencies

Install the Satori tools package along with the Vercel AI SDK:
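A typical install looks like this (`@satori/tools` is the package named in this guide; `ai` and `@ai-sdk/openai` are the standard Vercel AI SDK packages):

```bash
npm install @satori/tools ai @ai-sdk/openai
```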
Run `npm list @satori/tools` to verify the installation was successful.
### 2. Set up environment variables

Create a `.env.local` file in your project root with your API keys:
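`OPENAI_API_KEY` is the variable the AI SDK's OpenAI provider reads by default; the Satori variable name is an assumption here, so match it to whatever your Satori configuration expects:

```bash
# .env.local
OPENAI_API_KEY=sk-...
SATORI_API_KEY=your-satori-api-key
```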
### 3. Create your first memory-enabled chat

Create a new file `app/api/chat/route.ts` for your chat endpoint:
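Here is a sketch of what this route might look like (AI SDK v4 style). `streamText`, `openai`, and `toDataStreamResponse` are the standard Vercel AI SDK API; the `getMemoryContext` and `memoryTools` imports and their exact signatures are assumptions based on this guide, so check the API reference for the real names:

```typescript
// app/api/chat/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
// Assumed imports -- check the @satori/tools API reference for the exact names.
import { getMemoryContext, memoryTools } from '@satori/tools';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // In a real app, derive this from your auth session; hardcoded for the demo.
  const userId = 'demo-user';

  // Fetch memories relevant to the latest message via semantic similarity.
  const lastMessage = messages[messages.length - 1]?.content ?? '';
  const memoryContext = await getMemoryContext({ userId, query: lastMessage });

  const result = streamText({
    model: openai('gpt-4o'),
    // Inject the retrieved memories into the system prompt as context.
    system: `You are a helpful assistant with long-term memory.
Use the add_memory tool whenever the user shares information worth remembering.

Relevant memories about this user:
${memoryContext}`,
    messages,
    // Expose the memory tools (including add_memory) to the model.
    tools: memoryTools({ userId }),
    // Let the model reply after a tool call instead of stopping at it.
    maxSteps: 5,
  });

  return result.toDataStreamResponse();
}
```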
### 4. Create a chat interface

Create a simple chat UI in `app/page.tsx`:
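A minimal client for the route above, using the AI SDK's `useChat` hook (v4 style; the hook's shape differs in newer major versions):

```tsx
// app/page.tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  // useChat posts to /api/chat by default, matching the route above.
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <main>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role === 'user' ? 'You' : 'Assistant'}:</strong> {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Say something..."
        />
        <button type="submit">Send</button>
      </form>
    </main>
  );
}
```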
### 5. Start your application

Run your Next.js development server:
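```bash
npm run dev
```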
Visit http://localhost:3000 to see your chat interface. Your application should now be running with memory-enabled chat!
## Test Your Memory
Try these example conversations to see memory in action:

**Save a preference**

You: “Remember that I prefer TypeScript over JavaScript”

Assistant: “Got it! I’ll remember that you prefer TypeScript over JavaScript.”

The LLM automatically calls the `add_memory` tool to save this information.

**Save personal information**

You: “My name is Alex and I’m a software engineer”

Assistant: “Nice to meet you, Alex! I’ll remember that you’re a software engineer.”

**Recall memories**

You: “What do you know about me?”

Assistant: “Based on what you’ve told me, I know that your name is Alex, you’re a software engineer, and you prefer TypeScript over JavaScript.”

**Update information**

You: “Actually, I’ve started learning Rust and really enjoying it”

Assistant: “That’s great! I’ll remember that you’re learning Rust and enjoying it.”
## How It Works

Here’s what happens behind the scenes:

- User sends a message → Your API route receives the message
- Fetch relevant context → `getMemoryContext()` searches for relevant memories using semantic similarity
- Inject into system prompt → Memories are added to the system prompt as context
- LLM processes → The model sees both the message and relevant memories
- Auto-save important info → The LLM calls the `add_memory` tool when it detects important information
- Stream response → The response streams back to the user
The LLM decides when to save memories based on the conversation context. You don’t need to manually parse or store information.
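You never define these tools yourself, but for intuition, here is roughly what a memory tool looks like under the hood. This is a hypothetical sketch using the AI SDK’s `tool()` helper and an assumed `MemoryClient` API (see Direct Client Usage below); the real `@satori/tools` implementation may differ:

```typescript
import { tool } from 'ai';
import { z } from 'zod';
// Hypothetical import and client API -- the real names may differ.
import { MemoryClient } from '@satori/tools';

const client = new MemoryClient({ apiKey: process.env.SATORI_API_KEY! });

// The LLM calls this tool when it detects information worth remembering.
const add_memory = tool({
  description: 'Save an important fact about the user for future conversations.',
  parameters: z.object({
    memory: z.string().describe('The fact to remember, phrased as a statement'),
  }),
  execute: async ({ memory }) => {
    // Assumed client method; scoped to the current user.
    await client.add({ userId: 'demo-user', content: memory });
    return `Saved: ${memory}`;
  },
});
```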
## Understanding User Isolation

Each `userId` you provide gets completely isolated memory storage:
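For example (using the same assumed `getMemoryContext` signature as above), queries for different users never see each other’s memories:

```typescript
// Alex's memories are invisible to Sam, and vice versa.
const alexContext = await getMemoryContext({ userId: 'alex', query: 'preferences' });
const samContext = await getMemoryContext({ userId: 'sam', query: 'preferences' });
```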
## Next Steps

- **Learn How It Works**: Understand embeddings, semantic search, and memory lifecycle
- **Advanced Integration**: Learn advanced patterns like streaming, error handling, and optimization
- **Direct Client Usage**: Use the MemoryClient directly for custom integrations
- **API Reference**: Explore the complete API documentation
## Troubleshooting

**API key not working**

Make sure your API key is correctly set in your `.env.local` file and that you’ve restarted your development server after adding it.

**Memories not being saved**

Check that:

- The `tools` are passed to `streamText()`
- Your system prompt instructs the LLM to use the `add_memory` tool (see the example below)
- The conversation contains information worth remembering
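For that second point, one explicit instruction in the system prompt is usually enough:

```typescript
const system = `You are a helpful assistant with long-term memory.
When the user shares preferences, facts, or personal details,
call the add_memory tool to save them.`;
```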
**Context not appearing**

Verify that `getMemoryContext()` is being called before `streamText()` and that the result is included in your system prompt.

Need more help? Check out our comprehensive troubleshooting guide.