
Prerequisites

Before you begin, make sure you have:
  - Node.js and npm installed
  - A Next.js project using the App Router
  - An OpenAI API key
  - A Satori API key

Installation

1. Install dependencies

Install the Satori tools package along with the Vercel AI SDK:
npm install @satori/tools ai @ai-sdk/openai
Run npm list @satori/tools to verify the installation was successful.
2. Set up environment variables

Create a .env.local file in your project root with your API keys:
.env.local
OPENAI_API_KEY=sk-...
SATORI_API_KEY=sk_satori_...
SATORI_URL=https://api.satori.dev
Never commit your API keys to version control. Add .env.local to your .gitignore file.
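Optionally, you can fail fast when a key is missing instead of hitting confusing runtime errors later. The helper below is a minimal sketch; the lib/env.ts path and getEnv name are illustrative and not part of @satori/tools:
lib/env.ts
// Illustrative helper: read a required environment variable or throw early
function getEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

export const env = {
  openaiApiKey: getEnv('OPENAI_API_KEY'),
  satoriApiKey: getEnv('SATORI_API_KEY'),
  satoriUrl: getEnv('SATORI_URL'),
};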
3. Create your first memory-enabled chat

Create a new file app/api/chat/route.ts for your chat endpoint:
app/api/chat/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { memoryTools, getMemoryContext } from '@satori/tools';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const userMessage = messages[messages.length - 1].content;

  // Create memory tools scoped to this user
  const tools = memoryTools({
    apiKey: process.env.SATORI_API_KEY!,
    baseUrl: process.env.SATORI_URL!,
    userId: 'user-123', // Replace with actual user ID
  });

  // Pre-fetch relevant memories for context
  const memoryContext = await getMemoryContext(
    {
      apiKey: process.env.SATORI_API_KEY!,
      baseUrl: process.env.SATORI_URL!,
      userId: 'user-123',
    },
    userMessage,
    { limit: 5 }
  );

  // Stream response with memory
  const result = await streamText({
    model: openai('gpt-4o'),
    system: `You are a helpful assistant with long-term memory.
    
What you know about this user:
${memoryContext}

When the user shares important information, use the add_memory tool to save it.
When asked what you remember, reference the context above.`,
    messages,
    tools,
  });

  return result.toDataStreamResponse();
}
Replace 'user-123' with your actual user identifier. Each user gets their own isolated memory space.
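In production you would typically derive the userId from your authentication layer rather than hard-coding it. The sketch below assumes a hypothetical getSession() helper; swap in whatever your auth provider (NextAuth, Clerk, etc.) actually exposes:
import { memoryTools } from '@satori/tools';
import { getSession } from '@/lib/auth'; // hypothetical auth helper, not part of @satori/tools

// Build memory tools scoped to the signed-in user making this request
export async function toolsForRequest(req: Request) {
  const session = await getSession(req);
  if (!session) {
    throw new Error('Unauthenticated request');
  }

  return memoryTools({
    apiKey: process.env.SATORI_API_KEY!,
    baseUrl: process.env.SATORI_URL!,
    userId: session.userId,
  });
}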
4. Create a chat interface

Create a simple chat UI in app/page.tsx:
app/page.tsx
'use client';

import { useChat } from 'ai/react';

export default function ChatPage() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div className="flex flex-col h-screen max-w-2xl mx-auto p-4">
      <div className="flex-1 overflow-y-auto space-y-4 mb-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`p-4 rounded-lg ${
              message.role === 'user'
                ? 'bg-blue-100 ml-auto max-w-[80%]'
                : 'bg-gray-100 mr-auto max-w-[80%]'
            }`}
          >
            <p className="text-sm font-semibold mb-1">
              {message.role === 'user' ? 'You' : 'Assistant'}
            </p>
            <p>{message.content}</p>
          </div>
        ))}
      </div>

      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Type a message..."
          className="flex-1 p-2 border rounded"
        />
        <button
          type="submit"
          className="px-4 py-2 bg-blue-500 text-white rounded"
        >
          Send
        </button>
      </form>
    </div>
  );
}
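By default, useChat posts to /api/chat, which is exactly where the route from step 3 lives. If you ever move the route, point the hook at the new path with the api option (shown here with a hypothetical /api/memory-chat path):
// Only needed if your route is not at the default /api/chat path
const chat = useChat({ api: '/api/memory-chat' }); // hypothetical alternative path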
5. Start your application

Run your Next.js development server:
npm run dev
Visit http://localhost:3000 to see your chat interface.
Your application should now be running with memory-enabled chat!
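If you'd rather exercise the endpoint without the UI, a small script can POST to the route directly. This is just a convenience sketch and assumes the dev server from this step is running on port 3000:
scripts/test-chat.ts
// Illustrative smoke test: send one message to the chat route and print the raw stream
async function main() {
  const res = await fetch('http://localhost:3000/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: 'Remember that I prefer TypeScript.' }],
    }),
  });

  // The route returns a data stream; reading it as text is enough for a quick check
  console.log(await res.text());
}

main().catch(console.error);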

Test Your Memory

Try these example conversations to see memory in action:
You: “Remember that I prefer TypeScript over JavaScript”
Assistant: “Got it! I’ll remember that you prefer TypeScript over JavaScript.”
The LLM automatically calls the add_memory tool to save this information.

You: “My name is Alex and I’m a software engineer”
Assistant: “Nice to meet you, Alex! I’ll remember that you’re a software engineer.”

You: “What do you know about me?”
Assistant: “Based on what you’ve told me, I know that your name is Alex, you’re a software engineer, and you prefer TypeScript over JavaScript.”

You: “Actually, I’ve started learning Rust and really enjoying it”
Assistant: “That’s great! I’ll remember that you’re learning Rust and enjoying it.”

How It Works

Here’s what happens behind the scenes:
  1. User sends a message → Your API route receives the message
  2. Fetch relevant context → getMemoryContext() searches for relevant memories using semantic similarity
  3. Inject into system prompt → Memories are added to the system prompt as context
  4. LLM processes → The model sees both the message and relevant memories
  5. Auto-save important info → The LLM calls add_memory tool when it detects important information
  6. Stream response → The response streams back to the user
The LLM decides when to save memories based on the conversation context. You don’t need to manually parse or store information.
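Under the hood, the memory tools plug into the AI SDK’s standard tool-calling interface. For intuition, here is a simplified, hypothetical sketch of what an add_memory tool could look like if you defined it by hand with the SDK’s tool() helper; the actual implementation (and the Satori API endpoint shown) may differ from what @satori/tools does:
import { tool } from 'ai';
import { z } from 'zod';

// Hypothetical, hand-rolled equivalent of add_memory, for illustration only
const addMemory = tool({
  description: 'Save an important fact about the user for future conversations.',
  parameters: z.object({
    content: z.string().describe('The fact to remember'),
  }),
  execute: async ({ content }) => {
    // Illustrative endpoint: the real package handles the Satori API call for you
    await fetch(`${process.env.SATORI_URL}/memories`, {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.SATORI_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ userId: 'user-123', content }),
    });
    return { saved: true };
  },
});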

Understanding User Isolation

Each userId you provide gets completely isolated memory storage:
// User Alice's memories
const aliceTools = memoryTools({
  apiKey: process.env.SATORI_API_KEY!,
  baseUrl: process.env.SATORI_URL!,
  userId: 'alice',
});

// User Bob's memories (completely separate)
const bobTools = memoryTools({
  apiKey: process.env.SATORI_API_KEY!,
  baseUrl: process.env.SATORI_URL!,
  userId: 'bob',
});
Always use unique, consistent user identifiers. Never share the same userId across different users.
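Because memories are keyed by userId, the same identifier retrieves the same memories later, while a different identifier sees none of them. A quick sketch using getMemoryContext to illustrate, wrapped in a function so it can run standalone:
import { getMemoryContext } from '@satori/tools';

const config = {
  apiKey: process.env.SATORI_API_KEY!,
  baseUrl: process.env.SATORI_URL!,
};

async function demoIsolation() {
  // Alice's memories are only visible when querying with her userId
  const aliceContext = await getMemoryContext(
    { ...config, userId: 'alice' },
    'What do you know about me?',
    { limit: 5 }
  );

  // Querying as Bob returns none of Alice's memories
  const bobContext = await getMemoryContext(
    { ...config, userId: 'bob' },
    'What do you know about me?',
    { limit: 5 }
  );

  console.log({ aliceContext, bobContext });
}

demoIsolation().catch(console.error);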

Next Steps

Troubleshooting

If you get authentication errors: make sure your API key is correctly set in your .env.local file and that you’ve restarted your development server after adding it.
// Verify your environment variables are loaded
console.log('API Key:', process.env.SATORI_API_KEY?.substring(0, 10) + '...');
If memories aren’t being saved, check that:
  1. The tools are passed to streamText()
  2. Your system prompt instructs the LLM to use the add_memory tool
  3. The conversation contains information worth remembering
You can also manually test memory storage using the direct client.
If the assistant doesn’t recall saved memories: verify that getMemoryContext() is being called before streamText() and that the result is included in your system prompt.
console.log('Memory context:', memoryContext);

Need more help?

Check out our comprehensive troubleshooting guide