Common Issues

Invalid API Key

Symptoms:
  • Unauthorized - Invalid API key error
  • 401 status code on API requests
Solutions:
  1. Verify your API key is correct:
console.log('API Key:', process.env.SATORI_API_KEY?.substring(0, 15) + '...');
  2. Check environment variables are loaded:
# Make sure .env.local exists
cat .env.local

# Restart your development server after adding env vars
npm run dev
  3. Verify the key format:
// Should start with 'sk_satori_'
if (!process.env.SATORI_API_KEY?.startsWith('sk_satori_')) {
  console.error('Invalid API key format');
}
  4. Check if the key is revoked:
  • Log into your dashboard
  • Go to API Keys
  • Verify the key status is “Active”
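The first three checks can be rolled into one startup helper. A minimal sketch — `validateSatoriEnv` is a hypothetical helper, not part of `@satori/tools`; the variable names and the `sk_satori_` prefix are taken from the checks above:

```typescript
// Validate Satori-related environment variables before constructing a client.
// Returns a list of human-readable problems; an empty list means the config looks sane.
function validateSatoriEnv(env: Record<string, string | undefined>): string[] {
  const problems: string[] = [];

  if (!env.SATORI_API_KEY) {
    problems.push('SATORI_API_KEY is not set — check .env.local and restart the dev server');
  } else if (!env.SATORI_API_KEY.startsWith('sk_satori_')) {
    problems.push('SATORI_API_KEY does not start with "sk_satori_" — wrong or truncated key');
  }

  if (!env.SATORI_URL) {
    problems.push('SATORI_URL is not set');
  }

  return problems;
}

// Usage — fail fast at startup:
// const problems = validateSatoriEnv(process.env);
// problems.forEach(p => console.error('Config problem:', p));
```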

Memories Not Being Saved

Symptoms:
  • LLM doesn’t call add_memory tool
  • No memories appear in database
Solutions:
  1. Verify tools are passed to streamText:
const tools = memoryTools(config);

const result = await streamText({
  model: openai('gpt-4o'),
  messages,
  tools, // ← Make sure this is included
});
  2. Check the system prompt instructs the LLM to save:
system: `You are a helpful assistant with memory.

When the user shares important information, use the add_memory tool to save it.
Important information includes:
- Personal preferences
- Personal details
- Goals and intentions`
  3. Verify maxSteps is set:
const result = await streamText({
  model: openai('gpt-4o'),
  messages,
  tools,
  maxSteps: 5, // ← Allow tool calls
});
  4. Test with an explicit command:
User: "Remember that I love TypeScript"
If this doesn’t work, check your API logs for errors.

Memories Not Being Used

Symptoms:
  • LLM doesn’t reference saved memories
  • Responses don’t seem personalized
Solutions:
  1. Verify context is fetched:
const context = await getMemoryContext(config, userMessage);
console.log('Memory context:', context);

// Should output something like:
// "- User prefers TypeScript
//  - User loves hiking"
  2. Check context is in the system prompt:
system: `You are a helpful assistant.

What you know about this user:
${memoryContext}

Use this information to personalize responses.`
// ← Make sure memoryContext is interpolated above. Note that a JS comment
// placed inside the template literal would be sent to the model as prompt text.
  3. Verify memories exist:
const client = new MemoryClient(config);
const all = await client.getAllMemories();
console.log('Total memories:', all.length);
  4. Check search threshold:
// Lower threshold for broader matches
const context = await getMemoryContext(config, userMessage, {
  threshold: 0.6, // Default is 0.7
});
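To see why a lower threshold broadens matches, here is roughly what the filtering amounts to. This is a sketch: the `MemoryWithSimilarity` shape is assumed from the exported type name, with a `similarity` score between 0 and 1:

```typescript
// Assumed shape of a search result (not the library's actual definition).
interface MemoryWithSimilarity {
  id: string;
  content: string;
  similarity: number; // similarity vs. the query, 0..1
}

// Keep only memories at or above the threshold, best matches first.
function filterByThreshold(
  results: MemoryWithSimilarity[],
  threshold: number
): MemoryWithSimilarity[] {
  return results
    .filter(m => m.similarity >= threshold)
    .sort((a, b) => b.similarity - a.similarity);
}
```

At the default threshold of 0.7, a memory scoring 0.65 is dropped; at 0.6 it is included, which is why lowering the threshold surfaces loosely related memories.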

Rate Limit Errors

Symptoms:
  • Too Many Requests error
  • 429 status code
Solutions:
  1. Implement exponential backoff:
async function retryWithBackoff<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      if (message.includes('rate limit') && i < maxRetries - 1) {
        const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Retries exhausted'); // unreachable, but satisfies the return type
}

// Usage
await retryWithBackoff(() => client.addMemory('content'));
  2. Batch operations:
// Group writes together instead of calling the API as each one arrives.
// Note: Promise.all still issues the requests concurrently, so keep batches small.
const memories = ['memory1', 'memory2', 'memory3'];
await Promise.all(memories.map(m => client.addMemory(m)));
  3. Cache context fetches:
const contextCache = new Map<string, string>();

async function getCachedContext(query: string) {
  if (contextCache.has(query)) {
    return contextCache.get(query)!;
  }
  
  const context = await getMemoryContext(config, query);
  contextCache.set(query, context);
  return context;
}
  4. Contact support for higher limits
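Besides retrying after a 429, you can space requests out so bursts never hit the limit in the first place. A minimal client-side throttle sketch — `createThrottle` is a hypothetical helper, not part of the SDK:

```typescript
// Run async tasks one after another with a minimum gap between starts,
// so bursts (e.g. Promise.all over many addMemory calls) are smoothed out.
function createThrottle(minGapMs: number) {
  let last = 0;
  let chain: Promise<unknown> = Promise.resolve();

  return function run<T>(task: () => Promise<T>): Promise<T> {
    const result = chain.then(async () => {
      const wait = last + minGapMs - Date.now();
      if (wait > 0) await new Promise(r => setTimeout(r, wait));
      last = Date.now();
      return task();
    });
    chain = result.catch(() => {}); // keep the chain alive after failures
    return result;
  };
}

// Usage — at most ~5 requests per second:
// const throttled = createThrottle(200);
// await Promise.all(memories.map(m => throttled(() => client.addMemory(m))));
```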

Slow Response Times

Symptoms:
  • API calls take several seconds
  • Chat feels sluggish
Solutions:
  1. Reduce context limit:
// Fetch fewer memories
const context = await getMemoryContext(config, userMessage, {
  limit: 3, // Instead of 10
});
  2. Parallel operations:
// Fetch memory context alongside other independent async work
const [memoryContext] = await Promise.all([
  getMemoryContext(config, userMessage),
  // e.g. loading chat history or user settings
]);
  3. Cache embeddings for common queries:
const embeddingCache = new Map();

async function getCachedContext(query: string) {
  const cacheKey = query.toLowerCase().trim();
  
  if (embeddingCache.has(cacheKey)) {
    return embeddingCache.get(cacheKey);
  }
  
  const context = await getMemoryContext(config, query);
  embeddingCache.set(cacheKey, context);
  
  // Clear cache after 5 minutes
  setTimeout(() => embeddingCache.delete(cacheKey), 5 * 60 * 1000);
  
  return context;
}
  4. Use streaming:
// Stream responses for better perceived performance
const result = await streamText({
  model: openai('gpt-4o'),
  messages,
  tools,
});

return result.toDataStreamResponse();

TypeScript Errors

Symptoms:
  • Type errors in IDE
  • Build fails with type errors
Solutions:
  1. Install type definitions:
npm install --save-dev @types/node
  2. Import types correctly:
import type { Memory, MemoryWithSimilarity } from '@satori/tools';
  3. Check tsconfig.json:
{
  "compilerOptions": {
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "moduleResolution": "bundler"
  }
}
  4. Rebuild packages:
cd packages/js-tools
npm run build

CORS Errors

Symptoms:
  • CORS policy error in browser console
  • Requests fail from frontend
Solutions:
  1. Never call Satori API from frontend:
// ❌ Don't do this in client components
'use client';
const client = new MemoryClient({ apiKey: '...' }); // API key exposed!

// ✅ Do this instead - use API routes
export async function POST(req: Request) {
  // Server-side only
  const client = new MemoryClient({
    apiKey: process.env.SATORI_API_KEY!,
  });
}
  2. Use server-side API routes:
// app/api/memories/route.ts
export async function GET() {
  const client = new MemoryClient({
    apiKey: process.env.SATORI_API_KEY!,
    baseUrl: process.env.SATORI_URL!,
    userId: 'user-123',
  });
  
  const memories = await client.getAllMemories();
  return Response.json({ memories });
}
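From the browser, call your own route instead of Satori. A sketch assuming the `app/api/memories/route.ts` handler above:

```typescript
// Client-side: talk to your own API route; the Satori key never leaves the server.
async function loadMemories(): Promise<unknown[]> {
  const res = await fetch('/api/memories');
  if (!res.ok) throw new Error(`Failed to load memories: ${res.status}`);
  const data = (await res.json()) as { memories: unknown[] };
  return data.memories;
}
```

Because the request targets the same origin as the page, there is no CORS issue to begin with.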

Memory Deletion Errors

Symptoms:
  • Memory not found when deleting
  • 404 errors
Solutions:
  1. Verify memory ID:
// Make sure you have the correct UUID
console.log('Deleting memory:', memoryId);
await client.deleteMemory(memoryId);
  2. Check memory belongs to user:
// Memory IDs are scoped per user
const memories = await client.getAllMemories();
const exists = memories.some(m => m.id === memoryId);

if (!exists) {
  console.error('Memory not found for this user');
}
  3. Handle errors gracefully:
try {
  await client.deleteMemory(memoryId);
} catch (error) {
  const message = error instanceof Error ? error.message : String(error);
  if (message.includes('Not Found')) {
    console.log('Memory already deleted or does not exist');
  } else {
    throw error;
  }
}

Duplicate Memories

Symptoms:
  • Same information saved multiple times
  • Too many similar memories
Solutions:
  1. Check before saving:
// Search for similar memories first
const existing = await client.searchMemories(content, {
  threshold: 0.9, // High threshold for near-duplicates
  limit: 1,
});

if (existing.length === 0) {
  await client.addMemory(content);
} else {
  console.log('Similar memory already exists');
}
  2. Update system prompt:
system: `Before saving a memory, consider if similar information already exists.
Only save truly new or updated information.`
  3. Periodic cleanup:
// Find and remove near-duplicate memories.
// calculateSimilarity is a placeholder for your own comparison,
// e.g. cosine similarity over embeddings.
const memories = await client.getAllMemories();

for (let i = 0; i < memories.length; i++) {
  for (let j = i + 1; j < memories.length; j++) {
    const similarity = await calculateSimilarity(
      memories[i].content,
      memories[j].content
    );
    
    if (similarity > 0.95) {
      // Delete one of the pair, then move on to the next memory
      await client.deleteMemory(memories[i].id);
      break;
    }
  }
}
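The cleanup loop assumes a `calculateSimilarity` helper. If you store (or can fetch) an embedding per memory, cosine similarity is the usual comparison; a self-contained sketch:

```typescript
// Cosine similarity between two equal-length embedding vectors, in [-1, 1].
// For typical text embeddings the useful range is roughly [0, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Vector length mismatch');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Comparing raw content strings means embedding each pair on the fly, so for large memory sets it is cheaper to embed each memory once and compare the stored vectors.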

Debugging Tips

Enable Verbose Logging

// Add detailed logging
console.log('=== Memory Debug Info ===');
console.log('User ID:', userId);
console.log('API Key:', process.env.SATORI_API_KEY?.substring(0, 15) + '...');
console.log('Base URL:', process.env.SATORI_URL);

const context = await getMemoryContext(config, userMessage);
console.log('Context fetched:', context);
console.log('Context length:', context.length);

const result = await streamText({
  model: openai('gpt-4o'),
  system: `...${context}`,
  messages,
  tools,
  onFinish: (result) => {
    console.log('Tool calls:', result.toolCalls);
    console.log('Finish reason:', result.finishReason);
  },
});

Test Memory Operations Directly

// Test script to verify memory operations
import { MemoryClient } from '@satori/tools';

async function testMemory() {
  const client = new MemoryClient({
    apiKey: process.env.SATORI_API_KEY!,
    baseUrl: process.env.SATORI_URL!,
    userId: 'test-user',
  });
  
  console.log('1. Adding memory...');
  const memory = await client.addMemory('Test memory content');
  console.log('✓ Memory added:', memory.id);
  
  console.log('2. Searching memories...');
  const results = await client.searchMemories('test');
  console.log('✓ Found', results.length, 'memories');
  
  console.log('3. Getting all memories...');
  const all = await client.getAllMemories();
  console.log('✓ Total memories:', all.length);
  
  console.log('4. Deleting memory...');
  await client.deleteMemory(memory.id);
  console.log('✓ Memory deleted');
  
  console.log('All tests passed!');
}

testMemory().catch(console.error);

Check Network Requests

// Log all fetch requests (development only — restore originalFetch when done)
const originalFetch = global.fetch;
global.fetch = (async (...args: Parameters<typeof fetch>) => {
  console.log('Fetch:', args[0]);
  const response = await originalFetch(...args);
  console.log('Status:', response.status);
  return response;
}) as typeof fetch;

Getting Help

If you’re still experiencing issues:

Useful Resources