# Memory Module Integration
This module provides memory integration for the chat service using Mem0, allowing the system to remember user preferences and past conversations.
## Features
- **Persistent Memory**: Stores user interactions and preferences
- **Contextual Responses**: Uses stored memories to provide personalized responses
- **Memory Search**: Search through stored memories
- **Memory Management**: View and clear user memories
## Usage
### Basic Chat with Memory
```python
from api.chat_service import ChatService

# Initialize chat service
chat_service = ChatService("user_id")
chat_service.initialize()

# Send a message
result = chat_service.chat("My name is Alice and I love sci-fi movies")
print(result["response"])
```
### Memory Operations
```python
# Get all memories for a user
memories = chat_service.get_user_memories()

# Search memories
search_results = chat_service.search_memories("movies")

# Clear all memories
chat_service.clear_user_memories()
```
## Configuration
The Mem0 configuration is defined in `config/config.py`:
```python
MEM0_CONFIG = {
    "vector_store": {
        "provider": "milvus",
        "config": {
            "embedding_model_dims": 2048,
        },
    },
    "llm": {
        "provider": "openai",
        "config": {
            "api_key": OPENAI_API_KEY_FROM_CONFIG,
            "model": "doubao-seed-1-6-250615",
            "openai_base_url": OPENAI_API_BASE_URL_CONFIG,
        },
    },
    "embedder": {
        "provider": "openai",
        "config": {
            "api_key": OPENAI_EMBEDDING_KEY,
            "model": "doubao-embedding-large-text-250515",
            "openai_base_url": OPENAI_EMBEDDING_BASE,
        },
    },
}
```
## How It Works
1. **Memory Retrieval**: When a user sends a message, the system searches for relevant memories about the user
2. **Enhanced Prompt**: The retrieved memories are formatted and included in the prompt to the LLM
3. **Response Generation**: The LLM generates a response considering the user's memories
4. **Memory Storage**: The conversation is automatically stored as new memories
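
The four steps above can be sketched as a single loop. This is a minimal, self-contained illustration only: the `MemoryStore` class and the keyword search are toy stand-ins invented for the sketch, while the real service uses Mem0's vector search and an OpenAI-compatible model.

```python
class MemoryStore:
    """Toy stand-in for Mem0: per-user memory lists with naive keyword search."""

    def __init__(self):
        self.memories = {}  # user_id -> list of memory strings

    def add(self, user_id, text):
        self.memories.setdefault(user_id, []).append(text)

    def search(self, user_id, query):
        # Return memories sharing at least one word with the query
        terms = query.lower().split()
        return [m for m in self.memories.get(user_id, [])
                if any(t in m.lower() for t in terms)]


def chat_with_memory(store, user_id, message, llm):
    # 1. Memory retrieval: find memories relevant to the new message
    relevant = store.search(user_id, message)
    # 2. Enhanced prompt: prepend retrieved memories as context
    context = "\n".join(f"- {m}" for m in relevant)
    prompt = f"Known about user:\n{context}\n\nUser: {message}"
    # 3. Response generation (here the "LLM" is any callable taking a prompt)
    response = llm(prompt)
    # 4. Memory storage: persist the new interaction
    store.add(user_id, message)
    return response
```

With a stored preference in place, a later query surfaces it in the prompt:

```python
store = MemoryStore()
store.add("alice", "Alice loves sci-fi movies")
reply = chat_with_memory(store, "alice", "Recommend movies", lambda p: p)
# The echoed prompt now contains "Alice loves sci-fi movies" as context
```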
## API Endpoints
The main API endpoints remain the same:
- `POST /chat` - Send a message and get a response
- `GET /health` - Health check
Additional memory management endpoints can be added to the main API if needed.
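
For reference, a `POST /chat` request body might be shaped as below. The field names and port are assumptions for illustration only; check the actual route definition in the main API module.

```python
import json

# Hypothetical JSON payload for POST /chat -- the field names are an
# assumption, not taken from the actual route definition.
payload = {"user_id": "alice", "message": "Recommend a sci-fi movie"}
body = json.dumps(payload)

# With the `requests` library installed, the call might look like:
#   requests.post("http://localhost:8000/chat", json=payload)
print(body)
```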