Perplexity Introduces Memory for AI Assistants, Personalizing Context Across Models
Author: Veronika Radoslavskaya
Perplexity has announced a major upgrade to its AI platform, introducing new personalization features that allow its AI assistants to remember user preferences, interests, and the context of past conversations. The update adds a persistent memory layer to Perplexity's model ecosystem, one that follows the user across the platform's different AI systems.
The upgraded system is designed to mimic human memory: it automatically synthesizes conversational details and user inputs, storing them as contextual knowledge. This addresses a long-standing limitation of traditional Large Language Models (LLMs), which often hit context limits during long, complex sessions and force the user to manually re-summarize past interactions.
Crucially, Perplexity's approach centers on precise context retrieval rather than using chat history for raw model training. The assistant pulls specific context from a user's encrypted memory store to craft more personalized responses, whether it is recommending running shoes, suggesting travel plans, or recalling previously given advice. This context portability means users can switch between Perplexity's various specialized models (each fine-tuned for different types of tasks) while keeping a single, consistent personalization layer, significantly improving accuracy and user experience.
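To make the "context portability" idea concrete, here is a minimal sketch of a per-user memory layer shared across models. This is a hypothetical illustration, not Perplexity's actual design: the `MemoryStore` class, its naive word-overlap retrieval, and the `build_prompt` helper are all assumptions introduced for this example.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Hypothetical per-user memory layer; real systems would use
    embeddings and encrypted storage rather than plain keyword match."""
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Naive relevance score: number of words shared with the query.
        q = set(query.lower().split())
        scored = sorted(self.facts,
                        key=lambda f: len(q & set(f.lower().split())),
                        reverse=True)
        # Keep only facts that actually overlap with the query.
        return [f for f in scored[:k] if q & set(f.lower().split())]

def build_prompt(model_name: str, user_query: str, store: MemoryStore) -> str:
    # The same memory store is consulted no matter which model answers,
    # which is the "portable personalization layer" described above.
    context = "\n".join(store.retrieve(user_query))
    return f"[{model_name}] context:\n{context}\nquery: {user_query}"

store = MemoryStore()
store.remember("user prefers trail running shoes with a wide toe box")
store.remember("user is planning a trip to Lisbon in May")
prompt = build_prompt("research-model", "recommend running shoes", store)
```

Because retrieval is query-driven, only the shoe-preference fact is injected into the running-shoes prompt; the travel note stays in the store until a relevant query arrives.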
Perplexity maintains a strong commitment to user control and privacy. Users can fully disable the memory feature when desired. Furthermore, memory and search history are automatically disabled in incognito mode, and strong encryption is applied to all data. Users also retain the ability to opt out of contributing to model improvement through AI Data Retention settings. This launch positions Perplexity firmly in the race for advanced, personalized AI agents, following similar memory features introduced by major competitors.
