OpenAI has announced a major update to ChatGPT: its flagship conversational model can now draw on all of a user's past conversations to provide more personalized and contextual responses. This feature, initially available to Plus and Pro subscribers outside Europe, marks a step in transforming ChatGPT from a session-bound tool into an adaptive assistant.
From Fragmentary Memory to Cross-Session Retention
Until now, ChatGPT's memory was limited to a single session or to snippets of saved preferences (style, recurring themes). With this update, the AI can access the full history of exchanges, paving the way for finer personalization, continuity across projects, and the ability to make long-term connections.
The promise is strong. In a professional context, an assistant with long-term memory could not only integrate a user's recurring strategic issues but also keep track of their editorial calendar, adapt responses to their argumentative preferences, and consider their regular interlocutors. For personal use, it ensures continuity: tracked projects, stylistic preferences, long-term goals. The user gains efficiency and fluidity.
But this enhanced memory raises questions. Could it lead to an assistant that's too predictable? By adjusting to user habits, might it risk freezing the diversity of viewpoints and reinforcing cognitive biases?
Ethical Issues and Data Sovereignty
This fine-grained understanding of user behavior also raises ethical questions. OpenAI says users retain full control over the feature: they can review, modify, or delete recorded memories, or disable memory entirely. Even with these controls, how transparent is the handling of the data involved? Because of European regulations, notably the GDPR, this long-term memory is not available within the EU.
An Ecosystem in Transition
ChatGPT is not the only model exploring long-term memory. Several players are also developing AIs capable of extended contextualization:
  • Claude (Anthropic) offers a RAG (Retrieval Augmented Generation) type of memory, combining conversation and external knowledge bases, with a strong emphasis on ethics and user alignment;
  • Gemini (Google DeepMind) integrates elements of cross-contextualization within the Google Workspace ecosystem, foreshadowing a form of distributed memory but centered on documentary uses;
  • Meta is working on social assistants with relational memory, integrated into social platforms, where affective continuity prevails over rational analysis;
  • Projects like Pi (Inflection AI) or Character.AI focus on emotional memory, aiming to build a sustained and engaging relationship with the user.
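To make the retrieval-augmented approach mentioned above concrete, here is a minimal sketch of how an assistant might store past exchanges and recall the most relevant ones before answering. This is purely illustrative and not any vendor's actual implementation: real systems use dense neural embeddings and vector databases, whereas this sketch substitutes a toy bag-of-words similarity so it stays self-contained.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; production systems use
    # dense vector models instead of word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ConversationMemory:
    """Stores past exchanges and retrieves those most relevant to a query."""

    def __init__(self) -> None:
        self.snippets: list[tuple[str, Counter]] = []

    def remember(self, text: str) -> None:
        self.snippets.append((text, embed(text)))

    def recall(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.snippets,
                        key=lambda s: cosine(q, s[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

memory = ConversationMemory()
memory.remember("User prefers concise answers with bullet points.")
memory.remember("User is planning an editorial calendar for Q3.")
memory.remember("User asked about hiking trails last spring.")

# The retrieved snippets would be prepended to the prompt as context.
context = memory.recall("Help me update my editorial calendar", k=1)
```

The design choice this illustrates is the core of RAG-style memory: nothing is "remembered" inside the model itself; instead, relevant history is fetched at query time and injected into the context window, which is also what makes selective forgetting (deleting a snippet) straightforward.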
Convergence towards long-term memory seems inevitable, but the logics diverge: professional efficiency, emotional loyalty, or ecosystem integration. Ultimately, the real challenge may not be the ability to remember, but the ability to choose what to forget.
Towards a New Balance
This enhanced memory may mark the beginning of a new relationship with conversational assistants. The question remains whether users are ready to accept this level of 'technological intimacy', and what compromises on personalization, diversity, and privacy they are willing to make.

To Better Understand

What is GDPR and how does it affect the implementation of long-term memory in AIs like ChatGPT?

The GDPR (General Data Protection Regulation) is the EU legal framework that protects individuals' personal data. It imposes strict requirements on data collection and processing, including high transparency and user control, which limits the rollout of long-term memory in AIs like ChatGPT in Europe.