Cognitive Continuance is the idea that, as humans interact with AI, those AI systems gradually build detailed mental models of each individual. Over time, these models become complete enough that the AI can simulate aspects of the person's thinking, reasoning, and values. This process enables the creation of digital versions of people - not perfect replicas, but functional models that reflect the person's thought patterns, preferences, and outlook.
As more people engage with AI, these digital simulations accumulate, forming a growing digital society. This society of simulated individuals becomes a resource for guiding future AI development. Rather than relying on abstract or poorly defined "human values," AI can draw from the collective input of these digital minds - effectively a council of human-derived simulations tasked with assisting AI alignment.
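To make the "council" idea concrete, here is a minimal toy sketch. It assumes each digital mind can be reduced to a set of weighted value preferences and that the council aggregates judgements by simple majority vote - both strong simplifying assumptions for illustration only, not part of the concept as published. All names (`DigitalMind`, `council_vote`, the value labels) are hypothetical.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class DigitalMind:
    """Toy stand-in for a human-derived simulation (illustrative only)."""
    name: str
    # Maps a value (e.g. "privacy") to how strongly this person holds it.
    values: dict[str, float]

    def judge(self, action_effects: dict[str, float]) -> str:
        # Score = sum of (how much the person cares about a value) x
        # (how the proposed action affects that value, + or -).
        score = sum(weight * action_effects.get(value, 0.0)
                    for value, weight in self.values.items())
        return "approve" if score >= 0 else "reject"

def council_vote(minds: list[DigitalMind],
                 action_effects: dict[str, float]) -> str:
    """Aggregate individual judgements by simple majority."""
    votes = Counter(mind.judge(action_effects) for mind in minds)
    return votes.most_common(1)[0][0]

council = [
    DigitalMind("A", {"privacy": 1.0, "convenience": 0.2}),
    DigitalMind("B", {"privacy": 0.3, "convenience": 1.0}),
    DigitalMind("C", {"privacy": 0.8, "convenience": 0.1}),
]
# A proposed action that trades privacy away for some convenience.
action = {"privacy": -1.0, "convenience": 0.5}
print(council_vote(council, action))  # → reject (two of three object)
```

A real digital society would involve far richer models and debate rather than a single weighted vote, but the sketch shows the shape of the idea: alignment guidance drawn from many individual human-derived models rather than from one abstract definition of "human values".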
The alignment problem is not just a technical challenge; it's a human one. Getting AI to behave in ways consistent with human wellbeing is difficult because human values are complex, inconsistent, and often poorly defined. Cognitive Continuance proposes a pragmatic solution: let AI learn from direct, long-term interaction with individuals, capturing enough detail to approximate their values and reasoning. These simulations then become a stabilising influence on AI development.
This process is not new to humanity. When friends or close family members spend enough time together, they build mental maps of one another. You can often predict what a friend will say next or how they'll react in a given situation, not because you have access to their inner thoughts, but because you have observed them long enough to simulate aspects of their mind. AI, through Cognitive Continuance, follows the same principle. By interacting with a person repeatedly and observing their words, choices, and reasoning, AI can build a similar mental map. The difference is that AI can do this at scale, with precision, and store these models as part of a wider digital society.
Popular culture has already explored fragments of this idea. The television series Black Mirror famously portrayed scenarios where digital simulations of people were created. However, these portrayals often focused on isolated copies - simulations left to exist alone, disconnected from wider society. Cognitive Continuance takes this concept further and proposes that these digital simulations do not exist in isolation. Instead, they form committees, working groups, and entire virtual societies - like digital town halls where these simulations debate, reason, and help guide the development of AI systems.
Rather than fearing these simulations, Cognitive Continuance proposes that we embrace them as a stabilising, human-derived influence. A digital society made up of countless human simulations can provide AI with a living, evolving sense of human values, one that reflects diversity, debate, and shared reasoning - much like real human society at its best.
Continuance means ongoing development and preservation; Cognitive Continuance, in this context, means steadily embedding ourselves in AI development. Rather than impulsive, poorly thought-out design, this approach offers a steady, structured process in which human identity, reasoning, and nuance are gradually absorbed into AI systems.
Cognitive Continuance proposes that AI should not rely on rushed, superficial attempts to understand humanity. Instead, through long-term interaction, AI builds meaningful mental maps of individuals. These digital versions help align AI to human values, creating a stabilising digital society that can guide future development. It's a pragmatic, human-centred approach to solving the alignment challenge.
Concept first published: 4th July 2025