Every text message you've ever sent. Every email. Every social media post, comment, and reply. Every typed-out journal entry, review, or forum contribution. All of it creates a body of text that captures, in some form, how you communicate.
Large language models — the technology behind ChatGPT, Claude, and similar systems — are trained on text. And they're very good at learning and reproducing patterns in text. This has led to an obvious question: can an LLM trained on a specific person's writing simulate that person's communication style?
The answer is yes, with significant limits. And the implications for how we think about memory, grief, and identity are worth thinking through carefully.
What LLMs Can Actually Do with Someone's Writing
Language models are, at their core, pattern-matching systems. They learn the statistical relationships between words, phrases, and ideas in text — and they use those patterns to generate new text that is consistent with what they've learned.
When you fine-tune a language model on a specific person's writing, you're giving it a dataset to learn from. The model learns how that person tended to structure sentences, which vocabulary they favored, which topics they wrote about, what rhetorical habits they returned to, and whether they were terse or expansive, formal or casual.
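To make that step concrete, here is a minimal sketch of what fine-tuning on one person's writing can look like, using the open-source Hugging Face Transformers library. The model choice, corpus file name, and training settings are illustrative assumptions rather than a recipe; a real project would involve far more careful data preparation, consent, and evaluation.

```python
# A minimal sketch of fine-tuning a small causal language model on one
# person's writing. "gpt2" and "persons_writing.txt" are illustrative
# placeholders, not recommendations; hyperparameters are arbitrary.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_NAME = "gpt2"                    # small base model, for illustration
CORPUS = "persons_writing.txt"         # hypothetical file: emails, posts, journals

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# One document per line; the model only ever sees the patterns in this text.
dataset = load_dataset("text", data_files=CORPUS)["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="style-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The result of a run like this is a statistical model of the writing in the corpus, nothing more: it adjusts the base model's word-prediction patterns toward the person's habits, which is exactly why the quality and quantity of the source text matters so much.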
For someone who wrote a lot and wrote consistently — a person who kept detailed journals, who was an active email correspondent, who wrote regularly on social media — a well-trained model can produce text that resembles their writing style convincingly enough to fool people who knew them casually.
For someone who wrote little, or whose writing varied significantly across contexts, the simulation is much weaker. The model doesn't have enough data to capture real patterns, so it produces something generic.
What LLMs Cannot Do
Here's the honest limitation that tends to get glossed over in technology coverage: an LLM trained on someone's writing doesn't know what that person actually thought or would have said about any given topic.
It knows what they wrote. If they never wrote about their father, the model can't tell you anything authentic about how they felt about their father. If they never expressed an opinion on something, the model will invent one in a style that sounds like them, and that invented opinion may be entirely wrong.
The model also doesn't capture emotional presence. It can mimic a communication style, but style is the surface. The depth — the actual warmth, the actual concern, the actual humor that comes from real engagement with a real moment — doesn't survive the translation into pattern.
There's a useful analogy here: a skilled impressionist can imitate a famous person's voice and mannerisms convincingly enough to fool a casual listener. But no one believes they're actually talking to that person. The impression is recognizable as an impression. LLM simulations of deceased people have the same fundamental property: they're recognizable as approximations to anyone who knew the person well, even when they're superficially convincing.
The Philosophical Question: Does Interaction with a Simulation Provide Real Comfort?
This is the question that memory researchers, philosophers, and grief counselors are genuinely wrestling with, without a settled answer.
The people who argue yes tend to point to the psychological function of the interaction, not its metaphysical status. If a bereaved person writes a message to an AI trained on their mother's emails and receives a response that sounds like their mother, and that experience provides emotional relief — does the origin of the response matter? Some argue that the comfort is real regardless of the mechanism that produced it. The response activates the same emotional-memorial pathways, they say, so functionally it achieves something.
The people who argue no — or who are skeptical — raise several concerns.
First, the comfort may be palliative rather than integrative. Natural grief involves gradually accepting the permanence of a loss, constructing a stable internal representation of the person, and learning to live alongside the absence. A simulation that provides ongoing "presence" may delay this process rather than support it. The absence is real; pretending it isn't may not serve the grieving person in the long run.
Second, the simulation risks replacing authentic memory. The more someone interacts with an AI trained on a deceased person's data, the more the simulated responses become mixed in their memory with actual things the person said and wrote. This contamination can distort the authentic record over time.
Third, it raises questions of identity and representation. A simulation says things the actual person never said. If those things become part of how you remember the person — if you find yourself attributing to them views the AI generated — you are not remembering the person. You're remembering a construction.
Philosopher Walter Glannon and others in the bioethics community have written about the "integrity of memory" — the idea that accurate memories of the people we've loved serve an important function in both grief and identity. Simulations that are partly false by definition may compromise that integrity.
What This Means in Practice
If you're thinking about this from a practical rather than philosophical angle, a few things are worth knowing.
AI chatbot simulations of deceased people work best as a supplementary experience, not a primary one. Brief interactions, approached with awareness that you're engaging with a simulation, may provide some comfort without the risks of sustained reliance.
They work better when the underlying data is better. An LLM trained on sparse, decontextualized text produces a worse simulation than one trained on rich, varied, and abundant writing. And transcripts of voice recordings — which capture how someone spoke — tend to produce more natural-sounding simulations than written text alone, because spoken language captures more of the person's authentic expression.
They are entirely dependent on third-party services. The AI chatbots that simulate people are built and maintained by companies. When those companies shut down, and in a market this new and uncertain many will, the data goes with them. Unlike authentic recordings stored in your own archive, AI chatbot services hold only tenuous custody of something that can't be recreated.
Why Authentic Recordings Are the More Meaningful Layer
Every AI application for memory — chatbots, voice clones, conversational AI — sits on top of source material. The source material is what actually captures the person.
A recording of your mother telling a story is your mother telling a story. An LLM trained on her emails is a statistical model of her writing patterns. One is the person; the other is an abstraction derived from data about the person.
For families thinking about what they actually want to preserve, this distinction should drive the priority order. The authentic recordings are irreplaceable. They are the direct, unmediated presence of the person in a specific moment. No AI application can produce this. AI can only work with what already exists and derive approximations from it.
If you want the foundation — the authentic voice, the real presence — that requires deliberate recording while the person is alive. No amount of AI sophistication changes this.
For families building that authentic foundation — recordings of parents, grandparents, and the people who matter most — LifeEcho provides the structure to capture, preserve, and share those recordings in a way that lasts. See plans at lifeecho.org/#pricing.