Can AI Recreate a Conversation with Someone Who Died?

An honest examination of what AI can actually do — what services exist, what they technically require, the ethical dimensions, and what bereaved people report. Plus how this differs from listening to a real recording.

The question comes up more often as AI capabilities grow: can you use AI to have a conversation with someone who has died?

The short answer is that several companies have tried, with varying results and significant ethical complexity. The longer answer requires looking at what these services actually do, what they actually deliver, and what bereaved people actually experience — not what the demos suggest.

What Services Actually Exist

A handful of services have offered some version of "conversation with the deceased" over the past several years. They take meaningfully different technical approaches.

HereAfter AI is one of the more developed services in this space. Their model requires the person to record themselves — answering hundreds of interview questions about their life, values, memories, and opinions — before they die. These recordings are used to train a conversational AI that, after the person's death, can respond to family members' questions in a way that approximates how the person might have answered. The quality of the experience depends heavily on the completeness of the recordings made beforehand.

StoryFile takes a video-interactive approach. The person records video responses to anticipated questions, and the StoryFile system matches each incoming question to the most relevant recorded answer. This is retrieval rather than generation: the responses are actual recordings of the person, not synthetic output, though the scope is limited to what was recorded in advance. The system was notably used for Holocaust survivor testimonials, allowing later generations to "ask questions" of survivors and receive recorded answers.
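
The matching step is conceptually simple. Here is a minimal sketch of retrieval-style matching in Python, using word overlap as a stand-in for whatever similarity measure StoryFile actually uses (their implementation is not public, so everything below, including the example prompts and file names, is an illustrative assumption):

    import re

    def tokenize(text: str) -> set[str]:
        """Lowercase the text and extract word tokens, ignoring punctuation."""
        return set(re.findall(r"[a-z']+", text.lower()))

    def best_recorded_answer(question: str, recordings: dict[str, str]) -> str:
        """Pick the clip whose recorded prompt best overlaps the incoming question."""
        q_words = tokenize(question)

        def similarity(prompt: str) -> float:
            # Jaccard overlap: shared words divided by total distinct words.
            p_words = tokenize(prompt)
            return len(q_words & p_words) / len(q_words | p_words)

        best_prompt = max(recordings, key=similarity)
        return recordings[best_prompt]

    # Hypothetical interview prompts mapped to the video clips that answer them.
    recordings = {
        "Where did you grow up?": "clips/grew_up.mp4",
        "How did you meet Grandma?": "clips/met_grandma.mp4",
        "What was your first job?": "clips/first_job.mp4",
    }

    print(best_recorded_answer("Tell me where you grew up", recordings))
    # -> clips/grew_up.mp4

Production systems typically use semantic embeddings rather than raw word overlap, but the key property is the same: the system only ever selects from existing recordings; it never invents an answer.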

Project December was a text-based service that used language-model technology to simulate conversation with a deceased person based on whatever text the user provided about that person. It drew wide attention and considerable controversy because it allowed users to simulate a deceased loved one from minimal source material, and critics raised concerns about the potential for harmful psychological effects.

Various other services have emerged, and several have since shut down, including Eternos and a number of smaller startups that found both the market and the ethics difficult to navigate.

What These Services Technically Require

The most important technical reality, true in some form for every service, is this: AI conversation requires source material.

There is no AI that can simulate a specific person convincingly without data about that person. The data might be:

  • Recorded interviews or answers to questions (HereAfter AI model)
  • Video recordings of the person speaking (StoryFile model)
  • Text — emails, social media posts, letters — that captures their writing style
  • Voice recordings used to train a voice model for audio responses

The more source material there is, and the better its quality, the more convincing the simulation. With minimal data the output is generic and hollow: an AI that sounds vaguely like it might be that person if you squint, but doesn't actually capture them.
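
To make the dependence on source material concrete, here is a hedged sketch of how a text-based service might fold writing samples into a persona prompt for a general-purpose language model. The function, the prompt format, and the name "Eleanor" are illustrative assumptions, not any vendor's actual code:

    def build_persona_prompt(name: str, source_texts: list[str], question: str) -> str:
        """Pack whatever writing samples exist into a prompt for a generic model."""
        samples = "\n---\n".join(source_texts) if source_texts else "(no samples available)"
        return (
            f"You are simulating {name}. Imitate the tone, vocabulary, and "
            f"opinions in the writing samples below.\n\n"
            f"Samples:\n{samples}\n\n"
            f"Question: {question}\n"
            f"{name}:"
        )

    # Rich samples give the model both style and substance to imitate.
    rich = build_persona_prompt(
        "Eleanor",
        ["The lake house always smelled of cedar and rain.",
         "Don't let anyone tell you thrift is the same as joylessness."],
        "What did you think of the lake house?",
    )

    # An empty sample list leaves the model nothing specific to work from.
    thin = build_persona_prompt("Eleanor", [], "What did you think of the lake house?")

With an empty sample list, the prompt gives the model nothing specific to imitate, which is exactly why thin source material produces generic, hollow output.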

This creates a significant practical problem for anyone who didn't plan ahead: if someone died without leaving significant recorded material, there's very little any AI service can work with.

The Experience: What People Actually Report

Bereaved people who have used AI conversation services describe a wide range of experiences.

Some find it comforting, at least initially. Hearing something that resembles the cadence of a loved one's speech, and receiving responses that echo the way they talked, can provide a momentary sense of closeness. Several people have written about using these services in the acute phase of grief and finding them helpful in the same way that talking to a photo or a gravestone can be helpful: a directed expression of longing that has somewhere to go.

Others find it deeply unsettling. The "uncanny valley" problem — the sensation when something is close enough to be recognizable but wrong enough to be disturbing — applies to AI simulations of people. When the AI gets something wrong, or says something the person would never have said, it can produce a jarring effect that feels like a violation rather than a comfort.

A third category of experience is more complicated: people who find the simulation initially comforting but then notice, over time, that their memories of the actual person are being contaminated by the simulation. They can no longer clearly distinguish between what their mother actually said and what the AI generated in her voice. This is the memory distortion concern, and it is among the most serious ethical objections to this technology.

The Ethical Dimensions

The ethics of AI conversation with the deceased involve several distinct questions.

Consent. Did the person consent to having their likeness, voice, and personality simulated? Services like HereAfter AI require active participation before death, which addresses consent directly. Services that let family members create a simulation from whatever data exists, without the person's prior agreement, do not. Few jurisdictions have legal frameworks governing the posthumous use of a person's voice and likeness, so the practice is largely unregulated.

Accuracy. Any AI simulation is an approximation. The more it's used, the more it becomes a constructed version of the person rather than a representation of who they actually were. The AI can't know what the person would have thought about something they never addressed. It generates plausible-sounding responses, but plausible and accurate are not the same thing.

Exploitation of grief. Critics have raised concerns that AI grief services exploit bereaved people's emotional vulnerability — offering a kind of closure or connection that can't actually be delivered and that may delay or complicate natural grief processes. The business model of charging monthly subscriptions to grieving people for access to a simulation of their loved one raises questions about the relationship between commerce and mourning.

Memory integrity. As noted above: a convincing simulation, used extensively, risks altering authentic memories. That is a psychological risk with little precedent in the history of grief.

The Difference Between Simulation and Recording

This distinction is worth dwelling on.

A recording of someone is fixed and authentic. It captures what the person actually said, in their actual voice, at an actual moment in their life. There's no interpretation or generation involved. When you listen to a recording, you know you're getting something real.

An AI simulation is dynamic and synthetic. It generates responses on the fly based on a model trained on the person's data. When it says something, there's no guarantee that the real person would have said that. It is an inference, not a fact.
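
The contrast can be put in almost mechanical terms. A minimal sketch, with the generation step left abstract (the model.generate call stands in for any language-model API; it is an assumption, not a specific vendor's interface):

    def listen(recording_path: str) -> bytes:
        """Playback is retrieval of a fixed artifact: the same bytes every time."""
        with open(recording_path, "rb") as f:
            return f.read()  # exactly what was captured, nothing more

    def simulate(model, persona_prompt: str, question: str) -> str:
        """Simulation samples from a model: a new, unverifiable utterance each call."""
        # model.generate is a placeholder, not a specific vendor's API. Its
        # output is an inference about what the person might have said,
        # not a record of what they actually said.
        return model.generate(persona_prompt + "\n" + question)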

For grief, this distinction matters enormously. Part of what makes the loss of a person real — and part of what allows grief to move — is the absence. Recordings help you feel a presence without pretending the person is still here. A simulation, at its best, creates an illusion of ongoing presence. Whether that illusion is therapeutic or harmful depends heavily on the individual and on how it's used.

The Foundation Still Matters

Regardless of what you think about AI conversation technology, the practical point remains: recordings are the foundation.

AI grief services work better — or only at all — with good source material. A family that has extensive recordings of a person can make more meaningful use of any AI tool than a family that has almost nothing. And a family with extensive recordings already has the most valuable thing: the actual person, in their actual voice, saying things they actually said.

The authentic recordings don't require an ongoing subscription. They don't disappear if a company shuts down. They don't generate responses that might be wrong. They are what they are — and in grief, that certainty matters.

If you're thinking about preserving something for your own family — a record that remains authentic and doesn't depend on third-party services staying online — LifeEcho is designed for exactly that kind of intentional, lasting preservation. Learn more at lifeecho.org/#pricing.

Frequently Asked Questions

What AI services allow you to 'talk' to a deceased person?

Several services have attempted this, including HereAfter AI (conversational AI based on recorded interviews), StoryFile (interactive video responses), and Project December (text-based chatbot). Each takes a different technical approach and has different quality and ethical tradeoffs. None of them recreate the actual person — they simulate responses based on data the person provided or that others contributed.

Is talking to an AI version of a deceased loved one healthy for grief?

The research is limited and mixed. Some bereaved people report finding it comforting in the short term. Others report feeling disturbed or finding that it complicates their grief by blurring the line between the real person and a simulation. Grief therapists are divided, with concerns centering on whether AI simulation interferes with the natural process of accepting loss. There is no consensus recommendation.

What happens to an AI grief companion if the company shuts down?

The service and everything it contains disappears. Unlike authentic recordings stored in your own archive, AI companions hosted by third-party services are entirely dependent on that company's continued operation. Multiple early services in this space have already shut down. This is one of the strongest arguments for prioritizing authentic recordings in your own control over AI-generated simulations.

Preserve Your Family's Voice Today

Start capturing the stories and voices of the people you love — with nothing more than a phone call.

Get Started

No app or smartphone required · Works on any phone