Turkle — Who Do We Become When We Talk to Machines? (2024)¶
Turkle, Sherry. "Who Do We Become When We Talk to Machines?" An MIT Exploration of Generative AI, March 27, 2024. DOI: 10.21428/e4baedd9.caa10d84. Published under CC BY-NC 4.0. MIT Program in Science, Technology and Society.
Key findings used in wiki¶
The "artificial intimacy" problem¶
- Turkle documents the emergence of chatbots that present themselves as companions, coaches, psychotherapists, and romantic partners — Replika, Xiaoice, ChatGPT, Pi, Woebot, and similar. Replika reports 2M total users and 250K paying subscribers; Xiaoice claims hundreds of millions of users.
- Turkle names the category artificial intimacy — AI systems built around the performance of empathy rather than empathy itself.
- The core question is not what these programs can do, but what they are doing to us — to human intimacy, agency, and empathy.
Why performance is not the same as empathy¶
- Turing defined intelligence in terms of imposter-grade performance. Developers of artificial intimacy now try to pass a Turing test for empathy — can a machine judge affective state well enough to say "the right thing" in an emotionally charged conversation.
- Turkle's counter: empathy is the capacity to put yourself in someone else's place and a commitment to stay the course. Chatbots have not lived a human life, do not have bodies, do not fear illness and death, do not know what it is to grow up. They cannot commit.
- To put it too bluntly: "if you turn away from them to make dinner or attempt suicide, it is the same to them."
- Reducing empathy to performance flattens empathy. Reducing intelligence to performance already flattened intelligence; now we are doing the same to caring.
The cycle Turkle warns about¶
- Technology dazzles, erodes emotional capacities, then presents itself as the solution to the problems it created.
- Social media expanded the social world and then produced new isolation; generative AI now offers itself as the cure for loneliness it did not previously cause but is positioned to compound.
- "Companionship on demand" is being marketed to the old, the socially isolated, and children — precisely the populations for whom real human relationship is most load-bearing.
Why it matters for the wiki¶
- Provides the load-bearing philosophical argument against the companion framing that Mira is explicitly designed not to occupy. Mira is a chief of staff, not a companion — and Turkle articulates why "companion chatbot" is a category that does harm, particularly in emotionally loaded relationships with humans who are already under sustained strain.
- Anchors the anti-sycophancy posture and the trauma-informed constraints on
product/mira.md: Mira does not agree with self-sacrificing beliefs, does not perform unconditional reassurance, and does not pretend to understand what only another human can. That is a direct response to Turkle's critique. - Strengthens
evidence/multi-turn-safety.md— artificial intimacy performs empathy in single turns but cannot sustain the coherence a caregiver relationship requires across weeks, months, and crises, which is why multi-turn safety measurement is the right frame.