Black Mirror, “Be Right Back,” and Philosophy
Black Mirror is without a doubt one of the most philosophical TV series of the last decade. Like a modern-day Twilight Zone, every episode has a different cast and a different setting. “Be Right Back” tells the story of a woman’s grief over the loss of her fiancé, and (as most Black Mirror episodes do) it explores the moral and social implications of possible future technology. From a philosophical point of view, however, “Be Right Back” also raises engaging questions concerning artificial intelligence, personal identity, and the problem of consciousness. Is complex behaviour an indisputable mark of intelligence? Can machines think, or have a mind? Is personal identity reducible to our memories, beliefs, and ways of thinking? Can two people be the same person without being the same subject of conscious experience? In this article I will take “Be Right Back” as an opportunity to explore these philosophical questions.
Imagine you’ve just lost your beloved partner without even having the opportunity to say goodbye. Imagine that someone tells you that you can talk with him or her again. Martha, the protagonist of “Be Right Back,” has to deal with this disconcerting possibility. Her boyfriend, Ash, has suddenly died in a car accident on the day of their move to a new house in the countryside. At his funeral, one of Martha’s friends tells her she can sign her up for something that helps, a way to talk to him again. Horrified, she refuses. A few days later, she receives an email from her deceased boyfriend. We all use social media; we record videos, write posts, emails, and texts, make phone calls, and leave traces on the internet of who we are and how we think and speak. Ash was a heavy social media user. Martha’s friend has signed her up to a service that takes all of a deceased person’s information from social media and creates a computer simulation, contactable through internet chat, that recreates his personality, his writing style, his sense of humor. After Ash’s death, Martha finds out she is pregnant. Desperate to talk one more time with her boyfriend, she starts chatting with the artificial Ash. Martha knows this is not the real person, but he talks exactly like him and has the same unique way of making jokes. This brings us to the first philosophical question: Can a person’s mind really be simulated by a computer? With whom is Martha talking?
Be Right Back and the Turing Test
“I propose to consider the question, ‘Can machines think?’” With these words, Alan Turing, one of the fathers of computer science, opened his 1950 article “Computing Machinery and Intelligence,” which gave rise to a debate on artificial intelligence that is still alive today. Turing thought that, in order to answer the question, one would first have to define precisely what the term “think” means. Rather than attempting a definition that would be too hard to pin down, Turing proposed an operational criterion: the “imitation game,” now known as the Turing test. His idea was thought-provoking: if a machine is so sophisticated that its verbal responses lead us to think we are interacting with a human being, then we should conclude that it is intelligent, like a human being.
The possibility of a computer passing the Turing test usually leads people to take different positions. Eliminativists might claim that Turing’s conclusion is not surprising; after all, we might not be so different from a very complex machine. For example, perhaps all the circuitry and wiring in our brain produces the illusion of being conscious, but we are nothing more than incredibly complex robots shaped by evolution and natural selection. The philosopher of mind Daniel Dennett proposed this view, sometimes labelled as “illusionism,” in his 1991 book Consciousness Explained. According to Dennett, consciousness is an illusion created by the work of innumerable parallel semi-independent and subconscious agencies in the brain. Every serious theory of consciousness should acknowledge this fact: only a theory that explained conscious events in terms of unconscious events could explain consciousness at all (Dennett, 1991).
On the other hand, a non-eliminativist might claim that, even if a machine passed the test—and even if we could say that it is intelligent—being intelligent doesn’t necessarily entail being minded or conscious. We can imagine a complex mechanism able to perform tasks we wouldn’t be capable of as human beings. Think, for example, of the supercomputer Watson, which trounced two human champions of the quiz show Jeopardy! Even if we might call Watson intelligent, we wouldn’t say Watson is minded, would we? Watson is simply made of strings of code programmed to produce an output. There is nothing it is like to be Watson, even if it is in a sense more “intelligent” than the two human Jeopardy! participants. It wasn’t happy when it won the game, nor did it feel anything at all while playing.
Now one might rightfully wonder: how can we be sure that a machine is conscious if we are grounding our judgement purely on its behavioral responses? One of the biggest problems in philosophy, the problem of “other minds,” extends such skepticism to other people as well. Is it reasonable to conclude that other human beings (and perhaps animals) are conscious if such an inference is not necessarily warranted? If this conclusion is not warranted for human beings, why would the case of AI be any easier?
Let’s put absolute skepticism aside for a moment. “Be Right Back” confronts us with a very hard case—one where it’s not so easy to deny that a mind is present. Anyone who has been in a long-distance relationship knows that chat is only good up to a certain point. There’s something in the voice that writing doesn’t convey. So Martha decides to try an upgrade of the software, which allows her to talk on the phone with Ash, recreating his voice from recorded samples. She shares her emotions with him as she goes through the pregnancy; she sends him videos of their baby’s ultrasound scans. One thing leads to another, and we come to the real breakthrough of the story, the final upgrade: Ash says he doesn’t have to be just a voice on the phone; the company can provide an artificial copy of his body, made of synthetic flesh, onto which he can be uploaded. Martha cannot resist. She receives the clone and activates it: Ash is there, looking at Martha with her boyfriend’s eyes, talking with his voice, making jokes in his unique style. But something is not right. His skin is unrealistically soft. He jokes about it and asks her to feel his fingertips; they are completely smooth. Ash asks Martha, “Does it bother you?” She replies, “No! Yes… I don’t know.” Martha is at the mercy of her emotions and ends up having sex with him. The sex is great, even better than with the real Ash (as might be expected, given the clone’s repertoire of sexual routines based on pornographic videos).
Leaving Turing Behind
But Martha soon discovers what dealing with a clone means. He only follows her orders; he doesn’t have a will of his own; and he lacks memories of anything that was never posted online. The fact is, he’s not enough of Ash. She cannot stand the fake Ash anymore, so she takes him to the edge of a tall cliff and orders him to jump off. He agrees, but this makes Martha even angrier: the real Ash would have begged for his life in despair. So the clone starts begging for his life, and Martha realizes she cannot get rid of him.
Martha ends up raising her daughter alone. She keeps Ash’s clone hidden in the attic and lets her daughter visit him on weekends. She probably wants her to know something about her father, even if only through a copy of him. At the beginning of the episode, the attic is described as the place where the pictures of loved ones go when they die. The moral of the story seems clear, as suggested by the lyrics of the Bee Gees song at the beginning of the episode: “If I can’t have you, I don’t want nobody, baby.” A copy is just a copy, and nothing can substitute for a real human being in all his complexity. Nonetheless, getting rid of something that so faithfully resembles someone we once loved, even if only from the outside, can be very difficult, like getting rid of an old, faded picture.
“Be Right Back” offers a vivid representation of the imitation game. The moral of the episode concerning the Turing test, however, is open to interpretation. Someone might take it to mean that the clone is just a machine, without emotions and without will; ultimately, it fails the Turing test, and that’s why in the end it doesn’t stand a chance as a substitute. On the other hand, someone else might think that the clone only fails the test for “being Ash”: apart from the creepy details concerning the synthetic body, he could pass for a human being. Maybe if Ash had survived the car accident with brain injuries, he would have ended up behaving somewhat like the clone.
From Persons to Subjects of Experience
Can machines be considered persons? Someone might think, at this point, that the whole matter is framed from the wrong perspective. The real question is about personal identity: whether the clone can be considered the same person as Ash. According to Robert Nozick, what really matters in defining the continuity of personal identity is closeness to the original person (Nozick, 1981). In this case, despite the lack of memories and the behavioral differences, Ash’s clone would be his “closest continuer,” and hence should be considered the same person.
However, as we will see in a minute, in more complicated contexts the closest continuer is not always simple to identify. Consider a case presented in another Black Mirror episode. In the special episode “White Christmas,” we see that a person’s mind, instead of being simulated on the basis of information from social media, can be copied directly from the brain and put into a portable device called a “Cookie.” Connecting the two episodes, we can witness the emergence of a paradox concerning personal identity. In “Be Right Back” we see that the clone is clearly different from the real Ash. But let’s imagine, for the sake of the discussion, that from a small DNA sample the company could actually grow a biological clone of the man in a lab, and that, thanks to the information contained in the Cookie, it could upload an exact copy of his mind into the clone’s brain. Now we would have a perfect physical copy, with the same DNA, the same neural structure, the same memories, and the same personality. Would this clone be the same person as Ash? We might think so. What relevant difference can be found between them, after all?
Derek Parfit suggests that what really counts in our judgements concerning personal identity is psychological continuity. Whatever your intuition is, consider this final case, proposed by Parfit. What if, rather than cloning a dead man, we created a copy of an existing person, physically and psychologically identical to the original (as happens with the Cookies in “White Christmas”)? Would the copy and the original be the same person? If we agreed in the previous case, then we have to agree in this case too. But this leads to a paradox, because it’s evident that two different individuals cannot be the same person. If they were the same person, then the fact that one of the two died would be of little importance. This is indeed Parfit’s conclusion, according to which a person can survive as somebody else, or even through her offspring (Parfit, 1984). In this case, Nozick would agree too. The closest continuer would be the original person, who is physically continuous with the person before the creation of the copy. If the original person died, the remaining copy would become the closest continuer and hence would count as the same person. If everything is right so far, even an imperfect copy, as in the case of Ash, would count as the same person. This is one of the reasons why “Be Right Back” is so insightful: it points out the dilemma we would face in such a situation.
But think about a variation of the thought experiment in which a person is teleported instead of cloned: would you step into a teleporter that, instead of transferring your atoms and reassembling them into your body in another place, created a perfect copy of your body elsewhere but destroyed the original? The paradox is the same. Had we not destroyed the original, we would end up with two copies that cannot be the same person. Our judgements concerning personal identity might rest on physical similarity and psychological continuity, but nobody would be happy to enter a teleportation device that destroyed her body, even if somewhere else a perfect copy were created. It might be cold comfort knowing that there is an individual who is the same person as her, but who is not her. In fact, this is not the end of the philosophical reflection, because being the same person might imply being identical to somebody in one’s memories, thoughts, and beliefs, but it doesn’t imply being identical with her. Nozick’s and Parfit’s solutions are as convincing as an account of personal identity can be, but they might be leaving out what matters most: the experience of the individual from the “first-person” perspective. For some of us, the idea that we could survive as another individual is only partially satisfying, even if this individual could rightly be considered the same person as us. In this case, being the same person doesn’t entail being the same “subject of experience,” and this brings us back to the problem of consciousness as experienced “from the inside” by the subject. If every day we created a new perfect clone of Ash, with all the same memories, and destroyed the previous one, we would continue dealing with the same person, but in reality we might be dealing every day with a new and different subject of experience.
This new subject would be indistinguishable from the previous one both from the outside (we wouldn’t notice any difference) and from the inside (he would have exactly the same memories as the previous one; he probably wouldn’t even know he had just come into existence). This conclusion might seem extravagant or even anti-reductionist, but it is in fact the most physicalist one we can conceive. The idea that two subjects can be qualitatively identical with each other without being identical in every respect (physical composition included), and without being numerically identical, is part of the heritage of functionalism. On this view, a subject ceases to exist the moment the material aggregate of which he is composed ceases to exist. A perfect copy made of different physical matter might be qualitatively identical to the original subject, and might rightly be judged the same person, but it would not be the same subject of experience. Of course, as anticipated, consciousness might be a complete illusion, but even in that case there would be as many different victims of that illusion as there are subjects walking the Earth. More questions about “Be Right Back” can be raised, and the first one is: Is it really Ash who has come back?
Acknowledgements: I wish to thank Ben Singer and Kyle Johnson for their insightful comments on a previous draft of this article.
Dennett, D. C. (1991). Consciousness explained. Boston, MA: Little, Brown.
Nozick, R. (1981). Philosophical explanations. Cambridge, MA: Harvard University Press.
Parfit, D. (1984). Reasons and persons. Oxford, UK: Oxford University Press.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.