The Death of Death Pt 3: Becoming Binary
A friend and I used to fall out every time we had a very peculiar conversation: a debate about whether there would be a time in the near future when an AI would be indistinguishable from a human.
He was on Team AI.
I was on Team Human.
Our thought experiment left the basic Turing Test far behind; our win-condition was sustained interaction: genuinely fooling someone into believing they were talking to a person over the long term. Like, decades.
It seems almost quaint now, the nature of our fights, given everything that's happened over the last few years with the mainstreaming of Large Language Model-based chatbots like ChatGPT and Claude. But although these stochastic parrots have successfully drawn in millions of people and inspired wild debates about the decline of humanity, I'm still firm in my opinion:
Humans can tell. If they can't, they're willingly fooling themselves. They're ignoring the signs. They're suspending their disbelief.
He thought that was enough. I do not.
It was only after a particularly fierce interaction one evening, which had started innocently with us imagining who we'd want to have around the AI designer table - psychologists (me), computer scientists (him), magicians (me), pathological liars (me), a thief (him), actors (me), architects (him), religious leaders (me) - that it hit me:
The reason we each felt the stakes were SO HIGH in our conversation was because we weren't arguing about AI. We were arguing about what we each imagined was the nature of being human.
He believed we could reverse-engineer our humanity.
I do not.
This is the core of my beef with the immortalists of Silicon Valley: their hubris that their numbers are the alchemical ingredients that underpin our being. Their faith that data will resolve the ultimate question of life itself (sidenote: I'd be SO excited, though, if the answer they found was 42).
I realise this contradicts my training as a scientist: our empirical dogma requires that what we can observe becomes the data we use to create hypotheses, test them, adapt them, and generate theories. But there's a hefty caveat to this, one I have heard in every statistics and research methods class I have ever taken:
Rubbish in, Rubbish out.
You have to have good data, which measures the thing you are testing (not what you hope it's testing), or you're going to get rubbish results on the other side.
Data drives Silicon Valley. But is the data that's driving their immortality imaginings rubbish?
I think it is. And that's because we are not code. We know this because their technologies have unintended consequences: consequences that arise because those technologies are programmed with fixed assumptions about humans, assumptions that make no allowance for our messiness.
Let's look at Google, one of the premier data aggregators on the web.
For years, people have been using Google as a kind of subconscious oracle, directly asking it some pretty delicate (or indelicate) questions, or a bunch of questions that add up to a delicate (or indelicate) insight. That we feel comfortable ‘confessing’ to the internet is similar to how some people might divulge secrets to a priest behind the screen; it feels less consequential than speaking face to face, like our words disappear the moment they are sent into the digital ether.
Google’s value is that we use it to find something we want, whether that’s a pharmacy’s opening hours or what to do on a dream holiday. Unexpectedly, its founders invented something that psychologists like myself have been trying to access for almost a century: a window into our desires.
If this is true, I'm out of a job. The whole field of psychology has just been solved.
But it ain't. We psychologists are delighted by this information, because yes - it's super interesting! Insight! Cool! But! The action of putting a desire into the search box does not describe the whole of the psychological phenomenon "desire", particularly if you want to produce ACTION out of it. There is a difference between what we might want and what we will do. There are processes in our minds that cartwheel through many different unseen factors that ultimately may or may not transform this attitude into behaviour.
This incomplete list includes: whether we think we can do it; what we imagine the repercussions might be; whether the imagined outcome fits our perceptions of ourselves; and what access we have to the things that might make the action possible.
In other words: conversion doesn't just happen because we see a link.
However, the company has succeeded in convincing buyers that the data it collects through search is valuable enough to pay for. Alphabet (Google's parent) is in fact an advertising company: the majority of its revenue comes from selling ads.
This has reinforced the idea that they have lots of insight to offer - including in the health space, where they've been very active for almost twenty years. But the foibles of pesky humans have caused some of these data projects to produce rubbish.
A good example is Google Flu Trends. In 2006, the executive director of Google's philanthropic arm, an epidemiologist named Dr Larry Brilliant, announced at TED that he wanted to use Google's awesome power to:
‘help build a global system – an early-warning system – to protect us against humanity’s worst nightmares’.
He and several engineers decided to experiment with the search engine’s powerful predictive tool to look for ebbs and flows in the world of public health. They imagined that if they monitored certain search keywords related to health in real time across Google’s enormous database, they might be able to reveal where people were about to catch the flu. This would be immense: public health officials and doctors would be prepared in advance and could hopefully stop the worst of it from spreading like wildfire.
They came up with a shortlist of words related to the flu - from 'flu-like symptoms' to 'cold/flu remedy' - and pulled these out of the 50 million searches that went through the system each week. They then cross-referenced these with the location of the searches, and looked for clusters over time. Smart thinking. Classic epidemiology, but with a ginormous database of desires.
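In code, the core of that idea is simple enough. Here's a minimal, purely illustrative sketch - the term list, column names and pandas-based approach are my own stand-ins, not Google's actual pipeline:

```python
# Illustrative sketch only: count flu-related searches per region per week,
# the raw signal a GFT-style system would track. All names are hypothetical.
import pandas as pd

FLU_TERMS = {"flu-like symptoms", "cold/flu remedy", "fever and chills"}

def weekly_flu_signal(searches: pd.DataFrame) -> pd.DataFrame:
    """searches: one row per query, with columns [week, region, query]."""
    flagged = searches[searches["query"].str.lower().isin(FLU_TERMS)]
    return (
        flagged.groupby(["region", "week"])
               .size()
               .rename("flu_search_count")
               .reset_index()
               .sort_values(["region", "week"])
    )
```

Spikes in that count for a given region are the 'clusters' the team was hunting for.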
GFT, as it was called, was tapping into something different from what the Centers for Disease Control were already collecting at the time: the incidence of diagnosed flu, reported to the CDC by doctors' offices. That gave a database of actual, recorded cases, which the public health organisation used to track mini-epidemics.
Armed with their data, Brilliant and his team looked at the CDC's patterns and compared them with their own. In 2009, they reported that their system caught the flu up to ten days faster than the public health body's tracking.
This was huge. But it was wrong.
As the service continued to roll out based on these early findings, GFT was predicting twice as many flu-like visits to doctors as the CDC was. Woah! But this was a fabrication: the Google data was massively over-predicting. The algorithms and machine learning behind the scenes assumed people would use certain search terms together when they felt poorly, but people searched for them when they weren't ill, too. As one commentator put it, ‘the initial version of GFT was part flu detector, part winter detector’.
Another way of describing it: correlation without causation. And with enough data, you're always going to find something.
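You can see why with a toy simulation (mine, not Google's; the numbers are chosen purely for illustration): generate tens of thousands of random 'search term' series and at least one of them will happen to track your flu curve impressively well, by luck alone.

```python
# Toy demonstration (hypothetical numbers): with enough candidate series,
# something will correlate strongly with your target purely by chance.
import numpy as np

rng = np.random.default_rng(0)
weeks, n_terms = 52, 50_000                 # one year of weekly data, many candidate "terms"
flu_cases = rng.normal(size=weeks)          # stand-in for a real flu curve
terms = rng.normal(size=(n_terms, weeks))   # random series, unrelated to flu by construction

# Pearson correlation of every candidate term with the target series
centered = terms - terms.mean(axis=1, keepdims=True)
target = flu_cases - flu_cases.mean()
corrs = centered @ target / (np.linalg.norm(centered, axis=1) * np.linalg.norm(target))

print(f"Best spurious correlation: {corrs.max():.2f}")   # typically well above 0.5
```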
Still, the idea of what could be surfaced from its monumental database of intent had thrilled Google, which now started to apply the Big Data philosophy to other Big Questions. Around this time, while I was working on a special project for the company, I was shown a prototype that was obviously riffing on GFT. It was codenamed PAX, and the algorithm looked for patterns in search terms - words like "protest" or "overthrow" - that indicated some kind of social and political unrest in a particular region. The output would be a report that could be sold to countries to give them a heads-up that clusters of people were starting to feel feisty. The ultimate aim was to interrupt war before it started.
This did not make it to market.
Humans are not machines, nor can we be reduced to binary ones and zeroes. Yet the belief that the body is a machine has become entrenched in our psyche. "It’s so pervasive, no one even thinks of it as a metaphor," a contributor told me when I interviewed them in 2014 for a BBC Radio 4 episode of Digital Human.
Technological metaphors update with each innovation. Steam power gave scientists the language to describe the body’s homeostatic systems: the body seeks equilibrium, so it releases energy, and circulates it in a particular way. The internal combustion engine explained how energy is transformed from one state to another; electric power, how our cells have potential. The telephone network helped us to understand how signals travel to the brain. Now, the biological metaphor has become information: data, measurements and taxonomies that can be fixed with the introduction of more code.
Ask Elon Musk, who at Davos last week assured the audience that ageing is a "very solvable problem":
"When we figure out what causes aging, I think we'll find it's incredibly obvious. It's not a subtle thing... all the cells in your body, you know, pretty much age at the same rate. I've never seen someone with an old left arm and a young right arm ever in my life, so why is that? There must be a clock that is synchronizing across 35 trillion cells in your body.
Immortalists believe that we are machines that stop functioning, and that all that needs to be done is to fix what’s broken with a technological solution, which will lead to long – even eternal – life. This profound and complicated problem has been reduced to a game of numbers and mathematical equations, built on a sketchy understanding of the complexity of what makes us alive. This model gives the body primacy – the material that can be measured and contained.
This data-driven approach is one that scientists who are experts in the human body find baffling. In his 2024 book Why We Die, Nobel laureate Venki Ramakrishnan writes that
‘the characteristic arrogance that many physicists and computer scientists display toward biologists’
is what causes the engineers to miss something crucial.
He's not the only medical doctor concerned by technologists' obsession with reducing humanity to data. Seamus O’Mahoney is a doctor and the prize-winning author of The Way We Die Now (2016). In 2025, he went to a longevity conference out of curiosity, and because they kept inviting him. When he got back, he wrote about his experience for a column:
They are interested only in the biomolecular and the monetizable; I heard a great deal over the four days about AI-designed drugs, glycans, the transcriptome ageing clock, and every other imaginable -ome, but almost nothing on the complexity of death systems and the social determinants of death and dying. They seemed strangely uncurious about the enemy they have declared war on. Ageing to them is simply a technical problem that can, and will, be fixed.
But I maintain that we are not machines, any more than those chatbots are human, however much we want to believe they are; in the long term they aren't fooling anyone. The trade-off for believing we can be reverse-engineered is that we humans must become more like appliances.
I wish I'd come up with that idea, but I have to thank Dr Elke Schwarz, a political theorist at Queen Mary University. In a studio in Central London, she told me,
"We live an inconvenient life. We are weird. We are messy. Our bodies are mortal. We die. Why can’t we be like products? Why can’t we be like the things that computer scientists make that they can improve and fine-tune?"
Because we aren’t. And to suggest that we can be is so offensive to my idea about humanity - and explains why I was so immovable in the encounters I had with my friend.
But here's the thing: this is the starting point for how Silicon Valley intends to ‘fix’ mortality. Whose side are you on?