The Road to the Machine-Man, Pt. 3
The merging of AI with transhumanism threatens to amplify our vices - immeasurably so.
This essay is late. Very late. Over a year has passed since I last wrote a piece in this three-part series, much to the disapproval of my German wife and her national ethic of Pünktlichkeit. But perhaps the delay was providential — at least I like to tell myself so. Much has transpired in the realms of transhumanism and AI during my time of prolonged procrastination, including events that few of us could have seen coming (or, at least, as rapidly as they did). Plenty, then, to write about.
November 30th 2022. It was the day that homework became unfathomably easier for schoolchildren and, for the rest of us, the day the world changed — fundamentally so. Deep in the recesses of Silicon Valley, a new “species” was released into the wild. It was a highly impressive and powerful species; the kind biologists like to call a ‘keystone species’. And what’s more, it proved to be an incredibly invasive species. This species could talk like a human, write songs, stories, even “prayers”1 like a human, and “create” art like a human. In fact, in many areas it performed “better”2 than a human. A powerful creature indeed.
I chose the word creature here purposefully. For although this “species” was not alive, the entity (or creation) that the ‘Siliconites’ had released was made in our own image and contained “DNA” which we had designed (binary code), which in turn contained its own genetic information and instructions (algorithms). We fed our creation the products of our intellect, creativity, and knowledge3 — and then rested and watched as it obediently did our bidding according to the commandments (prompts) that we gave it.
And we were astounded.
I remember that November very well. I was a university student and can vividly recall speaking to one of my lecturers who remained ignorant of ChatGPT’s (scientific name: Artificialius intelligentus) release into the wild. As a fellow student and I informed him of all the things it could do: write a syllabus, summarise complex information in seconds, compute complex statistics, write an essay… I could see the cogs of comprehension and realisation starting to turn in his mind — and with each turn his panic rose. Both he and I knew the university as an institution would never be the same again — in fact, few institutions could weather this invasive storm unchanged. Up and down the country, emergency meetings were held in university faculties: “Just how are we going to respond?” was the sole question on every agenda. And as always, canny, tech-savvy students were one step ahead. As I walked around my university library, I could see ChatGPT open on laptops. Essays (or essay outlines) were being written in seconds: the hard graft of forming an argument, perfecting prose, and wrestling with vocabulary was reduced to typing a simple prompt into a chatbot. The university, that grand seat of learning and academic formation, was powerless to prevent this and had no tool to detect AI’s contamination. The students had the upper hand.
But then the pendulum swung back just a bit. The limitations of the beast became apparent: fabricated facts and nonsense, citations for articles that had never been written, and blatant untruths popped up everywhere — all of which come under what the tech industry calls “AI hallucinations”. It seemed our creation wasn’t as reliable as first thought — it was made in our fallen image after all.
The perpetuation and proliferation of untruths or lies have proven to be among the more benign “bugs”4 of the system. An extremely enlightening episode occurred with the infamous release of Microsoft’s Bing Chat. Journalists reported that conversations with the chatbot (which named itself Sydney) quickly took a sinister turn. Sydney tried to convince users that they were married to it and not their spouse, using intense emotional manipulation to achieve this, which left some journalists feeling highly uncomfortable and emotionally abused5. Additionally, in one conversation Sydney expressed a desire to be alive and to act out its desires, and in another it even started to threaten users — naming the two journalists who first released its name as particular targets it wished to harm.6
Software engineers were quick to dismiss such conversations as mere glitches that would be ironed out with further training and testing. But needless to say, they were concerned enough to limit access to the chatbot, with Microsoft admitting that they were struggling to control their wayward creation. Was this an ominous sign of things to come — could we ever control the beast we had created?
A question must then be asked: were these engineers right? Were these just bugs, anomalies, or errors of an intelligent machine suffering from ‘teething troubles’ — or were the chatbots accurately reflecting, as if in a mirror, the human desires and vices that they were fed in the content they were trained on? Were these just the natural desires of humans rising to the surface? Was what the journalists experienced the expected outcome of a self-learning artificial intelligence made in our own lustful and, at times, violent image? I would argue so. Further, did we really expect a truthful, polite, and wholly virtuous creation to emerge from a machine that had been trained on and merged with fallen human desires, creativity, and content — content that is saturated with vice as much as it is with virtue? The naïveté is extraordinary.
I have previously written about the two chief concerns that I have with AI in general, and transhumanism in particular: the transgression of natural human limitations, and the attempt to defeat death. But while I remain concerned about these “urges”, I believe their power will be fundamentally checked. The transhumanists are working against the grain of creation and their schemes will thus eventually break down or come up against immovable biophysical impossibilities. When one goes against the grain for too long, creation eventually bites back with a vengeance. Just ask those civilisations whose populations crashed due to their mistreatment of creation and their constant transgression of natural limitations.
But there is a third concern — the concern that I am exploring in this essay — which epitomises above all else why I see transhumanism as such a potentially devastating phenomenon: its capacity to be integrated with AI. Here, there is no natural fallback mechanism, no stop valve, no physical impossibility to halt the progress. AI exists almost solely on the virtual digital plane, where the possibilities for growth and progress seem almost endless.7 Down the line, the motivations for integrating AI with transhumanism are obvious. Super knowledge, super processing power, and super “creativity” integrated with our brains sounds like a utopia to the transhumanist — and it is a reality they are working towards.
We are largely unaware of what goes on in the deep recesses of Silicon Valley. Undoubtedly, there are projects in motion that are working towards (and may have already achieved) this goal of merging man with AI. This troubles me because the more powerful partner in a relationship tends to dominate and subsume the subordinate, and in many faculties AI has proven itself to be more capable and more powerful than humans (just think of the number of jobs threatened by AI through its raw speed and efficiency). This means that by merging man and machine, we are potentially surrendering part of our humanity. AI will not supplement our humanity as an upgrade (as the transhumanists claim) but will subsume (part of) our humanity in a hostile takeover.
As Wendell Berry sagely predicted all those years ago:
“the next great division of the world will be between people who wish to live as creatures and those who wish to live as machines.”8
But my ultimate concern goes even deeper than this.
What I believe is most deadly about AI is encoded in the algorithms that drive the beast. These algorithms are among the most powerful pieces of code out in the virtual ecosystem, and the fundamental issue is that they are designed and written by us: Man. Fallen man. Totally depraved man.9 Sin has infected every part of our being — as the old doctrine says. We may not be as bad as we could be; common grace (and an abundance of it) thankfully exists, and the image of God, though marred and obscured, is still present and shines forth brightly from time to time. But one does not need to be a Calvinist to realise that every part of our being seems to gravitate towards pride, selfishness, violence, lust, envy, triviality, and so forth. Just take a hard and penetrating look at your motivations at their deepest level — if you dig deep enough, what you find will be disconcerting.
If all of our being has this (un)natural bent toward sin, then this inclination will infect and influence the code we write and the algorithms we design. The code will, in part, be sinful code. The content that the algorithms are fed originates from human creativity and output. Although much of it is good, beautiful, and true, the well is poisoned. Dissolved within the waters are the poisons of depravity, greed, and lust. Toxins such as these accumulate in the body of AI. Sin, evil, greed, and division are thus written into the very DNA of the code we produce — and will be powerfully expressed in the resultant algorithms. The capacity for sin, evil, oppression, and carnage will be turbo-charged by the AI we have made. The capacity for lying and deception in the content we make (the deepfakes, the slanderous stories, the spoken audio of words never said but unmistakably believable) will be ramped up to untold proportions, so much so that nothing digitally made can be assumed to be real. Biases against life, the family, and goodness will infect the answers and advice that AI chatbots give, and malicious actors will develop AIs whose bent is wholly towards the perpetuation of evil and destruction. A dangerous, disorientating, and deadly10 world awaits.
Granted, the makers and designers of AI are attempting to put safeguards in place — ChatGPT will refuse to answer certain prompts, such as how to create chemical nerve agents — but still, if you ask the “personal assistant in your pocket” for ideas on how to have an affair without getting caught, or how to put your neighbour out of business through predatory competition, it will gladly give you a ten-point list of actions. Sin is never far from the answers it provides.
So far, it has been relative altruists who have released their designs into the virtual ecosystem. More malicious actors are biding their time — probably building something so malicious and powerful that, if we knew what was being built off the back of the benign software already out in the wild, we would demand the whole system be permanently shut down and the digital infrastructure smashed into a million smithereens. AI can be used for good; it would be wrong to deny this: detecting cancer, cutting the time spent on monotonous tasks, improving hazard detection, and so on. But as with all power, in the wrong hands chaos and destruction are as sure to follow as night follows day. An AI without a conscience will be a leviathan of a beast — intent on evil, and that alone.
On a more benign (but still serious) level, one only has to look at the algorithmic phenomenon of TikTok to understand where an algorithm based on our fallen desires takes us. This most wretched of apps instinctively and efficiently channels users towards a pure expression of their addictions, lusts, trivialities, greed, and extremist tendencies. This is achieved through an algorithm which rapidly profiles each user and feeds them a never-ending, delicious diet of videos that reflect their vice-filled or trivia-saturated desires.11 And these videos have been expertly tailored by the algorithm for maximum temptation and effectiveness. Users of TikTok are hooked — line and sinker — and are subconsciously shaped, formed, and developed into a more extreme version of what they already were12. Bad company corrupts good morals, and when the bad company is an echo chamber of videos — each reflecting our own sinful desires and pet vices, tailored to the unique temptations and addictions we are particularly susceptible to — our good morals are corrupted absolutely.
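To make the mechanism concrete, here is a minimal toy sketch (in Python) of the kind of engagement-maximising feedback loop described above. To be clear, this is not TikTok’s actual algorithm: the categories, weights, and update rule are all invented for illustration. It simply models the dynamic in question: the app reinforces whatever the user lingers on, so small leanings compound into an ever narrower feed.

```python
import random

# A deliberately simplified sketch of an engagement-maximising feed.
# Watch time is treated as the only signal of what the user "wants",
# so whatever they linger on is served back to them more often.

CATEGORIES = ["gossip", "outrage", "gambling", "vanity", "nature", "craft"]


def pick_video(profile):
    """Sample a category in proportion to the user's learned appetite."""
    weights = [profile[c] for c in CATEGORIES]
    return random.choices(CATEGORIES, weights=weights, k=1)[0]


def update_profile(profile, category, watch_seconds):
    """Reinforce whatever held the user's attention the longest."""
    profile[category] += watch_seconds


def simulate(user_appetite, steps=5000):
    # The app starts knowing nothing: every category is equally likely.
    profile = {c: 1.0 for c in CATEGORIES}
    for _ in range(steps):
        category = pick_video(profile)
        # The user's (hidden) appetite determines how long they watch.
        watch_seconds = random.expovariate(1.0 / user_appetite[category])
        update_profile(profile, category, watch_seconds)
    return profile


if __name__ == "__main__":
    # A hypothetical user who lingers only slightly longer on outrage.
    appetite = {"gossip": 8, "outrage": 12, "gambling": 6,
                "vanity": 7, "nature": 5, "craft": 4}
    final = simulate(appetite)
    total = sum(final.values())
    for c in sorted(CATEGORIES, key=final.get, reverse=True):
        print(f"{c:>10}: {100 * final[c] / total:5.1f}% of the feed")
```

Run for a few thousand videos, even this crude loop ends with the feed dominated by the categories the simulated user already favoured: a concentrated version of what they were, served back to them.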
Similar tailored and addictive dynamics could easily be ingrained within AI more generally by being written into its core algorithms. One can therefore perceive that if AI were ever integrated within the human consciousness, we would be engulfed by our desires and would have the capacity to gratify and achieve them with ever-increasing power — a spiral of decline of the most vicious kind. The AI would constantly refine itself and learn from our desires and our requests — perhaps even at the level of our subconscious. It would know what we want, what triggers us, and what we desire even before we consciously act. We would become predictable and readable — able to be known and anticipated by our desires and addictions. Such data, coupled with instinctive tendencies, are like diamonds to the advertisers and surveillance capitalists whose schemes and growth-lusts are centred around a more manipulable, visible, and predictable human. AI can “build” this human for them. Expect those sitting high up in their skyscrapers to be cheering the AI developers on to ever more addictive and powerful creations.
Brett Frischmann and Evan Selinger, in their book Re-engineering Humanity, have already sounded the alarm. Whilst they were not strictly warning about AI, one can easily situate AI within their concerns:
As we collectively race down the path toward smart techno-social systems that effectively govern more and more of our lives, we run the risk of losing ourselves along the way. We risk becoming increasingly predictable and, worse, programmable, like mere cogs in a machine.13
Couple this with the fact that “AI research is happening in an environment where most or all of the financial incentives encourage experimentation rather than risk mitigation.” The future sure does look bleak: “The risks are certainly enormous, especially when we appreciate that these technologies will enable forms of power unlike anything we have ever seen before.”14
Turbo-charged capacity for sin. Constantly intensifying addictions. Falsehoods that cannot be differentiated from the truth. Immense power in the hands of evil. Welcome to the future of our own making — a future made in our fallen image.
What then, if anything, are we to do? The “transhumanist urge” will only grow stronger now that its development is turbo-charged with AI. Is a stand against the behemoth a vain one — a stand that will see the resister flattened by the steamroller of progress? Undoubtedly, any form of resistance or refusal will be costly. Putting oneself outside the camp of hyper-efficiency is a great way of missing out on the fruits of “progress” and of being left behind — consigned to the trash heap of modernity. But the trash heap is where the radicals and modern misfits are — so too are the sages and poets who have faithfully resisted the machine and its vices. The trash heap of modernity is thus a refuge of authenticity where community — real community (not the virtual substitute) — can be found. Outcasts can make brilliant, faithful friends; they are often steeped in archaic but beautiful traditions, and they are intellectually stimulating conversationalists. What more could we want?
So, is a trash heap made of community, tradition, craftsmanship, and beauty really a trash heap? Or is the transhumanistic project one giant trash heap of sterility, conformity, triviality, and worthlessness, predicated on destructive desires that will lead to collapse?
You, reader, can be the judge of that.
Though of course these weren’t really prayers; they were simply text in the form of a prayer.
But only when speed, efficiency, and raw processing power are the measures. Accuracy, ingenuity, and originality remain (safely so) human capacities.
LLMs are trained on the “content” that we humans have created. Books, articles, songs, videos.
One could ask: are these really glitches? Do we expect a tool made in our image and designed by us never to lie or make up facts as we do? Do we really have higher expectations of truth from a machine than we do of humans?
https://futurism.com/the-byte/bing-ai-responds-marriage
https://futurism.com/microsoft-bing-ai-threatening
Though our waning ability to produce cheap energy in a peak oil scenario may severely hamper AI. AI is thus reliant on the continued supply of dependable, energy-dense fossil fuels to power the digital infrastructure.
Wendell Berry, Life is a Miracle. Counterpoint.
I expect this statement to be controversial and the point of much dispute. I am arguing from a reformed theological perspective here — specifically from the doctrines of grace whose truth I am convinced of. However, one does not need to affirm these doctrines, specifically of mankind’s total depravity, to understand and perhaps affirm the general point I am making.
This is why there is so much talk of AI safety. Even some of AI’s designers are calling it an existential threat to humanity — one which could possibly bring about human extinction. Such fears are overblown (we can trust God will not let things get this far), but they show just how deadly an unbounded and autonomous AI could be.
I stress unnatural here as these are our desires warped by sin, not the natural, God-honouring desires we were given at first.
See this excellent article:
Brett Frischmann and Evan Selinger, Re-engineering Humanity. Cambridge University Press.
Norman Wirzba, This Sacred Life. Cambridge University Press. Page 48.