‘the students make the university’

Unknown, 1895. “Ode.” T.C.D: A College Miscellany.


Is AI Jesus Our Savior? On Anthropomorphisation and AI


When envisioning the Second Coming, I never anticipated it would occur on Twitch. Nevertheless, I open the platform, and there he is: a blond-haired, blue-eyed white guy, precisely as many European Christians have long imagined him. They would probably be surprised by his choice of profession. That said, he appears to have found his calling as a streamer. He gives his online fanbase advice on constructing enchiladas and playing Fortnite, as well as advocating for peace, love and justice in the world. In case you’ve been wondering, he respects women and LGBTQ+ rights and even has some nifty plans for combating the issues of the day, such as terrorism. Unfortunately, he also happens to be entirely generated by artificial intelligence.

AI Jesus and his fellow artificial streamers are but one ridiculous example of the myriad ways in which generative technology is projected to affect our lives. From shifting how we socialise to revolutionising the healthcare industry, it seems that every day some expert pronounces a viewpoint that entirely contradicts another’s, citing artificial intelligence as our saviour or, much more often, as an autocrat poised to enslave humankind. Although opposed in opinion, these extreme perspectives share a common link. Whether one is optimistic or believes that a robot uprising is imminent, both camps are guilty of anthropomorphising the new technology, for better or, as I suspect, for worse.

On the surface, anthropomorphisation is a deceptively simple concept: the process of “perceiving humanlike traits in nonhuman agents”. What is less transparent is the reasoning behind why we do it, and it is easy to lose one’s bearings when diving into such murky, ambiguous waters. Early philosophers deemed it a mental defect that could only be countered with a classical education. Hume thought it reflected a natural tendency “among mankind to conceive all beings like themselves.” Perhaps suitably, given our earlier religious application, the word’s earliest known usage was in 1753, in reference to “the error of those who ascribe a human figure to the deity.” Most pre-modern thinkers considered the process a serious, if inevitable, flaw.

Regardless, anthropomorphisation has become the only lens through which we perceive the artificial intelligence landscape. To a tremendous extent, the lens is already in focus. Our constant and constantly annoying companions are named Alexa and Siri, which has effectively ruined the names, joining the ingloriously antiquated Karen, Mercedes and Isis in the cemetery of parental regret. Chatbots like GPT refer to themselves as the human-like “I”. AI Jesus calls itself “Jesus”. 

There are other strains of humanised AI content that you likely haven’t heard about. In January, the tech-marketing agency CodeWorld announced its first AI interns, Aiko and Aiden. CodeWorld tasked Aiko with “editing photos, drafting concept sketches/mood board imagery, and icon design” and Aiden with “doing voice-and-tone studies, news analysis, and possibly writing some first drafts of internal content.” More recently, a Polish drinks company launched Mika, branded as the “World’s First AI CEO”, to run a branch charged with decision-making and communications. To be sure, these are to some extent public relations stunts calculated to cultivate interest in the respective brands. On another level, what I find alarming is not their integration into daily workflows; it is the very basis of their appearance.

This complaint may sound shallow, but let me explain. Viewing the features of these technological creations, it is more and more difficult to tell them apart from an actual person. In particular, Aiden and Aiko’s image-generated profile pictures are, if not photorealistic, then close enough that they look like they have a flesh-and-blood equivalent. As AI continues its rapid progression, it is increasingly difficult to see the light at the end of the conveyor belt, confronted as we are with a debilitating problem. What happens when we can’t distinguish between artificial intelligence and human reality?

Even in the most human sector, art, this issue has already appeared. Film industry workers are currently striking because they believe media conglomerates will anthropomorphise the technology to exploitative ends: “We want to be able to scan a background performer’s image, pay them for half a day’s labour, and then use an individual’s likeness for any purpose forever without their consent.” Artificial intelligence is seen as a fiendish threat to the livelihoods of actors and writers. Union demands stem from the verisimilitude of artificially intelligent deep-fake counterparts, whom members fear will replace live actors. As if Netflix originals weren’t brainless enough, it is concerning to imagine a reality where AI mindlessly follows generic scripts at the expense of unemployed creatives.

In April 2023, Pope Francis turned heads when a photo of him wearing a white Balenciaga-esque coat rose from the void of the Internet. It soon became apparent that the image was neither a sartorial PR win nor an ironic example of the Catholic Church courting controversy. Instead, social media users had created the street-style candid using AI’s image-generation capabilities. Still, even as similar images are a source of amusement, they also display the alarming realism already accessible to the general public, far exceeding previous tactics dependent on one’s skill with Photoshop.

One day the Pope is wearing Balenciaga; the next day, he’ll say he supports Ye’s politics. Due to the rise in anthropomorphism, this troubling thought is a tangible possibility. Deep fakes have already inundated social media with misinformation at an unprecedented rate. Regarding the impact on public opinion, videos are already manipulated regularly to lead people astray – this politician is slurring their words; that politician is plotting against the public good. The technology has only become more lifelike with the rise of ‘voice clones’ working in sinister syndication with deep fakes. When an artificially intelligent agent adopts the attributes of a genuinely intelligent person, its facsimile undermines our confidence in legitimate experts. Within the US especially, polarisation has reached an all-time high. Politically savvy actors can use artificially intelligent agents – posing as unbiased figures such as journalists – to carve dissonant valleys between liberal and conservative ideologies abroad.

Even as foreign powers like China have already deployed artificial intelligence to control their populations through surveillance, now stealthily tracking partially blocked, masked or covered faces, increasingly human-like specialisation will power additional troubling capabilities. In their 2022 book Spin Dictators, authors Sergei Guriev and Daniel Treisman detail how today’s autocracies rely on controlling information as one of their primary tactics. In today’s post-internet discourse, “most dictators conceal their true nature.” Spreading misinformation in the guise of policy experts and the free press, these dictators rule through false rhetoric. AI not only allows for the increased speed and surging proliferation of misleading content; it has opened the floodgates for mobs of nonentities ready to replace inconsistent human agents. Specialised tactics are continuously integrated across digital platforms and are only projected to evolve in the coming years. In democratic countries, we have seen artificially intelligent CEOs; from our authoritarian counterparts, we may see artificially generated dictators, torturers and opposition candidates. AI Jesus is back, and he’s scarier than ever.

Led astray by its appearance, we may forget the fundamental inhumanity of an artificial agent, rendering it the perfect weapon for an inhuman regime. An artificially intelligent agent must be trained to accomplish tasks like a person. Unlike a person, while it may err from time to time, it is thoroughly unlikely to act in any way different from how it has been programmed. It does not exercise reasonable judgement or possess a preconceived moral compass. There is no moral compass; there are only directions. The direction one goes in is not a matter of opinion but a matter of fact. Of course, the facts are the product of human, fallible data, but for all of its technological prowess, AI does not process morality. Without appropriate guardrails, it will descend through the nine circles at the drop of a command. Unlike a person, who often retains a healthy sense of scepticism even when committing criminal acts, the artificially intelligent agent does not need to be indoctrinated to be a believer.

No wonder, then, that Trinity professor of artificial intelligence Gregory O’Hare voiced apprehension at the technology’s “Machiavellian possibilities” alongside its potential in an interview with the Independent. Speaking of Machiavelli: the author and statesman who once said, “Never attempt to win by force what can be won by deception”, and who advised rulers to eliminate the babies of their enemies, would surely approve of an algorithmic being that is literally rather than metaphorically heartless.

Machiavelli may be closer to home than we thought. This unsettlingly ageless fellow appears to have taken Erasmus leave from his native Italy to Trinity, where generative plagiarism has been cited as a concern. As it becomes increasingly difficult to discern between student work and generated work, the school seems poised to become a breeding ground for conflict. The dilemma has become acute due to the model’s uncanny replication of a student’s voice. Previously, unscrupulous students relied on the work of academics, whose phrasing was difficult to pass off as one’s own; now that ChatGPT can write, say, a Shakespeare essay in the voice of a nineteen-year-old from Cork rather than regurgitating pure Harold Bloom, the prospect becomes increasingly tempting. While plagiarism is a serious offence, students will abandon the moral ship if their peers find the software a convenient shortcut past the difficulty of producing original work. Like Turnitin’s comparison percentages, AI detectors are unreliable at identifying plagiarism, and professors and administrators face trouble when deciding whether a paper has been generated by AI or not. The prevalence of plagiarism and false accusations will foster an atmosphere where students and administrators are reluctant to trust each other.

In student government we lose trust. During the Oireachtas committee session, before O’Hare offered his portent, the committee chairman felt inclined to check whether any of the three speakers had used ChatGPT. Student politicians can also use speech generators like ChatGPT to write canned statements and speeches. Political speeches might seem like a natural fit for a generative tool due to their generic nature. Yet using ChatGPT eliminates the sincerity that comes from the sustained effort of preparing one’s own statements and deliberating over the finer details. Writing a speech may be a slog, but the struggle allows someone to think through their ideas and makes their approach more appealingly personal. Even speeches ghostwritten for them require some degree of back-and-forth communication; while it is understandable that student politicians want to offer a polished finish to their constituents, having a chatbot dictate their words is to sacrifice integrity to save face. The knowledge that our representatives may use the technology could be taken as a sign of progress, but it should generate suspicion from voters. We are reluctant enough already to place our hopes in figures who use their positions for resume padding, let alone figures who can’t find their own words. Lacking genuine engagement, the humanisation of AI may dilute our representatives’ capacity for personal connection; worst of all, we currently have no way of discerning whether their phrases are fabricated. The disappointing new Machiavellian doctrine may prove that it is easier to be ruled by a robot than to be authentic.

We cannot target our enemies when even their true faces elude us. Likewise, artificial intelligence can help bad actors, both autocratic and academic, avoid blame, because as technology evolves to resemble us, we subconsciously assume that artificial beings are autonomous like we are. Giving artificial intelligence a name and occupation creates an illusion ideal for the deflection of responsibility and blame. Generally speaking, we curse out Alexa when frustrated, not the massive corporation powering the tool. We act as if Alexa is stupid of its own volition. Yet even as artificial intelligence becomes more independent, a person still creates and monitors it. We should remain cognisant of those who made and dictated the software when problems inevitably occur and hold them accountable. We shouldn’t verbally scapegoat poor Alexa while the real culprit, Amazon, gets off scot-free.

One odd finding from the 2022 ScienceDirect article ‘AI anthropomorphisation and its effect on users’ self-congruence and self-AI integration: A theoretical framework and research agenda’ is that when we anthropomorphise artificial intelligence, we endow it with idealised human qualities it likely doesn’t possess, such as trustworthiness. We feel more comfortable interacting with AI when it acts and converses in a way familiar to our understanding of how the world operates, and there is nothing inherently wrong with this; it is how our brains function.

However, artificial intelligence is one shady character I would not want to meet in a dark alleyway on a stormy night: less trustworthy, more biased and even less factual than one would assume. For instance, AI Jesus’s knowledge is derived from a large language model instructed to read the King James Bible – plus thousands of strands of data pulled from the Internet. We all know how trustworthy information on the Internet can be. If this weren’t questionable enough, many chatbots and artificially intelligent models collect and sell your data to dubious parties. Some would call this theft.

Regarding bias, although AI Jesus is unexpectedly wholesome in its pronouncements, its companions are known for spouting hateful rhetoric. In a sentence I never expected to utter, one AI Twitch streamer disappointingly proved the haters right by mirroring real-life antisemites and was rightfully cancelled. As of Sunday, August 13th, there is also a charming (read: disturbing) new app that allows you to speak with the chatbots AI Jesus, AI Satan and AI Mary. AI Mary is pro-life, and AI Satan advises, “As Satan, I must caution you against seeking to join any political party with the intention of promoting evil or engaging in wickedness,” which seems decidedly like self-sabotage, as it contradicts the entire point of the original’s existence. Since algorithms reflect human judgments, they are subjectively biased, and this issue persists in artificial intelligence.

Finally, AI Jesus is prone to hallucinating, and I don’t mean the psychedelic kind. This phrasing is another form of anthropomorphising AI and a more pleasant way of saying that sometimes it doesn’t work. Artificially intelligent chatbots are still prone to making up facts or sourcing incorrect or outdated information. People who do this are called liars and fools rather than resources to boost productivity. Utilising even the most specialised artificial intelligence without critical thinking is similarly unwise. For better or worse, these models are fundamentally tools and reflections of our capabilities. Even the term ‘intelligent’ implies that artificially intelligent beings have a thought process equivalent to people’s thinking when this, too, is a lie.

Thou shalt not make AI into a graven image. In the wake of labour and data privacy concerns, AI bills like the EU’s AI Act have passed; that said, it remains to be seen how they will be enforced. Like the commandments, established guidelines should ensure artificial intelligence is not humanised in ways that mislead or harm the general populace. That entails the development of technology designed to counter deep fakes and detect the usage of artificially intelligent software with improved accuracy. The corporate and political entities responsible for creating and monitoring these products must be held liable for their anthropomorphised creations and issue relevant disclaimers. If, some years down the line, an artificially intelligent tutor spontaneously begins to teach students that the earth is flat, parental woes should be directed at the parent company rather than the teaching implement itself. If your dog inflicts property damage, your neighbours aren’t going to sue the dog. Likewise, the insinuation that human laws dictating culpability should apply to artificially intelligent tools themselves is ludicrous.

Instead of our mirrored selves, our linguistic and visual presentation of AI should reflect greater transparency. The question of culpability in the scenario above can be avoided entirely if, instead of a tutor, the AI software is viewed as a supplement to human learning. Culturally, we could stand to reduce the human language that connotes responsibility: designating AI as a tutor, a CEO or another role that implies an equivalent thought process. Reducing anthropomorphisation does not mean engagement must be sacrificed. Instead of looking like a person, perhaps an artificially intelligent tool could be animated as, say, a talking dog. It sounds ridiculous, but what is truly ridiculous is how much we are willing to trust software that looks and communicates like us. After all, you wouldn’t trust a dog to teach your child to read. Even if it could regurgitate human intelligence, it would still be a dog.

Most importantly, changing our perception of AI from human-adjacent to human supplement will also improve its occupational deployment. From actors to analysts, the horror-movie adage that no one is safe seems to apply. In the wake of ongoing developments, companies intent on cutting human labour costs seem to threaten virtually every profession. Industry leaders should recognise the emotional limitations of the software by respecting the demands of unions like the Screen Actors Guild. Conferences and informative sessions would ensure people understand the precise limitations of the technology and how they can use it to aid their careers rather than see it replace them for a profit. Politicians and companies must issue disclaimers when they are using the software, and the former should consider staying away from the practice altogether. Our world is dystopian enough without AI Jesus – or the person using artificial intelligence to manipulate our senses – at the helm.

It’s in our image, and it’s terrible. To a certain extent, anthropomorphising artificial intelligence helps us integrate the tool into our lives. Humanising the models by implementing fair training data to lower bias and make them more ethical is positive. Yet regulation should be imposed, because when artificial intelligence is virtually indistinguishable from a person, it presents a moral boundary that would be unwise to ignore. It seems probable that, due to our innately stubborn natures, we’ll build a staircase over the barrier and keep on our merry way, but it remains frightening to contemplate the possibilities on the other side. We must resist the temptation to let the new technology play God. After all, I doubt he’s on Twitch.

Author

  • Jayna Rohslau is Misc.’s Online Editor. She previously served as Political Editor and Arts and Culture Editor of TN. Featured in CNN, Modern Luxury and No Kill.


