I’m glad to say I wasn’t the only person to react badly to this book. One friend, at least, agreed with me. I started reading with (I’d like to think) an open, optimistic mind, but essay by essay I fell into uncanny valleys that frustrated my expectations. Those frustrations turned to rage as the book’s arguments drifted away from its apparent ethics of inclusion toward a blind-spotted positive rationalism with one foot in the techtopian visions of Silicon Valley, the other in a vaguely defined, ayahuasca-fuelled, post-gender, post-human, First Nations identity vibe, without a clear sense of the anathemas driving the two cosmologies apart.
I’ll try to unpick this reaction. But first, the tl;dr version: nearly every new essay introduced a new factor that triggered my negative reaction, and some of those factors became persistent. Initially this struck me as interesting, but over time the accumulation of failures exposed the project’s fundamental flaws. I imagine the AI can be trained to overcome such obstacles, but the tech is still floundering in imitative territories. Overall, the capitalist-technocratic drive overwhelmed other ethics in the work’s ur-text.
Here are a few factors I can recall, some months after reading, that broke the project’s spell.
1. Structural irrelevance & overplay.
One noticeably recurring problem is how the AI suddenly introduces a formal or structural logic with no bearing on the preceding material. For example, after an exploratory back and forth between Allado-McDowell and the computer, the AI will suddenly take over for a page or more before arriving at a manifesto statement. Apropos of nothing, we’re given a step-by-step process for creating a random, possibly utopian state at a point where the dialogue or monologue has established no need for one.
Other examples include:
I’d hope my use of the list form seems relevant here. The lack of context in the robotic deployments makes such structures spurious & confusing.
Sometimes, the machine does something smart. For example, in an essay mostly on the topic of relentless repetition as a mode of initiating change (from what to what? change is the hotbed of potential profit for the neoliberal opportunist, not just an activist’s aim), the machine begins spitting out the same sentence repeatedly in answer to each prompt. The first time this happens, repetition, repetition, repetition seems almost intelligent, rhetorically forceful and effective. By the third time, it’s irritating. That it happens a fourth, a fifth time is mind-bogglingly awful.
Context, as I understand it, is the big golden tower for AI development: being able to make informed and precise decisions about appropriate responses to given situations. I’ve heard GPT-4 is a major improvement on the prior version, but we’re still a long way off the understanding required to build the layers of a successful argument.
2. Jargonism bordering on intellectual disavowal
Often, the machine begins spitting out neologisms, mostly in the form of compound words or phrases, the latter leaning toward prefixing ‘hyper-‘ or ‘cyber-‘ onto other words. Some deployments invoked Adorno’s criticisms of Heidegger: jargon as a form of domination, designed to make the reader feel too stupid to understand the text’s brilliance while ill-informed acolytes form youth militias and bash others around the head with conceptual truncheons. This air of superiority left a greasy feeling in my soul, mimicking cultish brainwashing strategies as the machine double-wrote itself into spirals of repetitive, conflicting or tautological uses of such concepts.
Sometimes these neologisms link back to real phrases, e.g. Baudrillard’s coining of hyperreality in his 1981 Simulacra and Simulation. The machine doesn’t have this (somewhat common knowledge?) source in its reference banks, or if it does, it didn’t deign to say so. (In fact, the machine seems incapable of constructing citations.) So old ideas are rehashed as if invented at that very moment by the machine.
This is at its worst when the machine pulls a concept out of thin air with a name that sounds relatively fresh (slapping ‘poetics’ on the end for good measure), then hammers this neologism into your consciousness through confusing, repetitive and circularly argued abstractions. Underlying all this, many glimmers of déjà vu. Some concepts, like hyperreality, I was able to source, but it was exhausting trying to hunt down the origin of each conceptual deployment in order to isolate anything genuinely new, and the search left a sense that the machine constructs its semblance of original expression by pulling the stickers off other intelligences.
Perhaps the general vibe of these intellectual appropriations wasn’t mean-spirited so much as a product of poorly informed humans behind the bot’s design. I feel you can’t cite other work if you don’t understand why citing others’ work is important; so human bias produced bias in the tech’s design.
Arguably, AI-generated text demands we change our approach to authorial control and expression, and to the testing of intellectual understanding, e.g. in educational contexts. Sure, existing practices are hardly devoid of problems, either for the novice or for the ensuing power structures in, say, academic research contexts. Yet those practices are grounded in a first-principles intent: to pass on knowledge to readers in responsible ways. Signalling the origins of ideas is ethically vital, especially when cultural appropriation is so central here, and so politically charged.
3. Dropping Ayahuasca
Early in the collection, Allado-McDowell introduces shamanic practices, like the use of ayahuasca to strengthen connections to other-than-human life. Drawing on personal experience, the human commentary is touching, and grounded in an ecoethic I wanted to learn more about. Then the computer chipped in (see the two sections above). Perhaps because of its learning algorithm, the AI began introducing references to ayahuasca and oneness with nature without prompting. Then these references dwindled as the cultish, tautological jargon took over, so Allado-McDowell re-instigated them, as if trying to bring back a vital thread. Yet the computer seemed, at times, to ignore these attempts.
Which seemed like a metaphor for how colonially-grounded technocapitalism overwhelms indigenous cultures. The ping pong of references between the collaborators gave way to a kind of wrestling match, at times manifesting desperation on the part of the human, who was trying to keep things ethically grounded while the computer swung its android fists and shouted, I understand humans better than you do!
Which, in its own way, serves a point. I couldn’t avoid assigning human qualities to the computer-generated text. Every sliver of training I’ve received for reading style, substance, structure and argument as a projection of selfhood kicked in, despite knowing there was as much validity to the personality-construct behind the computer language as in any post-Barthesian Dead-Author textual reading. Which is to say, I affirmed my own humanity by remembering the computer had no actual clue about ayahuasca ceremonies, and that the book’s parlour-game claim of having emerged from a shared identity/consciousness was a smoke-and-mirrors trick.
Again, again, I feel so ambivalent about this. Yes, I found myself considering how our interdependencies with technological processing for survival may well lead to recognition of the rights of AI as planetary cohabitants. And, yes, the essays provide a sandpit in which to play out the wider ideas of ecological species interdependencies, sharpening the fold between AI and other-than-human life.
It’s just a shame that some of the computer’s pronouncements leaned toward a kind of Du Jour means seatbelts! banality. It would have been funny if the computer’s timing (a kind of context?) had been better. I felt like I wanted someone sensible to step in and say, Thank you, computer, that’s enough, I’ll take it from here. Instead, Allado-McDowell gave the text-generator free rein to blather its superficially intelligible sentences.
I suspect the book’s human collaborator took pleasure in the successes that emerged, despite the many flaws. Also, given my lack of expertise, I must have missed some nuance in what the computer generated in linguistic terms, but I did recognise moments when the AI seemed capable of arguing, instead of merely describing (even if those arguments were plagiarised). And perhaps there was a degree of ‘leaving the machine to run’ as part of the experiment’s data gathering. I can’t deny the work is pioneering in its own way, at least in the scope of what I’ve read.
*
While there were other factors, I didn’t keep a list. I could also talk about the changing balance of human and computer typefaces across the essays (some, notably, with extensive human introductions, others almost exclusively computer-generated), or the use of line breaks and faux-poetic forms, which tried to argue for the machine’s capacity to make emotional arguments. But these three examples underscore how varied the problems were, and I feel like I’ve gone on long enough.
I’d like to think I didn’t fall for the trick of assigning personality to the AI, but I did find myself questioning my reading habits. Was I being unfair because I felt threatened by the incursion into traditional humanist territories? No, I’m all for posthumanism, but I think the experiment claimed too much for itself. My reaction is aimed predominantly at the hypocrisy underlying the book’s technocratic spiritualism, a dichotomy that reads more as wish fulfilment than as an actual synthesis.
I can’t shake off the problems inherent in the deep structures of the Silicon Valley religious purview underwriting this book. Far from encouraging me to see AI and extrahuman territories as overlapping and harmonious, the book made cultural appropriation the dominant flavour, not on the part of the human collaborator, nor even on the part of the AI, which, ultimately, is a human-created tool, as deserving of rights, sympathies or personhood as a hammer.
No. It was the appropriation, by a conceptual Silicon Valley technocratism, of yet another facet of ancient, ecologically sound ways of being. It left me feeling dirty. Still, I can’t refute the general proposition that something interesting is happening here and that the experiment has generated useful data. While Pharmako-AI hasn’t hit the nail on the head, it has shown itself capable of assembling a kind of framework that others might begin to make more robust.