What the Internet has begun, Generative AI will complete.
There are some people these days who beat the drum of fear about Artificial General Intelligence, and the chaos it will supposedly wreak - perhaps (some say) to the extent of wiping our species from the face of the earth.
But we do not have AGI, and the type of AI we have now is nothing like it. Even so, Generative AI poses its own existential threat to the human species - above and beyond the jobs it might take.
We might not last until the advent of Artificial General Intelligence, which can think and act like a human being - if such a thing is actually even possible. We may be wiped out by the kind of Artificial Intelligence we already have, and our own hands will be the ones wielding the knives.
And it will all be because we have blinded ourselves with the output we asked it to craft on our behalf.
I’ve been exploring the liminal dreamscape realm of Generative AI models, checking their fit, observing the war. Some of the things I’ve found have been deeply moving, some inspiring, some terrifying or disturbing. This is part of a series about my journey and the best practices (and anti-practices) I find for these new tools.
For thousands of years, humans have built a structure for ideological thought; a tower with many minarets. We have told stories and described models to each other ad infinitum, and these are what comprise the bulk of the training data for the Large Language Models that power today’s Generative AI, ingested by Microsoft, Google and Facebook. They have taken and made tools out of our grand edifices, the work of giants standing on each other’s shoulders, who lifted each other up towards the heavens and stratospheric heights of Knowledge and Proclaimed Belief, laying down bricks with our choices and understandings all the while. We wrote these down and sent them to each other, and gathered in groups of shared belief structures bolstered by these works. The writings and shared beliefs became doctrines, and the doctrines permeated every piece of what got eaten and included into LLMs.
As human societies we have developed ideologies - built not just one, but many, and their girders are human luminaries we believe said luminescent things that light our way to better understanding of our existence. Adherents like to quote canonical doctrine from early influencers at each other, whether those are Paul the Apostle, Siddhartha Gautama, Benjamin Franklin or George Lucas. But the true lasting strength of those structures comes from their self-consistent cohesion and the completeness of the tradition established over time, with a year-on-year compounding of derived and extrapolative works; philosophical bricks on philosophical bricks; giving the total a mass and shape that matters a great deal to the human mind.
From time to time there have been monstrous ideologies (and there still are). These, like their better brethren, could not have existed without a corpus of rationalization and justification that convinces any prospective adherent that there is something to it all, and sucks them in like a partisan riptide with a convincing body of self-consistent thought developed past the cognitive horizons of a casual contemplative thinker.
In recent times the Internet has spawned many more, with some of them (like QAnon) only becoming possible through the concerted efforts of tens of thousands of people generating self-reinforcing nonsense online.
But generative AI makes possible a New Age, and I fear it will be more fractured and more dangerous than any that has come before - the Age of Eight Billion Idiosophies.
An idiosophy - from the Greek prefix “idio-”, meaning “one’s own”, and the suffix “-sophy”, meaning “wisdom” or “knowledge” - is a personal philosophy or belief system unique to an individual.
And thanks to generative AI, these can be invoked on a grand scale immediately (and upon request). This, I believe, will create ironclad echo chambers of individualized thought that encourage people not to look outside themselves. It will discourage extrospection - the act of directing one’s attention and observation outward, focusing on the external world and the experiences, behaviours, or phenomena that occur outside of oneself.
I know this because this entire post is part of my own. I had privately concluded the above, and I used Generative AI to help me explain it. Not write it for me, although surely it could do that too. I doubt, yet, that Generative AI can produce anything so elaborately wrought as this (I know I write like an 18th century scholar - no matter, it pleases me to do so). No, instead, I spent time in conversation with it, as I so often do now, and asked it for terms that explained these ideas I was thinking about.
I started from the idea (from linguistics) that an idiolect is a way of speaking that is unique to an individual, and from my own feeling that we were entering that same phase but with entire philosophies, and I asked it for a word and concept to synthesize this and make it concrete. “Idiosophy” was what it suggested, and it is, in fact, exactly the right word. “Extrospection”, too, captures the essence of that quality I fear we will lose with individually-tuned AI. It took a couple of tries to convince it to generate these non-existent words - the AI has been cudgeled, of late, for hallucinating things that aren’t true, and it is a little gun-shy about offering up things it has been told do not exist. It cautioned me that it is “important to note” that these are not “well-established concepts within psychology or philosophy”, and that “its usage may vary depending on the context or the person using it”. But these cautions all ultimately come to naught - I already had the belief, and I was ready to latch onto the terms and justifications it provided and share them here, because they developed my own chosen beliefs and biases. This is how we all will proceed, in eight billion different directions, as we adopt these AI tools and train them to be ours.
As we begin to tune our AIs and possess and control our own, we will add layers to the structure that reflect back to us, in honeyed and heavy words, the same beliefs we already hold. The AI will give us back the same conclusions we already made, but sweeter and made more substantial. Most of the Generative AI technology and the Large Language Models that power it were being deployed and trained, behind closed doors, for nearly a decade, by the same players that you see releasing new ones every couple of weeks now. The revolution we are going through now has been mass market access to these trained models, with their polished front ends. Now we are beginning to tune them, building atop the massive troves of data collected by social media baronies and search giants the entire time we’ve been online, made into the Knowledge that powers our new AI friends. We will have the essayist and the poet, the scientist and the philosopher, the newscaster and the professor, all at our beck and call, a single question away, but backed by an algorithm and a dataset that can draw on any conceivable argumentative structure, to put together words that reflect (and enhance) our own ideas right back to us. With just enough salt from similar thinkers to constantly push our idiosophies just a little bit further in our chosen directions.
These will become unassailable, because the things that we come up with ourselves are already past the gates, and ready to rule the City of our Minds.
I am not speaking from a pedestal of mere speculation, as this post is a performance of the art I describe. It is not pure fantasy, or neither of us would have made it this far. And if this is lunacy, it is the kind that has recursion; instances nested within instances, turtles all the way down.
My fear is that division can only increase through Generative AI. The multi-polar partisan world we live in now can only fracture further in every direction. What has already become an ideologically complex and fractured world will become infinitely multipolar - a sphere of divided thought, each of us certain of our own gradient truth, reinforced by our own pet gatekeepers of knowledge trained by us to speak like us (but better).