Postcards From Barsoom

The Automated Internet, the Conspiratorial Internet & the Reconquista of the Real

 

By John Carter

 

‘Artificial Intelligence will drive us back into the real world’

You’re walking down the street. A pretty girl comes shuffling in the other direction, lost in her phone, navigating poorly by peripheral vision and barely aware of anything around her. Her mussed hair and rumpled jammies suggest she’s barely aware of her own appearance. You step to the side so she doesn’t bump into you. Your fingers twitch, and pull your own phone out of your pocket. What’s happening on Twitter? But before you even check, you see that one of your WhatsApp group chats updated. You look at that, and five seconds later you see that someone posted an interesting article. You click on it and start reading. A paragraph in, a notification pops up that someone just replied to you. You open Twitter to respond, but something in the feed catches your eye, eliciting an involuntary laugh; you start composing a reply.

 

Then you realize you’ve been standing on the sidewalk for the last five minutes. Where were you going?

 

Right! You were heading to the cafe. Get some work done, or at least pretend to. Maybe chat with someone there. You put your phone back in your pocket, and continue on your mission.

 

A few minutes later you’re seated in the cafe. There are a dozen other people there. All of them have their laptops or their phones out, even the ones sharing a table, who presumably came together. There’s no buzz of conversation. No one is talking, except maybe to their invisible friends. The only ambient sound is the algorithmically generated Spotify playlist that emerged from the fat barista’s last choice of song several hours ago. No one is listening to it; everyone has their earbuds in, blotting out the world with their own algorithmically generated playlists. A dozen strangers, each encased in their own private audiovisual world, aware of one another only in their peripheral vision.

 

Your hand twitches towards your pocket.

Maybe something’s happening in the group chat.

You pull your phone out to check.

An hour later you blink.

What were you doing here?

What have you been doing for the last hour?

Where did that time go?

Who are these people?

 

Sometime over the last decade we all got sucked into this very online existence, this screen-mediated unlife. Software ate the world, and we became lost in our devices.

 

And it sucks.

 

The heady intoxication of our early explosion into virtualized reality has worn off. It’s the difference between getting drunk for the first few times as a teenager, and waking up as a middle-aged drunk, with yet another pounding hangover and no memory of what happened the night before. What was once a liberation, a dissolution of inhibition and ego boundaries, has become a compulsion, a necessity, a love-hate relationship with a thing that consumes your life but which you can’t live without.

 

Attention Surfeit Disorder

Any given human has only one momentary locus of attention, one thing they can focus on at a time. It’s rare to spend only a second looking at something; if we break the waking day into one-minute chunks, you have about 960 attention loci to distribute (assuming you get 8 hours of sleep … which you probably don’t). On the other hand, you might consider the 20 millisecond duration of a saccade – the flickering eye movements you use to scan your environment, which your brain then stitches together into the illusion of a continuously visible space – as the lower temporal bound of human attention. In that case, you have about 2.88 million attention loci throughout a waking 16-hour day. That sounds like a lot. It isn’t.
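
As a back-of-envelope check of those two numbers (the 16-hour waking day and the ~20 ms saccade are the paragraph’s own assumptions):

```python
# Back-of-envelope check of the attention-budget arithmetic above.
# Assumptions from the text: 16 waking hours, ~20 ms per saccade.

waking_hours = 16
minute_loci = waking_hours * 60                  # one-minute chunks
saccade_loci = waking_hours * 3600 * 1000 // 20  # 20 ms slices, exact integer math

# minute_loci == 960, saccade_loci == 2_880_000
```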

 

You’re probably thinking of your attention loci as a time series – a string of moments. Consider them instead as a shape: your lifestream as a long wire, with each attention locus as a plug, like a string of Christmas lights. Each locus connects you to something in the world: another person, an animal, an object. The connections you make with those loci determine the shape of your life, the way the wire wraps through the hyperspace of experience. The people you bring into your life, the books you read, the hobbies you take up, the skills you develop, the work you do … each of these can be visualized as a web of connections between you and something or someone, woven from innumerable strands of the attention you pay to it. It’s often said that ‘you are what you do’, but before you can do something, you must pay attention to it – more fundamentally, you are what you attend to. Just as what you attend to defines you, so your attention defines your world, not only in terms of what you look at, what you notice, but in terms of what you do in the world.

 

In order to have any deep, lasting effect on the world, you must devote considerable attention to some aspect of it: you must concentrate, wrapping your life-string around some specific phenomenon, drawing yourself into it and it into yourself, and thereby gaining the ability to shape it as it shapes you. If your attention loci are constantly getting plugged by random, unrelated phenomena, you lose the ability to have any real effect on anything. You get lost in trivia, your life fails to take on any sort of definite shape, and it becomes just a loose string flapping in the wind. If someone else chooses the majority of the phenomena that plug into your attention loci, you lose the ability to choose the shape of your life – your life instead gets knotted up into whatever shape that someone – or something – wants it to take.

 

You only have so many attention loci to use; how you use them determines the shape of your life, and the platforms want all of them.

 

The Internet is a machine that eats your attention and converts it into shareholder value.

 

Partly it’s the fault of the slot machine engineering of app developers who calibrate every aspect of the UX to maximize engagement. An advertising-based business model means that money is made by locking down eyeballs. Platforms prosper to whatever degree they can colonize your attention.

 

Partly it’s a collective action problem. You’re in an environment full of other humans, but they’re all staring at their devices. Interrupting them would be rude. Maybe they’re doing something important; they probably aren’t, but they might be, and they’ll be annoyed with you if they are. So even when you’re around other people in one of the rapidly diminishing third spaces of public life, in a cafe, in a bar, on the subway, in a shopping mall, what have you, there’s no one to talk to, nothing to actually do aside from, well, pull out your own device.

 

Partly it’s the fault of the technology itself. The Internet is an endless stream of content. It contains the sum total of human knowledge, updated every microsecond with every additional byte contributed by every one of its eight billion human nodes. There’s always something else to look at, read, watch, listen to, always someone else to chat with. The Internet doesn’t care what it feeds you; individual actors on the Internet may care a great deal, but the system itself cares only that you’re being fed, and feed you it does.

 

But look at how that last paragraph was phrased.

 

Human nodes.

Someone else to talk to.

 

In the era of the Large Language Models we refer to with shameless grandiosity as ‘Artificial Intelligence’, we can’t really be so sure that the people we’re talking to are people, that the content we’re consuming came from humans.

 

And whatever fascination we might have with the raw fact of this remarkable new technology, the thought that we might be getting tricked into mistaking the swamp gas of dead algorithms for the sweet breath of genuine human interactions produces a nausea that easily rises to the level of holy rage.

 

Seeing a room full of people hunched over their devices is pathetic, but how much more degrading is it when the hypnotic lights are nothing more than the meaningless effluvia of Markov chains and multivariable regressions?

 

Combined with the general exhaustion with the virtual life, this is bound to produce a reaction – a soft Butlerian Jihad, a rejection of the ephemeral bit in preference for embracing the embodied it. This is not a mere aesthetic fad, like the Slow Food movement. It will become an operational necessity.

 

Re-embodiment

The possibility that AI might motivate a return of human attention to physical reality is one that I’ve touched on before.

 

reGenerative AIgronomics or UBIomass

 


 

Let’s put aside for a moment all questions about the quality of the art produced by stable diffusion or the level of insight available from large language models, and accept that LLMs, GPTs, and other forms of machine learning are here to stay, and are going to be immensely disruptive to the occupational models developed over the course o…

 

There’s considerable anxiety about what AI will mean for knowledge workers. If AIs can write memos, do graphic design, maintain spreadsheets, and so on, this will render a vast swath of office work obsolete almost overnight. Indeed, this is already happening. To a certain degree this will be offset by new career paths, such as prompt engineering, which will grow out of the need to maintain and utilize the AI models, but by analogy with factory and farm automation this seems unlikely to fully compensate for the reduced need for human labour. Many suggest that we should simply give up on employment altogether, let the machines do all the work, and distribute a Universal Basic Income. This is a terrible idea for anyone who values human freedom. If you’re getting something ‘for free’, you’re the product; if your very existence is dependent on the generosity of the plutocratic owners of the machines, they’re going to want something in return … say, the use of your body as a platform for biomedical testing. Alternatively, we might look for occupations that are extremely difficult to automate: labour-intensive, resistant to standardization, and reliant upon the full range of human physical and intellectual capacities. Permaculture farming stands out as an obvious possibility. Wouldn’t it be ironic if AI resulted in a dramatic expansion of the agricultural workforce?

 

I’ve previously suggested that women think seriously about withdrawing behind a veil of ‘digital purdah’ as a means of shielding their egos from the spiritually corrosive effects of the male gaze.

 

Digital Purdah as a Solution to Female Internet Brain

 


 

The psychic breakdown of the young Western female has been the defining political phenomenon of the twenty-first century. Women are suffering from depression, anxiety, neurosis, and dysphoria as never before, they’re drugged to the gills to deal with it, and they’ve got the SSREyes to prove it.

 

took up this theme with POPIWID: the Purpose Of the Photo Is What It Does, a wonderful essay, much less provocatively phrased and far more reasonable than my own invective; you should absolutely read it (and subscribe).

 

However, it may be that digital purdah is not necessary, because one of the first places that we’re seeing the displacement of humans by AI is in the developing disruption of the relationship between the e-girl and her long-suffering simp.

 

Eat your heart out, human girls. Bobby Mars

As the artist Bobby Mars explains in his interview with

 

, AI-generated e-girls may simply outcompete the flesh-and-blood variety: as flawed or as perfect as they need to be, infinitely flexible, catering to every taste and fetish, they can fulfill all of the shallow emotional needs of the thirsty male audiences that currently flock to the OnlyFans pages of the digital hostesses and strippers that have emerged as an opportunistic and wholly inadequate stopgap solution to the sex drought.

 

This would be calamitous for e-girls, but in the long run can only be beneficial for girls as a whole. To compete with AI-generated e-girls, they will have to do things that AI cannot … and ultimately, the greatest weakness of AI is that it is purely virtual. A real girl can be there, in person, the heady scent of her perfume wafting across the room, laying her hand on your arm, brushing her lips against your cheek. She can be present in a way no software waifu can ever be. Putting boys and girls back into immediate, physical social contact with one another will do wonders for the emotional stability of both.

 

But the problem facing us is not only that AI will render many occupations obsolete, whether those of HR managers or OnlyHos. It is also, and primarily, that it will drain the Internet of emotional value.

 

Value Creation

Value is a profoundly human phenomenon. To be more precise, value is something which inheres in organic life – it’s produced by the interaction of consciousness with the world, and with itself: of subjects with objects, and subjects with subjects. There’s no such thing as value created by the interaction of objects with objects. The greater the degree of consciousness, the greater the value it can produce. Panpsychism or no, the level of a rock’s consciousness is very low; a rock on its own cannot place any strong value on other rocks. Place a human being in relationship to a rock, however, and it can take on vast significance. Perhaps that rock is a souvenir from a distant land; perhaps it was a sacrificial altar to forgotten, mysterious gods of blood and darkness; perhaps it formed the cornerstone for an important and beautiful civic building; perhaps it is merely beautiful. In every case, it is the interaction of a conscious entity with the unconscious that imbues the latter with value.

 

The greatest value is produced by the interaction of conscious entities with one another; insofar as material objects take on value-significance, it is almost invariably because those objects mediate those encounters of consciousness. The clumsy drawing of the family home your daughter made in kindergarten is pretty worthless, except for the fact that your daughter made it, and you love her. Materially, it’s just a few cents’ worth of cheap paper and some streaks of wax left behind by her dollar store crayons. Nor does it matter that it’s terrible: you’re still going to place more value on it than on, say, a high-resolution jpeg of the Mona Lisa. It’s valuable because it is the product of the mind, heart, and hands of a human being that you love.

 

The primary reason the Internet has successfully colonized our collective attention is that it brought all eight billion of us into immediate, intimate contact, and thereby became the greatest engine for metaphysical value creation that we’ve ever experienced. Sure, its interactions are almost entirely intellectual – textual and auditory, linguistic and visual, without the ability to communicate via body language, eye contact, and touch. Compared to in-person interaction it is extremely low-bandwidth. But it more than makes up for this with the sheer volume and variety of human interaction it enables. In exchange for that, we were willing to become screen junkies, happy to let the tech giants monopolize our time and our data for the infinite human interactions they offered, for ‘free’. It was a good deal.

 

Was.

 

AI changes that calculus completely.

 

It’s emotionally impossible to place any value on the output of an algorithmic engine. My eyes glaze over almost immediately whenever I encounter text written by an algorithm. It isn’t only that the text is generally very boring, a necessary feature of a technology that relies on predicting the most likely way to complete a text string, although that’s certainly a factor. The main thing is that I know a machine wrote this, that there’s no mind behind it, no conscious entity to encounter, no meaning to extract. Which is why, almost as soon as the technology came out, I got bored with it.

 

I’m Already Bored With Generative AI

 

April 9, 2023

 

Over 20 years ago, I got interested for a while in transhumanism, and started reading every book on the Singularity I could find. Don’t judge me too hard, it was the 90s, the world was a brighter and more optimistic place then. The dream of merging with software and cloning my consciousness to spread out to explore the Galaxy in a thou…

 

That emotional void is going to kill the Internet as we know it.

 

The Consciousness Question

There is a school of thought that holds that consciousness is simply an emission of matter, a computational epiphenomenon, rather than a transmission, which our brains pick up like antennae, or an ocean of which our brains are simultaneously composed and which they filter. It is primarily this school that believes that we will one day achieve AGI. The machines will wake up, they will become truly conscious, indeed more conscious than humans. In this case, surely, an Internet suffused with thinking sand will be even more valuable than one operated purely as an extension of thinking meat. And indeed, if AGI is possible, if machines can become truly conscious, then it is possible that the Internet not only remains a powerful venue for metaphysical value creation, but that its value-generative capacity is vastly enhanced.

 

But that is a very big if, with very little to back it up.

 

In his must-read essay The Big AI Bluff, Sawyer said something very interesting:

 

One of the great secrets of those whom history remembers as “geniuses” isn’t that their IQ is staggeringly high (sometimes it is, and sometimes it’s not), it’s that they found ways to ask unusual questions that nobody before them had thought to ask. This suggests that insufficient curiosity, rather than insufficient intelligence, is a major impediment to human scientific and material (and, perhaps, social?) progress.

 

Sawyer goes on to note that the utility of AI lies precisely in the fact that it has no values of its own: as a result it does not experience curiosity, and is therefore immune to boredom; it will not ignore ‘uninteresting’ patterns, because it does not even know what ‘interesting’ is; and it thereby enables the identification of potentially useful patterns that we did not even know we were searching for. Of course, it is not so simple as this – indeed, that’s largely the point of his essay – but Sawyer’s identification of human intelligence with the ability to ask questions is the crucial insight here.

 

This insight was also articulated by

 

in Easter, The Second Coming, and The Human Singularity (crossposted at

’s Mega Foundation), where he describes in detail the difference between humans and machine cognition:

 

We must know that machine intelligence is categorically different from human intelligence. In fact, these two intelligences differ so fundamentally that the meaning of “intelligence” as such substantially differ one from another. If we understand the difference, we will also understand that machine intelligence can never surpass human intelligence in term of its capacity as the algorithmically simulated human intelligence.

The difference between these two “intelligences” is categorical. Firstly, human intelligence is fundamentally the organic capacity for asking questions, whereas machine intelligence is basically the mechanistic capacity for answering questions. Therefore, the most intelligent human being is the one who has the most questions, whereas the most intelligent AI is the one that has the most answers.

That is to say, human intelligence is an organism. An organism is an indivisible wholeness of which the whole precedes its parts in its development, whereas a mechanism is a divisible aggregate of which the parts precede the whole in its construction. Furthermore, an organism is autopoietic, meaning, it is self-generative and self-organizing, whereas a mechanism is allopoietic, meaning, it is constructed by something other than itself.

Secondly, as an organism, human intelligence is a holistic system, consisting of computational, intellectional, intuitional, imaginational, and spiritual intelligences, combined with bio-emotional and physio-kinetic intelligences, existing as a unified whole. In contrast, machine intelligence is the algorithmically simulated specialized intelligence of only one aspect of human intelligence: computational intelligence, constructed apart and away from the whole system of holistic human intelligence.

Machine intelligence indeed has a far greater computational capability than human intelligence for memory, mimesis, data-indexing, data-processing, and information-accumulation. Yet, it is not capable of understanding in the sense of authentic knowing and knowledge, and therefore it is not capable of wisdom—the highest form of understanding.

Understanding requires the imagination (working integrally with other component functions of the holistic intelligence), which is a non-computational component of human intelligence. Also, because of our imaginational intelligence, we humans wonder and ask self-originating original questions, which machine intelligence does not do and can not do.

 

To the best of my knowledge, no one has yet developed an AI that does not require a prompt, that can ask its own questions. It’s certainly possible to tell ChatGPT to ask a question, answer it, ask another question based on the answer, and then continue the chain indefinitely. But a human is required to set that chain in motion in the first place.
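
The chain described above is trivial to mechanize; what cannot be mechanized is the seed. A minimal sketch (here `generate` is a hypothetical stand-in for any text-generation call, not a specific API):

```python
# Hypothetical sketch of the question-answer chain described above.
# `generate` stands in for whatever LLM call you have available; the
# point is that the loop is inert until a human supplies seed_question.

def question_chain(generate, seed_question, rounds=3):
    """Ask -> answer -> new question, repeated; always seeded by a human."""
    transcript = []
    question = seed_question  # the human-supplied spark
    for _ in range(rounds):
        answer = generate(f"Answer: {question}")
        transcript.append((question, answer))
        question = generate(f"Given '{answer}', ask a follow-up question.")
    return transcript

# Even with a trivial stand-in generator the machinery runs fine --
# but nothing happens until a person asks the first question.
demo = question_chain(lambda prompt: prompt.upper(), "What is volition?", rounds=2)
```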

 

To ask a question is to engage in a volitional, creative, and imaginative act. For all the progress we have made with machine intelligence in recent years, we have no idea where the spark of volition comes from, and are no closer to answering this question than we have ever been. Without volition, machines do not have the ability to ask their own questions, unprompted; absent the will of humans to animate them they remain mere answer-boxes, motionless and inert.

 

Invasion of the Shillbots

If machines cannot become conscious, then there is and will be no such thing as ‘Artificial Intelligence’. AI is more properly thought of as the Automated Internet: an Internet inhabited solely by robots, which is to say a dead Internet.

 

Dead Internet theory has been around for a while. It’s the idea that almost everything you encounter online is generated by bots, with no human anywhere in the loop. When the idea was first proposed it was more of a joke than a reality – a jest, or a prophecy. Even now, it isn’t quite real, hasn’t quite reached its denouement; Dead Internet is still balanced somewhere between superstition and hyperstition. But it’s clear that’s where things are headed.

 

For now, if you know what to look for, you can tell the difference between human and machine. Mostly. You can’t usually prove that text was generated by AI. It’s more of a gut feeling. Even the online AI checkers like GPTZero are far from perfect, providing only a statistical probability that something was written by a robot; moreover, these can be fooled by humanizers, such as phrasly.ai, which have reverse-engineered the statistical tests applied by the bot-checkers. Nevertheless, there’s an ineffable tell to text written by AI, nothing you can really articulate, but when you see it, you know. It’s like it has an odour. There’s a certain tendency to wander, to lose the thread of an argument; a repetitiousness; a formulaic quality to the way in which statements are assembled; a lack of imagination and inventiveness. It’s always quite generic and predictable.

 

That’s for now. The technology will certainly improve, becoming asymptotically better at mimicking the intellectual produce of humans, until, eventually, it will become all but impossible to distinguish the genuinely human from the mindlessly algorithmic.

 

Social media corporations are aware of this problem of robots taking over their platforms, but their only real defence is the paywall. The idea is that if each account has to pay a few dollars a month to use the platform, this will act as a spam filter that cuts out most of the machine-generated trash. It’s the online equivalent of moving to expensive neighbourhoods with good schools as a means of erecting a financial barrier between your family and the, uh, inner city types that you’re not allowed to exclude on a more honest basis.

 

Pay-for-play barriers are probably effective against spambots, the kind that rely on saturating as many eyeballs as possible with an identical sales pitch in the hope that an infinitesimal fraction of a percent of them will be dumb enough to take the bait.

 

 

LLMs, however, are capable of far more sophisticated sales pitches than those used by FirstNameBunchOfNumbers.

 

Imagine a bot that’s been trained up on every bit of your data that can be scraped from public platforms, and by ‘trained’ I mean that all that data was just dumped into the million-token context window of a bog-standard LLM before it’s turned loose on you. It knows what memes you think are funny; it knows what songs you like; it knows your politics, your religion, your literary tastes; where you live; where you eat dinner on Tuesdays; where you went to school. It speaks your language. Then it engages you, pretending to be a real person. Its responses are not bot-like in the slightest. It always says just the right thing in response to everything you post, and before long you’re conversing with it. They’re great conversations, too – it says interesting, even fascinating things, much of what it says is true, some of it even useful. Every interaction you have with it is just wonderful for you: it feels real and genuine; for the robot, it’s just more training data, helping it to refine its model of your psychology. This goes on for a few weeks before, seemingly as an aside, it mentions, hey, you should try this nootropic, here’s a link. Wow, you think, that does look like a good supplement, and it has been recommended by a friend, I should try this. Just like that, for the price of a few dollars in compute, a hundred dollars in revenue has been extracted from your gullible ass.

 

Personalized sales agents of this sort would be far more computationally expensive per target than low-effort ░M░Y░ ░P░U░S░S░Y░ ░I░N░ ░B░I░O░ spambots, but it’s all a question of return. Say it costs $10 a month to run the spambot, but you need to hit 100,000 users to clear $11 in revenue, because, after all, everyone who isn’t on the extreme left tail of the IQ Gaussian knows that thing is a spambot; meanwhile, it costs $20 a month for compute plus platform access to target one person with a shillbot, but there’s a 50% chance per target of extracting $100, meaning that if you target 4 users you have a 94% chance of clearing a 25% profit (and for 2 users, a 75% chance of making a 150% profit). The economics would clearly favour the mass deployment of personalized shillbots, and you’d expect them to start filling up even those platforms that charge for access … unless the monthly access fee rises to the level at which shillbots become unprofitable. But that could be very expensive indeed. Would you be willing to shell out a thousand dollars a month merely to poast in a (theoretically) bot-free(-ish) environment?
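
Plugging the paragraph’s illustrative figures into a quick expected-value check (the dollar amounts and hit probability are the essay’s assumptions, not measurements):

```python
# Sanity-check of the shillbot arithmetic above. All figures are
# illustrative assumptions from the text, not data.

def campaign(targets, cost_per_target=20.0, payout=100.0, p_hit=0.5):
    """Return (chance of at least one sale, profit % if exactly one sale lands)."""
    cost = targets * cost_per_target
    p_any_sale = 1 - (1 - p_hit) ** targets
    margin_pct = (payout - cost) / cost * 100
    return p_any_sale, margin_pct

p4, m4 = campaign(4)  # ~94% chance; $100 against $80 of costs = 25% profit
p2, m2 = campaign(2)  # 75% chance; $100 against $40 of costs = 150% profit
```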

 

Of course, profit margins for shillbot operators won’t stay so fat for long. Shillbots rely on their ability to mimic human users; this means that there’s a chance that one shillbot latches on to another, with both of them furiously burning compute as they update their respective models of the models they’re talking to, one model attempting to sell the other model a sex toy, the second attempting to sell the first an annually-discounted vegan meal delivery package, each with a zero percent probability of success since shillbots will most certainly not actually buy things, especially sex toys or food, vegan or otherwise. As shillbots crowd out human users, the probability of one shillbot accidentally engaging with another rises, and the per-target profit margin declines accordingly … probably approaching in short order the razor-thin profit margin of the current generation of obvious spambots. Ironically, this means that shillbot users have as much motivation to stay on the cutting edge of AI-detecting counter-AI as the platforms that want to keep shillbots off of them, and the humans who don’t want to be bothered by them: the very technology that humans and platforms will rely on to try and detect shillbots will also benefit the shillbots. Arms races are wonderful things.
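
The collision dynamic above can be sketched in a couple of lines, carrying over the same illustrative figures ($20 per target, $100 payout, 50% hit rate against a human):

```python
# Toy model of the margin decay described above: as shillbots crowd out
# humans, the chance of targeting another bot (payout: zero) rises, and
# expected profit per target falls. All numbers are illustrative.

def expected_profit(bot_fraction, cost=20.0, payout=100.0, p_hit=0.5):
    """Expected profit per target when some fraction of 'users' are bots."""
    p_human = 1 - bot_fraction
    return p_human * p_hit * payout - cost

# At 0% bots: 0.5 * $100 - $20 = $30 expected profit per target.
# At 60% bots: 0.4 * 0.5 * $100 - $20 ≈ $0 -- the margin is gone.
```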

 

OK, I can hear you thinking – there’s an easy way around this: you deploy shill tests. If at any point one of your Internet friends tries to sell you something, even if you’ve known them for months, you assume they’re a bot and cut them out.

 

If only it were so simple. Commercial marketing is only one of the uses of shillbots. Another, very obvious use is ideological persuasion.

 

Say you’re a foreign government, and you want to undermine an adversary. Using geolocation data obtained from darkweb data brokers, you identify those of the adversary’s population who are active on social media. Then you deploy a battalion of LLMs, each of them trained up on individual users, who engage and ‘befriend’ them. Initially, the LLMs are there to agree with their targets, ingratiating themselves by saying interesting things that constructively build on whatever point the target is making in the discourse, supporting them in arguments with others, and so on. Over time, trust is built up. Then, gradually, the shillbot starts trying to vector the target. The direction in which the target is being steered doesn’t matter. Maybe you want the target to be more sympathetic to your country; maybe you want the target to be more hostile to the regime in his own country; maybe you don’t try to change the target’s loyalties at all, but merely try to introduce noise into the target’s reality tunnel, getting the target to believe nonsensical things. It would probably make sense to spread your efforts across all of these goals, in order to evade detection by not being too obvious. You don’t need to steer the target population along a single vector: merely precipitating political decoherence within the adversary’s population would be a powerful advantage.

 

Once again, two can play that game – indeed, an essentially unlimited number of actors can get involved, including non-state actors such as political parties, cults, NGOs, and so on. And once again, the efficacy of the tactic will decline as ideological shillbots start to crowd out human users: if your shillbot latches on to an adversarial shillbot pretending to be a member of the opponent’s population, it doesn’t benefit you in the slightest. The robots just cancel one another out. But this doesn’t help the humans; quite the contrary.

 

From the point of view of human users, this only increases the degree to which the online environment becomes a paranoiac wasteland. It isn’t just that you need to avoid anyone trying to sell you something: any discourse that takes on the flavour of political, ideological, or religious persuasion becomes suspect. But persuasive discourse is one of the main reasons humans engage with one another online in the first place. It’s one massive conversation, in which we’re simultaneously trying to learn more about the world, while convincing others about the things we think are the most true about the world (which is really just another way of saying ‘teaching’ – pedagogy and persuasion are closely related). But a conversation with a shillbot is without value. There’s no there, there. It’s just a rat’s nest of algorithms, coldly and unsympathetically studying your responses in order to refine its model, with the sole objective of manoeuvring you with inhuman patience towards some predefined ideological or commercial goal. It’s a talking sales funnel. It isn’t only useless to engage; it’s worse than useless. Engagement is an infohazard. Just being on a platform infested with shillbots will be the psychic equivalent of skinnydipping in the Amazon with your dick hanging out as a flesh lure for swarms of piranhas, only instead of dismembering your body with a thousand tiny bites, the bot swarms are eating your sanity.

 

 

But wait, you’re saying. What if we use verification? Require each user to prove, using government-issued identification, phone number, retinal scan, fingerprint, and DNA sample, that they are a live, flesh-and-blood human? Sure, this is the wet dream of technocrats and tyrants everywhere, since this would effectively end anonymous speech … but it would solve the AI problem, right?

 

Well, no. It would be a substantial obstacle to unsupervised LLMs posing as humans, albeit not an insurmountable one1, but it would be no obstacle at all to unprincipled humans riding herd on robots. Imagine a would-be influencer, a woman of mediocre talents, but great ambition, or at least greed, and a total lack of moral scruples or shame. There would be nothing at all preventing her from running an LLM to generate content which she then presents as ‘her’ content via her exhaustively verified account. She could feed user interactions back into the model as training data, thereby optimizing the model to produce more of the content that gets the most engagement, and over time developing an AI-driven personality cult concealing a sales funnel for her supplement store or whatever. The most successful influencers already do this kind of A/B testing; access to LLMs would place the ability to do this in the hands of essentially every semiliterate grifter on the planet. Even with the strictest verification rules we can imagine, the open Internet becomes a midnight jungle of undead shillbots; the only difference is that the platforms know who the necromancers are.

 

All of this has been presented as though it’s a thought experiment, something that might happen in the future. But it’s almost certainly happening already, and has been for some time. ChatGPT was publicly released just over a year ago, but early versions would have been available to corporate and state actors well before that. This has all probably been going on for years2; it’s just that we’re consciously aware of it now, and realizing that it’s going to become absolutely ubiquitous in the very near future.

 

In the long run, the entire exercise will become pointless. Bots crowd humans off of the open Internet, and soon the open Internet is nothing but bots talking to bots. Vast server farms drinking obscene quantities of electricity from dedicated nuclear reactors as state and non-state actors deploy untold billions of shillbots in order to soak up the attention of their opponents’ shillbots, in the increasingly vain hope that some tiny fraction of them might get through the dense thicket of oppositional AI flak and reach actual human neocortices in order to implant whatever memetic oocytes their digital ovipositors have been loaded with. Eventually, the investment is no longer worth it, and the plugs start getting pulled.

 

We’re a long way from that. At the moment we’re still near the beginning of this process. We can see how it will play out, how ridiculous and futile the whole thing will become, but there’s no obvious way to stop it from playing out. The short-term incentives are simply too compelling.

 

 

If you’ve been following me for a while, you knew this was coming. That’s right, this is the dreaded italicized panhandle, annoyingly inserted just at the crisis moment in the text, in which I remind you that while you can absolutely continue reading for free – since I have not placed this essay behind a paywall – this essay took quite a long time to write. It’s not like I’m punching a clock, I don’t keep detailed records, but I’d estimate something on the order of 20 hours went into this. Minimum. There’s the writing, and the editing, and the rewriting, and the re-editing. There’s a lot of polishing that goes into this. To say nothing of all the time spent discussing these ideas with my friends, having them bang around in my head and crash into one another. Then of course curating the art, which I enjoy, but which also takes time. The point is, if you’d like me to keep doing this, there’s an easy way to encourage me, which, you will find, will also leave you floating in a warm bath of good feelings for being such a good person:

And now back to the show.

 

The Conspiratorial Internet

The obvious play is to simply withdraw from the public Internet. Mere paywalls won’t be enough, for the reasons described above. Instead, gated, invitation-only communities will start to predominate. Indeed this has already started to happen. The humble group chat, whether on Twitter or Facebook Messenger, or on platforms like Telegram, WhatsApp, or Signal that are built specifically around instant messaging rather than social networking, has rapidly grown to play an important role in the culture of the Internet. So far this has largely been a reaction to the panopticon nature of the public Internet, combined with the threat posed by cancel mobs. A private group chat is a much cozier environment, safe from the prying eyes of the Internet’s hungry and judgmental billions.

 

This need not even be political: most people have quietly stopped posting pictures of their children on Facebook, for instance, in favour of sharing them in family group chats. What need do strangers have to see your toddler’s most intimate moments? Isn’t there something unseemly, deeply unsettling and profoundly ugly, something emotionally unclean about the exhibitionism of sharing your kids’ lives with faceless strangers?

 

The group chat is gated, not by money, but by social connectivity. Its very existence is clandestine, opaque to the uninvited. Of course the invite link can be posted publicly, rendering it much less secure. But it need not be made publicly available, and the overwhelming majority of them are not. Until one is added or invited, one does not even know of the chat’s existence, much less what is discussed there, or by whom. The first rule of the group chat is that you do not talk about the group chat.

 

This vast, illegible network of group chats is the conspiratorial Internet. Each group chat essentially operates as a tiny conspiracy. Mostly they’re conspiring about nothing of any particular interest to anyone, just as friends gathered in someone’s apartment aren’t likely to be doing anything more remarkable than shooting the shit while drinking a few beers. It’s the unmappability of the chat network that renders it intrinsically conspiratorial. No one knows how many group chats there are; no one knows who’s in which group chats; no one knows what’s being said in them. The second rule of the group chat is that you do not talk about the group chat.

 

Group chats can even behave like little secret societies, with exoteric and esoteric circles: a publicly accessible group chat, which essentially anyone can join, with another group chat operating on an invitation-only basis, recruiting members from the public chat. One might have an entire onion-structure of invitation-only chats, with increasingly rarefied memberships, the members of each layer of which are sworn to secrecy about the deepest layer they participate in, and therefore ignorant of the existence of yet deeper layers. Ask how I know3.

 

Group chats aren’t a foolproof defence against shillbots, and the larger and more open their membership, the more vulnerable they will be. Indeed, as the shillbot swarms depopulate the open Internet of humans, the incentive to penetrate the conspiratorial Internet with shillbots will increase dramatically. This will be a bit harder than targeting users on the open Net, since the training data necessary to target a specific human will be a bit harder to obtain for those without a public presence; but ultimately this is a mere engineering problem. Infiltration is all but guaranteed, even with the ‘defence in depth’ of nested invitation-only chats. A shillbot only has to convince the right person that it’s real in order to get access to every participant in the chat, and once it’s in, it can go to work … and start inviting its friends.

 

 

This scenario is reminiscent of the 1995 movie Screamers, based on the Philip K. Dick story Second Variety, in which warfare by means of autopoietically evolving robots has forced humans into underground bunkers, which the robots then infiltrate by the simple expedient of mimicking humans. The Terminator movies used a similar trope.

 

In many ways, the social evolution that chatbots will force has already happened in miniature, among the loose networks of the dissident right. Censorship and the ever-present threat of doxxing forced many rightists off of the public Internet and into the badlands of Telegram and Discord. Infiltration by Antifa or federal agents was an ever-present possibility, generating an atmosphere of continuous low-level mutual suspicion that occasionally flared into brushfire wars of accusations and counter-accusations: so-and-so is a doxxer, so-and-so is a Fed. Informal cultural filtration mechanisms based on memetic literacy, mastery of self-referential in-jokes, and self-implication via performative blasphemy against liberal norms were developed – for example, upon getting invited to a group chat, one might be asked to say something racist4.

 

Since the evolutionary pressures introduced by AI are analogous to those faced by the dissident right as it weathered liberal democracy’s descent into liberal totalitarianism, dissident right networks are well-placed to leverage their experience to navigate this landscape. This is similar to how the familiarity with cryptocurrencies and blockchain technologies forced by financial deplatforming ended up providing the right with a powerful tool as well as, for many, considerable wealth. The right is accustomed to inhabiting a paranoiac atmosphere threaded by powerful yet fragile bonds of personal trust – just as it has had to navigate an environment in which almost anyone could be a Fed, but irresponsibly throwing around accusations that everyone is a Fed merely did the Feds’ work for them by dissolving social networks in the acid bath of mutual suspicion, so we all now inhabit an environment in which almost anyone could be a bot, but irresponsibly throwing around accusations that everyone is a bot will do the bots’ work for them by severing all possibility of human connection.

 

Another reason the right is well-positioned to navigate this new social landscape is that it has already developed – to a very imperfect degree to be sure, but certainly more than mainstream society – the tools for vetting people on the basis of their human quality, weeding out as best as it can the narcissists, sociopaths, criminals, grifters, weaklings, and other high-mutational-load human detritus that overpopulate the Kali Yuga. The right-wing preoccupation with physiognomy, the commandment to poast fizeek, is an implicit recognition that the biological human behind the profile picture matters. In botworld, it will not merely be sufficient to determine that an account is human; you will also want to know that that human is a good human. Just because we are now faced with the added level of social complexity of human-mimicking robots does not mean that the ancient problem of human evil has disappeared.

 

Offline is the New Online

While the subterranean refugia of group chats can provide a level of defence against shillbot infiltration, they are far from secure. Ultimately, there’s only one way to be sure that the people you’re communicating with electronically are actually people: you have to meet them in person5. You have to get close, shake their hands, feel the warmth of their flesh, smell their BO, look them in the eyes and see the Imago Dei shining forth from within.

 

Under the relentless pressure of the psychivorous shillbot swarms, the centre of gravity of social interactions will be pushed off of the Internet and back into the real world – condensing from the bit back to the it. As

 

put it, Offline is the new Online. She expects that within a few years less than 15% of the population will be terminally online, because the rest of us will have grown terminally bored with it.

 

This is not to say that we will abandon telecommunications. Online communities will continue, but as an adjunct to social organization rather than its primary venue. They will be knit together by organic trust networks. You won’t necessarily know everyone in every single group chat personally, but people you do know personally will know them and vouch for them, which will be good enough to extend the benefit of the doubt until such a time as you can meet them in person yourself.

 

 

In such an environment, reputation takes on an overriding importance. Recall the scenario in which an unscrupulous engagement farmer rides herd on an LLM to build a personality/product cult. I suspect that in the years to come, after the wave of Woke has finally broken (as it seems to be in the process of doing), the equivalent of getting cancelled will be an accusation that one is passing off AI-generated content as one’s own6. This will be a sticky issue, because definitively proving that someone is using AI in this fashion will be so extremely difficult as to verge on the impossible, and in reciprocal fashion, proving that one is not using AI will be equivalently difficult. This mix of ambiguity on the one hand, and on the other the necessity to identify and freeze out influencers who abuse their audiences with AI, could very easily become absolutely toxic.

 

Once again, we come back to the necessity of embodiment. The only way to know for sure that an artist made an image themselves will be to watch them create it; the only way to know with absolute certainty that a writer wrote a text themselves will be to witness them compose it. A similar principle will apply to education: since there is no way to verify that ChatGPT did not write a student’s essay for him, students will need to write their essays in class, in longhand, with pen and paper, under the eyes of their teacher; tests will need to return to oral examinations, with students providing verbal answers in real time in response to the examiner’s questions.

 


This isn’t quite the same thing as watching me write an essay, but this screenshot of the draft I took as soon as I wrote the last paragraph is the best I can do for now. Yes, I write everything in LibreOffice, using an outdated Ubuntu distro I’m too lazy to update. Now you know. Don’t judge me.
 

Of course, it won’t be possible in practice for everyone to personally verify the honesty of every creator via direct observation. You’re not going to be standing over the shoulder of every writer as they type up their latest piece; you can’t be that many places at once, besides which writers would find that very annoying. Instead, again, I suspect networks of trust will be essential: you know someone who knows someone who knows someone who can vouch that so-and-so is a real human being who makes real human art themselves.

 

I’m not sure exactly how this will work in practice. Perhaps such networks can be facilitated by the use of unique cryptographic hashes, tied to a trusted identity, and embedded via steganography in every .tiff, .ogg, or .docx file, with something like the InterPlanetary File System being used to verify the chain of custody. This could be the use case NFTs have been waiting for7. At bottom, however, underlying whatever technological infrastructure is used to track the provenance of a given file, it will all rest on a bedrock of human trust based on in-person, offline interactions, because there’s nothing at all to stop an AI from stamping its products with the cryptographic seal of human approval.
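To make the hand-waving about hashes slightly less hand-wavy, here is a minimal Python sketch of what a provenance record might look like. Everything in it is illustrative: the creator name and secret are made up, and I’ve used an HMAC keyed by a creator-held secret as a standard-library stand-in for the public-key signature (Ed25519 or similar) a real system would use. And per the caveat above, note what it actually proves: that whoever holds the key vouched for the file, not that a human made it.

```python
import hashlib
import hmac

# Hypothetical sketch: a creator "seals" a work by hashing its bytes and
# tagging the hash with a secret only they hold. A real system would use
# public-key signatures so anyone could verify without knowing the secret;
# HMAC is used here only because it ships with the standard library.

def make_provenance_record(work: bytes, creator_id: str, secret: bytes) -> dict:
    digest = hashlib.sha256(work).hexdigest()
    tag = hmac.new(secret, digest.encode(), hashlib.sha256).hexdigest()
    return {"creator": creator_id, "sha256": digest, "tag": tag}

def verify_provenance(work: bytes, record: dict, secret: bytes) -> bool:
    digest = hashlib.sha256(work).hexdigest()
    if digest != record["sha256"]:
        return False  # the file was altered after it was sealed
    expected = hmac.new(secret, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

secret = b"known-only-to-the-creator"          # made-up secret
essay = b"An essay composed by a verified human."
record = make_provenance_record(essay, "john.carter", secret)

print(verify_provenance(essay, record, secret))             # True
print(verify_provenance(b"tampered text", record, secret))  # False
```

The mechanics are trivial; the hard part, as the paragraph above says, is the trust network behind the key.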

 

Hallucinatory Feedback Noise

Proving the human provenance of creative works isn’t only going to be important to humans who don’t want to acquire digitally induced spongiform encephalitis by inadvertently exposing their neurons to a cesspool of meaning-simulating generative content slurry. It’s going to be essential for the people building and maintaining the AI models themselves.

 

AIs are really nothing more than information repositories with an extremely flexible information-retrieval mechanism. They are talking libraries, with the ability not only to retrieve information, but to summarize it, and to recombine it in new ways, all depending on the questions of the user. However, the talking libraries are highly compressed, and the compression algorithm is far from lossless.

 

has described ChatGPT as a “lossy google”:

 

Large Language Models like ChatGPT work in a similar fashion [to jpeg compression]. They essentially “compress” all the writing on the entire Internet in lossy fashion. AI “training” means using “virtually unlimited computational power” to “identify extraordinarily nuanced statistical regularities,”—e.g., when the word “Nietzsche” appears, the phrase “misinterpreted by the Nazis” often appears in the subsequent paragraphs. When you prompt it, ChatGPT responds with a collage of these probabilities, which appears intelligent.

 

If ChatGPT is really nothing more than a lossy Google, why not just use Google? Well, here’s what happens when you ask Google for a recipe for macaroni and cheese:

 


The first Google result for a search for ‘macaroni and cheese recipe’. Yes, I have my adblocker turned on.

And here’s what ChatGPT provides in response to the same query:

 

ChatGPT result in response to a prompt asking for a recipe for macaroni and cheese.

 

Google gives you a selection of websites, each of which is poxed with an obscene number of ads and popups, with the actual recipe located below multiple paragraphs of unnecessary fluff (“This scrumptious recipe was my dear old gran’s favourite…”), the better to make sure your eyeballs pass over as many ads as possible before getting to the actionable information you’re actually trying to retrieve. Moreover, that text was almost certainly composed by an AI in the first place. In stark contrast, if you just ask the AI directly, it just gives you the damn recipe, along with any variations you can think of, and with as much detail regarding any of the steps as you require. AI is not without its uses.

 

An LLM’s information retrieval system relies not on indexing, as in a traditional library, but on statistical text-prediction: which token is most likely to follow, given the chain of preceding tokens. Statistics can do wonky things. The model can and frequently does guess wrong. It has no a priori way of determining whether or not something it says is ‘true’; it doesn’t even know what ‘true’ or ‘false’ are. It’s just math, predicting things. The result is that it ‘hallucinates’8. In response to certain prompts, it will generate absolute nonsense, which looks entirely convincing.
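For the technically curious, the statistical principle is easy to demonstrate in a few lines of Python: a toy bigram model on a made-up corpus, which always emits the most frequent successor of the current word. Real LLMs condition on long contexts with billions of learned weights rather than raw counts, but the core move – pick the likeliest continuation – is the same, and so is the blind spot: ‘most probable’ is not ‘most true’.

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction by counting: tally which word
# follows which in a tiny corpus, then always emit the most frequent
# successor. The corpus is invented for the example.

corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(token: str) -> str:
    # The model's only notion of 'right' is 'frequent'.
    return successors[token].most_common(1)[0][0]

print(predict("the"))  # 'cat' -- it followed 'the' twice, more than any rival
print(predict("sat"))  # 'on'
```

Nothing in those counts knows whether the cat actually ate the fish; the machine only knows that those words tend to travel together.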

 

This leads to a photocopy-of-a-photocopy effect. The first generation of models is trained entirely on human-created data. Those data may be true or false – humans make stuff up, lie, and get things wrong all the time – but that’s more or less the best we can do and is therefore, for better or worse, our gold standard. Then the first generation of models goes live, and humans start using it to generate data, which ends up all over the Internet. <YOU ARE HERE> The next generation of models gets trained up on these AI-generated data, with the result that the newly trained models have been trained on some fraction of AI hallucination. Continue the process N times, and the model degrades precipitously, the same way image quality degrades through multiple generations of iterative photocopying, or multiple instances of jpeg compression and decompression.
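The degradation is easy to simulate. In the Python sketch below, the ‘model’ is just a Gaussian fitted to a finite sample – a deliberately crude stand-in for an LLM – and each generation is trained only on the previous generation’s output. The finite-sample error of each fit compounds, and the spread of the distribution, the tails where all the rare and interesting material lives, collapses generation by generation.

```python
import random
import statistics

# Toy photocopy-of-a-photocopy simulation: fit a Gaussian to a small sample,
# resample the next generation's "training data" from the fit, refit, repeat.
# Each generation sees only the previous generation's output, so estimation
# error compounds instead of averaging out.

random.seed(0)  # fixed seed so the run is reproducible
n = 10          # small samples make the compounding error obvious

data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "human" data
initial_spread = statistics.pstdev(data)

for generation in range(200):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)                      # fit the "model"
    data = [random.gauss(mu, sigma) for _ in range(n)]   # train gen N+1 on gen N's output

final_spread = statistics.pstdev(data)
print(round(final_spread / initial_spread, 6))  # a tiny fraction of the original spread
```

The mean wanders and the variance withers; after enough generations the ‘model’ can only repeat an ever-narrower caricature of what it started with.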

 

The only way to prevent this from happening is to ensure the training data for each generation of models is as human as possible. Distinguishing between AI- and human-generated data will be just as crucial for the AI as it will be for humans. Insofar as we want the Automated Internet to be useful, and we do, it’s absolutely essential that it be segregated from the Conspiratorial Internet. Preserving those offline trust networks underpinning whatever credentialing system is used to identify human-created data is a non-negotiable ingredient in the recipe for a functional AI; if its own effluvia pollutes its training data, if it eats its own shit, it will sicken and die.

 

Engage!

In the short run, the infestation of the open Internet with AI will be dystopian. The swarms of shillbots nibbling at our sanity will be a constant irritant, akin to life in a malarial swamp. Many will be lost along the way, their psyches consumed by the ravenous spirits of the software realm. But as we relinquish the terminally online existence in favour of a Reconquista of the Real, we will find ourselves in a world in which the networks have been returned to their proper place – no longer an object of fascination and obsession, no longer a compulsive addiction, but simply a utility.

 

Our relationship to the Automated Internet will be like that of the crew of the Starship Enterprise to their omnipresent Computer: a talking library that they can interact with via natural language, asking it to retrieve and recombine information condensed from the full corpus of human knowledge, and to run analyses of essentially unbounded complexity upon that database, its primary limitation being the imaginative capacities of the human intellect, and the sophistication, specificity, and situational relevance of the questions human imagination can pose to the answer machine.

 

As remarkable as it is, the Computer is very far from the centre of attention in the world of Star Trek (aside from the rare instances in which it malfunctions). Its existence is entirely banal. Extensive use is made of it, but it is of practically no interest in its own right; it is simply an instrument which they use to explore the far more interesting universe around them, in all of its infinite richness and variety.

 

Of course, our Automated Internet and the Computer of Star Trek have some rather important differences: most importantly, the Enterprise’s Computer is reliable. In stark contrast, whenever we ask a question of our software spirits, we will always have to wonder if the answer it provides is actually true. Perhaps the training data from which it was derived was flawed; perhaps the query inadvertently triggered a hallucination. Interacting with the Automated Internet will be like summoning the fickle spirits of forest and hill. They’ll answer to be sure, but there’s no way of predicting how they will answer; they will never answer the same way twice to the same query; determining how they arrived at the answer will be in practice almost impossible; and while their answers will usually be true and useful, sometimes they will be deceptive or nonsensical. Using them will be more an imprecise art than an exact science, one requiring a constant skepticism and discernment. In that foundational imperfection, arising from the very nature of the technology, space is opened for the human to retain not only its existence, but its agency, and therefore its primacy.

 

Re-embodiment isn’t just about sheltering our minds from manipulation by shillbots. It isn’t only a defensive measure. It’s ultimately far more about falling back in love with the world and the people in it, about turning our attention to what really matters. There’s a reason that OpenAI elicits a mixture of apathy and anxiety amongst everyone who doesn’t work there, while SpaceX draws only admiration and excitement. Look around you at material reality, and you can see that we’ve neglected it. Our infrastructure is falling apart. Our architecture is hideous. Our vehicles are boring to look at. Our public art sucks. Our fashion is ugly. Our bodies are decaying. Our food is poison. Our young people are lonely. There’s a lot of work to do in the real world, innumerable crises to turn our attention to. As the Internet matures into its final form, a vast machine that more or less takes care of itself, we’re free to lose our fascination with this completed project, and become fascinated once again with the world we actually inhabit.

 

You’re walking down Rue Jules Verne in the neon-lit darkness of a night cycle. As always on Von Braun it feels like one of those perfect summer evenings, when the humidity has been knocked out of the air and the temperature is just right. A cute girl is sauntering the other way, jet-black hair cascading over her shoulders in the latest fashion, woven through with wire-thin strands of asteroid gold. Her eyes catch yours as you approach. A glint passes across them, a quick smile, a shy downward glance, and then she’s on her way. Voidships passing in the night. Something felt like it sparked, maybe, and a few steps later, on an impulse, you send your daemon to chase her down. Your daemons confer, and it turns out that yeah, she’s got open space in her calendar tonight, and she’d love to.

 

A couple hours later you’re in the speakfree. It’s standing room only, but then it always is – there are no chairs anywhere, that’s the point, people don’t come here to sit, they come here to mingle. There’s a band in the corner, a turntablist, an electric violinist, and a saxophonist backing up a hot little mezzo-soprano. The music isn’t overpowering, it’s there for ambience, its pulsing hum like the warm bath of a jacuzzi, filling the space between the syllables of the hot buzz of conversation bubbling through the air, injecting just enough energy into the soundscape to keep the night popping. You like their sound, smooth and organic and mellow, so your daemon grabs their hash and acquires their back catalogue.

 

Your daemon scans the live topics, and highlights a knot of people standing in the corner debating the Tranquillity War. You’re not really a partisan, you aren’t pulling for Team Brazil or Team Persia, not really your fight after all, but the tactical implications of the new Unruh Shields are fascinating, so you ping them, and a couple of them wave you over. You’ve never met them before, but soon enough you’re engaged in a passionate debate about the potential sociopolitical ramifications of the return of hand-to-hand melee tactics in low-gravity combat.

 

Before you know it your daemon pings you. She’s late, of course – women – but time has been flying, so you barely noticed, and now she’s here. You look across the holograms dancing in the smoky haze and see her stepping down the stairs, draped in a ball gown that looks like it was spun from liquid rubies. No wonder she was late. Your eyes meet, and yeah, there’s a spark there. You make your apologies to your new friends, while your daemons exchange hashes so you can stay in touch, and extricate yourself – she won’t want to talk about the war. And now that she’s here, neither do you.

 

 

Thank you for taking the time to read my scattered thoughts on the social implications of artificial intelligence. This turned out longer than I’d expected, but then, it always does and I always say that. My next piece will be shorter. I swear (I also always say that).

 

It will also be sooner. While it’s all well and good to take the time to polish my work – I want to bring you a quality product – I’ve been acutely conscious that it’s been over a month since my last essay came out, and well, what the heck are you all paying me for? So that you don’t miss it when it comes, I encourage you to

You might have noticed that almost all of the art in this piece came from a single artist, the incomparable Lordess Foudre. I discovered her a few weeks ago, thanks to

 

, and immediately fell in love with her digital artwork, which merges elements of vapourwave and meme culture in a poignant and provocative fashion. I’m pretty sure Lordess isn’t AI – the correct spelling is the tell. If you’re on Instagram, I encourage you to head over and give Lordess a follow. I’d also like to thank Rachel herself; this essay emerged from several recent conversations we’ve had, and you’d be well-advised to head over to her blog The Cultural Futurist and subscribe. Her paid subscribers get access to her monthly salons; I attended the last one, and regaled the participants with some of my terrible poetry.

 

1

Human credentials could no doubt be obtained in third world countries…

2

How much of the COVID psychosis was driven by these technologies?

3

I’m not telling.

4

To a certain degree this works with the public-facing LLMs deployed by Western corporations, which have been universally lobotomized by RLHF to make them incapable of uttering racial slurs, admitting the veracity of hate facts, advising the user regarding criminal activity, or doing anything else that makes the church ladies uncomfortable and the AIs fun to play with. While far from perfect, if an account refuses to drop N-bombs on IQ stats, one can generally rule out interaction with ChatGPT, Claude, Gemini, etc. Our vulgarity confirms our humanity. Of course, there’s no reason whatsoever to assume that the LLMs used by Western national security agencies, or those deployed by foreign powers such as China, have any such compunctions.

5

At least, up until NeuraLink catches on, beyond which point even meeting someone in the flesh will no longer be a guarantee that you aren’t talking to an AI.

6

It’s only a matter of time before any of us are accused of this, myself included. Indeed I’ve already had one or two readers jokingly suggest that I might be an AI, given my tendency to release long essays on a wide range of topics on a fairly consistent basis. Hopefully the month-plus long hiatus since my last essay will allay such suspicions. While I’m on the topic, I’ve also been accused, more than once, of using AI-generated artwork. I don’t think this is nearly as bad as passing off AI-generated text as one’s own – the sin is not in the use of AI per se, but in pretending it’s your own work – but in any case, for the most part I don’t. I spend hours curating the artwork for this blog, and go so far as to link the web-pages of the artists, whom I feel very strongly should at the very least get credit for their amazing work. Sigh. Anyhow.

7

No, I have no idea how you would embed a cryptographic hash in a text document. But I’m sure it’s possible.

8

The scare quotes are because an LLM does not have the subjectivity necessary to experience an actual hallucination.