GPT-3 and a weaponized world of unreality

Pursuit of Perspicuity
Jan 13, 2021 · 12 min read

An exploration of how technologies like GPT-3 may be harnessed in mass misinformation campaigns, with the potential to threaten social stability.

In the last months of 2020 a good deal of media attention was given to GPT-3, the autoregressive natural language AI developed by OpenAI, which licensed it commercially to Microsoft in September 2020. Simply stated, GPT-3 produces incredibly human-like writing: it is pre-trained with deep machine learning on vast amounts of text from the internet and, given a starting phrase or a few sentences, churns out related text almost in an instant. And by "related text", we're talking full-on impersonation of personalities, adoption of speaking/writing styles, and cogent development of arguments. Mind you, there's nothing truly intelligent going on; the AI holds no understanding or opinion of its own. Rather, what is going on is incredibly well-worked pattern-matching of related concepts, generative modeling to produce output, and natural language processing to string it all into coherent, flowing sentences (a toy sketch of this next-word loop follows the examples below). For example, this New York Times article highlights GPT-3's response to "How do we become more creative?", showing how the AI responds by emulating a pop psychologist:

I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more you get exposed to different people, to different opportunities, to different places and to different challenges …

A pretty realistic response. But it gets better with my personal favourite: GPT-3 taking up the challenge of full-on comedy impersonation in this script that Arram Sabeti got it to generate between "Peter Thiel" and "Larry Page", where Larry is contemplating doing a Reddit AMA (Ask Me Anything):

Peter: “Socializing on the internet is like if you walked into a bar and everyone there had only read about alcohol, but never actually tried it. You’ll be torn to shreds.”

Larry: “I don’t understand how the internet could possibly be worse than some of the stuff that’s happening in the news these days.”

Peter: “The news is funny because you know it’s fake, right? It’s not actually that bad. But if you go on Reddit and tell people that one plus one is two, they’re going to Photoshop your face onto a frog and put you in a Hitler mustache and have you firing rockets into space.”

Larry: “That sounds stupid.”

Peter: “They’re going to Photoshop your face on top of John Travolta and make it look like you’re in Pulp Fiction, dancing to Madonna. And it’s not going to be a good look for you.”

Larry: “I could dance.”

Peter: “Yes, you can, but that’s not the point. They’re going to Photoshop your face onto a starfish, and they’re going to call you an alien cephalopod overlord from another dimension.”

Larry: “What the hell is that?”

Peter: “The point is, the internet is a freakin’ nightmare. The Reddit crowd is like the virtual version of the school bully. And if you step out on your own, it’s going to take you out behind the bleachers and kick your ass. It’s going to hit you with a chair and kick you while you’re down, and then it’s going to pee on your unconscious body and start screaming about how you deserved it, because you did something bad.”

I let that go on for too long. But isn't it somehow … absolutely marvelous? Surely this must be an excerpt straight out of Curb Your Enthusiasm? Except it isn't. No, that's GPT-3.
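For the curious, the "autoregressive" mechanics behind this are surprisingly describable. Below is a deliberately crude Python sketch of the generation loop: predict the next word from everything produced so far, append it, repeat. The bigram lookup table is my own toy stand-in; GPT-3 swaps it for a 175-billion-parameter neural network, but the loop is the same in spirit.

```python
import random
from collections import defaultdict

# Toy autoregressive generation: predict the next word from what has been
# generated so far, append it, and repeat. GPT-3 follows the same loop, but
# with a vastly more powerful neural network in place of this bigram table.

corpus = ("the more diverse the world is the more you get exposed to "
          "different people to different opportunities to different places").split()

# "Training": record which words have been observed to follow which.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start: str, length: int = 12) -> str:
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:                       # no observed continuation
            break
        words.append(random.choice(candidates))  # sample the next word
    return " ".join(words)

print(generate("the"))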

Oh sure, GPT-3 has its problems. It rhymes inconsistently, its conclusions don't always stand up to scrutiny, and it occasionally misreads semantics. Be assured, however, it will get better. There's no doubt. Perhaps you are excited by this and look forward to stimulating conversations with some company's dressed-up chatbot, which responds with exactly the tone, humour and insight you could wish for. Your delusion could be dangerous.

Concerns have been raised over the dangers of GPT-3's potential for generating spam, misinformation, fraudulent essays and academic writing, and so forth, given the difficulty of telling human and machine apart. These concerns are nothing new; in fact, it was the researchers and engineers themselves who first pointed out these risks in their GPT-3 paper.*

How bad does misinformation get? Well. Put AI aside; social manipulation is now rampant in every form. As this is being written, news arrived of further planned violent demonstrations following the failed riot at the US Capitol in Washington, a riot born out of misinformation promulgated by the outgoing president himself (Caitlin Flanagan's excellent piece gets to the heart of the character behind such persuasion). Though newsworthy for the magnitude of its national impact, the type of influence driving the Capitol riot is fairly apparent: a shouty engine of populism and charisma.

The long-view approach taken by influencers who wanted the US to get to this point is a notch more sophisticated. Indeed, it smacks of the clandestine PR campaign by Bell Pottinger that ripped my home country apart by focusing public attention on a manufactured enemy of "white monopoly capital", thereby, unbeknownst to the public, allowing the state to be captured by nefarious persons and leaving South Africa corrupt, socially fragmented and economically crippled, possibly for a generation. Such social manipulation is highly distributed and well disguised. Consider the chess-like propaganda being deployed by what appear to be Russian influencers in this article. Forget the loud, obnoxious megaphone of a social media account pushing its vitriol on you, or the ISIS accounts testing your radical inclinations; the stooge is now dressed up as a polite, liberal, educated socialite who gradually builds an audience and sprinkles gentle mistruths, the intent behind which is difficult to grasp on first read. Take @PoliteMelanie, mentioned in the article, who appears to have been a Twitter fabrication you'd follow because you agree with her "nice values". When, in a twist that would make kayfabe participants think twice, she points out conservatives' apparent bias using a made-up poll, she in fact seeks to instill disgust of the other within her educated urban audience. Why? In order to divide social groups at a national level.

In the US there are plenty more examples, each with its own intentions: the rise of made-up news sites such as InfoWars, Facebook promoting whatever news sells, the rise of QAnon; the list goes on. As of January 2021, you can't help but wonder how successful such misinformation campaigns have been.

Digital technologies catalyze some of the problems we face in the scourge of misinformation. There's no apparent immediate resolution, and I personally opt for the hard way: inculcating the importance of critical thinking and of wisdom born of empathy.

  • The problem of verification and sense-checking: In prior times, writing was disseminated only after passing the review barrier of a publisher or editor, who instilled checks and balances to distill out the noise and focus on the best. This tempers bad ideology, conspiracy and incitement. It fails, however, when it leads to suppression of free speech and of views that deviate from norms. Digital media, nonetheless, has few such barriers, and it is already clear that the social media giants face the grand dilemma of free speech vs. mass incitement as they begin to shut down protestor communications and platforms prior to Joe Biden's presidential inauguration.
  • The problem of rapid dissemination and galvanization: On the one hand a boon, as Twitter enabled citizens to drive back authoritarianism in the Arab Spring of 2011, on the other hand a problem in the highly distributed reach of recruitment for radical groups such as ISIS, and the current rapid galvanization of violent protest by MAGA members in the US.
  • The problem of geopolitical destabilization: To quote Robert D. Kaplan, noteworthy author and journalist of international affairs, "Technology has not defeated geography, it has shrunk geography … the world is more anxious, more claustrophobic, more nervous and smaller than ever before. We tend to think of interconnectivity as a positive thing: connects markets, creates enlightened global cultures — which is true. But in a geopolitical sense, interconnectivity is very destabilizing."
  • The problem of viewership and follower-ship: Unfortunately most of us are natural followers, easily persuaded by a figurehead. Politicians distorting the truth is age-old; with digital media, however, a new political type emerges: attention-seeking, constantly advertising their views, pushing volume over quality, appealing to base emotions, dialling up the sensationalism, and keeping audiences "locked in". This dumbs down followers, who become transfixed by constant, mindless, emotional-rollercoaster streams of information designed to keep them plugged in.

Now, to theoretically level up for a moment, take all these misinformation problems exacerbated by digital technology — lack of sense-checking, rapid dissemination and galvanization, zealot followers, geopolitical destabilization — and let’s put on our hardhats and go build a weapon on top of them. Yes, a weapon; a weapon with a GPT-3 core. Let’s call it the DESTABILIZER. We’ll build it the same way that some threat actor, perhaps state-sponsored, would build it. Since the following will be “what-if” scenarios, they entail possibilities and speculation that I’ll only substantiate so much. I encourage you to consider the possibilities yourself and, in the spirit of critical reasoning, do the sense-checking on your own.

** DEEP BREATH **

We’ll generalize GPT-3 instead as a Language Prediction Model (LPM), since we predicate this part on some future or alternate version of GPT-3. With the LPM at the core of the DESTABILIZER, we next layer additional capabilities onto it. If you have never coded before, be aware that applications nowadays are built upon innumerable layers of engines, languages and libraries; engineers rarely get near base machine language, especially with AI/ML. The ability to modularize executable code and build greater functions on top of smaller ones is what allows for today’s immersive technology experience, and it is equally applicable to malware: trojans, viruses and the like reuse all sorts of software libraries to perform their nefarious actions. Often only, say, a key exploit used to gain access to a system is actually coded from the ground up, with the rest a cobbling-together of existing hacker tools and libraries. Our LPM core of the DESTABILIZER is therefore easily tacked together with, say, deepfakes: a synthetic media technology able to create fake videos of you mouthing the words of whatever script someone wishes. Upon that we layer a good AI voice generator, enabling you to speak words you never uttered. What is it that you speak about in these videos, you ask? Your apparent knowledge of some state treason, of course. Oh, and we simply loop ten thousand variations of keywords through the LPM to create ten thousand well-versed extremist propaganda pieces. The DESTABILIZER now has its ammunition: it is you, in ten thousand ways, playing out someone else’s agenda on video.
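To make the modularity point concrete, here is a hedged sketch of that ammunition loop. Every function is a hypothetical stand-in of my own invention (`lpm_generate`, `clone_voice`, `deepfake_video` are not real library APIs); the cobbled-together shape is the point:

```python
import itertools

# Hypothetical stand-ins for the DESTABILIZER's modules; a real threat actor
# would wire in an actual language model, voice cloner and deepfake renderer.
def lpm_generate(prompt: str) -> str:
    return f"<LPM output for: {prompt}>"

def clone_voice(voice_sample: str, text: str) -> bytes:
    return text.encode()

def deepfake_video(face_image: str, audio: bytes) -> str:
    return f"<video of {face_image} voicing {len(audio)} bytes of audio>"

# Loop keyword variations through the LPM: a few slots and fillers multiply
# into thousands of distinct, fluent propaganda scripts, each then rendered
# as fake video of the victim "confessing".
subjects = ["the minister", "the electoral commission", "the central bank"]
claims = ["committed treason", "rigged the vote", "sold state secrets"]

videos = []
for subject, claim in itertools.product(subjects, claims):
    script = lpm_generate(f"Write an impassioned exposé: {subject} {claim}.")
    audio = clone_voice("victim_sample.wav", text=script)
    videos.append(deepfake_video("victim_face.jpg", audio))
```

Scale the two seed lists up and the combinatorics alone yield the ten thousand variations; the LPM ensures each one reads as if a different, articulate human wrote it.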

My previous workplace liked to gush about the wonder of prediction AI that targets the right consumers with propensity scoring, serving the right promotional messages at the right time to maximize sales. Personalized adverts! Web usage analytics that know more about you than you do yourself! Yes, well. That hyper-optimized targeting AI is also a wonderful way to select, out of ten thousand misinforming videos, the near-perfect video most likely to get your attention and action. The other side of this, selecting the most persuadable audience rather than the most persuasive video, has already been done regularly on Facebook by right-wing extremists, ISIS and others. Anyway, let’s add both targeting capabilities to our munitions mix, and we can ready the DESTABILIZER to fire.
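Mechanically, the targeting step is little more than an argmax over predicted engagement. A minimal sketch, with the engagement model reduced to a stub (`predict_engagement` is my invention, standing in for a trained propensity model over user and content features):

```python
# Propensity-style targeting repurposed for misinformation: score every video
# variant for a given user and serve the one with the highest predicted
# engagement.

def predict_engagement(user_id: int, video_id: int) -> float:
    # Stub: a real system would use a trained click/engagement model here.
    return ((user_id * 2654435761 + video_id) % 1000) / 1000.0

def best_video_for(user_id: int, video_ids: range) -> int:
    # Classic propensity targeting: argmax over predicted engagement.
    return max(video_ids, key=lambda v: predict_engagement(user_id, v))

print(best_video_for(user_id=42, video_ids=range(10_000)))
```

The "other side" the paragraph mentions is the same argmax run the opposite way: fix the video and rank the users instead.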

Take Devon Glazer: an average father of three, two decades in the same job, who discovers a message in his newsfeed that speaks directly to his belief that he is economically suppressed not by regulation, but by actual people in high places. It offers a sensational but, unbeknownst to him, bullshit explanation for this. This is only step 1 in the DESTABILIZER’s deployment strategy. For Devon, there will be many steps before his wool is fully dyed and his indoctrination complete. We’ll skip the specific techniques of persuasion, rhetoric and repetition for now to get to the grander deployment process, which uses a series of messages, events, advocates and proponents in a selection sequence that is, as my ex-colleagues know only in the positive sense, near-perfectly optimized to shift Devon from working dad to a man committed to forcefully stopping the president’s nefarious aims. A president who, Devon realized after some deep immersion, is actually an extraterrestrial threat. (I wanted to drop the silly example, but since QAnon’s actual claims are so far beyond absurd, it’s just fine.) In fact, the LPM proves remarkable at turning followers into enraptured viewers through its sensationalist, personal style of media bombardment; the first of our aforementioned “problems” (depending on which side you sit) that the weapon exploits in tech’s ability to catalyze misinformation. Still, we need to scale this up. So we instantly repeat the individual stepwise tailoring process across a vast swathe of people, guiding everyone simultaneously along their own paths to the same ideology:

[Figure: Steps of individually tailored messaging designed for conversion to a common ideal]
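In code terms, the funnel in the figure might look something like the sketch below: one shared ladder of stages, with the message at each rung chosen per individual. The stage names and scoring stub are illustrative inventions, not a documented playbook:

```python
# One shared radicalization ladder; individually optimized rungs.
STAGES = ["grievance", "blame", "community", "urgency", "call_to_action"]

def score(user_id: int, stage: str, message: str) -> float:
    # Stub standing in for a trained per-stage receptiveness model.
    return (hash((user_id, stage, message)) % 1000) / 1000.0

def next_message(user: dict, candidates: list[str]) -> str:
    stage = STAGES[user["stage_idx"]]
    # Same argmax targeting as before, now conditioned on funnel stage.
    return max(candidates, key=lambda m: score(user["id"], stage, m))

def record_engagement(user: dict, engaged: bool) -> None:
    # Users only advance down the funnel once the current stage lands.
    if engaged and user["stage_idx"] < len(STAGES) - 1:
        user["stage_idx"] += 1
```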

You’ll notice that the process of rapid dissemination and galvanization of misinformation via online digital media is now in full force. The problem of geopolitical destabilization comes to the fore next, with the “shrinking” of geography by the ubiquitous tech media platforms that we are globally mired in. We add to the weapon the capability not just to localize messages, but to position them as coming from an appropriate cultural figure, using the right accent and even speaking the local language. This allows us to rapidly scale a global effort that converges on our common ideology:

[Figure: Regional convergence to a common ideology]
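In code, localization is just one more thin layer over the LPM. A sketch, with an invented region table and the same hypothetical `lpm_generate` stub as in the earlier sketches:

```python
def lpm_generate(prompt: str) -> str:
    return f"<LPM output for: {prompt}>"   # hypothetical stub, as before

# Invented illustration: the same core narrative, re-voiced per region.
REGIONS = {
    "ZA": {"language": "isiZulu", "persona": "popular radio host"},
    "BR": {"language": "Portuguese", "persona": "football pundit"},
    "IN": {"language": "Hindi", "persona": "regional film star"},
}

def localize(core_message: str, region: str) -> str:
    cfg = REGIONS[region]
    return lpm_generate(
        f"Rewrite in {cfg['language']}, in the voice of a {cfg['persona']}, "
        f"for a local audience: {core_message}"
    )

print(localize("The election is being stolen from you.", "BR"))
```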

As leaders, the media, corporations and the large social media platforms begin to cotton on to what is happening, they try to overcome the problem of lack of verification and sense-checking by shutting down the misinformation sources. It is therefore time to deliver the final module of the DESTABILIZER: a portfolio-controller AI that counters this move and maximizes audience reach and conversion by flipping between social media platforms. In doing so it keeps up the individual media bombardment and keeps existing audiences engaged while picking up new ones. This is in fact happening as this is being written, with MAGA/Trump supporters moving away from platforms that are shutting them down. So, back to the DESTABILIZER: for each major platform it assesses whether to pull audiences away or instead infiltrate as far as possible onto the platform. Its final strategy is to take a lesser-known platform mainstream by proliferating it through sheer user numbers.

[Figure: Social media platform portfolio management AI]
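One plausible way to frame such a controller is as a multi-armed bandit, sketched here as epsilon-greedy for brevity. The platform names and the reward signal are invented for illustration:

```python
import random

# Portfolio controller as an epsilon-greedy bandit: keep posting where
# engagement runs hottest, but keep probing alternatives so the campaign
# can hop platforms the moment one bans it.
mean_reward = {"bigsocial": 0.0, "altnet": 0.0, "fringechat": 0.0}
pulls = {p: 0 for p in mean_reward}

def choose_platform(epsilon: float = 0.1) -> str:
    if random.random() < epsilon:                  # explore an alternative
        return random.choice(list(mean_reward))
    return max(mean_reward, key=mean_reward.get)   # exploit the best so far

def record_reward(platform: str, reward: float) -> None:
    # Incremental mean update; a platform that bans the campaign simply
    # starts returning zero reward, and the controller drifts away from it.
    pulls[platform] += 1
    mean_reward[platform] += (reward - mean_reward[platform]) / pulls[platform]
```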

What remains is simply to call the converted to action. And the weapon has succeeded.

After that example, perhaps you might agree that a GPT-3-like technology could prove problematic in the wrong hands. Perhaps you may even wonder whether some variation of this scenario is inevitable, and further wonder what will happen, and what we should do, in such a situation. I suppose it is plausible that our current idea of democratic free speech may transform somewhat. It is one thing to have free speech, but another to weaponize it and deploy it to incite chaos; the problem being that these two acts are closely intertwined in the digital age. Major social media platforms may therefore become regulated as media companies: accountable for what is published. The calls to break up major communications platforms like the WhatsApp, Instagram and Facebook combination, not just due to monopolization but due to their singular control in misinformation situations, may also become reality.

But a technology like GPT-3 can also be used to create content that infiltrates most corners of the internet, not just social media. This may create the most jarring effect of all: an inability for the average person to tell what is real and what is not online. For that, I hope news establishments of the greatest integrity and reputation hold firm, the way the BBC has demonstrated on occasion by following a principle of trust over profit. There may also be public reactions that begin a gradual unravelling of trust in any online information and a reversion to trusting only traditional and local news, delivered by known persons, politicians and papers: a clear split of the virtual and the physical worlds. Beyond such speculative outcomes, however, I like to think our best hope lies, as before, in the long-term sharpening and education of the world’s sense-making, critical reasoning, collective understanding and, dare I say, wisdom. The misinformation tide will ebb once it has no takers and no demand. As the stoic emperor Marcus Aurelius said, it’s about:

… doing what we ought to do, not doing what we want to do … acting rationally, not emotionally … we have control over our actions, our judgments, and our feelings ... we can choose how we respond to events outside of our control.

Hang on, I misled you. That was not Marcus Aurelius. It was GPT-3 pretending to be Marcus Aurelius.

* Footnote: I’d suggest that software engineers are often quite aware beforehand of the negative outcomes their inventions could lead to. Many creators, however, are fueled more by the desire to see their creations manifest; outcomes and consequences are dealt with as secondary. OpenAI, it should be noted, was founded with the premise of ensuring long-term AI safety.

