Bots, memes, "fake news", ... Propaganda is all over the internet these days. We've seen large-scale, coordinated information operations in the US presidential election, the Brexit vote, the French election, and this weekend's German parliamentary election. And we've all seen stories in our own social-media feeds that turned out to be biased, misleading, poorly fact-checked, or just plain lies. In fact, we've probably all shared some of those stories ourselves.
Internet propaganda is such a big problem that it can't go unaddressed. But it's such a big problem that it can be overwhelming to know how to resist it.
So I put together a list of things we can do to fight the propaganda campaigns and PsyOps (psychological warfare operations) coming our way. While a few of these things are for the highly committed or technologically trained, many of them are things that anyone can do right now.
Let's get started.
What is propaganda?
Etymologically, the word propaganda comes from the word propagate — to spread. In its oldest context, it simply refers to the spreading of a message, whether through word of mouth or through print media. It's similar both to publishing and to evangelizing, in that sense.
In modern history, particularly in the era of mass print publication and now digital media, it has taken on a more insidious meaning. According to Jacques Ellul:
Propaganda is a set of methods employed by an organized group that wants to bring about the active or passive participation in its actions of a mass of individuals, psychologically unified through psychological manipulation and incorporated in an organization. (p. 61)
This comes from Ellul's classic text, Propaganda: The Formation of Men's Attitudes. However, I find it is at once too narrow and too broad for the digital age. The idea of an organization being the core agent, and expansion of that organization being the goal, only accounts for a small part of the propaganda activities we see online. In fact, the idea of being a card-carrying member of an organization has largely been supplanted these days by participation in a movement, with various degrees of possible participation. This is true for politics, religion, social movements, even schooling in some cases. This difference in what movement "membership" entails, as well as the different kinds of messages and media available to modern citizens, requires some different nuances in how we define propaganda.
With that in mind, I define propaganda as the use of one or more media to communicate a message, with the aim of changing someone's mind or actions via psychological manipulation, rather than reasoned discourse. Non-propaganda is not the absence of bias ― we're all biased. Propaganda is the (usually purposeful) attempt to hide the bias, to present non-facts as facts, to steer the mind away from the processes of reason that allow us to read through bias critically and to discern facts from fiction, truth from lies. Propaganda is distinct from scholarly writing, journalism, opinion/persuasive writing, and even many forms of religious instruction, all of which, ideally, are based on a reasoned presentation of facts, interpreted according to a set of predispositions that are acknowledged or a perspective that is readily discerned.
Two other terms are key to understanding internet-based propaganda: misinformation and disinformation. Disinformation (from the Russian dezinformatsiya) is an information operation that deliberately attempts to deceive. Misinformation, on the other hand, is an inadvertent sharing of false or misleading information.
Internet-based propaganda campaigns tend to involve both. The original agent begins a disinformation campaign aimed at deceiving, often employing an army of bots or sockpuppets to signal-boost the message. That message is soon shared by those who believed the original claim. While some of those sharing the message are purposefully trying to deceive, the more viral a claim becomes, the more shares come from people who don't know it is false. So what starts as a disinformation campaign becomes a misinformation campaign, as people unwittingly share something misleading.
This shift to misinformation makes the disinformation campaign more powerful, and is an intentional aspect of the online propaganda campaign. It makes the disinformation more believable, as people tend to evaluate the veracity of a claim on social media based on whether they trust the person who shared the message, rather than the trustworthiness of the source. It can also amount to a process of information laundering, as the source of the disinformation becomes increasingly obscure, particularly as information moves from platform to platform. (Keep in mind, again, the lack of card-carrying institutional membership in these campaigns, which further blurs the lines around who/what we can trust and the relationship trustworthy institutions have to information.)
These fine distinctions (mis/disinformation, online/print propaganda) point to a few key traits of modern propaganda that we need to keep in mind:
- Propaganda is manipulative and social, not reasoned and intellectual. Our resistance techniques, including our counter-narratives, must reflect this.
- Propaganda hides its source. Traditional information literacy techniques are often ill-equipped to deal with this.
- When we believe, and share, these false messages, it may be an innocent mistake. But there's someone who's not innocent behind it.
- The combination of social, psychological, informational, and technological processes that go into spreading propaganda means resistance to propaganda must be a collaborative effort.
So how do we resist this modern brand of propaganda? Here are a few tips, based on the research that my colleagues and I, and many others studying it, have done.
One of the most important things we can do is fact-check. Whether a political campaign, or a mass online harassment campaign, disinformation almost always goes viral because unsuspecting people don't check whether something is true before clicking "retweet" or "share".
But fact-checking is hard work, right? That's why journalists get paid to do it, and why those source-verification methods we learned in school were so involved.
It's not really all that bad. Mike Caulfield (author of Web Literacy for Student Fact-Checkers) has a simple three-step process:
- Check for previous fact-checking work
- Go upstream to the source
- Read laterally
That first one is a huge time saver: JUST CHECK SNOPES! Can't find a fact-check already done on that issue? Follow the links in the article (or search for them if the author left them out) to find out where they got their information. (If it's an image, do a reverse Google Image search and look for the oldest search result.)
Once you get to the source (or realize it's been successfully laundered), you'll be in a better position to evaluate its veracity. But if you find the source, and it's still not clear, try the third step: read laterally ― that is, find other sources that you know to be generally trustworthy, and see what they say. The source of the story in question is Breitbart or TruthFeed, and not a single mainstream media outlet has a story on it? It's fake. But if the NY Times, Wall Street Journal, Washington Post, and LA Times all have stories with the same details, you're probably good.
Mike's fact-checking process is pretty fast in most cases, but sometimes we just don't have the time to do it. What do we do then? Don't share the story. And flag it in your mind as something not obviously true that you didn't have time to fact-check.
We know that unconscious familiarity leads to "truthiness", which eventually leads to us thinking something is true, or at least likely. And we know that accidentally sharing things we didn't realize were false is a major component in disinformation campaigns and the virality of conspiracy theories. But as Zoë Quinn says in her book Crash Override, we too often share things on social media out of a desire to boost our own likes/retweets/follower count, rather than a desire to inform. And that really hurts people.
There's no shame in being slow to share. But there's a lot of shame in sharing lies that make people suffer.
Follow the right people
Of course, we often see propaganda not because we were targeted successfully (though that's happening a lot, particularly on Facebook), but because we are following people on social media who were duped themselves. Go ahead and clean up your friends/following list. Don't want them to know you're avoiding their posts? You can "mute" them on Twitter or "unfollow" them on Facebook. Or you can create a special list of people you trust for specific issues and be purposeful about reading that list with news/politics/whatever issue you care about in mind, and be purposeful about skipping over news shared by people in your general feed.
Don't know whom to follow? (Yes, Twitter, it is "whom to follow" not "who to follow".) Here are a few on Twitter I trust on issues relating to propaganda and digital media literacy (who also share good things about politics and current events), many of whom I've already cited in this post: @holden (Mike Caulfield), @zeynep (Zeynep Tufekci, sociologist and digital media expert, esp. for non-Western information/propaganda), @funnymonkey (Bill Fitzgerald, data privacy expert), @d1gi (Jonathan Albright), @zephoria (danah boyd, social media researcher), @broderick (Ryan Broderick, researcher/journalist at Buzzfeed), @DFRLab (Digital Forensics Research Lab), @ProPublica, and @CJR (Columbia Journalism Review). I'm sure I left off many worthy folks, but this will get you started!
Be skeptical of the right things
No matter whom we follow or ignore, weird stuff passes before our eyes. But not everything truthy is true, not everything surreal is false, and even a few conspiracy theories end up exposing real conspiracies!
It's important to be skeptical, but being skeptical about everything is exhausting (and unnecessary), and ultimately will lead to us missing something important. We need to be skeptical at the right time about the right things.
I've found the best way to be skeptical about the right things is simply to do a lot of fact-checking and take note of the kinds of things that keep popping up, and where they keep popping up. As I learn the main characters, the common tricks, and even the tools and platforms they use, patterns emerge, and my defenses go up much higher in some contexts than others, as a result.
For example, while there have been instances of unprovoked ― and in my mind, unnecessary ― violence from Antifa groups, I've seen references to Antifa crop up far more often among the so-called "alt-right", especially from bots and sockpuppets. So when I hear negative things claimed about anti-fascist protestors, I demand more details from the ones making those claims than in other contexts. Similarly, when I see 4chan, Mike Cernovich, or Jack Posobiec are involved, I remember the roles I've seen them play in the purposeful spread of false information in service of the alt-right, and I almost immediately discount it.
On the other hand, I know the good work that the data journalists at ProPublica do. I know one of them personally, and I know that their data sources and analytical processes are generally very trustworthy. They are also less likely than even the Washington Post to use misleading, click-bait headlines and tweets. So when a claim originates with them, even in their Twitter feed, I'm pretty confident it will turn out to be valid. If I am building an argument myself, I'll of course verify a ProPublica claim before including it. But when I scroll by their tweets, I'm not constantly saying "Prove it!" to my screen.
Don't duplicate work
(Now we're getting into the techniques for those more committed and/or technically proficient.)
Let's say you're setting out to put significant effort into battling propaganda online. Remember what I said above: resistance to propaganda must be a collaborative effort. Whether investigating, writing, or coding, it's important that we not wear ourselves and each other down by disproving the same lies repeatedly. In addition to following Mike Caulfield's first point (check for previous fact-checking work), we should also make sure not to do work previously done.
I find a lot of overlap here between fighting propaganda and what coders call the hacker ethic or hacker attitude. One of the key elements of the hacker attitude, according to Eric Raymond, is "No problem should ever have to be solved twice." Raymond explains:
To behave like a hacker, you have to believe that the thinking time of other hackers is precious — so much so that it's almost a moral duty for you to share information, solve problems and then give the solutions away just so other hackers can solve new problems instead of having to perpetually re-address old ones.
Note, however, that "No problem should ever have to be solved twice." does not imply that you have to consider all existing solutions sacred, or that there is only one right solution to any given problem. Often, we learn a lot about the problem that we didn't know before by studying the first cut at a solution. It's OK, and often necessary, to decide that we can do better. What's not OK is artificial technical, legal, or institutional barriers (like closed-source code) that prevent a good solution from being re-used and force people to re-invent wheels.
So if you're learning how to fight propaganda with new digital skills, retracing someone else's steps can be a valuable learning experience. But we don't all need to write scripts to scrape and parse the same websites, especially if no one has time left to analyze the data we scrape. While the field of what I call activist data scientists is growing, there's still plenty to do. Let's all find our niche and share what we find, so we can signal-boost (and improve!) each other's work, and coordinate to fight oppression and disinformation operations.
Learn a few good APIs
In the current media landscape, social media is key to the spread of information, and thus disinformation. Learn how to query and parse the APIs (application programming interfaces) for Twitter, Facebook, Reddit, 4chan, etc. (Yes, 4chan does have a public API, and yes, 4chan users post their most horrible stuff publicly.) When you know how they work, it becomes very easy to download a lot of information, and a small group of independent, part-time researchers can break big stories that end up in major news publications. (I'm speaking from experience.) A number of mainstream news publications also have public APIs that provide access to different amounts of content.
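As a taste of how simple these APIs can be, here's a minimal sketch against 4chan's read-only JSON API, whose board catalog lives at a predictable URL. The endpoint shape is real; the sample payload below is a tiny hand-made illustration, not actual data.

```python
import json

def catalog_url(board):
    """Build the catalog endpoint URL for a board on 4chan's read-only API."""
    return f"https://a.4cdn.org/{board}/catalog.json"

def thread_subjects(catalog):
    """Pull (thread number, subject) pairs out of a parsed catalog payload.

    The catalog is a list of pages, each holding a "threads" list; each
    thread carries fields like "no" (post number) and "sub" (subject,
    which is optional).
    """
    pairs = []
    for page in catalog:
        for thread in page.get("threads", []):
            pairs.append((thread["no"], thread.get("sub", "")))
    return pairs

# A tiny payload in the catalog's shape (illustrative, not real data):
sample = json.loads('[{"page": 1, "threads": [{"no": 570368, "sub": "Example thread"}]}]')

print(catalog_url("po"))        # https://a.4cdn.org/po/catalog.json
print(thread_subjects(sample))  # [(570368, 'Example thread')]
```

In practice you'd fetch the catalog URL with your HTTP library of choice, respect the API's rate limits, and feed the parsed JSON straight into a function like `thread_subjects`.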
Playing around with these APIs can be part of "finding your niche." Test out a few, see what you find, which ones play well together, and which ones seem to be falling through the cracks of existing research. Then you can start thinking about setting up something more formal for your rigorous research or activism.
Don't know where to start? Check out the tweetmineR tool I developed for mining the Twitter API using Python or Google Scripts and analyzing the results in R. Never used an API before? I wrote a series of tutorials just for you! :)
Get comfortable with web scraping
The most nefarious groups, and websites built to host information that gets shared on social media, don't have public APIs. Instead, we need to scrape the content in order to analyze it as a large corpus. Now, scraping isn't always legal, so be sure to check the robots.txt file and/or the site's Terms of Service before overloading their servers and reproducing all of their content on your personal machine.
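Python's standard library can do the robots.txt check for you. Here's a minimal sketch using `urllib.robotparser`; the robots.txt rules below are hypothetical, and in real use you'd point the parser at the live file with `set_url()` and `read()` instead of feeding it a string.

```python
from urllib.robotparser import RobotFileParser

# A robots.txt as a site might serve it (hypothetical rules for illustration):
robots_txt = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Ask before you fetch — and honor the requested delay between requests:
print(rp.can_fetch("my-research-bot", "/index.html"))  # True
print(rp.can_fetch("my-research-bot", "/private/x"))   # False
print(rp.crawl_delay("my-research-bot"))               # 10
```

Gating every request through `can_fetch()` and sleeping for `crawl_delay()` seconds between requests keeps your scraper polite and off the site admin's radar.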
That said, assuming it is legal, it's not that difficult. Shortly after the election, I developed a process for scraping whitehouse.gov with wget, processing it with the Python library BeautifulSoup, and analyzing it statistically with R. I also know many who swear by Scrapy, another Python-based tool, and there's rvest for R/TidyVerse purists. These are just a few examples of the many web-scraping tools out there. Like APIs, try out a couple, get a process going, and start downloading content to examine for patterns of disinformation flow.
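The core "processing" step is usually just pulling structured bits (links, headlines, timestamps) out of raw HTML. Here's a sketch of link extraction using only the standard library's `html.parser` (the post uses BeautifulSoup; the idea is the same, and the page snippet below is made up for illustration):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags in a scraped page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A scraped snippet (hypothetical):
page = '<p>See <a href="https://example-news.com/a">one</a> and <a href="/b">two</a>.</p>'

extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['https://example-news.com/a', '/b']
```

Once links and dates are in plain Python lists, it's easy to dump them to CSV and move into R (or pandas) for the statistical side.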
Compare multiple networks
A lot of researchers study disinformation on Twitter, or Facebook, or Reddit, or meme generation on 4chan. Only a few are comparing how disinformation flows across social networks. In fact, that's the reason that my Data for Democracy colleagues and I were able to find what we found in the lead-up to the French presidential election. While others were looking only at Twitter, only at Reddit, or only at 4chan, we were comparing links and timing patterns between Twitter and 4chan. As a result, we stumbled on a big find that others missed.
In my assessment, disinformation tactics have advanced since May, and less organizing is happening in the more visible places. However, because these campaigns only work if a large enough "public" gets involved, at least some significant portion of their organizing activity can't be completely in the dark. So monitoring multiple channels and "swimming upstream", and then comparing findings across multiple campaigns (adding new sources for the next campaign each time we find a new possible link in the chain), we can make a lot of progress at discovering and uncovering campaign activities.
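The simplest version of this cross-platform comparison is to reduce the links shared on each platform to their domains and look at the overlap. A minimal sketch (the link sets and domain names below are hypothetical):

```python
from urllib.parse import urlparse
from collections import Counter

def domain_counts(urls):
    """Count how often each hostname appears in a list of shared URLs."""
    return Counter(urlparse(u).hostname for u in urls)

# Hypothetical link sets collected from two platforms:
twitter_links = ["https://example-news.com/story1", "https://realsite.org/a"]
chan_links = ["http://example-news.com/story1?utm=x", "http://other.net/b"]

twitter_domains = domain_counts(twitter_links)
chan_domains = domain_counts(chan_links)

# Domains being pushed on both platforms are candidates for a
# coordinated cross-platform campaign worth a closer look:
overlap = set(twitter_domains) & set(chan_domains)
print(overlap)  # {'example-news.com'}
```

Adding timestamps to the comparison — which platform shared a domain first, and by how long — is what lets you start "swimming upstream" toward the origin.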
Perhaps the most important thing we can uncover in this process, aside from the ultimate source of the disinformation, is the accounts that serve as catalysts for botnets and bridges between platforms. Those accounts are the ones that make the campaign happen, or make it effective. We need to map out who they are, how they work, and keep our closest eye on them, if we hope to combat online propaganda.
That said, I'm not in the business of doxxing people. I don't publish the social media data I scrape (I do publish aggregate statistics), and I only name a user if they are a verified account with a large number of followers who is already in the public eye, or is obviously an automated bot account. I also try to be as careful as possible with anything remotely resembling an accusation of illegal or unsavory activity. As Zoë Quinn says, it's not about who is good and who is bad, or who is/isn't deserving of being harassed; it's about what is/isn't acceptable behavior. And having seen what doxxing can lead to, I think it's rarely, if ever, acceptable.
Now, publishing the official (not home) email address and phone number of a senator is not doxxing. That's public information, and does not increase the vulnerability of that public figure. Sending information to law enforcement (or a trustworthy civil rights group) about the identity of a domestic terrorist you happened to uncover is not doxxing. And I don't think that saying so-and-so (blue check mark, etc.) is the figure most effective at moving a message from 4chan to Reddit or Twitter is doxxing. But calling on a mob of social-media harassers, posting personal/home information, connecting to (possibly innocent) family members and colleagues, etc. is not okay.
Many people these days say "sunlight is the best disinfectant." That's fine for process. But for people, it's full of problems. Call-out culture, easily findable home addresses, and vigilante investigators who get it wrong have ruined many lives. I never want to be a part of that. And I discourage you from being part of it, too.
That said, we need sunlight on the processes and channels through which propaganda flows, especially when we're talking about something operating on a national or international, even geopolitical, scale. When I talk about investigating and uncovering things, this is what I'm talking about.
As I said above, this needs to be a collaborative effort. Systemic problems need systemic solutions. Massive, coordinated disinformation needs massive, coordinated resistance. I'm part of an amazing group, Data for Democracy ― over 2000 volunteers from various parts of the tech community, journalism, education, civil rights advocacy, etc., working together to use technology in service of the public good. And we're not the only group out there. So if you're doing this work, I invite you to join us, but highly recommend that you join with somebody. The goal of the alt-right, in particular, is to make us feel isolated and to inflate our perception of the power that they have. Working together in a collective is like double-resistance ― working to counter their propaganda efforts, and their attempts at social engineering.
I'll admit, this post really got away from me. But it's a big, complex issue, with no easy solutions. That said, there are some clear things we can do to counter online propaganda. And as we work together, we can make a big dent in the universe. For good. Hopefully this helps some of you take part in that process!