As the congressional investigation into Russian social-media-based information operations heats up, Twitter issued a response two days ago. In the words of Sen. Mark Warner, that response was "inadequate on almost every level." Lauren Gambino at The Guardian wrote:
Warner accused Twitter of failing to grasp “how serious this issue is, the threat it poses to democratic institutions and again begs many more questions than they offered”.
“There is a lot more work they have to do,” he told reporters on Thursday.
I, too, was bothered by Twitter's response, in particular their "note about third-party research" (you know, the stuff that my colleagues and I have written here and on the Data for Democracy blog ― and yes, I'm sure they're referring, in part, to some of our work):
Studies of the impact of bots and automation on Twitter necessarily and systematically under-represent our enforcement actions because these defensive actions are not visible via our API, and because they take place shortly after content is created and delivered via our streaming API. Furthermore, researchers using an API often overlook the substantial in-product features that prioritize the most relevant content. Based on user interests and choices, we limit the visibility of low-quality content using tools such as Quality Filter and Safe Search ― both of which are on by default for all of Twitter’s users and active for more than 97% of users.
This paragraph (indeed, much of Twitter's response) boils down to this: you can only trust Twitter to tell you what's really going on on Twitter.
This is fallacious. In fact, whenever you tell someone "you can't trust anyone but me no matter the evidence," that's gaslighting ― a psychological abuse tactic.
There's another problem here. Twitter says, essentially, "because we did something to address problems with the platform, you can't legitimately critique the platform for having a problem." Again, this is fallacious. Fixing part of a problem never means there's nothing left to fix. If anything, the fact that they addressed the issue and still left so much of the problem standing is worse. They can't plead ignorance or inability ― only incompetence or resistance.
Twitter also says that they "limit the visibility of low-quality content," but they offer no evidence that abusive tweets and disinformation campaigns aren't still reaching large audiences ― even though they have access to the data that would settle the question. Again, this is gaslighting.
More importantly, counterevidence abounds on this point. Not only do spammers, abusers, and propagandists use hashtags and many other tricks to get their content in front of "real" users who would never follow them directly, but there are many documented cases of mis/disinformation spreading from one community to another on Twitter, as well as from Twitter to mainstream consciousness. Here are just a few:
- Pizzagate: A completely false conspiracy theory that began on 4chan and spread through social media (including Twitter ― where it is still actively being shared!) led to a gunman appearing at a DC pizza place and firing an AR-15 multiple times. He has been sentenced to four years in prison.
- #MacronGate: Another false conspiracy theory that began on 4chan was brought to Twitter and gained enough traction that both Macron and Le Pen were questioned about the conspiracy theory during the last week of campaigning for the French presidential election, including during the last official presidential debate. (This message gained traction in part due to bots, but also in large part due to "real" users purposefully trying to game the system and make it trend ― the technical term for these lovely people is shitposters.)
- #unitetheright: High-volume accounts (bots, sockpuppets, and shitposters) during and immediately after the Charlottesville "Unite the Right" rally and terror attack last month pushed several false or misleading narratives about the mainstream media, Antifa, and other things that project a white nationalist victim narrative. President Trump's speech the following week sounded eerily similar, drawing on language he had never used before but which was highly distinctive of the high-volume accounts during #unitetheright. (According to Talking Points Memo, American white nationalists heard the same dogwhistles I did.)
The list could go on, especially if we included other social platforms like Facebook, where fake news outperformed real news by huge margins in 2016.
But if you really want evidence that Twitter does not care about abuse of its platform, just wait until you or someone you care about is attacked by a group of trolls or bots on Twitter. Report that activity to Twitter, and see what happens. Hint: they are not helpful, if they respond at all.
What amazes me the most about all this is how easy it was for three unpaid researchers, armed with an API key and about 30 lines of code, to download several million tweets, uncover evidence of a massive influence operation in just a couple hours of data analysis, and write it up in time to influence press coverage of elections in multiple countries (see New York Times, Slate, Bloomberg News, Quartz, and 24.hu in Hungary).
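To give a sense of how little code this kind of analysis takes: below is a minimal sketch of the sort of tally we ran once the tweets were downloaded. It assumes the tweets have already been collected (e.g., via Twitter's streaming API) as a list of JSON objects in the v1.1 schema; the function names are my own illustrations, not the code we actually used.

```python
# Hypothetical sketch of the analysis step: given tweets already collected
# from Twitter's API as JSON objects (v1.1 schema), find the highest-volume
# accounts and the most-amplified hashtags. Function names are illustrative.
from collections import Counter

def top_accounts(tweets, n=10):
    """Return the n accounts posting the most tweets in the sample,
    as (screen_name, count) pairs ― a first cut at spotting bots and
    other high-volume amplifiers."""
    counts = Counter(t["user"]["screen_name"] for t in tweets)
    return counts.most_common(n)

def hashtag_counts(tweets):
    """Tally hashtags (case-folded), a quick proxy for which narratives
    are being pushed hardest."""
    tags = Counter()
    for t in tweets:
        for h in t.get("entities", {}).get("hashtags", []):
            tags[h["text"].lower()] += 1
    return tags
```

That's nearly the whole trick: a Counter over a few million records runs in seconds on a laptop, which is exactly why "it's hard" rings hollow coming from the platform that holds all the data.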
When Twitter says "it's hard," or "it's expensive," or "it's impossible," that's simply not true. If we can do it, they certainly can. (And while I understand the difference between a researcher saying "I'm pretty sure I found something" and Twitter saying "that's sufficient evidence to suspend your account," there's absolutely nothing stopping Twitter from saying "We think we see something; we're not confident enough to suspend all these accounts right now, but we'll notify the appropriate law enforcement or counterintelligence agents"! If that were all they did, I would be ecstatic!)
Here's the bottom line. There's a clear mis/disinformation problem on all of these corporate social-media platforms. The platforms know about the problem, half-heartedly address the problem, give limited data access to authorities investigating the problem, dismiss the work of independent researchers studying the problem, and, quite frankly, lie about the problem. They try to set themselves up as the only credible authority on the extent of the problem and its possible solutions ― experts and regulators be damned.
Does that sound familiar to anyone?
As I've written about these disinformation campaigns and the ad-tech businesses that support them, "gaslighting is their business model." Whether Russian hackers, white nationalists, or spammers using polarizing headlines to steal clicks and rack up ad dollars, they lie, manipulate, and seek to discredit experts and disempower regulators. The goal may be to convince us of a message, to destabilize our democracy, or simply to get us to click indiscriminately on things that will make them money. Regardless, that's their modus operandi: lie, manipulate, discredit, and disempower.
The biggest problem now, in my eyes, is that this has become the modus operandi of the platform companies themselves. Lie about the problem, manipulate others into dependence on the platform, discredit the experts speaking to the problem, and resist attempts to regulate (or even investigate). And it's not just Twitter.
I'm glad my senator is leading the charge to make changes here. He seems well informed, and he's not buying the snake oil that the social media execs are selling.
But in the meantime, I've got my own plan:
- Keep investigating. I've got a couple of un-analyzed datasets, and my collaborators and I have some projects in the works. Because I don't (and can't) do this work full-time, it can be a slow process at times. But we're forging ahead.
- Make an exit plan. I really want out of this ecosystem. (It's been a while since Twitter has been a social network for me anyway.) The problem is that people in general are largely dependent on social-media platforms for their information. That's why social-media propaganda is such a big deal! But it also means that to get the word out about the evils of the platforms, I need to be on the platform(s). So while I'm working on that, I've deleted a bunch of old content, I'm regularly purging new content, and I'm tweeting a lot less.
- Build other bridges. I've got an email newsletter! You can subscribe to The Disinformer for one big update on my research and things I think are worth reading across the web every 2-4 weeks. (Mike Caulfield also has a newsletter that I find really valuable on these issues.) And I'm finding several semi-closed communities on Slack (like Data for Democracy) of value as well.
- Resurrect bookmarks! I've got a folder full of bookmarks for websites I read regularly. Why let an algorithm crafted by a company I distrust determine what I do and don't read (and when)?
- No, I'm not joining Mastodon. I'm over social media. It's not that Twitter the company is bad but Twitter the service is good; I'm over the whole idea. I still value relationships, serendipitous learning, and allowing others a channel to suggest new ideas that I wouldn't otherwise consider. But the always-on, easily manipulable platform is part of the problem. Even open-source software and non-profit organizations won't solve that aspect of it.
As always, this is a "to be continued..." issue. The congressional hearings are ongoing, as are the influence operations and our attempts to uncover and counter them.
Good night and good luck.