Big tech took on fake news – so how’s that going?

  1. Weaponized opinion: how do you combat fake news? | Politics Chat
  2. Tinfoil hats, duct tape, and bubble gum: Practical precautions when threatened by Fake News
  3. Big tech took on fake news – so how’s that going?

By: Richard W. Sharp

Fake news is used to misinform, and misinformation is dangerous. It is a threat to the ideals of legitimate government because we hold that legitimate governments derive “their just powers from the consent of the governed,” and that hinges on the understanding that consent is informed.

But we watch daily as the federal government stands by while fake news continues apace. It’s a tool used by external (e.g., Russia) and internal (e.g., Trump) actors to shape our political process to their ends. While the danger is clear, the solution is not.

But never fear, citizens, Big Tech is on the case!

Facebook has a plan. It includes more fact-checking and “working to prevent ‘misleading or divisive’ memes from going viral.” Google announced the Google News Initiative to “highlight accurate journalism while fighting misinformation, particularly during breaking news events; help news sites continue to grow from a business perspective; and create new tools to help journalists do their jobs.” In Europe (where the EU told the companies to propose a plan or have one imposed on them), Google, Facebook, Twitter, Mozilla, and advertisers agreed to tackle fake news together.

Big Tech has proven adept at protecting us from depictions of breasts, so how are they faring in the battle against fake news?


The results so far

The main tactics to date have been account bans (user and advertiser) and (semi-)automated content identification followed by takedown, relevance demotion, or flagging.1 Google and Facebook forbid fake accounts and false claims about identity, and they actively demote content deemed to be misinformation. Twitter bans impersonation and bots (there have been some complaints recently from the Tweeter in chief2).

But the approaches so far don’t seem to be working. Twitter especially has had trouble. A Knight Foundation study shows that fake news outlets continue to put out a million tweets per day despite the company’s promise to crack down. Many of the worst offenders from before the 2016 election, which brought fake news to the fore, are still out there: of 100 spammy accounts that were active before the 2016 election, 90 were still active in spring 2018. The study did find that a concerted cross-platform ban (Twitter, Reddit, etc.) significantly reduces an outlet’s audience, but this approach has not been widely adopted.

In the fake news content-flagging business, attempts to use AI to spot fake news are starting to show some promise but have a long way to go. Various approaches have typically yielded 70-80% accuracy, letting a lot slip through (imagine your inbox if spam filters let 20% of your spam get past them). Semi-automatic approaches curated by teams of human fact checkers have been rolled out. Facebook recently showed off its “War Room,” which has produced some good mission-control-style imagery, but it’s not really working.
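To get a feel for why 70-80% accuracy falls short, consider a toy version of the kind of supervised text classifier these detection systems typically build on. The sketch below (Python with scikit-learn; the handful of headlines and labels are entirely hypothetical, not any platform’s actual pipeline) trains a simple bag-of-words model to separate legitimate headlines from fake ones:

    # A minimal sketch, assuming a TF-IDF + logistic regression baseline,
    # which is a common starting point in fake-news detection research.
    # The tiny labeled dataset below is hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    headlines = [
        "Senate passes budget bill after lengthy debate",        # legitimate
        "Local election results certified by county officials",  # legitimate
        "New exoplanet confirmed in habitable zone",              # legitimate
        "SHOCKING: celebrity secretly a lizard person",           # fake
        "Doctors HATE this one trick that cures everything",      # fake
        "Secret memo PROVES the moon landing was staged",         # fake
    ]
    labels = [0, 0, 0, 1, 1, 1]  # 0 = legitimate, 1 = fake

    # Fit the classifier on the labeled examples.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(headlines, labels)

    # Score an unseen headline. At 70-80% accuracy, a confident-looking
    # probability is still wrong for roughly one item in four or five.
    print(model.predict_proba(["You WON'T BELIEVE what Congress just did"]))

Even a toy model like this will happily assign a score to anything you feed it. The hard part, as the platforms are discovering, is that the residual 20-30% of errors tend to land on exactly the borderline political content where mistakes are most costly.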


This isn’t just a technology problem

The bottom line is that companies like Facebook have continually made mistakes while trying to police content. Recently, Facebook misidentified and blocked apolitical Spanish-language and African-American ads as political (flagging such “controversial” topics as promotions for Black History Month), while at the same time allowing political ads from advertisers using fake identities, and even running ads claiming to have been paid for by Cambridge Analytica following its own great disgrace at the hands of that firm. These examples were corrected on appeal, but days ahead of a major election, delay is damaging.

Worse, even when they get it right, they get it wrong. Facebook has a long history of applying its own virtues to your experience. Recently it saved you, loyal reader, from the alarming sight of Delacroix’s Liberty Leading the People3 by refusing to run an advertisement for our recent piece on nationalism. Of course, Facebook has long defended the practice by pointing to its “community standards.” Those standards were written by Facebook, not its user community, and herein lies the deeper problem.

Current efforts to control fake news are mainly attempts by the tech powerhouses to identify and silence bad actors, but you cannot simultaneously protect freedom of expression and delegate the determination of what speech qualifies as legitimate to a handful of private, for-profit institutions. Each of these organizations is faced with an inherent conflict of interest: they serve their shareholders, not the public.4

Who gave them the right to set these standards? Are they seriously asking us to rest easy because their opaque, unaccountable standards teams are on the case, deciding what constitutes allowable or forbidden political speech? Was that a setup for a follow-up piece in which we explore policy options for controlling fake news from the community’s point of view?

Yes. Yes it was.


Notes:
1  And PR, lots of PR.
2  Honestly, I don’t know whether these complaints are fake news or legitimate. He may well have seen his follower count dip following the mass removal of bots, or he might just be playing the victim without proof; it’s kind of his MO.
3  According to Facebook: “In the ad’s image/video, we found depiction of nudity. We don’t allow ads that depict nudity, even if it’s not sexual in nature.”
4  Mozilla, a not-for-profit corporation, has notably gotten into the game with a nod to the user. Part of the Mozilla Information Trust Initiative announcement states: “We can’t solve misinformation with technology alone—we also need to educate and empower Internet users, as well as those leading innovative literacy initiatives.” Disclaimer: I am not an employee, but I do have a close personal connection to Mozilla.

About The Author

Richard is a Seattle area data scientist who builds predictive models and the services that deliver them. He earned a PhD in Applied and Computational Math from Princeton University, and left academia for the dark side of science (industry) in 2010, following his wife to the land of flannel. Fan of coffee, beer, backpacking and puns. Enjoys a day on the lake fishing, and, better, cooking up the catch for a crowd.
