Are Facebook, Twitter, and Google ready for the midterms?
Two years ago, Facebook CEO Mark Zuckerberg said it was "a pretty crazy idea" to think that foreign interference on his platform could have swayed the results of the 2016 U.S. presidential election.
How times have changed. Now, not a week goes by without social media companies — most notably Facebook, Twitter, and Google — unveiling a policy change designed to thwart foreign actors seeking to use bots or fake accounts to spread misinformation on their platforms.
In fact, 2016 wasn’t the first time bad actors used these platforms to influence voters. But it was the first time the scope of the problem was identified — in the form of thousands of accounts linked to a Russian propaganda machine, the Internet Research Agency, that created tens of thousands of posts seen by millions of Americans.
In the two years since, the long list of initiatives undertaken by Facebook, Twitter, and Google shows just how complex this problem is. First, these companies had to hire more content moderators to help spot bad actors. Then they had to lift their heads out of the sand and actively start combating misinformation, either through algorithmic changes or new fact-checking procedures or both. All three companies changed policies to verify the identity of political ad buyers in the U.S., so that foreign actors couldn’t easily buy ads to spread misinformation. They also created political ad archives so users can better understand how political advertising works on these platforms in general.
Have these steps actually helped? It’s hard to tell, because it’s difficult to spot foreign interference until the damage is already done. Over the summer, Facebook, Google, and Twitter shut down hundreds of accounts exhibiting coordinated inauthentic behavior from Iran, as well as accounts that exhibited behavior consistent with the IRA. This could be considered a success, given that the activity was addressed before the midterm elections.
But by the time this activity was shut down, the accounts had already created real-world events and at least one fake job posting. And it’s worth noting that any success in the U.S. doesn’t prevent bad actors from using the same tactics on these platforms in other countries — a quick glance at the trove of misinformation currently being spread on WhatsApp during Brazil’s presidential elections shows that this problem is far from solved.
With less than four weeks to go until the U.S. midterms, VentureBeat asked a handful of experts — from researchers studying bots and content moderation to former tech workers and policymakers — whether these companies have learned the lessons of 2016. What more do they need to do in the coming weeks, and what critical steps do they still need to take to stop foreign actors from spreading misinformation? The answers below have been edited for clarity and length.
What’s changed
Renée DiResta, Data for Democracy: I see the intersection of three things as being responsible for the disinformation problem on social channels. One is mass consolidation of audiences on a handful of platforms. Two is precision targeting, and the ability to reach anyone anywhere with a message very inexpensively and very easily.
And then the third is gameable algorithms … in terms of gameable algorithms, there is some understanding within the platforms now about how trending in particular [can be gamed].
Twitter has changed their trending algorithm; they’ve made it much harder to game. Facebook completely eliminated their trending [topics] function, recognizing, I believe, that they couldn’t really do a good enough job coming up with a solution that was not easily gamed … I think the more complex problem is that a lot of the fixes negatively impact how they’ve traditionally run their platforms from a business model standpoint.
I’ll use YouTube as an example here. YouTube’s recommendation engine is, at this point, notorious for amplifying misleading propaganda and hoax videos … YouTube, in particular, I think, is still wrangling with “How do we deal with the fact that autoplay is in our best interest, and that recommending sensational content keeps people on the platform longer, while at the same time taking responsibility for what we show people?”
April Doss, former head of intelligence law at the National Security Agency: Right after the revelations about the way that social media platforms like YouTube and Facebook and Twitter and Reddit were manipulated for purposes of election messaging in the 2016 election, the platforms were slow to respond and to really take seriously the scope and impact of what had happened.
Over the course of the past year, we’ve seen things change. Congressional hearings that initially featured lower-level representatives from those companies have, over time, shifted so that representation comes from further up in the C-suite. We’ve certainly seen all the companies say a lot about additional transparency and efforts to look for fake accounts.
But I think there’s still more work to be done. This is a complex problem: trying to walk the line between allowing users to make use of the platform in unfettered ways and warning them about the ways that fake accounts can send manipulative or false messages.
Ben Scott, Omidyar Network: Increases in operational security are clearly making a difference. How big of a difference is hard to say, but they’re clearly making a difference.
In the case of Facebook, in particular, I think the way in which they have chosen to downgrade particular types of disinformation coming from the clickbait factories that were so prevalent in the 2016 cycle has had an effect … On the negative side, there’s very little transparency into how the heck that’s happening. And that should make everybody nervous, I think.
None of the companies has been very forthcoming about providing journalists or researchers with independent access to data. We have very few ways to verify what’s going on with the platforms, how they’re making content decisions, whether it’s to take down or downgrade. And it is a “Trust us, we’re fixing this” message, rather than “Come have a look and see how we’re doing.”
Sandy Parakilas, Center for Humane Technology, and a former platform operations manager for Facebook: I think they are busy fighting the last war, meaning that they have systematically gone through and made some attempt to address the bulk of the things that happened in 2016. And the problem with that — I mean, obviously, it’s better that they did that than not do that — but the problem with that, as they admit themselves, is that security is an arms race.
So we still don’t have good transparency into whether they are sufficiently investing in thinking through and testing all the vulnerabilities that they have. I think a great example of a failure in this regard is this massive new Facebook breach.
Facebook announced a couple of months ago that they had found and shut down a number of Russian and Iranian pages and events that were part of a misinformation campaign. That’s a much more proactive approach to that kind of operation than we saw in 2016, by a long shot. So that’s a good thing.
But simultaneously they were not paying enough attention to the security vulnerabilities, these three bugs, that led to this massive breach of 50 million users’ OAuth tokens. And that breach is probably a much greater security issue than the relatively small number of pages and events that they shut down.
What to watch for during the midterms
Senator Mark Warner (D-VA), vice chair of the Senate Intelligence Committee: Will we get regular updates [on what’s happening]? One of the things we’re also trying to figure out — and I think [social media companies] have gotten better on this — is the sharing of information between platforms as they see Russian or other foreign entities try to interfere in our elections. Are they doing a good job of sharing that with each other? Are they doing a good job of sharing that with law enforcement and the intel communities? I think those are, again, open questions. They’ve gotten better, but in many ways we won’t know until we’ve gone through the whole cycle.
Doss: How quickly and how often are social media platforms catching these inauthentic accounts and advising the public about what they’ve found … really, that kind of education of the public is a critical piece in helping ordinary, everyday users make sense of what they’re seeing in their social media feeds.
Robyn Caplan, Data & Society: I’m an organizational researcher, and I’m looking at content moderation practices more broadly. So I’ll be looking at resources — are they actually expanding resources and enforcing content moderation? How are they paying these workers — are they full-time workers, do they work within the U.S., and do they have enough context on the U.S. political situation to address the content of ads?
Have they started amending how they allow people to target political communications? Are they moving away from discriminatory targeting practices they’ve been caught using in the past, like targeting by race? I’d like to see them limit the targeting anybody can do with political communications to only a few categories, instead of the myriad categories you can currently use to microtarget users.
And then I’ll be judging them on the extent to which transparency or authenticity markers are being made available for any sort of political communication that I see in my network. Am I seeing who paid for an ad? Am I seeing an alert saying “This is a political advertisement”? How well are they applying those kinds of techniques to ads that don’t look like a [traditional] political advertisement — say, a meme or a short-form video?
Parakilas: I think my overarching point is that I think you shouldn’t expect the information operations of 2018 to look like the information operations of 2016 … There are huge, huge vulnerabilities not exploited in 2016 that may very well be exploited right now … A successful outcome is that there is not obvious and effective foreign interference in the 2018 midterms. I think it was fair to say there was obvious and effective foreign interference in the 2016 elections.
What more needs to be done
DiResta: We know that the Internet Research Agency reached out to real Americans. We know that they connected with them to serve as activists on the ground, to perform actions for them, to amplify their messages. What we believe is happening now is that there’s more narrative laundering through real people. And that is going to be very hard to detect and that is going to be very hard to mitigate because it touches on free speech issues [of] real Americans.
This is where we feel that there’s a real need to try to understand inauthentic cross-platform distribution, and the only way that we can do that, that we can detect inauthentic narratives across platforms, is through collaboration with third-party researchers and government.
Caplan: There are a few large, broad concerns that I think platforms are going to still have problems addressing — how to make policies against disinformation consistent and contextual while allowing and enabling other forms of communication to flourish … Microtargeting has also got to stop.
That is really where we’re seeing the spread of disinformation continue to flourish … These platforms say that as long as people are advertising on their networks, they want the ads to be relevant to individual users, and I completely understand that. As a user, I tend to actually like when ads are targeted specifically toward me. When it comes to political issue ads, though, that does not need to be the case. We should not have the functionality to microtarget political ads.
Scott: Ad transparency. That’s the easiest place for all the companies to be transparent, because there’s a transaction behind every one of those pieces of content: someone bought the right to put an ad in front of you. And that’s much easier to regulate and control as a platform than user-generated content from any given account. They have all put forward different forms of transparency on political ads, but they differ in terms of how far they’ve been willing to go.
Senator Warner: I think there are still more questions about identification [of bots]. This is technically hard to do, I will acknowledge, but if somebody is tweeting or posting a message on Facebook that says “This is Mark Warner from Alexandria, Virginia,” but the post is originating in Saint Petersburg, Russia — hell, maybe you don’t take it down, but maybe you indicate that it is not being pushed from where this person says it is.
Source: VentureBeat