“Resist trying to make things better”: A conversation with internet security expert Alex Stamos.
I’m old enough to remember when the internet was going to be great news for everyone. Things have gotten more complex since then: We all still agree that there are lots of good things we can get from a broadband connection. But we’re also likely to blame the internet — and specifically the big tech companies that dominate it — for all kinds of problems.
And that blame-casting gets intense in the wake of major, calamitous news events, like the spectacle of the January 6 riot or its rerun in Brazil this month, both of which were seeded and organized, at least in part, on platforms like Twitter, Facebook, and Telegram. But how much culpability and power should we really assign to tech?
I think about this question all the time but am more interested in what people who actually study it think. So I called up Alex Stamos, who does this for a living: Stamos is the former head of security at Facebook who now heads up the Stanford Internet Observatory, which does deep dives into the ways people abuse the internet.
The last time I talked to Stamos, in 2019, we focused on the perils of political ads on platforms and the tricky calculus of regulating and restraining those ads. This time, we went broader, but also more nuanced: On the one hand, Stamos argues, we have overestimated the power of the likes of Russian hackers to, say, influence elections in the US. On the other hand, he says, we’re likely overlooking the ability state actors have to influence our opinions on topics we don’t know much about.
You can hear our entire conversation on the Recode Media podcast. The following are edited excerpts from our chat.
Peter Kafka
I want to ask you about two very different but related stories in the news: Last Sunday, people stormed government buildings in Brazil in what looked like their version of the January 6 riot. And there was an immediate discussion about what role internet platforms like Twitter and Telegram played in that incident. The next day, there was a study published in Nature that looked at the effect of Russian interference on the 2016 election, specifically on Twitter, which concluded that all the misinformation and disinformation the Russians tried to sow had essentially no impact on that election or on anyone’s views or actions. So are we collectively overestimating or underestimating the impact of misinformation and disinformation on the internet?
Alex Stamos
I think what has happened is there was a massive overestimation of the capability of mis- and disinformation to change people’s minds — of its actual persuasive power. That doesn’t mean it’s not a problem, but we have to reframe how we look at it — as less of something that is done to us and more of a supply and demand problem. We live in a world where people can choose to seal themselves into an information environment that reinforces their preconceived notions, that reinforces the things they want to believe about themselves and about others. And in doing so, they can participate in their own radicalization. They can participate in fooling themselves, but that is not something that’s necessarily being done to them.
Peter Kafka
But now we have a playbook for whenever something awful happens, whether it’s January 6 or what we saw in Brazil or things like the Christchurch shooting in New Zealand: We ask, “What role did the internet play in this?” And in the case of January 6 and in Brazil, it seems pretty evident that the people who were organizing those events were using internet platforms to actually put that stuff together. And then before that, they were seeding the ground for this disaffection and promulgating the idea that elections were stolen. So can we hold both things in our head at the same time — that we’ve overestimated the effect of Russians reinforcing our filter bubbles, even as state and non-state actors use the internet to make bad things happen?
Alex Stamos
I think so. What’s going on in Brazil is a lot like January 6: the platforms are interacting with a broad disaffection among people who are angry about the election, and that anger is really being driven by political actors. So for all of these things, almost all of it we’re doing to ourselves. The Brazilians are doing [it] to themselves. We have political actors who don’t really believe in democracy anymore, who believe that they can’t actually lose elections. And yes, they are using platforms to get around the traditional media and communicate with people directly. But it’s not foreign interference. And especially in the United States, direct communication with your political supporters via these platforms is First Amendment-protected.
Separately from that, on a much shorter timescale, you have the actual organizational stuff that’s going on. For January 6, we have all this evidence coming out from the people who have been arrested and whose phones have been seized. And so you can see Telegram chats, WhatsApp chats, iMessage chats, Signal, all of these real-time communications. You see the same thing in Brazil.
And for that, I think the discussion is complicated because that is where you end up with a straight trade-off on privacy — that the fact that people can now create groups where they can privately communicate, where nobody can monitor that communication, means that they have the ability to put together what are effectively conspiracies to try to overthrow elections.
Peter Kafka
The throughline here is that after one of these events happens, we collectively say, “Hey, Twitter or Facebook or maybe Apple, you let this happen, what are you going to do to prevent it from happening again?” And sometimes the platforms say, “Well, this wasn’t our fault.” Mark Zuckerberg famously said that idea was crazy after the 2016 election.
Alex Stamos
And then [former Facebook COO Sheryl Sandberg] did that again, after January 6.
“Resist trying to make things better”
Peter Kafka
And then you see the platforms do whack-a-mole to solve the last problem.
I’m going to further complicate it, because I want to bring the pandemic into this — where at the beginning, we asked the platforms, “What are you going to do to help make sure that people get good information about how to handle this novel disease?” And they said, “We’re not going to make these decisions. We’re not epidemiologists. We’re going to follow the advice of the CDC and governments around the world.” And in some cases, that information was contradictory or wrong, and they’ve had to backtrack. And now we’re seeing some of that play out with the release of the Twitter Files, where people are saying, “I can’t believe the government asked Twitter to take down so-and-so’s tweet or account because they were telling people to go use ivermectin.”
I think the most generous way of viewing the platforms in that case — which is a view I happen to agree with — is that they were trying to do the right thing. But they’re not really built to handle a pandemic, or to sort good information from bad information about one on the internet. But there are a lot of folks who believe — I think quite sincerely — that the platforms really shouldn’t have any role moderating this at all. That if people want to say, “Go ahead and try this horse dewormer, what’s the worst that could happen?” they should be allowed to do it.
So you have this whole stew of stuff where it’s unclear what role the government should have in working with the platforms, and what role the platforms should have at all. So should platforms be involved in trying to stop mis- or disinformation? Or should we just say, “This is like climate change, it’s a fact of life, and we’re all going to have to sort of adapt to this reality”?
Alex Stamos
The fundamental problem is that there’s a fundamental disagreement inside people’s heads — people are inconsistent about what responsibility they believe information intermediaries should have for making society better. People generally believe that if something is against their side, the platforms have a huge responsibility. And if something is on their side, [the platforms] should have no responsibility. It’s extremely rare to find people who are consistent about this.
As a society, we have gone through these information revolutions — the creation of the printing press created hundreds of years of religious war in Europe. Nobody’s going to say we should not have invented the printing press. But we also have to recognize that allowing people to print books created lots of conflict.
I think that the responsibility of platforms is to try to not make things worse actively — but also to resist trying to make things better. If that makes sense.
Peter Kafka
No. What does “resist trying to make things better” mean?
Alex Stamos
I think the legitimate complaint behind a bunch of the Twitter Files is that Twitter was trying too hard to make American society and world society better, to make humans better. What Twitter and Facebook and YouTube and other companies should focus on is, “Are we building products that are specifically making some of these problems worse?” The focus should be on the active decisions they make, not on the passive carrying of other people’s speech. And so if you’re Facebook, your responsibility is — if somebody is into QAnon, you do not recommend to them, “Oh, you might want to also storm the Capitol. Here’s a recommended group or here’s a recommended event where people are storming the Capitol.”
That is an active decision by Facebook — to make a recommendation to somebody to do something. That is very different from going and hunting down every closed group where people are talking about ivermectin and other kinds of folk cures incorrectly. If people are wrong, going and trying to make them better by hunting them down, hunting down their speech, and then changing it or pushing information on them is the kind of impulse that probably makes things worse. I think that is a hard balance to strike.
Where I try to come down on this is: Be careful about your recommendation algorithms, your ranking algorithms, about product features that make things intentionally worse. But also draw the line at going out and trying to make things better.
The great example that everyone is spun up about is the Hunter Biden laptop story. Twitter and Facebook, in doing anything about that, I think overstepped, because whether the New York Post does not have journalistic ethics or whether the New York Post is being used as part of a hacking leak campaign is the New York Post’s problem. It is not Facebook’s or Twitter’s problem.
“The reality is that we have to have these kinds of trade-offs”
Peter Kafka
Something that people in tech used to say out loud, prior to 2016, was that when you make a new thing in the world, ideally you’re trying to make it so it’s good, so it’s to the benefit of the world. But there are going to be trade-offs, pros and cons. You make cars, and cars do lots of great things, and we need them — and they also cause lots of deaths. And we live with that trade-off and we try to make cars safer. But we live with the idea that there are going to be downsides to this stuff. Are you comfortable with that framework?
Alex Stamos
It’s not whether I’m comfortable or not. That’s just the reality. With any technological innovation, you’re going to have some kind of balancing act. The problem is, our political discussion of these things never takes those balances into account. If you are super into privacy, then you also have to recognize that when you provide people private communication, some subset of people will use it in ways that you disagree with, in ways that are illegal, and in some cases in ways that are extremely harmful. The reality is that we have to have these kinds of trade-offs.
These trade-offs have been obvious in other areas of public policy: You lower taxes, you have less revenue. You have to spend less.
Those are the kinds of trade-offs that people in the tech policy world don’t understand as well. And policymakers certainly understand them even less.
Peter Kafka
Are there practical things that government can impose in the US and other places?
Alex Stamos
The government in the United States is very restricted by the First Amendment [from] pushing the platforms to change speech. Europe is where the rubber’s really hitting the road. The Digital Services Act creates a bunch of new responsibilities for platforms. It’s not incredibly specific in this area, but that is where, from a democratic perspective, there will be the most conflict over responsibility. And then in Brazil and India and other democracies that are backsliding toward authoritarianism, you see much more aggressive censorship of political enemies. That is going to continue to be a real problem around the world.
Peter Kafka
Over the years, the big platforms built pretty significant apparatuses to try to moderate themselves. You were part of that work at Facebook. And we now seem to be going through a real-time experiment at Twitter, where Elon Musk has said that, ideologically, he doesn’t think Twitter should be moderating anything beyond actual criminal activity. And beyond that, it costs a lot of money to employ these people and Twitter can’t afford it, so he’s getting rid of basically everyone who was involved in moderation and in fighting disinformation. What effect do you imagine that will have?
Alex Stamos
It is open season. If you are the Russians, if you’re Iran, if you’re the People’s Republic of China, if you are a contractor working for the US Department of Defense, it is open season on Twitter. Twitter’s absolutely your best target.
Again, the quantitative evidence is that we don’t have a lot of great examples where people have made massive changes to public beliefs [because of disinformation]. I do believe there are some exceptions, though, where this is going to be really impactful on Twitter. One is in areas of discussion that are “thinly traded.”
The battle between Hillary Clinton and Donald Trump was the most discussed topic on the entire planet Earth in 2016. So no matter what [the Russians] did with ads and content, it was nothing, absolutely nothing compared to the amount of content that was on social media about the election. It’s just a tiny, tiny, tiny drop in the ocean. One article about Donald Trump is not going to change your mind about Donald Trump. But one article about Saudi Arabia’s war [against Yemen] might be the only thing you consume on it.
The other area where I think it’s going to be really effective is in attacking individuals and trying to harass individuals. This is what we’ve seen a lot out of China. Especially if you’re a Chinese national and you leave China and you’re critical of the Chinese government, there will be massive campaigns lying about you. And I think that is what’s going to happen on Twitter — if you disagree, if you take a certain political position, you’re going to end up with hundreds or thousands of people saying you should be arrested, that you’re scum, that you should die. They’ll do things like send photos of your family without any context. They’ll do it over and over again. And this is the kind of harassment we’ve seen out of QAnon and such. And I think that Twitter is going to continue down that direction — if you take a certain political position, massive troll farms have the ability to try to drive you offline.
“Gamergate every single day”
Peter Kafka
Every time I see a story pointing out that such-and-such disinformation exists on YouTube or Twitter, I think that you could write these stories in perpetuity. Twitter or YouTube or Facebook may crack down on a particular issue, but it’s never going to get out of this cycle. And I wonder whether our efforts are misplaced, and whether we should stop spending so much time pointing out that this thing is wrong on the internet and instead be doing something else. But I don’t know what the other thing is. I don’t know what we should be doing. What should we be thinking about?
Alex Stamos
I’d like to see more stories about the specific attacks against individuals. I think we’re moving into a world where effectively it is Gamergate every single day — that there are politically motivated actors who feel like it is their job to try to make people feel horrible about themselves, to drive them off the internet, to suppress their speech. And so that is less about broad persuasion and more about the use of the internet as a pitched battlefield to personally destroy people you disagree with. And so I’d like to see more discussion and profiles of the people who are under those kinds of attacks. We’re seeing this right now. [Former FDA head] Scott Gottlieb, who is on the Pfizer board, is showing up in the [Twitter Files] and he’s getting dozens and dozens of death threats.
Peter Kafka
What can someone listening to this conversation do about any of this? They’re concerned about the state of the internet, the state of the world. They don’t run anything. They don’t run Facebook. They’re not in government. Beyond checking on their own personal privacy to make sure their accounts haven’t been hacked, what can and should someone do?
Alex Stamos
A key thing everybody needs to do is to be careful with their own social media use. I have made the mistake of retweeting the thing that tickled my fancy, that fit my preconceived notions and then turned out not to be true. So I think we all have an individual responsibility — if you see something amazing or radical that makes you feel something strongly, that you ask yourself, “Is this actually true?”
And then the hard part is, if you see members of your family doing that, having a hard conversation with them about it. Because there’s good social science evidence that a lot of this is a boomer problem. Both on the left and the right, a lot of this stuff is being spread by folks of our parents’ generation.
Peter Kafka
I wish I could say that’s just a boomer problem. But I’ve got a teen and a pre-teen, and I don’t think they’re necessarily more savvy about what they’re consuming on the internet than their grandparents.
Alex Stamos
Interesting.
Peter Kafka
I’m working on it.