“This is *much* bigger than deplatforming”
Nicole Schneidman on how tech can defang election threats
We’re stuck in a tough place on tech platforms and democracy. Arguments around things like deplatforming and content moderation are angry, partisan and bitter.
But these conversations often miss the forest for the trees. 2024 is a pivotal election where threats abound — from violence and voter intimidation to election disinformation and foreign interference. And there are a bunch of practical, implementable things that tech platforms can be doing to prepare for and defang those threats. They just have to start doing them now.
That’s the conclusion of a new report by Nicole Schneidman, a technology policy strategist at Protect Democracy. Read her report here: The Shortlist: Seven Ways Platforms Can Prepare for the U.S. 2024 Election.
I asked her to talk through the surprisingly nuanced and tricky decision points facing Meta, Discord, TikTok, OpenAI and the other platforms in this unprecedented election.
Nicole, welcome to If you can keep it. Do you think it’s even possible for tech to handle a fraught election well?
Thanks for having me, Ben! I think it depends on how you define “well.” The truth is that there is no way to fully address the digital threats surrounding the 2024 election cycle. Period. Some bad actors will find ways to produce and distribute election-threatening content online, no matter what platforms do.
So I think of the question as risk management. What is the most that platforms can do to make bad outcomes less likely? That’s a reasonable standard.
Give us some flavor: what are some examples of what you think platforms need to be doing?
The good news is there is precedent we can look to! For example, platforms should once again make sure users can get accurate election information. Social media platforms should do things like set usage rate limits, i.e. caps on how much any individual account can spam their sites. Messaging platforms should go after fake accounts and networks. Generative AI platforms, which are the new kids on the block, should make it clear you can’t use their models for election interference.
This is the first time Protect Democracy has put out public recommendations for tech platforms. Why now?
U.S. elections and tech platforms have become increasingly intertwined. This is something we all see happening on our own feeds. Social media is both a critical tool — it’s how candidates and election officials communicate with voters — and a vector for election subversion narratives.
This double-edged nature of tech isn’t new, but the role of tech platforms is even more critical today. Believe it or not, the threat landscape is worse than in 2020, as challenging as that election was. Election officials face threats and harassment, and political violence is more likely. Platforms absolutely will be sources and channels for election information this cycle, and in this threat environment their choices are more critical than ever.
So bottom line – we’re not asking them to do everything. We’re just asking them to take simple, meaningful steps that can be fully executed before voting begins.
You strike me as a rare advocate in this space who has seen how the sausage is made. (Readers: Nicole was Head of Community Product Partnerships at Facebook through the 2022 election.) How does that impact your view on what platforms can and should be doing?
First and foremost, working at a platform helped me appreciate the complexity of the position they are in — how there aren’t easy answers or solutions when it comes to safeguarding democracy online. Trust and safety at the platforms is the ultimate cat-and-mouse game. There is no silver bullet and with finite time and resources, they have to prioritize and be pragmatic.
I’m also, frankly, quite aware of the limitations of reactive approaches, like content moderation and deplatforming. It’s like trying to pick out snowflakes in an avalanche. You simply can’t keep up with the sheer volume of usage of major platforms.
That’s what convinced me of how important proactive, rather than reactive, risk mitigation is. When you look at the threats, this is *so* much bigger than deplatforming, or what reactive approaches in general can address. So we have to think bigger.
What’s changed since 2020 in how the tech platforms think about and prepare for elections?
Since 2020, decisions by platforms, especially related to content moderation, have been subject to increased scrutiny, such as that from the Select Subcommittee on the Weaponization of the Federal Government.
In addition, the digital landscape has become much more fragmented. Today, the field of online platforms is no longer dominated, as it was in 2020, by a few legacy social media platforms. Instead, there are far more platforms in use, varying widely in design, content formats, user base and company size. They are no longer limited primarily to social media platforms, but also include messaging platforms (both encrypted and unencrypted) and generative AI platforms. What’s more, all of these platforms are interconnected – they don’t operate in silos, but instead together drive the dynamics of online information.
What’s going on with this Supreme Court case on the government coordinating with social media?
Next Monday, the Supreme Court is scheduled to hear oral arguments in Murthy v. Missouri. This lawsuit claims that federal officials violated the First Amendment by “significantly encouraging” social media platforms to suppress speech on their platforms. As a result, from July to October last year, parts of the federal government were prohibited from communicating with social media companies about removing, deleting or downranking content (with some specific exceptions).
No matter where you come down on its merits, this has chilled the public-private coordination that protected American voters in past elections from malicious foreign actors. For example, the FBI is reportedly no longer sharing information with social media companies regarding influence campaigns from Russia, China and Iran.
This is another example of the difficulty of platforms relying on content-specific judgments – and one more reason for platforms to also adopt proactive risk mitigation strategies. For example, one of our recommendations focuses on usage-rate limits that place a ceiling on how much a particular feature can be used in a defined period. This doesn’t require platforms to evaluate individual pieces of content; it instead places upfront guardrails to prevent extreme outlier usage of a feature.
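To make that contrast concrete, here is a minimal sketch (in Python) of what a usage-rate limit can look like. The feature, account IDs, and thresholds below are hypothetical, not drawn from the report or from any platform’s actual system; the point is simply that the guardrail counts how often a feature is used and never inspects what is being posted.

```python
# A minimal sketch of a usage-rate limit, assuming a hypothetical feature
# (say, message forwarding) capped per account in a rolling time window.
# Names and thresholds are illustrative only.
import time
from collections import defaultdict, deque


class UsageRateLimiter:
    """Caps how often each account can use a feature in a rolling window,
    without looking at the content of any individual post or message."""

    def __init__(self, max_uses: int, window_seconds: float):
        self.max_uses = max_uses
        self.window = window_seconds
        self._events: dict[str, deque] = defaultdict(deque)

    def allow(self, account_id: str) -> bool:
        now = time.monotonic()
        events = self._events[account_id]
        # Drop uses that have fallen outside the rolling window.
        while events and now - events[0] > self.window:
            events.popleft()
        if len(events) >= self.max_uses:
            return False  # Over the ceiling: throttle, regardless of content.
        events.append(now)
        return True


# Illustrative usage: cap any account at 50 forwards per hour.
limiter = UsageRateLimiter(max_uses=50, window_seconds=3600)
if limiter.allow("account-123"):
    print("forward allowed")    # proceed with the action
else:
    print("forward throttled")  # extreme outlier usage is blocked upfront
```

Note the design choice: the limiter makes no content-specific judgment at all, which is what distinguishes this kind of proactive guardrail from reactive moderation.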
Reading through the recommendations, this is really a short list! There are only four recs for each platform type (social, messaging, generative AI). Is this really all platforms need to do to help protect the 2024 election from threats?
In my time at Facebook, I would get recommendations from a range of external stakeholders. While I always appreciated their insights, the number of recommendations offered didn’t match the reality I faced — I had limited resources and had to focus on a few essential, but actionable, priorities.
Each of the recommendations is based on at least one platform’s precedent of adopting the measure or, in the case of genAI platforms, committing to adopt it. In addition, they were validated by consulting a range of experts, including engineers and Trust and Safety professionals, to assess the feasibility of executing them before voting starts. These are things that can absolutely be done in time for the November election.
So, by design, this list was not intended to be comprehensive — it’s the things that can actually get done that would make a meaningful difference.
Over the last few months, we’ve seen a few sets of recommendations made by other experts, pro-democracy players, think tanks and advocacy groups. What makes your recommendations different from other proposals?
First of all, there is lots of good wisdom out there and it’s great to see areas of overlap around some practical, impactful interventions!
A few differences: first, we broke down recommendations for each of three major platform categories – recognizing that no two platforms are designed or managed the same, and that social media, messaging, and genAI platforms offer totally different products. Second, this is just a priority list, not a wish list. Third, this is designed to be implementable — the recommendations require resources and tradeoffs, but they can be done. And finally, we try to move past largely content moderation-focused approaches and embrace risk mitigation — recognizing that perfect enforcement isn’t possible, but thoughtful and robust proactive guardrails against scaled production or viral distribution are.
Finally, just to reiterate: many of these recommendations are based on best practices that platforms have used in election contexts dating back, in some cases, to before 2020. The generative AI recommendations build on steps that platforms have either already taken or committed to taking this cycle. I’m hopeful that people at the platforms, folks like those on the teams I was on and worked with, will find the recommendations a practical list that helps them make the case for where to invest in election protection.
Again, you can find Nicole’s full report here.
For a taste of what else Nicole is thinking about, read: How generative AI could make existing election threats worse.
More soon.