“A nice little database” of “domestic terrorists”
ICE, the Anthropic case, and how to stop mass domestic surveillance

On January 23, Colleen Fagan stood in a parking lot in Maine and watched federal agents conduct an immigration operation. Fagan, who would tell you with great pride that she is a “lifelong Mainer,” didn’t interfere. She didn’t block anyone. She recorded what she saw — something Americans have an unambiguous constitutional right to do.
A masked man started recording her back. She asked why.
“Cause we have a nice little database,” he sneered. “And now you’re considered a domestic terrorist.”
Two days earlier, Elinor Hilton was observing ICE agents outside a Home Depot when they similarly began filming her.
“He said ‘We’re putting you on a domestic terrorist watch list, and if you keep coming to things like this, we’re going to come to your house later and arrest you,’” Hilton recounted in an interview.
Protect Democracy is suing on behalf of Hilton, Fagan, and other Mainers who have faced similar intimidation attempts. (Read more: Hilton v. Noem et al.)
The same tactics have also been documented in Oregon and Minnesota. Federal agents in multiple cities have surveilled and intimidated community members who observed immigration enforcement operations. The coordination is too consistent to be coincidental.
We don’t know exactly how and to what extent the Trump administration is already surveilling American citizens as the president seeks to intimidate and retaliate against critics. We do, however, know that many in our government are hell-bent on developing unprecedented tools, systems, and databases to keep watch on those they see as “enemies from within.”
The surveillance we’re already seeing in places like Maine may just be the beginning.
If the president gets his way in the months ahead — especially in his administration’s ongoing campaign to control the AI industry — we could be entering a new and Orwellian era for domestic surveillance.
The Anthropic fight is over the future of surveillance and democracy
Earlier this week, AI company Anthropic filed suit against the Defense Department for designating the company a security threat.
On the surface, the dispute concerns the Trump administration’s attempt to essentially take over the company “by force and coerce them into maintaining a product for uses they fundamentally oppose,” as Ian Bassin and Nicole Schneidman explained last week: A steel mill the president might not be able to seize.
In a larger sense, though, this fight is over the future of artificial intelligence in our democracy. The Defense Department tried to force Anthropic to hand over its AI technology while refusing to agree that it would not use that technology in two ways:
Mass domestic surveillance.
Lethal autonomous weapons without a human operator.
Anthropic refused to provide its technology without the Defense Department agreeing to these limitations. And so the federal government is attempting, in retaliation, to destroy Anthropic.
Shortly after Anthropic filed its lawsuit, a group of 37 AI scientists and researchers at competitors OpenAI and Google — including Jeff Dean, the chief scientist of Google’s AI division — filed an amicus brief supporting Anthropic’s position. (Disclosure: My colleagues at Protect Democracy are counsel for the brief.)
Remember, these are people who work for Anthropic’s competitors in the most intense corporate arms race in modern history. And they stood up to defend Anthropic and the red lines that it has drawn against the federal government.
The entire brief is worth your time, but I especially recommend the section on surveillance. Here are the most important parts:
At its core, AI-enabled mass surveillance means the ability to monitor, analyze, and act on the behavior of an entire population continuously and in real time. The devices and data streams required to do this already exist. As of 2018, there were approximately 70 million surveillance cameras operating in the United States across airports, subway stations, parking lots, storefronts, and street corners. Every smartphone continuously broadcasts location data to carriers and dozens of applications. Credit and debit cards generate a timestamped record of nearly every commercial transaction Americans make. Social media platforms log not just what people post, but what they read, how long they browse, and what they posted before deleting it. Employers, insurers, and data brokers have assembled behavioral profiles on most American adults that are already, in many cases, available for government purchase without a warrant. What does not yet exist is the AI layer that transforms this sprawling, fragmented data landscape into a unified, real-time surveillance apparatus. Today, these streams are siloed, inconsistent, and require significant human effort to connect. From our vantage point at frontier AI labs, we understand that an AI system used for mass surveillance could dissolve those silos, correlating face recognition data with location history, transaction records, social graphs, and behavioral patterns across hundreds of millions of people simultaneously.
The mere existence of such a capability in government hands — even if never activated against a specific individual — changes the character of public life in a democracy. Behavioral scientists and legal scholars have long documented what is sometimes called the “panopticon effect”: when people believe they may be observed, they modify their behavior as if they are always being observed, regardless of whether anyone is actually watching. The journalist thinks twice before calling a source inside the military, knowing the call could be logged and cross-referenced. The activist softens her public messaging, calculating that visibility now carries risk it didn’t carry before. The academic researcher avoids certain search terms — not because the research is wrong, but because she doesn’t want to surface in a database. None of these people have been targeted. None have been punished. But their behavior has already been constrained, and with it the democratic functions they serve — a free press, political organizing, open intellectual inquiry — have been quietly degraded. These chilling effects require no abuse, only the awareness that the capability exists.
History offers ample warning. The FBI’s COINTELPRO program, which ran from 1956 to 1971 and was exposed years later, demonstrated how domestic intelligence powers justified by security concerns were systematically turned against civil rights leaders, journalists, and political dissidents. The program did not merely surveil its targets. It fabricated evidence, sent anonymous letters designed to destroy marriages and careers, tipped off employers, and worked to discredit Martin Luther King, Jr. after he was awarded the Nobel Peace Prize. It operated for fifteen years before Congress learned of its existence. AI does not merely replicate those dangers — it multiplies them by orders of magnitude, automating at national scale what previously required hundreds of human operatives.
Further enhancing the risk terrain for AI’s deployment in this context, the Pentagon operates under a legal framework oriented toward external threats and warfighting, not domestic civil life. The Posse Comitatus Act, passed in 1878 in direct response to the use of federal troops to police American civilians during Reconstruction, reflects a constitutional tradition of keeping military power categorically separate from domestic governance. When the Pentagon acts domestically, it is operating in legal territory it was not designed for, with oversight structures that were not built to catch domestic abuses. That is in part why the bulk data collection programs by the Pentagon’s own National Security Agency (NSA), revealed by Edward Snowden in 2013, were so shocking and produced measurable chilling effects on lawful speech and inquiry. A study published in the Berkeley Technology Law Journal found statistically significant drops in traffic to Wikipedia articles on terrorism-related topics following the Snowden revelations, likely as ordinary people adjusted their online behavior in response to awareness that their searches were potentially being monitored.
The harms from building this infrastructure are not easily undone, as we understand in our field. Data collected on a population does not expire. A database of location records, behavioral profiles, and social graphs built today will still exist years from now, accessible to whoever controls it under whatever political conditions prevail then. That data would feed into an AI-powered surveillance infrastructure that, once constructed, tends to expand rather than contract. Agencies find new uses for existing capabilities, authorities get quietly reinterpreted, and the political cost of dismantling something already built is almost always higher than the cost of letting it continue and grow.
…We do not suggest that the Defendants intend to misuse such capabilities. We suggest that the question of intent is the wrong question. Democratic governance does not rest on the good intentions of those in power. It rests on structural constraints that make abuse difficult regardless of intent. AI-enabled mass domestic surveillance, deployed without transparent legal constraints and independent oversight, removes those structural protections in ways that no amount of good faith can replace.
[Read the whole brief.]
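One empirical claim in the brief, the drop in Wikipedia traffic after the Snowden revelations, rests on a simple before-and-after comparison of page-view levels. For readers curious about the mechanics, here is a minimal sketch of that kind of test in Python. The numbers are invented for illustration and are not the study’s actual data:

```python
# Toy illustration of a chilling-effect measurement: compare page-view
# levels before and after a disclosure date with Welch's t-test.
# All figures below are invented, not data from the Berkeley study.
from statistics import mean, stdev
from math import sqrt

# Hypothetical weekly page views for a sensitive article
before = [1050, 980, 1020, 1010, 990, 1030, 1000, 1015]
after = [880, 860, 905, 870, 890, 850, 875, 865]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(before, after)
drop_pct = 100 * (mean(before) - mean(after)) / mean(before)
print(f"t = {t:.1f}, traffic drop = {drop_pct:.1f}%")
```

A large positive t value on real data is what lets researchers say the post-disclosure drop is statistically significant rather than ordinary week-to-week noise.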
Six ways to keep the genie of AI-powered mass surveillance in the bottle
Despite what the masked agent told Colleen Fagan, Trump has probably not yet managed to weaponize domestic surveillance to its maximum potential.
The main legal safeguards that protect you from surveillance — the Fourth Amendment, the Electronic Communications Privacy Act (ECPA), the Foreign Intelligence Surveillance Act (FISA), the Privacy Act — are all still relatively intact. But none of these were designed with the immense surveillance capabilities of AI in mind.
Here are six strategies that can help ensure further dangerous lines do not get crossed:
First, legal vigilance — Courts and litigators must firmly stand up for First and Fourth Amendment rights, not just in cases like these where surveillance is directly at issue, but across all of the federal government’s ongoing attempts to intimidate and retaliate against critics. The courts must also uphold the industry guardrails that shore up vulnerabilities in the law when it comes to constraining AI-powered surveillance capabilities. And the government’s targets — like Colleen Fagan and Elinor Hilton, like Anthropic — must refuse to be intimidated and instead have the courage to take their government to court to stand up for their own rights, including private companies’ right to impose their own guardrails on the technology they are developing.
The Temporary Restraining Order hearing for Hilton v. Noem et al., the Maine ICE lawsuit, is scheduled for March 16. Read the full complaint and motion here.
Anthropic v. U.S. Department of War et al. will likely be heard in the coming weeks.
Second, solidarity and collective action — As with Anthropic, the Trump administration will continue to use a divide-and-conquer strategy to build surveillance capacity. Instead of targeting everyone at once, they will continue to pick out individual targets — whether it’s a company or a person — and try to intimidate them.
When someone is targeted, either by intimidation or surveillance, all of us must rush to their defense.
That’s exactly what the senior researchers at OpenAI and Google did when their competitors came under attack. We must all be ready to do the same.
Third, democratic oversight — While courts can protect individuals from the effects of surveillance and intimidation, our democratically elected representatives are the ones who must provide oversight and accountability to ensure that the federal government is staying within the law.
Last week, an impressive list of 35 national security, business, civil society, and technology leaders wrote to the House and Senate Armed Services Committees urging them to uphold their oversight responsibilities as well as to “establish clear statutory policy governing the use of artificial intelligence” on autonomous weapons and surveillance.
Until Congress acts, our freedoms will never be safe. Read their whole letter here.
Fourth, security — If you’re someone the Trump administration might consider a political enemy (and really, even if you’re not), now is the time to review your digital and information security practices. How easy would it be for a malicious government actor to get their hands on your private information?
For tips on how to keep your digital house in order, I recommend Protect Democracy’s new operational security best practices (this document is for coalitions specifically, but it’s useful for everyone).
Fifth, data protections — Surveillance is only as good as the data it has access to, and data access breaches can be catastrophic (for instance, the former DOGE-er who allegedly stole Social Security info on a thumb drive to take to his next job). By focusing on data weak points, we can protect against some of the worst abuses.
Last month, Nicole Schneidman and Edison Forman wrote about how to weaken ICE’s surveillance apparatus by protecting DMV data and interrogating companies that share automated license plate recognition (ALPR) data with ICE.
Read more: When ICE’s surveillance machine comes for Americans.
Sixth, public outcry — All of the above are important and effective, but they pale in comparison to the effect of widespread public outcry. If the American people firmly and loudly reject mass government surveillance, protesting, confronting their elected officials, and even voting them out if necessary, then I feel confident that we will never have to live under a mass domestic surveillance regime.
But if we shrug our shoulders, perhaps thinking there’s nothing we can do? Then I’m not so sure.
Did the video at the top, of the agent talking about a “nice little database” of domestic terrorists, make you angry?
If so — don’t keep it to yourself.