The recent flashpoint in this evolving standoff came when the Pentagon expressed interest in using Anthropic’s AI, Claude, to analyze vast quantities of commercial data collected from Americans. Anthropic, however, stipulated that its AI must not be used for mass domestic surveillance or to operate autonomous weapons systems, machines capable of lethal action without direct human oversight. Just a week after these negotiations faltered, the Pentagon took the unusual step of designating Anthropic a "supply chain risk," a classification typically reserved for foreign entities seen as threats to national security. The move heightened concerns about the government’s intentions.

In the wake of this development, OpenAI, the rival AI company best known for ChatGPT, quickly secured a deal with the Pentagon. The agreement reportedly granted the Department of War permission to use its AI for "all lawful purposes." Critics immediately decried the broad language, arguing that it left the door wide open to domestic surveillance. The public reaction was swift and significant. Over the following weekend, users began uninstalling ChatGPT in large numbers, with reports indicating a nearly 300% increase in uninstalls. Protesters took to the streets of San Francisco, chalking pointed questions like "What are your redlines?" around OpenAI’s headquarters, directly challenging the company’s ethical boundaries.

Responding to the intense public scrutiny, OpenAI announced on Monday that it had revised its agreement with the Department of War. The company affirmed that its AI would not be used for domestic surveillance and explicitly stated that its services would not be made available to intelligence agencies such as the NSA. OpenAI CEO Sam Altman suggested that existing U.S. law already prohibits domestic surveillance by the Department of War, and that the company’s contract simply needed to reflect that established legal framework. As he put it on X (formerly Twitter): "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement." Anthropic CEO Dario Amodei offered a contrasting view. In a policy statement, he argued, "To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI." The divergence between two prominent AI leaders underscores how unsettled the legal situation is.

So, who is correct? Does the law permit the Pentagon to surveil Americans using AI? The answer hinges on what counts as "surveillance" in the eyes of the law, a definition that often diverges from common understanding. Alan Rozenshtein, a law professor at the University of Minnesota Law School, points out that "A lot of stuff that normal people would consider a search or surveillance… is not actually considered a search or surveillance by the law." In practice, this means information in the public domain, such as social media posts, surveillance camera footage, and voter registration records, is generally fair game for government access. Data about Americans that is incidentally collected during the surveillance of foreign nationals can also be legally acquired and used.

Perhaps most significantly, the U.S. government can simply purchase commercial data from private companies, data that can include highly sensitive personal information such as precise mobile location data and detailed web browsing histories. In recent years, federal agencies ranging from Immigration and Customs Enforcement (ICE) and the Internal Revenue Service (IRS) to the Federal Bureau of Investigation (FBI) and the NSA have increasingly tapped this "data marketplace." Fueled by an internet economy that thrives on harvesting user data for targeted advertising, the market gives government entities access to information that would otherwise require a warrant or subpoena to obtain.

As Rozenshtein elaborates, "There’s a huge amount of information that the government can collect on Americans that is not itself regulated either by the Constitution, which is the Fourth Amendment, or statute." Compounding the problem, there are few meaningful legal limits on what the government can do with the data once it has it. The regulatory gap exists because, until recent decades, individuals simply did not generate the massive, interconnected clouds of digital data that characterize the modern era and that now create unprecedented opportunities for surveillance. The Fourth Amendment, intended to protect against unreasonable searches and seizures, was drafted in an era when gathering information primarily meant physically entering people’s homes.

Subsequent legislative efforts, such as the Foreign Intelligence Surveillance Act of 1978 (FISA) and the Electronic Communications Privacy Act of 1986 (ECPA), were enacted when surveillance largely meant wiretapping phone calls or intercepting emails. Most surveillance laws on the books predate the widespread adoption of the internet; they were not designed to address the vast trails of online data now being generated or the government’s growing capacity to analyze them with sophisticated tools.

Now, with the advent of advanced AI, the scope and intensity of surveillance have been dramatically amplified. "What AI can do is it can take a lot of information, none of which is by itself sensitive, and therefore none of which by itself is regulated, and it can give the government a lot of powers that the government didn’t have before," explains Rozenshtein. AI possesses the remarkable ability to aggregate disparate pieces of information, identify subtle patterns, draw complex inferences, and construct highly detailed profiles of individuals, all on an unprecedented scale. As long as the government collects this information through legally permissible channels, it faces few restrictions on how it can be utilized, including feeding it into AI systems for analysis. Rozenshtein’s stark assessment is that "The law has not caught up with technological reality."

While the potential for widespread surveillance understandably raises serious privacy concerns, the Pentagon also has legitimate national security interests that can require collecting and analyzing data about Americans. Loren Voss, a former military intelligence officer at the Pentagon, notes, "In order to collect information on Americans, it has to be for a very specific subset of missions." Counterintelligence operations, for instance, might require gathering information on an American citizen collaborating with a foreign adversary or planning acts of international terrorism. Even targeted intelligence gathering, however, can sweep in broader datasets, a reality that Voss concedes "does make people nervous."

Regarding the specifics of the OpenAI-Pentagon agreement, the company stated that its revised contract now includes language stipulating that its AI system "shall not be intentionally used for domestic surveillance of U.S. persons and nationals," in accordance with applicable laws. This amendment further clarifies that such use is prohibited, including "deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

However, legal experts caution that this added language might not effectively override the existing clause allowing the Pentagon to use the company’s AI for "all lawful purposes." Jessica Tillipman, a law professor at George Washington University Law School, suggests, "OpenAI can say whatever it wants in its agreement… but the Pentagon’s gonna use the tech for what it perceives to be lawful." This interpretation could still permit domestic surveillance. As Tillipman starkly puts it, "Most of the time, companies are not going to be able to stop the Pentagon from doing anything."

Furthermore, the amended language leaves unresolved critical questions about inadvertent surveillance, as well as the surveillance of foreign nationals or undocumented immigrants living in the United States. "What happens when there’s a disagreement about what the law is, or when the law changes?" Tillipman asks, highlighting the uncertainty. OpenAI has not publicly released the full text of its revised contract, nor did it respond to a request for comment.

Beyond contractual stipulations, OpenAI has indicated its intention to implement technical safeguards, including a "safety stack," designed to monitor and block prohibited uses of its AI. The company also plans to deploy its own employees to collaborate with the Pentagon and maintain oversight. However, the efficacy of such a "safety stack" in constraining the Pentagon’s use of AI remains unclear, as does the extent to which OpenAI employees will have genuine visibility into the operational deployment of its systems. More fundamentally, it is uncertain whether the contract grants OpenAI the unilateral power to halt a legally sanctioned use of the technology by the government.

Yet, the absence of such absolute power from a private company might not be entirely undesirable. Granting an AI company the authority to unilaterally disable its technology during critical government operations carries its own set of significant risks. Voss articulates this concern: "You wouldn’t want the US military to ever be in a situation where they legitimately needed to take actions to protect this country’s national security, and you had a private company turn off technology." Nevertheless, she emphasizes that this does not negate the need for clear boundaries to be established by Congress.

Ultimately, none of these questions are simple. They involve deeply challenging trade-offs between individual privacy and national security, which suggests such decisions should be made through public debate and legislative action rather than closed-door negotiations between the executive branch and a handful of AI companies. For now, AI’s role in surveillance is being governed largely by contracts, not comprehensive legislation.

Some lawmakers, however, are beginning to address the issue. Senator Ron Wyden of Oregon has said he will seek bipartisan support for legislation to curb mass surveillance. He has long championed bills to restrict the government’s ability to purchase commercial data, including the Fourth Amendment Is Not For Sale Act, first introduced in 2021 but never enacted. In a recent statement, Wyden declared, "Creating AI profiles of Americans based on that data represents a chilling expansion of mass surveillance that should not be allowed." The debate over AI and surveillance is far from over, and the legal and ethical frameworks governing it are still very much under construction.