
By Hannah Zhao and Matthew Guariglia / Electronic Frontier Foundation (EFF)
EFF, Just Futures Law, and 140 other groups have sent a letter to Secretary Alejandro Mayorkas demanding that the Department of Homeland Security (DHS) stop using artificial intelligence (AI) tools in the immigration system. For years, EFF has been monitoring and warning about the dangers of automated and so-called “AI-enhanced” surveillance at the U.S.-Mexico border. As we’ve made clear, algorithmic decision-making should never get the final say on whether a person should be policed, arrested, denied freedom, or, in this case, deemed worthy of a safe haven in the United States.
The letter is signed by a wide range of organizations, from civil liberties nonprofits to immigrant rights groups, to government accountability watchdogs, to civil society organizations. Together, we declared that DHS’s use of AI, defined by the White House as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” appeared to violate federal policies governing its responsible use, especially when it’s used as part of the decision-making regarding immigration enforcement and adjudications.
Read the letter here.
The letter highlighted the findings of a bombshell report published by Mijente and Just Futures Law on the use of AI and automated decision-making by DHS and its sub-agencies: U.S. Citizenship and Immigration Services (USCIS), Immigration and Customs Enforcement (ICE), and Customs and Border Protection (CBP). Despite laws, executive orders, and other directives establishing standards and processes for the evaluation, adoption, and use of AI by DHS—as well as DHS’s pledge that it “will not use AI technology to enable improper systemic, indiscriminate, or large-scale monitoring, surveillance or tracking of individuals”—the agency has seemingly relied on loopholes for national security, intelligence gathering, and law enforcement to avoid compliance with those requirements. This completely undermines any supposed attempt on the part of the federal government to use AI responsibly and contain the technology’s habit of merely digitizing and accelerating decisions based on preexisting biases and prejudices.
Even though AI’s efficacy is unproven, DHS has frenetically incorporated it into many of its functions. These products are often the result of partnerships with vendors who have aggressively pushed the idea that AI will make immigration processing more efficient, more objective, and less biased.
Yet the evidence begs to differ, or, at best, is mixed.
As the report notes, studies, including those conducted by the government, have recognized that AI has often worsened discrimination due to the reality of “garbage in, garbage out.” This phenomenon was visible in Amazon’s use—and subsequent scrapping—of AI to screen résumés, which favored male applicants because the data on which the program had been trained included more applications from men. The same pitfall arises in predictive policing products, something EFF categorically opposes, which often “predict” crimes in Black and Brown neighborhoods due to the prejudices embedded in the historical crime data used to design that software. Furthermore, AI tools are often deficient when used in complex contexts, such as the morass that is immigration law.
In spite of these grave concerns, DHS has incorporated AI decision-making into many levels of its operation without taking the necessary steps to properly vet the technology. According to the report, AI technology is part of USCIS’s process for determining eligibility for immigration benefits or relief, credibility in asylum applications, and the public safety or national security threat level of an individual. ICE uses AI to automate its decision-making on electronic monitoring, detention, and deportation.
At the same time, there is a disturbing lack of transparency regarding those tools. We urgently need DHS to be held accountable for its adoption of opaque and untested AI programs promulgated by those with a financial interest in the proliferation of the technology. Until DHS adequately addresses the concerns raised in the letter and report, the Department should be prohibited from using AI tools.

Hannah Zhao
Hannah Zhao is a staff attorney who focuses on criminal justice, privacy, and cybersecurity issues, and is part of the Coders’ Rights Project. Prior to joining EFF, she represented criminal defendants on appeal in state and federal courts in New York, Illinois, and Missouri, and also worked at the human rights NGO, Human Rights in China. While pursuing her law degree at Washington University in St. Louis, Hannah represented indigent defendants and refugee applicants in Durban, South Africa, and studied international law at Utrecht University in the Netherlands. In college, Hannah studied Computer Science and Management at Rensselaer Polytechnic Institute. In her spare time, she likes to climb things.

Matthew Guariglia
Matthew Guariglia is a policy analyst working on issues of surveillance and policing at the local, state, and federal level. He received a PhD in history from the University of Connecticut, where his research focused on the intersection of race, immigration, U.S. imperialism, and policing in New York City. He is the co-editor of The Essential Kerner Commission Report (Liveright, 2021), and his book Police in the Empire City is forthcoming from Duke University Press. His bylines have appeared in NBC News, the Washington Post, Slate, Motherboard, and the Freedom of Information-centered outlet Muckrock. Matthew is an affiliated scholar at the University of California, San Francisco School of Law and serves as an editor of “Disciplining the City,” a series on the history of urban policing and incarceration at the Urban History Association’s blog The Metropole.
