The face-recognition app Mobile Fortify, now used by United States immigration agents in cities and towns across the US, was not designed to reliably identify individuals on the street and was deployed without the scrutiny that has traditionally governed the rollout of technologies that impact people’s privacy, according to records reviewed by WIRED.
The Department of Homeland Security launched Mobile Fortify in the spring of 2025 to “identify or verify” the identities of people stopped or detained by DHS officers during federal operations, records show. DHS explicitly linked the rollout to an executive order, signed by President Donald Trump on his first day in office, which called for a “total and efficient” crackdown on undocumented immigrants through the use of expedited removals, expanded detention, and funding pressure on states, among other tactics.
Despite DHS repeatedly framing Mobile Fortify as a tool for identifying people through facial recognition, the app does not actually “verify” the identities of people stopped by federal immigration agents, a well-known limitation of the technology and a function of how Mobile Fortify is designed and used.
“Every manufacturer of this technology, every police department with a policy makes very clear that face recognition technology is not capable of providing a positive identification, that it makes mistakes, and that it’s only for generating leads,” says Nathan Wessler, deputy director of the American Civil Liberties Union’s Speech, Privacy, and Technology Project.
Records reviewed by WIRED also show that DHS’s hasty approval of Fortify last May was enabled by dismantling centralized privacy reviews and quietly removing department-wide limits on facial recognition, changes overseen by a former Heritage Foundation lawyer and Project 2025 contributor who now serves in a senior DHS privacy role.
DHS, which has declined to detail the methods and tools that agents are using despite repeated calls from oversight officials and nonprofit privacy watchdogs, has used Mobile Fortify to scan the faces not only of “targeted individuals” but also of people later confirmed to be US citizens and others who were observing or protesting enforcement activity.
Reporting has documented federal agents telling residents they were being recorded with facial recognition and that their faces would be added to a database without consent. Other accounts describe agents treating accent, perceived ethnicity, or skin color as a basis to escalate encounters, then using face scanning as the next step once a stop is underway. Together, the cases illustrate a broader shift in DHS enforcement toward low-level street encounters followed by biometric capture like face scans, with limited transparency around the tool’s operation and use.
Fortify’s technology mobilizes facial capture hundreds of miles from the US border, allowing DHS to generate nonconsensual face prints of people who, “it is conceivable,” DHS’s Privacy Office says, are “US citizens or lawful permanent residents.” As with the circumstances surrounding its deployment to agents with Customs and Border Protection and Immigration and Customs Enforcement, Fortify’s functionality is visible today primarily through court filings and sworn agent testimony.
In a federal lawsuit this month, attorneys for the State of Illinois and the City of Chicago said the app had been used “in the field over 100,000 times” since launch.
In Oregon testimony last year, an agent said two photos of a woman in custody taken with his face-recognition app produced different identities. The woman was handcuffed and looking downward, the agent said, prompting him to physically reposition her to obtain the first photo. The movement, he testified, caused her to yelp in pain. The app returned a name and photo of a woman named Maria, a match the agent rated “a maybe.”
Agents called out the name, “Maria, Maria,” to gauge her response. When she failed to respond, they took another photo. The agent testified the second result was “possible,” but added, “I don’t know.” Asked what supported probable cause, the agent cited the woman speaking Spanish, her presence with others who appeared to be noncitizens, and a “possible match” through facial recognition. The agent testified that the app did not indicate how confident the system was in a match. “It’s just an image, your honor. You have to look at the eyes and the nose and the mouth and the lips.”

