404 Media

A journalist-founded digital media company exploring the ways technology is shaping–and is shaped by–our world.

ICE to Buy Tool that Tracks Locations of Hundreds of Millions of Phones Every Day

2025-10-01 05:46:34

Immigration and Customs Enforcement (ICE) has bought access to a surveillance tool that is updated every day with billions of pieces of location data from hundreds of millions of mobile phones, according to ICE documents reviewed by 404 Media.

The documents explicitly show that ICE is choosing this product over others offered by the contractor’s competitors because it gives ICE essentially an “all-in-one” tool for searching both masses of location data and information taken from social media. The documents also show that ICE is planning to once again use location data remotely harvested from people’s smartphones after previously saying it had stopped the practice.

💡
Do you know anything else about this contract or others? Do you work at Penlink or ICE? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at [email protected].

Surveillance contractors around the world create massive datasets of phones’, and by extension people’s, movements, and then sell access to the data to government agencies. In turn, U.S. agencies have used these tools without a warrant or court order.

“The Biden Administration shut down DHS’s location data purchases after an inspector general found that DHS had broken the law. Every American should be concerned that Trump's hand-picked security force is once again buying and using location data without a warrant,” Senator Ron Wyden told 404 Media in a statement.

In Unhinged Speech, Pete Hegseth Says He's Tired of ‘Fat Troops,’ Says Military Needs to Go Full AI

2025-09-30 23:14:42

Last week, Secretary of War Pete Hegseth called America’s Generals to Quantico to meet for an unknown reason. America’s top civilian military leader calling the generals home all at once is strange and unprecedented. It’s the kind of move that often presages something like a major war. But that’s not what he wanted. During a bizarre, unhinged speech before America’s military leadership, Hegseth focused almost entirely on the culture wars and called for the restoration of what he called a “warrior ethos.” He said some of America’s generals are fat, demanded the Pentagon go all in on AI, whined about beards and accountability, told the troops they “kill people and break things for a living,” and plugged his book.

Google Just Removed Seven Years of Political Advertising History from 27 Countries

2025-09-30 21:36:40

Google’s Ad Transparency tool no longer shows political online advertisements that ran, or are currently running, on its platforms in any country in the European Union, making seven years of data from 27 countries inaccessible.

Liz Carolan, who publishes the Irish technology and politics newsletter The Briefing, spotted the change on September 28. Carolan noticed that until last week, Google’s Ad Transparency tool let visitors search ads that had run in EU countries going back to 2018, including data about who was targeted, how much was spent on each ad, and which candidates or parties each ad was for. This week, political ads from Ireland, as well as from the other 26 countries in the EU, are gone from the Ad Transparency political ads country selection page.

“We had been told that Google would try to stop people placing political ads, a ‘ban’ that was to come into effect this week. I did not read anywhere that this would mean the erasure of this archive of our political history,” Carolan wrote. 

The change is in response to the EU’s upcoming Regulation on Transparency and Targeting of Political Advertising (TTPA), a law set to enter full force on October 10. The TTPA lays out new rules for advertisers in the EU, including requirements that political ads “must be clearly labelled as such and include information on who paid for it, to which election, referendum, legislative or regulatory process it is linked and whether targeting or ad-delivery techniques have been used,” according to an EU summary of the law. The law also limits the targeting and delivery of political ads to strict conditions, including requiring consent from the people an ad targets before their data can be used for political advertising. Certain categories of demographic data, like racial or ethnic origin or political opinions, can’t be used for profiling by advertisers.

[Image: How Google’s Ad Transparency Center supported countries list looks as of today.]

On August 5, Google posted new guidelines for political ads in EU countries, and said that past ads would still be accessible in the Transparency Center: “As of September 2025, the EU Political Ads Transparency report will be no longer available. However, EU Election Ads previously shown in the Political Ads Transparency Report will remain publicly accessible in the Ads Transparency Center, subject to retention policies.” 

In July, Meta also announced it would no longer allow “political, electoral and social issue ads” on its platforms in the EU, “given the unworkable requirements and legal uncertainties” introduced by the TTPA. Past ads from the EU are still visible on Meta’s ad library.

The law dictates that online ads will be available in “an online European repository,” but that repository hasn’t launched yet. Researchers and journalists rely on tools like Google’s Ad Transparency platform and Meta’s similar platform for information on who was running political ads and how; now, they’ll have to wait for that repository to launch.

Google announced in November 2024 that it would stop serving political ads in the EU in October 2025, ahead of the TTPA. “Additionally, paid political promotions, where they qualify as political ads under the TTPA, will no longer be permitted on YouTube in the EU,” Google’s Vice President for Government Affairs and Public Policy for Europe Annette Kroeber-Riel wrote in a company blog post.

“The European Union’s upcoming Regulation on Transparency and Targeting of Political Advertising (TTPA) unfortunately introduces significant new operational challenges and legal uncertainties for political advertisers and platforms,” Kroeber-Riel wrote. “For example, the TTPA defines political advertising so broadly that it could cover ads related to an extremely wide range of issues that would be difficult to reliably identify at scale. There is also a lack of reliable local election data permitting consistent and accurate identification of all ads related to any local, regional or national election across any of 27 EU Member States. And key technical guidance may not be finalized until just months before the regulation comes into effect.” The law is vague, but doesn’t specifically require platforms to delete past ads. It’s likely, however, that many of the ads stored by Google in the Transparency Center would violate the law today; rather than combing through hundreds of thousands of ads, it’s possible Google simply removed the entire EU archive.

Google did not respond to 404 Media’s request for comment. 

18 Lawyers Caught Using AI Explain Why They Did It

2025-09-30 21:03:31

Earlier this month, an appeals court in California issued a blistering decision and record $10,000 fine against a lawyer who submitted a brief in which “nearly all of the legal quotations in plaintiff’s opening brief, and many of the quotations in plaintiff’s reply brief, are fabricated” through the use of ChatGPT, Claude, Gemini, and Grok. The court said it was publishing its opinion “as a warning” to California lawyers that they will be held responsible if they do not catch AI hallucinations in their briefs. 

In that case, the lawyer in question “asserted that he had not been aware that generative AI frequently fabricates or hallucinates legal sources and, thus, he did not ‘manually verify [the quotations] against more reliable sources.’ He accepted responsibility for the fabrications and said he had since taken measures to educate himself so that he does not repeat such errors in the future.”

As the judges remark in their opinion, the use of generative AI by lawyers is now everywhere, and when it is used in ways that introduce fake citations or fake evidence, it is bogging down courts all over America (and the world). For the last few months, 404 Media has been analyzing dozens of court cases around the country in which lawyers have been caught using generative AI to craft their arguments, generate fictitious citations, generate false evidence, cite real cases but misinterpret them, or otherwise take shortcuts that have introduced inaccuracies into their cases. Our main goal was to learn more about why lawyers were using AI to write their briefs, especially when so many lawyers have been caught making errors that lead to sanctions and that ultimately threaten their careers and their standing in the profession.

To do this, we used a crowdsourced database of AI hallucination cases maintained by the researcher Damien Charlotin, which so far contains more than 410 cases worldwide, including 269 in the United States. Charlotin’s database is an incredible resource, but it largely focuses on what happened in any individual case and the sanctions against lawyers, rather than the often elaborate excuses that lawyers told the court when they were caught. Using Charlotin’s database as a starting point, we then pulled court records from around the country for dozens of cases where a lawyer offered a formal explanation or apology. Pulling this information required navigating clunky federal and state court record systems and finding and purchasing the specific record where the lawyer in question tried to explain themselves (these were often called “responses to orders to show cause”). We also reached out to lawyers who were sanctioned for using AI to ask them why they did it. Very few of them responded, but we have included explanations from the few who did.

What we found was fascinating: a mix of lawyers blaming IT issues, personal and family emergencies, their own poor judgment and carelessness, and demands from their firms and the industry to be more productive and take on more casework. But most often, they simply blame their assistants.

Few dispute that the legal industry is under great pressure to use AI. Legal giants like Westlaw and LexisNexis have pitched bespoke tools to law firms that are now regularly being used, but Charlotin’s database makes clear that lawyers are regularly using off-the-shelf generalized tools like ChatGPT and Gemini as well. There’s a seemingly endless number of startups selling AI legal tools that do research, write briefs, and perform other legal tasks. While we were working on this article, it became nearly impossible to keep up with new cases of lawyers being sanctioned for using AI. Charlotin has documented 11 new cases within the last week alone.

This article is the first of several 404 Media will write exploring the use of AI in the legal profession. If you’re a lawyer and have thoughts or firsthand experiences, please get in touch. Some of the following anecdotes have been lightly edited for clarity.

💡
Are you a lawyer or do you work in the legal industry? We want to know how AI is impacting the industry, your firm, and your job. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at [email protected].

A lawyer in Indiana blames the court (Fake case cited)

A judge stated that the lawyer “took the position that the main reason for the errors in his brief was the short deadline (three days) he was given to file it. He explained that, due to the short timeframe and his busy schedule, he asked his paralegal (who once was, but is not currently, a licensed attorney) to draft the brief, and did not have time to carefully review the paralegal's draft before filing it.”

A lawyer in New York blames vertigo, head colds, and malware

"He acknowledges that he used Westlaw supported by Google Co-Pilot which is an artificial intelligence-based tool as preliminary research aid." The lawyer “goes on to state that he had no idea that such tools could fabricate cases but acknowledges that he later came to find out the limitation of such tools. He apologized for his failure to identify the errors in his affirmation, but partly blames ‘a serious health challenge since the beginning of this year which has proven very persistent which most of the time leaves me internally cold, and unable to maintain a steady body temperature which causes me to be dizzy and experience bouts of vertigo and confusion.’ The lawyer then indicates that after finding about the ‘citation errors’ in his affirmation, he conducted a review of his office computer system and found out that his system was ‘affected by malware and unauthorized remote access.’ He says that he compared the affirmation he prepared on April 9, 2025, to the affirmation he filed to [the court] on April 21, 2025, and ‘was shocked that the cases I cited were substantially different.’”

A lawyer in Florida blames a paralegal and the fact they were doing the case pro bono (Fake cases and hallucinated quotes)

The lawyer “explained that he was handling this appeal pro bono and that as he began preparing the brief, he recognized that he lacked experience in appellate law. He stated that at his own expense, he hired ‘an independent contractor paralegal to assist in drafting the answer brief.’ He further explained that upon receipt of a draft brief from the paralegal, he read it, finalized it, and filed it with this court. He admitted that he ‘did not review the authority cited within the draft answer brief prior to filing’ and did not realize it contained AI generated content.”

A lawyer in South Carolina says he was rushing (Fake cases generated by Microsoft Copilot)

“Out of haste and a naïve understanding of the technology, he did not independently verify the sources were real before including the citations in the motion filed with the Court seeking a preliminary injunction.”

A lawyer in Hawaii blames a New Yorker they hired

This lawyer was sanctioned $100 by a court for one AI-generated case, as well as quoting multiple real cases and misattributing them to that fake case. They said they had hired a per-diem attorney—“someone I had previously worked with and trusted,” they told the court—to draft the case, and though they “did not personally use AI in this case, I failed to ensure every citation was accurate before filing the brief.” The Honolulu Civil Beat reported that the per-diem attorney they hired was from New York, and that they weren’t sure if that attorney had used AI or not. 

The lawyer told us over the phone that the news of their $100 sanction had blown up in their district thanks to that article. “I was in court yesterday, and of course the [opposing] attorney somehow brought this up,” they said in a call. According to them, that attorney has also used AI in at least seven cases. Nearly every lawyer is using AI to some degree, they said; it’s just a problem if they get caught. “The judges here have seen it extensively. I know for a fact other attorneys have been sanctioned. It’s public, but unless you know what to search for, you’re not going to find it anywhere. It’s just that for some stupid reason, my matter caught the attention of a news outlet. It doesn’t help with business.”

A lawyer in Arizona blames someone they hired

A judge wrote “this is a case where the majority of authorities cited were either fabricated, misleading, or unsupported. That is egregious … this entire litigation has been derailed by Counsel’s actions. The Opening Brief was replete with citation-related deficiencies, including those consistent with artificial intelligence generated hallucinations.”

The attorney claimed “Neither I nor the supervising staff attorney knowingly submitted false or non-existent citations to the Court. The brief writer in question was experienced and credentialed, and we relied on her professionalism and prior performance. At no point did we intend to mislead the Court or submit citations not grounded in valid legal authority.”

A lawyer in Louisiana blames Westlaw (a legal research tool)

The lawyer “acknowledge[d] the cited authorities were inaccurate and mistakenly verified using Westlaw Precision, an AI-assisted research tool, rather than Westlaw’s standalone legal database.” The lawyer further wrote that she “now understands that Westlaw Precision incorporates AI-assisted research, which can generate fictitious legal authority if not independently verified.” She testified she was unable to provide the Court with this research history because the lawyer who produced the AI-generated citations is currently suspended from the practice of law in Louisiana:

“In the interest of transparency and candor, counsel apologizes to the Court and opposing counsel and accepts full responsibility for the oversight. Undersigned counsel now understands that Westlaw Precision incorporates AI-assisted research, which can generate fictitious legal authority if not independently verified. Since discovering the error, all citations in this memorandum have been independently confirmed, and a Motion for Leave to amend the Motion to Transfer has been filed to withdraw the erroneous citations. Counsel has also implemented new safeguards, including manual cross-checking in non AI-assisted databases, to prevent future mistakes.”

“At the time, undersigned counsel understood these authorities to be accurate and reliable. Undersigned counsel made edits and finalized the pleading but failed to independently verify every citation before filing it. Undersigned counsel takes responsibility for this oversight. 

“Undersigned counsel wants the Court to know that she takes this matter extremely seriously. Undersigned counsel holds the ethical obligations of our profession in the highest regard and apologizes to opposing counsel and the Court for this mistake. Undersigned counsel remains fully committed to the ethical obligations as an officer of the court and the standards expected by this Court going forward, which is evidenced by requesting leave to strike the inaccurate citations. Most importantly, undersigned counsel has taken steps to ensure this oversight does not happen again.”

A lawyer in New York says the death of their spouse distracted them

“We understand the grave implications of misreporting case law to the Court. It is not our intention to do so, and the issue is being investigated internally in our office,” the lawyer in the case wrote.

“The Opposition was drafted by a clerk. The clerk reports that she used Google for research on the issue,” they wrote. “The Opposition was then sent to me for review and filing. I reviewed the draft Opposition but did not check the citations. I take full responsibility for failing to check the citations in the Opposition. I believe the main reason for my failure is due to the recent death of my spouse … My husband’s recent death has affected my ability to attend to the practice of law with the same focus and attention as before.”

A lawyer in California says it was ‘a legal experiment’

This is a weird one, and has to do with an AI-generated petition filed three times in an antitrust lawsuit brought against Apple by the Coronavirus Reporter Corporation. The lawyer in the case explained that he created the document as a “legal experiment.” He wrote:

“I also ‘approved for distribution’ a Petition which Apple now seeks to strike. Apple calls the Petition a ‘manifesto,’ consistent with their five year efforts to deride us. But the Court should be aware that no human ever authored the Petition for Tim Cook’s resignation, nor did any human spend more than about fifteen minutes on it. I am quite weary of Artificial Intelligence, as I am weary of Big Tech, as the Court knows. We have never done such a test before, but we thought there was an interesting computational legal experiment here. 

Apple has recently published controversial research that AI LLM's are, in short, not true intelligence. We asked the most powerful commercially available AI, ChatGPT o3 Pro ‘Deep Research’ mode, a simple question: ‘Did Judge Gonzales Rogers’ rebuke of Tim Cook’s Epic conduct create a legally grounded impetus for his termination as CEO, and if so, write a petition explaining such basis, providing contextual background on critics’ views of Apple’s demise since Steve Jobs’ death.’ Ten minutes later, the Petition was created by AI. I don't have the knowledge to know whether it is indeed 'intelligent,' but I was surprised at the quality of the work—so much so that (after making several minor corrections) I approved it for distribution and public input, to promote conversation on the complex implications herein. This is a matter ripe for discussion, and I request the motion be granted.”

Lawyers in Michigan blame an internet outage

“Unfortunately, difficulties were encountered on the evening of April 4 in assembling, sorting and preparation of PDFs for the approximately 1,500 pages of exhibits due to be electronically filed by Midnight. We do use artificial intelligence to supplement their research, along with strict verification and compliance checks before filing. 

AI is incorporated into all of the major research tools available, including West and Lexis, and platforms such as ChatGPT, Claude, Gemini, Grok and Perplexity. [We] do not rely on AI to write our briefs. We do include AI in their basic research and memorandums, and for checking spelling, syntax, and grammar. As Midnight approached on April 4, our computer system experienced a sudden and unexplainable loss of internet connection and loss of connection with the ECF [e-court filing] system … In the midst of experiencing these technical issues, we erred in our standard verification process and missed identifying incorrect text AI put in parentheticals in four cases in footnote 3, and one case on page 12, of the Opposition.”

Lawyers in Washington DC blame Grammarly, ProWritingAid, and an IT error

“After twenty years of using Westlaw, last summer I started using Lexis and its protege AI product as a natural language search engine for general legal propositions or to help formulate arguments in areas of the law where the courts have not spoken directly on an issue. I have never had a problem or issue using this tool and prior to recent events I would have highly recommended it. I failed to heed the warning provided by Lexis and did not double check the citations provided. Instead, I inserted the quotes, caselaw and uploaded the document to ProWritingAid. I used that tool to edit the brief and at one point used it to replace all the square brackets ( [ ) with parentheses.

In preparing and finalizing the brief, I used the following software tools: Pages with Grammarly and ProWritingAid ... through inadvertence or oversight, I was unaware quotes had been added or that I had included a case that did not actually exist … I immediately started trying to figure out what had happened. I spent all day with IT trying to figure out what went wrong.”

A lawyer in Texas blames their email, their temper, and their legal assistant

“Throughout May 2025, Counsel's office experienced substantial technology related problems with its computer and e-mail systems. As a result, a number of emails were either delayed or not received by Counsel at all. Counsel also possesses limited technological capabilities and relies on his legal assistant for filing documents and transcription - Counsel still uses a dictation phone. However, Counsel's legal assistant was out of the office on the date Plaintiffs Response was filed, so Counsel's law clerk had to take over her duties on that day (her first time filing). Counsel's law clerk had been regularly assisting Counsel with the present case and expressed that this was the first case she truly felt passionate about … While completing these items, Counsel's law clerk had various issues, including with sending opposing counsel the Joint Case Management Plan which required a phone conference to rectify. Additionally, Counsel's law clerk believed that Plaintiff’s Response to Defendant's Motion to Dismiss was also due that day when it was not.  

In midst of these issues, Counsel - already missing his legal assistant - became frustrated. However, Counsel's law clerk said she had already completed Plaintiff's Response and Counsel immediately read the draft but did not thoroughly examine the cases cited therein … unbeknownst to Counsel and to his dismay, Counsel's law clerk did use artificial intelligence in drafting Plaintiff's Response. Counsel immediately instituted a strict policy prohibiting his staff from using artificial intelligence without exception - Counsel doesn't use artificial intelligence, so neither shall his staff.

Second, Counsel now requires any staff assisting in drafting documents to provide Counsel with a printout of each case cited therein with the passage(s) being relied on highlighted or marked.”

The lawyer also submitted an invoice from a company called Mainframe Computers for $480, which included line items for “Install office,” “printer not working and computer restarting,” “fixes with email and monitors and default fonts,” and “computer errors, change theme, resolution, background, and brightness.”

Reddit Mods Sued by YouTuber Ethan Klein Fight Efforts to Unmask Them

2025-09-30 03:06:57

This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here.

Critics of YouTuber Ethan Klein are pushing back on subpoenas that would reveal their identities as part of an ongoing legal fight between Klein and his detractors. Klein is a popular content creator whose YouTube channel has more than 2 million subscribers. He’s also involved in a labyrinthine personal and legal beef with three other content creators and the moderators of a subreddit that criticizes his work. Klein filed a legal motion to compel Discord and Reddit to reveal the identities of those moderators, a move their lawyers say would put them in harm’s way and stifle free speech on the internet forever.

Klein is most famous for his H3 Podcast and collaborations with Hasan Piker and Trisha Paytas, which he produced through his company Ted Entertainment Inc. Following a public falling out with Piker, Klein released a longform video essay critiquing his former podcast partner. As often happens with long video essays about YouTube drama, other content creators filmed themselves watching Klein’s essay.

Landlords Demand Tenants’ Workplace Logins to Scrape Their Paystubs

2025-09-29 22:53:21

Landlords are using a service that logs into a potential renter’s employer systems and scrapes their paystubs and other information en masse, potentially in violation of U.S. hacking laws, according to screenshots of the tool shared with 404 Media.

The screenshots highlight the intrusive methods some landlords use when screening potential tenants, taking information they may not need, or may not legally be entitled to, in order to assess a renter.

“This is a statewide consumer-finance abuse that forces renters to surrender payroll and bank logins or face homelessness,” one renter who was forced to use the tool and who saw it taking more data than was necessary for their apartment application told 404 Media. 404 Media granted the person anonymity to protect them from retaliation from their landlord or the services used.

💡
Do you know anything else about any of these companies or the technology landlords are using? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at [email protected].

“I am livid,” they added.