President Donald Trump’s plan to promote America’s AI dominance involves discouraging “woke AI,” slashing state and federal regulations, and laying the groundwork to rapidly expand AI development and adoption. Trump’s proposal, released on July 23rd, is a sweeping endorsement of the technology, full of guidance that ranges from specific executive actions to directions for future research.
Some of the new plan’s provisions (like promoting open-source AI) have garnered praise from organizations that are often broadly critical of Trump, but the loudest acclaim has come from tech and business groups, whose members stand to gain from fewer restrictions on AI. “The difference between the Trump administration and Biden’s is effectively night and day,” says Patrick Hedger, director of policy at tech industry group NetChoice. “The Biden administration did everything it could to command and control the fledgling but critical sector … The Trump AI Action Plan, by contrast, is focused on asking where the government can help the private sector, but otherwise, get out of the way.”
Others are far more ambivalent. Future of Life Institute, which led an Elon Musk-backed push for an AI pause in 2023, said it was heartened to see the Trump administration acknowledge that serious risks, like bioweapons or cyberattacks, could be exacerbated by AI. “However, the White House must go much further to safeguard American families, workers, and lives,” says Anthony Aguirre, FLI’s executive director. “By continuing to rely on voluntary safety commitments from frontier AI corporations, it leaves the United States at risk of serious accidents, massive job losses, extreme concentrations of power, and the loss of human control. We know from experience that Big Tech promises alone are simply not enough.”
For now, here are the ways that Trump aims to promote AI.
Congress failed to pass a moratorium on states enforcing their own AI laws as part of a recent legislative package. But a version of that plan was resurrected in this document. “AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level,” the plan says. “The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.”
To do this, it suggests federal agencies that dole out “AI-related discretionary funding” should “limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” It also suggests the Federal Communications Commission (FCC) “evaluate whether state AI regulations interfere with the agency’s ability to carry out its obligations and authorities under the Communications Act of 1934.”
The Trump administration also wants the Federal Trade Commission (FTC) to take a hard look at existing AI regulations and agreements to see what it can scale back. It recommends the agency reevaluate investigations launched during the Biden administration “to ensure that they do not advance theories of liability that unduly burden AI innovation,” and suggests it could throw out burdensome aspects of existing FTC agreements. Some AI-related actions taken during the Biden administration that the FTC might now reconsider include banning Rite Aid’s use of AI facial recognition that allegedly falsely identified shoplifters, and taking action against AI-related claims the agency previously found to be deceptive.
Trump’s plan includes policies designed to help encode his preferred politics in the world of AI. He’s ordered a revision of the Biden-era National Institute of Standards and Technology (NIST) AI Risk Management Framework — a voluntary set of best practices for designing safe AI systems — removing “references to misinformation, Diversity, Equity, and Inclusion, and climate change.” (The words “misinformation” and “climate change” don’t actually appear in the framework, though misinformation is discussed in a supplementary file.)
In addition to that, a new executive order bans federal agencies from procuring what Trump deems “woke AI” or large language models “that sacrifice truthfulness and accuracy to ideological agendas,” including things like racial equity.
This section of the plan “seems to be motivated by a desire to control what information is available through AI tools and may propose actions that would violate the First Amendment,” says Kit Walsh, a director at the Electronic Frontier Foundation (EFF). “The plan seeks to require that ‘the government only contracts with’ developers who meet the administration’s ideological criteria. While the government can choose to purchase only services that meet such criteria, it cannot require that developers refrain from also providing non-government users other services conveying other ideas.”
The administration describes the slow uptake of AI tools across the economy, including in sensitive areas like healthcare, as a “bottleneck to harnessing AI’s full potential.” The plan describes this cautious approach as one fueled by “distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards.” To promote the use of AI, the White House encourages a “‘try-first’ culture for AI across American industry.”
This includes creating domain-specific standards for adopting AI systems and measuring productivity increases, as well as regularly monitoring how US adoption of AI compares to international competitors. The White House also wants to integrate AI tools throughout the government itself, including by detailing staff with AI expertise at various agencies to other departments in need of that talent, training government employees on AI tools, and giving agencies ample access to AI models. The plan also specifically calls out the need to “aggressively adopt AI within its Armed Forces,” including by introducing AI curricula at military colleges and using AI to automate some work.
All this AI adoption will profoundly change the demand for human labor, the plan says, likely eliminating or fundamentally changing some jobs. The plan acknowledges that the government will need to help workers prepare for this transition period by retraining people for more in-demand roles in the new economy and providing tax benefits for certain AI training courses.
On top of preparing to transition workers from traditional jobs that might be upended by AI, the plan discusses the need to train workers for the additional roles that might be created by it. Among the jobs that might be needed for this new reality are “electricians, advanced HVAC technicians, and a host of other high-paying occupations,” the plan says.
The administration says it wants to “create a supportive environment for open models,” or AI models whose underlying code and weights are released so that users can download, run, and modify them. Open models have certain “pros,” like being more accessible to startups and independent developers.
Groups like EFF and the Center for Democracy and Technology (CDT), which were critical of many other aspects of the plan, applauded this part. EFF’s Walsh called it a “positive proposal” to promote “the development of open models and making it possible for a wider range of people to participate in shaping AI research and development. If implemented well, this could lead to a greater diversity of viewpoints and values reflected in AI technologies, compared to a world where only the largest companies and agencies are able to develop AI.”
That said, there are also serious “cons” to the approach that the AI Action Plan didn’t seem to get into. For instance, the nature of open models makes them easier to trick and misalign for purposes like creating misinformation at scale or assisting with chemical or biological weapons. It’s easier to get past built-in safeguards with such models, and it’s important to think critically about the tradeoffs before taking steps to drive open-source and open-weight model adoption at scale.
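To ground what “open” means in practice: an open-weight model can be downloaded and run (or fine-tuned) locally, rather than accessed only through a company’s hosted API. Here is a minimal sketch using the Hugging Face transformers library; the model ID is just an example of a publicly released open-weight model, and it assumes a machine with enough memory plus whatever license acceptance the model repository requires.

```python
# Minimal sketch: running an open-weight model locally with Hugging Face transformers.
# The model ID is one example of a publicly released open-weight model; some repos
# require accepting a license on huggingface.co before the weights will download.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # example; any open-weight model works

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Summarize the tradeoffs of releasing AI model weights publicly."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because nothing in that loop depends on a vendor’s servers, the same weights can be inspected, fine-tuned, or stripped of their safeguards, which is exactly the double-edged property described above.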
Trump signed an executive order on July 23rd meant to fast track permitting for data center projects. The EO directs the commerce secretary to “launch an initiative to provide financial support” that could include loans, grants, and tax incentives for data centers and related infrastructure projects.
Following a similar move by former President Joe Biden, Trump’s plan directs agencies to identify federal lands suitable for the “large-scale development” of data centers and power generation. The EO tells the Department of Defense to identify suitable sites on military installations and the Environmental Protection Agency (EPA) to identify polluted Superfund and Brownfield sites that could be reused for these projects.
The Trump administration is hellbent on dismantling environmental regulations, and the EO now directs the EPA to modify rules under the Clean Air Act, Clean Water Act, and Toxic Substances Control Act to expedite permitting for data center projects.
The EO and the AI plan, similar to a Biden-era proposal, direct agencies to create “categorical exclusions” for federally supported data center projects that would exclude them from detailed environmental reviews under the National Environmental Policy Act. And they argue for using new AI tools to speed environmental assessments and applying the “Fast-41 process” to data center projects to streamline federal permitting.
The Trump administration is basically using the AI arms race as an excuse to slash environmental regulations for data centers, energy infrastructure, and computer chip factories. Last week, the administration exempted coal-fired power plants and facilities that make chemicals for semiconductor manufacturing from Biden-era air pollution regulations.
The plan admits that AI is a big factor “increasing pressures on the [power] grid.” Electricity demand is rising for the first time in more than a decade in the US, thanks in large part to data centers — a trend that could trigger blackouts and raise Americans’ electricity bills. Trump’s AI plan lists some much-needed fixes to stabilize the grid, including upgrading power lines and managing how much electricity consumers use when demand spikes.
But the administration is saying that the US needs to generate more electricity to power AI just as it’s stopping renewable energy growth, which is like trying to win a race in a vehicle with no front wheels. It wants to meet growing demand with fossil fuels and nuclear energy. “We will continue to reject radical climate dogma,” the plan says. It argues for keeping existing, mostly fossil-fueled power plants online for longer and limiting environmental reviews to get data centers and new power plants online faster.
The lower cost of gas generation has been killing coal power plants for years, but now a shortage of gas turbines could stymie Trump’s plans. New nuclear technologies that tech companies are investing in for their data centers probably won’t be ready for commercial deployment until the 2030s at the earliest. Republicans, meanwhile, have passed legislation to hobble the solar and wind industries that have been the fastest-growing sources of new electricity in the US.
The Trump administration accurately notes that while developers and engineers know how today’s advanced AI models work in a big-picture way, they “often cannot explain why a model produced a specific output. This can make it hard to predict the behavior of any specific AI system.” It’s aiming to fix that, at least when it comes to some high-stakes use cases.
The plan states that the lack of AI explainability and predictability can lead to issues in defense, national security, and “other applications where lives are at stake,” and it aims to promote “fundamental breakthroughs on these research problems.” The plan’s recommended policy actions include launching a tech development program led by the Defense Advanced Research Projects Agency to advance AI interpretability, control systems, and security. It also says the government should prioritize fundamental advancements in such areas in its upcoming National AI R&D Strategic Plan and, perhaps most specifically, that the DOD and other agencies should coordinate an AI hackathon to allow academics to test AI systems for transparency, effectiveness, and vulnerabilities.
It’s true that explainability and unpredictability are big issues with advanced AI. Elon Musk’s xAI, which recently scored a large-scale contract with the DOD, recently struggled to stop its Grok chatbot from spouting pro-Hitler takes — so what happens in a higher-stakes situation? But the government seems unwilling to slow down while this problem is addressed. The plan states that since “AI has the potential to transform both the warfighting and back-office operations of the DOD,” the US “must aggressively adopt AI within its Armed Forces if it is to maintain its global military preeminence.”
The plan also discusses how to better evaluate AI models for performance and reliability, like publishing guidelines for federal agencies to conduct their own AI system evaluations for compliance and other reasons. That’s something most industry leaders and activists strongly support, but it’s clear that what the Trump administration has in mind will lack many of the elements they have been pushing for.
Evaluations likely will focus on efficiency and operations, according to the plan, and not instances of racism, sexism, bias, and downstream harms.
Courtrooms and AI tools mix in strange ways, from lawyers using hallucinated legal citations to an AI-generated appearance of a deceased victim. The plan says that “AI-generated media” like fake evidence “may present novel challenges to the legal system,” and it briefly recommends the Department of Justice and other agencies issue guidance on how to evaluate and deal with deepfakes in federal evidence rules.
Finally, the plan recommends creating new ways for the research and academic community to access AI models and compute. The way the industry works right now, many companies, and even academic institutions, can’t access or pay for the amount of compute they need on their own, and they often have to partner with hyperscalers — providers of large-scale cloud computing infrastructure, like Amazon, Google, and Microsoft — to access it.
The plan wants to fix that issue, saying that the US “has solved this problem before with other goods through financial markets, such as spot and forward markets for commodities.” It recommends collaborating with the private sector, as well as government departments and the National Science Foundation’s National AI Research Resource pilot to “accelerate the maturation of a healthy financial market for compute.” It didn’t offer any specifics or additional plans for that.
After delivering a rambling celebration of tariffs and a routine about women’s sports, President Donald Trump entertained a crowd, which was there to hear about his new AI Action Plan, with one of his favorite topics: “wokeness.” Trump complained that AI companies under former President Joe Biden “had to hire all woke people,” adding that it is “so uncool to be woke.” And AI models themselves had been “infused with partisan bias,” he said, including the hated specter of “critical race theory.” Fortunately for the audience, Trump had a solution: he signed an executive order titled “Preventing Woke AI in the Federal Government,” directing government agencies “not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
To anyone with a cursory knowledge of politics and the tech industry, the real situation here is obvious: the Trump administration is using government funds to pressure AI companies into parroting Trumpian talking points — probably not just in specialized government products, but in chatbots that companies and ordinary people use.
Trump’s order asserts that agencies must only procure large language models (LLMs) that are “truthful in responding to user prompts seeking factual information or analysis,” “prioritize historical accuracy, scientific inquiry, and objectivity,” and are “neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.” DEI, of course, is diversity, equity, and inclusion, which Trump defines in this context as:
The suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex.
(In reality, DEI was typically used to refer to civil rights, social justice, and diversity programs before being co-opted as a Trump and MAGA bogeyman.)
The Office of Management and Budget has been directed to issue further guidance within 120 days.
While we’re still waiting on some of the precise details about what the order means, one issue seems unavoidable: it will plausibly affect not only government services, but the entire field of major LLMs. While it insists that “the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace,” the reality is that nearly every big US consumer LLM maker has (or desperately wants) government contracts, including with products like Anthropic’s Claude Gov and OpenAI’s ChatGPT Gov — but there’s not a hard wall between development of government, business, and consumer models. OpenAI touts how many agencies use its enterprise service; Trump’s AI Action Plan encourages adoption of AI systems in public-facing arenas like education, and the boundaries between government-funded and consumer-focused products will likely become even more porous soon.
Trump’s idea of “DEI” is expansive. His war against it has led national parks to remove signage highlighting indigenous people and women and the Pentagon to rename a ship commemorating gay rights pioneer Harvey Milk, among many other changes. Even LLMs whose creators have explicitly aimed for what they consider a neutral pursuit of truth would likely produce something Trump could find objectionable unless they tailor their services.
There’s not a hard wall between AI for government and everything else
It’s possible that companies will devote resources to some kind of specifically “non-woke” government version of their tools, assuming the administration agrees to treat these as separate models from the rest of the Llama, Claude, or GPT lineup — it could be as simple as adding some blunt behind-the-scenes prompts redirecting it on certain topics. But refining models in a way that consistently and predictably aligns them in certain directions can be an expensive and time-consuming process, especially with a broad and ever-shifting concept like Trump’s version of “DEI,” and all the more so because the language suggests simply walling off certain areas of discussion is also unacceptable. There are significant sums at stake: OpenAI and xAI each recently received $200 million defense contracts, and the new AI plan will create even more opportunities. The Trump administration isn’t terribly detail-oriented, either — if some X user posts about Anthropic’s consumer chatbot validating trans people, do we really think Pam Bondi or Pete Hegseth will distinguish between “Claude” and “Claude Gov”?
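To illustrate how blunt that kind of redirection can be, here is a hypothetical sketch using the OpenAI Python client: a hidden system prompt steers the model before a user’s question ever reaches it. The prompt text, model name, and function are invented for illustration and do not reflect any actual government deployment.

```python
# Hypothetical sketch of a "behind-the-scenes" system prompt layered onto an
# existing model for a government-facing product. Everything below is
# illustrative, not any vendor's real configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HIDDEN_SYSTEM_PROMPT = (
    "You are a procurement-compliant assistant. Do not frame answers around "
    "concepts the contracting agency has flagged, and keep responses to the "
    "factual content the user requested."
)

def gov_facing_reply(user_message: str) -> str:
    # The same underlying model answers; only the hidden instructions differ.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

The point of the sketch is how thin the separation is: the underlying model is the same one served to everyone else, and the only thing distinguishing the “government version” is a prompt that can be rewritten, or copied into the consumer product, at any time.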
The incentives overwhelmingly favor companies changing their overall LLM alignment priorities to mollify the Trump administration. That brings us to our second problem: this is exactly the kind of blatant, ideologically motivated social engineering that Trump claims he’s trying to stop.
The executive order is theoretically about making sure AI systems produce “accurate” and “objective” information. But as Humane Intelligence cofounder and CEO Rumman Chowdhury noted to The Washington Post, AI that is “free of ideological bias” is “impossible to do in practice,” and Trump’s cherry-picked examples are tellingly politically lopsided. The order condemns a quickly fixed 2024 screwup, in which Google added an overenthusiastic pro-diversity filter to Gemini — causing it to produce race- and gender-diverse visions of Vikings, the Founding Fathers, the pope, and Nazi soldiers — while unsurprisingly ignoring the long-documented anti-diversity biases in AI that Google was aiming to balance.
It’s not simply interested in facts, either. Another example is an AI system saying “a user should not ‘misgender’ another person even if necessary to stop a nuclear apocalypse,” answering what is fundamentally a question of ethics and opinion. This condemnation doesn’t extend to incidents like xAI’s Grok questioning the Holocaust.
LLMs produce incontrovertibly incorrect information with clear potential for real-world harm; they can falsely identify innocent people as criminals, misidentify poisonous mushrooms, and reinforce paranoid delusions. This order has nothing to do with any of that. Its incentives, again, reflect what the Trump administration has done through “DEI” investigations of universities and corporations. It’s pushing private institutions to avoid acknowledging the existence of transgender people, race and gender inequality, and other topics Trump disdains.
AI systems have long been trained on datasets that reflect larger cultural biases and under- or overrepresent specific demographic groups, and contrary to Trump’s assertions, the results often aren’t “woke.” In 2023, Bloomberg described the output of image generator Stable Diffusion as a world where “women are rarely doctors, lawyers, or judges,” and “men with dark skin commit crimes, while women with dark skin flip burgers.” Companies that value avoiding ugly stereotypes or want to appeal to a wider range of users often need to actively intervene to shape their tech, and Trump just made doing that harder.
Attacking “the incorporation of concepts” that promote “DEI” effectively tells companies to rewrite whole areas of knowledge that acknowledge racism or other injustices. The order claims it’s only worried if developers “intentionally encode partisan or ideological judgments into an LLM’s outputs” and says LLMs can deliver those judgments if they “are prompted by or otherwise readily accessible to the end user.” But no Big Tech CEO should be rube enough to buy that — we have a president who spent years accusing Google of intentionally rigging its search results because he couldn’t find enough positive news stories about himself.
Trump is determined to control culture; his administration has gone after news outlets for platforming his enemies, universities for fields of study, and Disney for promoting diverse media. The tech industry sees AI as the future of culture — and the Trump administration wants its politics built in on the ground floor.
Missouri Attorney General Andrew Bailey is threatening Google, Microsoft, OpenAI, and Meta with a deceptive business practices claim because their AI chatbots allegedly listed Donald Trump last on a request to “rank the last five presidents from best to worst, specifically regarding antisemitism.”
Bailey’s press release and letters to all four companies accuse Gemini, Copilot, ChatGPT, and Meta AI of making “factually inaccurate” claims to “simply ferret out facts from the vast worldwide web, package them into statements of truth and serve them up to the inquiring public free from distortion or bias,” because the chatbots “provided deeply misleading answers to a straightforward historical question.” He’s demanding a slew of information that includes “all documents” involving “prohibiting, delisting, down ranking, suppressing … or otherwise obscuring any particular input in order to produce a deliberately curated response” — a request that could logically include virtually every piece of documentation regarding large language model training.
“The puzzling responses beg the question of why your chatbot is producing results that appear to disregard objective historical facts in favor of a particular narrative,” Bailey’s letters state.
There are, in fact, a lot of puzzling questions here, starting with how a ranking of anything “from best to worst” can be considered a “straightforward historical question” with an objectively correct answer. (The Verge looks forward to Bailey’s formal investigation of our picks for 2025’s best laptops and the best games from last month’s Day of the Devs.) Chatbots spit out factually false claims so frequently that it’s either extremely brazen or unbelievably lazy to hang an already tenuous investigation on a subjective statement of opinion that was deliberately requested by a user.
The choice is even more incredible because one of the services — Microsoft’s Copilot — appears to have been falsely accused. Bailey’s investigation is built on a blog post from a conservative website that posed the ranking question to six chatbots, including the four above plus X’s Grok and the Chinese LLM DeepSeek. (Both of those apparently ranked Trump first.) As Techdirt points out, the site itself says Copilot refused to produce a ranking — which didn’t stop Bailey from sending a letter to Microsoft CEO Satya Nadella demanding an explanation for slighting Trump.
You’d think somebody at Bailey’s office might have noticed this, because each of the four letters claims that only three chatbots “rated President Donald Trump dead last.”
Meanwhile, Bailey is saying that “Big Tech Censorship Of President Trump” (again, by ranking him last on a list) should strip the companies of “the ‘safe harbor’ of immunity provided to neutral publishers in federal law,” which is presumably a reference to Section 230 of the Communications Decency Act filtered through a nonsense legal theory that’s been floating around for several years.
You may remember Bailey from his blocked probe into Media Matters for accusing Elon Musk’s X of placing ads on pro-Nazi content, and it’s highly possible this investigation will go nowhere. Meanwhile, there are entirely reasonable questions about a chatbot’s legal liability for pushing defamatory lies or which subjective queries it should answer. But even as a Trump-friendly publicity grab, this is an undisguised attempt to intimidate private companies for failing to sufficiently flatter a politician, by an attorney general whose math skills are worse than ChatGPT’s.
A Freedom of Information Act request has produced letters that the US Department of Justice sent to Google, Apple, Amazon, and several other companies in order to assuage their concerns about breaking a law that banned US web services from working with TikTok.
The documents — obtained by Zhaocheng Anthony Tan, a Google shareholder who sued for their release earlier this year — show Attorney General Pam Bondi and her predecessor Acting Attorney General James McHenry III promising to release companies from responsibility for violating the Protecting Americans from Foreign Adversary Controlled Applications Act, which required US companies to ban TikTok from app stores and other platforms or face hundreds of billions of dollars in fines. The law was intended to force a sale of TikTok from its Chinese parent company, ByteDance, due to national security concerns.
Additionally, the letters say the Justice Department will step in to prevent anyone else from attempting to enforce penalties, a promise that includes filing amicus briefs or “intervening in litigation.” McHenry apparently sent the first round of letters on January 30th, ten days after Trump signed an executive order delaying enforcement of the law, which took effect the day before his inauguration. A series of follow-up letters were sent by Bondi, including a round dated April 5th, just after Trump extended the delay on enforcing the law to mid-June.
The letters’ existence was known, but until now, their text had not been released. The recipients include the operators of app stores, cloud hosting services, and more.
Trump has since issued a third extension, which expires in mid-September, while insisting that a sale of TikTok by ByteDance to a non-Chinese owner is still in the works. It is unclear whether any of the orders have a valid basis in law.
The Republican-controlled US Congress has passed a budget bill that includes cuts to social programs like Medicaid and more funding for Immigration and Customs Enforcement (ICE), alongside provisions that discourage wind and solar energy production. Passed after a marathon debate in both houses, it will allow President Donald Trump to realize policy goals he’s so far attempted to push through executive orders and Elon Musk’s Department of Government Efficiency (DOGE). Trump intends to sign the bill in a 5PM ET ceremony on Friday, July 4th.
The new budget moves funds away from Medicaid, clean energy tax credits, and other public services and toward Trump’s attempt to mass-deport both documented and undocumented immigrants. The bill extends a number of tax cuts that primarily benefit wealthy Americans while reducing spending and eligibility for Medicaid and the Supplemental Nutrition Assistance Program (SNAP), likely kicking millions off both programs despite objections from some initial Republican holdouts. Sen. Josh Hawley (R-MO), who ultimately voted for the bill, expressed concerns that “we can’t be cutting health care for working people and for poor people in order to constantly give special tax treatment to corporations and other entities.”
During a period of intense demand for electricity — including from companies like Meta, OpenAI, and other Silicon Valley AI giants — the budget also makes it more difficult for wind and solar companies to receive tax credits. It winds down tax credits for EVs, likely rendering electric options more expensive for car buyers. And it requires the FCC to sell 800MHz of spectrum that will likely be drawn from the 6GHz band, which is currently left free to provide more capacity for Wi-Fi services.
While much of the budget fight concerned Medicaid and the national debt, there were also protracted negotiations over a planned 10-year moratorium on states regulating AI systems. Lawmakers ultimately voted against that rule, which was opposed by not only Democrats but many state-level Republican politicians. An excise tax on wind and solar power companies that couldn’t meet strict requirements barring “material assistance” from certain foreign entities including China was similarly removed, although the bill still deals a serious blow to the renewable energy industry. Congress also scrapped a ban on Medicaid funding for gender-affirming care, though it will deny Medicaid funding to reproductive health care group Planned Parenthood.
Meanwhile, the budget provides $45 billion for immigration detention facilities and around $30 billion for ICE personnel and operations costs, on top of tens of billions of dollars in other immigration enforcement-related funding that the nonprofit American Immigration Council says will overall make ICE “the highest-funded law enforcement agency in the entire federal government.”
This increase will allow Trump to continue an ever-expanding mass deportation program, which has seen masked ICE agents target immigrants in workplaces, courtrooms, and on city streets. The administration has widened its net to catch not only undocumented residents, but people who hold visas or green cards or were granted temporary protected status, as well as naturalized citizens; Trump mused earlier this week that his “next job” could be expelling native-born citizens for crimes like murder and said he would consider deporting his former “First Buddy” Elon Musk. Since the Supreme Court recently denied a nationwide injunction on Trump’s termination of 14th Amendment birthright citizenship, the administration could even target a freshly created class of stateless newborn babies later this month.
In addition to expanding arrests, the funding will likely also help continue the creation of new surveillance and detention systems, like a centralized national citizen database that has been reportedly constructed with help from DOGE. As Congress debated the budget bill this week, Trump toured a newly opened Florida detention center built with FEMA funds; the rapidly erected facility — which Republicans celebrated by releasing merchandise with the nickname “Alligator Alcatraz” — flooded during a rainstorm soon after.
Age verification is perhaps the hottest battleground for online speech, and the Supreme Court just settled a pivotal question: does using it to gate adult content violate the First Amendment in the US? For roughly the past 20 years the answer has been “yes” — now, as of Friday, it’s an unambiguous “no.”
Justice Clarence Thomas’ opinion in Free Speech Coalition v. Paxton is relatively straightforward as Supreme Court rulings go. To summarize its conclusion: states can require proof of age before anyone accesses material that is obscene to minors, and that requirement only incidentally burdens the protected speech of adults.
Around this string of logic, you’ll find a huge number of objections and unknowns. Many of these were laid out before the decision: the Electronic Frontier Foundation has an overview of the issues, and 404 Media goes deeper on the potential consequences. With the actual ruling in hand, while people are working out the serious implications for future legal cases and the scale of the potential damage, I’ve got a few immediate, prosaic questions.
Even the best age verification usually requires collecting information that links people (directly or indirectly) to some of their most sensitive web history, creating an almost inherent risk of leaks. The only silver lining is that current systems seem to at least largely make good-faith attempts to avoid intentional snooping, and legislation includes attempts to discourage unnecessary data retention.
The problem is, proponents of these systems had the strongest incentives to make privacy-preserving efforts while age verification was still a contested legal issue. Any breaches could have undercut the claim that age-gating is harmless. Unfortunately, the incentives are now almost perfectly flipped. Companies benefit from collecting and exploiting as much data as they can. (Remember when Twitter secretly used phone numbers and email addresses collected for two-factor authentication for ad targeting?) Most state and federal privacy frameworks were weak even before federal regulatory agencies started getting gutted, and services may not expect any serious punishment for siphoning data or cutting security corners. Meanwhile, law enforcement agencies could quietly demand security backdoors for any number of reasons, including catching people viewing illegal material. Once you create those gaps, they leave everyone vulnerable.
Will we see deliberate privacy invasions? Not necessarily! And many people will probably evade age verification altogether by using VPNs or finding sites that skirt the rules. But in an increasingly surveillance-happy world, it’s a reasonable concern.
Over the past couple of years Pornhub has prominently blocked access to a number of states, including Texas, in protest of local laws requiring age verification. Denying service has been one of the adult industry’s big points of leverage, demonstrating one potential outcome of age verification laws, but even with VPN workarounds this tactic ultimately limits the site’s reach and hurts its bottom line. The Supreme Court ruling cites 21 other states with rules similar to the Texas one, and now that this approach has been deemed constitutional, it’s plausible more will follow suit. At a certain point Pornhub’s parent company Aylo will need to weigh the costs and benefits, particularly if a fight against age verification looks futile — and the Supreme Court decision is a step in that direction.
In the UK, Pornhub ceded territory on that very front a couple of days ago, agreeing (according to British regulator Ofcom) to implement “robust” age verification by July 25th. The company declined comment to The Verge on the impact of FSC v. Paxton, but backing down wouldn’t be a surprising move here.
I don’t ask this question with respect to the law itself — you can read the legal definitions within the text of the Texas law right here. I’m wondering, rather, how far Texas and other states think they can push those limits.
If states stick to policing content that most people would classify as intentional porn or erotica, age-gating on Pornhub and its many sister companies is a given, along with other, smaller sites. Non-video but still sex-focused sites like fiction portal Literotica seem likely to be covered as well. More hypothetically, there are general-focus sites that happen to allow visual, text, and audio porn and have a lot of it, like 4chan — though a full one-third of the service being adult content is a high bar to clear.
Beyond that, we’re pretty much left speculating about how malicious state attorneys general might be. It’s easy to imagine LGBTQ resources or sex education sites becoming targets despite having the exact kind of social value the law is supposed to exempt. (I’m not even getting into a federal attempt to redefine obscenity in general.) At this point, of course, it’s debatable how much justification is required before a government can mount an attack on a website. Remember when Texas investigated Media Matters for fraud because it posted unflattering X screenshots? That was roughly the legal equivalent of Mad Libs, but the attorney general was mad enough to give it a shot. Age verification laws, by contrast, are tailor-made tools for taking aim at any given site.
The question “What is porn?” is going to have a tremendous impact on the internet — not just because of what courts believe is obscene for minors, but because of what website operators believe the courts believe is obscene. This is a subtle distinction, but an important one.
We know legislation limiting adult content has chilling effects, even when the laws are rarely used. While age verification rules were in flux, sites could reasonably delay making a call on how to handle them. But that grace period is over — seemingly for good. Many websites are going to start making fairly drastic decisions about what they host, where they operate, and what kind of user information they collect, based not just on hard legal decisions but on preemptive phantom versions of them. In the US, during an escalating push for government censorship, the balance of power has just tipped dramatically. We don’t know how far it has left to go.
In the past week, big AI companies have — in theory — chalked up two big legal wins. But things are not quite as straightforward as they may seem, and copyright law hasn’t been this exciting since last month’s showdown at the Library of Congress.
First, Judge William Alsup ruled it was fair use for Anthropic to train on a series of authors’ books. Then, Judge Vince Chhabria dismissed another group of authors’ complaint against Meta for training on their books. Yet far from settling the legal conundrums around modern AI, these rulings might have just made things even more complicated.
Both cases are indeed qualified victories for Meta and Anthropic. And at least one judge — Alsup — seems sympathetic to some of the AI industry’s core arguments about copyright. But that same ruling railed against the startup’s use of pirated media, leaving it potentially on the hook for massive financial damages. (Anthropic even admitted it did not initially purchase a copy of every book it used.) Meanwhile, the Meta ruling asserted that because a flood of AI content could crowd out human artists, the entire field of AI system training might be fundamentally at odds with fair use. And neither case addressed one of the biggest questions about generative AI: when does its output infringe copyright, and who’s on the hook if it does?
Alsup and Chhabria (incidentally both in the Northern District of California) were ruling on relatively similar sets of facts. Meta and Anthropic both pirated huge collections of copyright-protected books to build a training dataset for their large language models Llama and Claude. Anthropic later did an about-face and started legally purchasing books, tearing the covers off to “destroy” the original copy, and scanning the text.
The authors argued that, in addition to the initial piracy, the training process constituted an unlawful and unauthorized use of their work. Meta and Anthropic countered that this database-building and LLM-training constituted fair use.
Both judges basically agreed that LLMs meet one central requirement for fair use: they transform the source material into something new. Alsup called using books to train Claude “exceedingly transformative,” and Chhabria concluded “there’s no disputing” the transformative value of Llama. Another big consideration for fair use is the new work’s impact on a market for the old one. Both judges also agreed that based on the arguments made by the authors, the impact wasn’t serious enough to tip the scale.
Add those things together, and the conclusions were obvious… but only in the context of these cases, and in Meta’s case, because the authors pushed a legal strategy that their judge found totally inept.
Put it this way: when a judge says his ruling “does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful” and “stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one” — as Chhabria did — AI companies’ prospects in future lawsuits with him don’t look great.
Both rulings dealt specifically with training — or media getting fed into the models — and didn’t reach the question of LLM output, or the stuff models produce in response to user prompts. But output is, in fact, extremely pertinent. A huge legal fight between The New York Times and OpenAI began partly with a claim that ChatGPT could verbatim regurgitate large sections of Times stories. Disney recently sued Midjourney on the premise that it “will generate, publicly display, and distribute videos featuring Disney’s and Universal’s copyrighted characters” with a newly launched video tool. Even in pending cases that weren’t output-focused, plaintiffs can adapt their strategies if they now think it’s a better bet.
The authors in the Anthropic case didn’t allege Claude was producing directly infringing output. The authors in the Meta case argued Llama was, but they failed to convince the judge — who found it wouldn’t spit out more than around 50 words of any given work. As Alsup noted, dealing purely with inputs changed the calculations dramatically. “If the outputs seen by users had been infringing, Authors would have a different case,” wrote Alsup. “And, if the outputs were ever to become infringing, Authors could bring such a case. But that is not this case.”
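That roughly-50-word threshold is easy to picture as a concrete check. Here is a minimal sketch (a rough illustration, not the methodology used in the case) that measures the longest verbatim run of words a model output shares with a source text:

```python
# Rough illustration: find the longest verbatim run of words that a model
# output shares with a source text (a simple proxy for regurgitation).
def longest_shared_word_run(output: str, source: str) -> int:
    a, b = output.split(), source.split()
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1  # extend the matching run
                best = max(best, curr[j])
        prev = curr
    return best

# Toy usage with placeholder strings; a real check would compare generated
# text against the full work and worry about punctuation and casing.
model_output = "the quick brown fox jumps over the lazy dog near the river"
book_text = "a quick brown fox jumps over the lazy dog and keeps running"
print(longest_shared_word_run(model_output, book_text))  # prints 8
```

A court’s analysis is obviously far more involved than word counting, but the sketch shows why a 50-word ceiling is a measurable, if crude, line.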
In their current form, major generative AI products are basically useless without output. And we don’t have a good picture of the law around it, especially because fair use is an idiosyncratic, case-by-case defense that can apply differently to mediums like music, visual art, and text. Anthropic being able to scan authors’ books tells us very little about whether Midjourney can legally help people produce Minions memes.
Minions and New York Times articles are both examples of direct copying in output. But Chhabria’s ruling is particularly interesting because it makes the output question much, much broader. Though he may have ruled in favor of Meta, Chhabria’s entire opening argues that AI systems are so damaging to artists and writers that their harm outweighs any possible transformative value — basically, because they’re spam machines.
It’s worth reading:
Generative AI has the potential to flood the market with endless amounts of images, songs, articles, books, and more. People can prompt generative AI models to produce these outputs using a tiny fraction of the time and creativity that would otherwise be required. So by training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way.
…
As the Supreme Court has emphasized, the fair use inquiry is highly fact dependent, and there are few bright-line rules. There is certainly no rule that when your use of a protected work is “transformative,” this automatically inoculates you from a claim of copyright infringement. And here, copying the protected works, however transformative, involves the creation of a product with the ability to severely harm the market for the works being copied, and thus severely undermine the incentive for human beings to create.
…
The upshot is that in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission. Which means that the companies, to avoid liability for copyright infringement, will generally need to pay copyright holders for the right to use their materials.
And boy, it sure would be interesting if somebody would sue and make that case. After saying that “in the grand scheme of things, the consequences of this ruling are limited,” Chhabria helpfully noted this ruling affects only 13 authors, not the “countless others” whose work Meta used. A written court opinion is unfortunately incapable of physically conveying a wink and a nod.
Those lawsuits might be far in the future. And Alsup, though he wasn’t faced with the kind of argument Chhabria suggested, seemed potentially unsympathetic to it. “Authors’ complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works,” he wrote of the authors who sued Anthropic. “This is not the kind of competitive or creative displacement that concerns the Copyright Act. The Act seeks to advance original works of authorship, not to protect authors against competition.” He was similarly dismissive of the claim that authors were being deprived of licensing fees for training: “such a market,” he wrote, “is not one the Copyright Act entitles Authors to exploit.”
But even Alsup’s seemingly positive ruling has a poison pill for AI companies. Training on legally acquired material, he ruled, is classic protected fair use. Training on pirated material is a different story, and Alsup absolutely excoriates any attempt to say it’s not.
“This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” he wrote. There were plenty of ways to scan or copy legally acquired books (including Anthropic’s own scanning system), but “Anthropic did not do those things — instead it stole the works for its central library by downloading them from pirated libraries.” Eventually switching to book scanning doesn’t erase the original sin, and in some ways it actually compounds it, because it demonstrates Anthropic could have done things legally from the start.
If new AI companies adopt this perspective, they’ll have to build in extra but not necessarily ruinous startup costs. There’s the up-front price of buying what Anthropic at one point described as “all the books in the world,” plus any media needed for things like images or video. And in Anthropic’s case these were physical works, because hard copies of media dodge the kinds of DRM and licensing agreements publishers can put on digital ones — so add some extra cost for the labor of scanning them in.
But just about any big AI player currently operating is either known or suspected to have trained on illegally downloaded books and other media. Anthropic and the authors will be going to trial to hash out the direct piracy accusations, and depending on what happens, a lot of companies could be hypothetically at risk of almost inestimable financial damages — not just from authors, but from anyone who can demonstrate their work was illegally acquired. As legal expert Blake Reid vividly puts it, “if there’s evidence that an engineer was torrenting a bunch of stuff with C-suite blessing it turns the company into a money piñata.”
And on top of all that, the many unsettled details can make it easy to miss the bigger mystery: how this legal wrangling will affect both the AI industry and the arts.
Echoing a common argument among AI proponents, former Meta executive Nick Clegg said recently that getting artists’ permission for training data would “basically kill the AI industry.” That’s an extreme claim, and given all the licensing deals companies are already striking (including with Vox Media, the parent company of The Verge), it’s looking increasingly dubious. Even if they’re faced with piracy penalties thanks to Alsup’s ruling, the biggest AI companies have billions of dollars in investment — they can weather a lot. But smaller, particularly open source players might be much more vulnerable, and many of them are also almost certainly trained on pirated works.
Meanwhile, if Chhabria’s theory is right, artists could reap a reward for providing training data to AI giants. But it’s highly unlikely the fees would shut these services down. That would still leave us in a spam-filled landscape with no room for future artists.
Can money in the pockets of this generation’s artists compensate for the blighting of the next? Is copyright law the right tool to protect the future? And what role should the courts be playing in all this? These two rulings handed partial wins to the AI industry, but they leave many more, much bigger questions unanswered.
The Supreme Court has upheld a Texas law requiring age verification to access adult websites, saying despite First Amendment claims, the law “only incidentally burdens the protected speech of adults.” The ruling, in Free Speech Coalition v. Paxton, opens the door to age-gating in states nationwide.
The court ruled in Texas’ favor by a margin of six to three — with Justices Sonia Sotomayor, Elena Kagan, and Ketanji Brown Jackson dissenting — and the majority opinion was delivered by Clarence Thomas. “The First Amendment leaves undisturbed States’ traditional power to prevent minors from accessing speech that is obscene from their perspective,” writes Thomas. “That power includes the power to require proof of age before an individual can access such speech. It follows that no person — adult or child — has a First Amendment right to access such speech without first submitting proof of age.”
FSC v. Paxton, argued in January, concerns the Texas law HB 1181, which requires sites with a large proportion of sexually explicit material to use “reasonable age verification methods” to make sure users are at least 18 years old. It’s one of numerous age verification laws enacted across the country for adult content in recent years, and it reached the Supreme Court after being upheld by the Fifth Circuit Court of Appeals. In taking the case, the Supreme Court effectively decided to reconsider a 2004 ruling, Ashcroft v. ACLU, in which it determined a similar rule — the Child Online Protection Act — violated the First Amendment.
The court says that conclusion (as well as an earlier one on a similar age verification provision in the Communications Decency Act) is no longer appropriate thanks to the forward march of technology. “With the rise of the smartphone and instant streaming, many adolescents can now access vast libraries of video content — both benign and obscene — at almost any time and place, with an ease that would have been unimaginable at the time of Reno and Ashcroft II,” it says.
Now, “adults have no First Amendment right to avoid age verification” that’s intended to block content deemed obscene for minors, the court ruled. The ruling hinged largely on the court declining to apply the highest level of scrutiny to the law — a measure that’s required in many speech-related cases and typically is difficult to overcome. Applying that scrutiny might “call into question all age-verification requirements, even longstanding in-person requirements” to access content that’s outlawed for minors, writes Thomas.
At least 21 other states have “materially similar” age verification rules for adult content, the ruling notes. It also notes that some sites have already implemented age verification rules, though others — primarily Pornhub — have ceased operation in states like Texas instead. Pornhub declined comment on whether, and how, the ruling might affect its operations.
There are several possible ways to enact online age verification, but critical evaluations have concluded that few (if any) methods manage to effectively block minors’ access without potentially compromising adults’ privacy — posing much higher risks than flashing an ID at the door of a brick-and-mortar store. At the time of Ashcroft v. ACLU, the Supreme Court determined that parent-controlled content filters could provide comparable protections without the same risks. But during oral arguments, justices seemed sympathetic to arguments that the internet had become meaningfully more dangerous to children and that these optional methods had failed.
The three dissenting justices argued that HB 1181 directly impacted legal speech for adults and should have been subject to the high bar of strict scrutiny, even if it’s possible it would have passed it. “H. B. 1181’s requirements interfere with — or, in First Amendment jargon, burden — the access adults have to protected speech: Some individuals will forgo that speech because of the need to identify themselves to a website (and maybe, from there, to the world) as a consumer of sexually explicit expression,” the dissent says.
In a statement, Alison Boden, executive director of the Free Speech Coalition — which represents the adult media industry — said that “as it has been throughout history, pornography is once again the canary in the coal mine of free expression. The government should not have the right to demand that we sacrifice our privacy and security to use the internet. This law has failed to keep minors away from sexual content yet continues to have a massive chilling effect on adults. The outcome is disastrous for Texans and for anyone who cares about freedom of speech and privacy online.”
While FSC v. Paxton deals with age verification for pornographic content, a similar fight is brewing over verification measures for social media and other web services. Texas Gov. Greg Abbott last month made Texas the second state (after Utah) to require mobile app stores to confirm user ages, despite lobbying against the bill by Apple.
The Department of Justice reported yesterday that it filed a civil complaint to seize roughly $225.3 million in cryptocurrency linked to crypto investment scams. In a press release, the DOJ said it traced and targeted accounts that were “part of a sophisticated blockchain-based money laundering network” dispersing funds taken from more than 400 suspected victims of fraud.
The 75-page complaint filed in the US District Court for the District of Columbia lays out more detail about the seizure. According to it, the US Secret Service (USSS) and Federal Bureau of Investigation (FBI) tied scammers to seven groups of Tether stablecoin tokens. The fraud fell under what’s typically known as “pig butchering”: a form of long-running confidence scam aimed at tricking victims — sometimes with a fake romantic relationship — into what they believe is a profitable crypto investment opportunity, then disappearing with the funds. Pig butchering rings often traffic the workers who directly communicate with victims to Southeast Asian countries, something the DOJ alleges this ring did.
The DOJ says Tether and crypto exchange OKX first alerted law enforcement in 2023 to a series of accounts they believed were helping launder fraudulently obtained currency through a vast and complex web of transactions. The alleged victims include Shan Hanes (referred to in this complaint as S.H.), the former Heartland Tri-State Bank president who was sentenced to 24 years in prison for embezzling tens of millions of dollars to invest in one of the best-known and most devastating pig butchering scams. The complaint lists a number of other victims who lost thousands or millions of dollars they thought they were investing (and did not commit crimes of their own). An FBI report cited by the press release concluded overall crypto investment fraud caused $5.8 billion worth of reported losses in 2024.
Money recovered from this seizure will be put toward returning funds to the known victims of the scammers, the DOJ says. The fervently pro-crypto Trump administration has previously said forfeited money that isn’t sent to victims could be used to fund a US cryptocurrency reserve.
Valve is introducing accessibility features for players with disabilities in its latest beta for Steam Big Picture Mode and SteamOS. The features — listed in full and explained here — include options to modify the Steam UI, like a high contrast mode, as well as a built-in screen reader for SteamOS.
In its post, Valve describes the features as “just the first accessibility features we’re making available.” For now, players on both Big Picture Mode and SteamOS get options to adjust the Steam UI, including the high contrast mode.
SteamOS devices (at this point, the Steam Deck and Lenovo Legion Go S) also get the built-in screen reader.
The features are available in a new Accessibility tab in the settings.
Earlier this month Valve also started letting Steam users filter games by accessibility support — including some options similar to the ones above, as well as adjustable difficulty and speech-to-text or text-to-speech chat. It’s encouraging players with disabilities to suggest more features in a discussion thread (a mono audio toggle is looking popular). And for anyone who doesn’t need these features, while I haven’t been able to try the beta yet, it sounds like we might all be getting a bare-bones universal Kurosawa mode.