It sounds like a good idea: Install an AI assistant to help people get food stamps in Maryland.

What happens, though, when your innovation partner gets banned by the federal government?

“How can I help you?” is what the AI assistant Claude asks at the start of every chat.

And President Donald Trump’s administration is really angry with Anthropic, the company behind Claude.


Defense Secretary Pete Hegseth declared the company a supply-chain risk last week, saying its ethical red lines threatened national defense. The White House is finalizing an executive order banning Anthropic tools across the government, Axios reported.

“America’s warfighters will never be held hostage by the ideological whims of Big Tech,” Hegseth wrote on X. “This decision is final.”

This is a fight over who controls what AI can do: its creators or the government. What it means for Maryland’s experiment with San Francisco-based Anthropic is unclear.

“We will have to talk with our counterparts in the federal government, because the partnership with Anthropic ... is not currently drawing down any federal funding for this program,” said Katie Savage, Maryland’s information technology secretary. “So we have to check in with anything that would put us out of compliance.

“And if it does, of course, well, we’ll redirect.”


Gov. Wes Moore announced the partnership with Anthropic and another company, Percepta, in November. They are building Claude into agencies that deliver public benefits, plus the state labor and environment departments.

Funded by $525,000 in Rockefeller Foundation grants, it was intended as a model for other states.

Katie Savage, Maryland's first secretary of information technology, has helped oversee a project to put Claude in applications for public safety net programs. (Maryland Department of Information Technology)

“It won’t necessarily be used to process benefits directly, because we believe in having a human in the loop,” Savage said. “But it is going to be utilized to form a knowledge bed and be a chatbot for state workers in light of all of the different HR 1 requirements that have come down.”

Known as the One Big Beautiful Bill Act, HR 1 slashed Medicaid and drastically changed federally funded benefits. If Trump’s executive order bans Claude in federal programs, Savage said, the state might have to work with other companies.

Maryland has already made changes. It relaunched its online portal last year for food benefits, Medicaid, cash and energy assistance programs as MarylandBenefits.org. It offers a single application for many programs.


An AI assistant wasn’t included, Savage said, because the state wasn’t yet comfortable with the technology. The Pentagon has no such qualms.

Claude was the first chatbot to work with classified intelligence sources, and the Pentagon leaned on it to plan the capture of Venezuelan President Nicolás Maduro in January.

America and Israel used it next in Iran, combining Claude with Palantir Technologies’ Maven Smart System to constantly update human commanders with battlefield targets.

Although the Trump administration is busting all sorts of norms, there’s nothing new about raising ethical questions about new technologies. Google engineers did it during the Afghan war.

The red line for Anthropic co-founder Dario Amodei was the use of Claude for mass domestic surveillance and the creation of fully autonomous weapons.


“In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei wrote. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

Other AI companies are crossing Amodei’s lines.

Palantir works with the National Security Agency at Fort Meade, conducting mass surveillance overseas. That intelligence helped target leadership figures in Iran.

Dario Amodei, CEO and co-founder of Anthropic, asked the Pentagon to exclude its technology from use in mass domestic surveillance and autonomous weapons. (Markus Schreiber/AP)

The company is rolling out some of those same capabilities in ImmigrationOS, a $30 million AI platform Immigration and Customs Enforcement will use to track people for deportation.

Confronted with Amodei’s qualms, Hegseth presented an ultimatum. When his March 6 deadline passed, he labeled the company a supply-chain risk, locking it out of the defense industry.


“Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military,” Hegseth wrote. “That is unacceptable.”

As the director of the Digital Defense Service under President Joe Biden, Savage made similar evaluations.

“We don’t have a lot of information about the process by which Claude was designated as a supply-chain risk,” she said. “I’m curious about the process that it went through.”

The Pentagon seeded the companies producing AI systems with money and is buying their products. The fight will set a precedent for who controls the government’s use of them.

“I think, honestly, the best course is probably for the Defense Department to invest in some of their own technologies that would be explicitly for their purposes,” Savage said. “Short of making that investment, it’ll have to be, I think, a partnership and a conversation like we’re seeing now.”


Other companies are taking sides.

The Pentagon awarded OpenAI a replacement contract without Anthropic’s conditions. Anthropic sued the Pentagon, and Microsoft filed a brief in support of its rights to control its technology.

Gov. Wes Moore says he wants government to work smarter, adopting systems such as artificial intelligence. (Jerry Jackson/The Banner)

Maryland’s use of Claude seems far less dramatic. The state fields no autonomous weapons.

The state adopted a safe AI policy in May with seven principles: human-centered design, security, privacy, transparency, equity, accountability and effectiveness.

In June, it used bilingual AI from Anthropic and Code for America to help 18,000 new families with schoolchildren get food support. It added a self-service portal for environmental site assessments on building projects.


Now lawmakers in Annapolis are considering regulation of AI safety in schools, toys, hiring and health care.

Despite that, there isn’t much keeping a future governor from changing the safe AI policy. That governor might want chatbots to share information from benefit databases with law enforcement — a form of mass surveillance.

Savage believes state privacy laws would prevent that. Things, however, are clearly changing fast.

Amodei explained what he sees as right for democracies. He also pointed out that AI can’t safely do things the Pentagon might want.

Today.

“It’s something that we need to continuously revisit to ensure that the state is keeping pace with any appropriate guardrails that the technology evolves,” Savage said. “We’re evolving the guardrails as well.”