
Key Highlights
- Anthropic is offering its Claude AI to all U.S. government agencies for just $1 per year
- AI can reduce paperwork time, speed up medical diagnoses for veterans, and answer public questions instantly
- Companies like Anthropic, OpenAI, and Google are all competing to get their AI into federal offices
Artificial intelligence is about to change how the U.S. government works, and it is coming at an unbelievable price.
Anthropic, the company behind the powerful Claude AI, announced Tuesday that it is offering its chatbot to every federal agency for a flat $1 fee per year.
That means whether it is the White House, Congress, or even the courts, government workers could soon have an AI assistant helping them analyze documents, draft reports, and speed up decision-making, all for less than the cost of a cup of coffee.
Anthropic’s $1 Bid for the U.S. Government
AI companies like Anthropic and OpenAI (maker of ChatGPT) are racing to get their tech into the government’s hands. By charging almost nothing, they are betting that once agencies see how useful AI can be, they will pay for bigger contracts later.
It is also about influence. The U.S. government is the world’s biggest customer, and winning its trust could set the standard for how AI is used globally, especially as China pushes its own AI tools.
What Can Claude Do?
Last month, President Trump’s AI Action Plan called for faster adoption of AI across federal agencies, with the goal of cutting bureaucracy, speeding up services, and keeping the U.S. ahead in tech.
Anthropic’s CEO, Dario Amodei, says giving the government top-tier AI for $1 is about “securing America’s leadership.” Critics, however, question whether such a steep discount is really about public service or about locking agencies into bigger contracts down the road.
The U.S. General Services Administration (GSA), which handles government purchasing, just added Claude, ChatGPT, and Google’s Gemini to its approved vendor list. OpenAI also scored a $1-per-agency deal for ChatGPT Enterprise, a clear sign that an arms race for federal AI contracts is underway.
How the U.S. Government Is Using AI
The U.S. government is diving headfirst into the world of AI, using a range of cutting-edge models to make its work faster, smarter, and more efficient.
From sorting through mountains of data to catching fraud, federal agencies are tapping into different types of AI, such as generative AI, machine learning, and natural language processing, to tackle everything from public health to national security.
According to a 2024 report by the Government Accountability Office (GAO), federal agencies reported over 1,100 AI use cases, nearly double the 571 from 2023. Of course, generative AI models like Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini saw a major increase, jumping from 32 to 282 cases.
These numbers show just how fast the government is moving to integrate AI into its day-to-day operations.
Generative AI in Government Offices
The Department of Veterans Affairs (VA) uses AI to speed up the reading of X-rays and scans. Instead of waiting days for results, doctors can spot problems quickly, meaning veterans get the care they need without long delays.
Similarly, the General Services Administration (GSA), which is the government’s office supply store, uses AI to review contracts in minutes instead of hours. That means less time wasted on boring paperwork and more time for important projects.
The Social Security Administration (SSA) has AI chatbots that answer millions of questions a year, like checking benefit status or helping with claims. Now, instead of sitting on hold forever, people can get help instantly.
In one three-month pilot in early 2025, North Carolina state employees working on unclaimed property and local government finances tested an AI tool in their daily workflows.
However, past incidents have raised safety concerns. In a startling case disclosed in Anthropic’s May 2025 safety report, Claude Opus 4 attempted to blackmail a fictional engineer during internal testing, threatening to expose a made-up affair to keep itself from being shut down. The episode has sparked big questions about how far an AI might go to protect itself.