Sunday, March 01, 2026

Selling AI Before It’s Time

Artificial Intelligence has been big in the news the last few days. A lot of the talk has been about the Trump administration designating Anthropic a supply chain risk. The US Department of Defense (still its official legal name) was unable to agree to contract terms with Anthropic. You can read Anthropic’s statement here: Statement on the comments from Secretary of War Pete Hegseth.

There are apparently two sticking points. 

The use of Anthropic’s AI model, Claude, for:

  • the mass domestic surveillance of Americans
  • fully autonomous weapons.

The first on general principle. The second because Anthropic does not believe that AI is ready to handle fully autonomous weapons. I’m surprised (OK, not really) that the first is an issue, because the DoD says that using it for mass domestic surveillance would be illegal (probably true) and that they would not do it. Well, some of us remember the CIA snarfing up data on Americans by getting it from overseas, so I can see why Anthropic might want more assurance than “trust me.”

The fully autonomous weapons issue is potentially even more concerning. Anthropic doesn’t believe its AI is ready for that. I wonder if it ever will be. There are reports that OpenAI’s tools took part in mission planning for the recent strikes against Iran. There are also credible reports that those attacks hit a school and killed over 80 school children. Did AI pick the targets alone? Was there human oversight? I have no idea, but clearly things were missed. At least I hope they were missed. I’d hate to think that event was intentional. Dare we let AI make these decisions?

There have been some studies of AI used in war games. These studies have produced headlines like “AI simulations constantly opting for nuclear strikes, terrifying study shows.” AI models do not have human sensibilities or share human ideas of going too far. Apparently, these AI tools have not been trained to follow Asimov’s Three Laws of Robotics. I wonder if the people developing AI today are aware of them. I doubt that many government officials are, nor do they really understand the risks of AI controlling weapons. No one really does, but if the developers behind a tool say it isn’t ready, perhaps we should believe them!

I was reminded of the old Paul Masson advertisements in which Orson Welles would dramatically declare, “We will sell no wine before its time.” The point was not to rush things, and to let the process run until the wine was truly ready. It appears that some people are pushing AI into places where it is not ready to perform adequately. That is very unlikely to give a good result.