Despite the company being blacklisted by the US government, the WSJ reported on the AI model's role in a military mission in the Middle East. According to the WSJ, the Claude tool is used across various units, including US Central Command, to assess intelligence, identify targets, and model combat scenarios.

Conflict Around the Military Use of AI

Tensions between the Pentagon and Anthropic have continued for several months. In response, the US authorities added Anthropic to a blacklist, banned federal agencies from using its products, and labeled the company a national security threat. This has intensified debate about the risks of autonomous weapons and the role of AI in decision-making.
Source: Wall Street Journal March 01, 2026 10:29 UTC