While AI ethics experts continue to debate the risks of using artificial intelligence in military systems, the U.S. Department of Defense has announced a series of agreements with technology companies. According to a report by the Associated Press, the Pentagon has struck deals with seven leading IT firms to deploy their commercial AI models within secure military networks, an environment where such algorithms had previously seen almost no use.
These agreements extend far beyond the delivery of standard hardware: according to AP, they provide for the use of off-the-shelf commercial models for data analysis, logistics, reconnaissance, and decision support for service members. While the Pentagon has historically favored proprietary developments and isolated solutions, the department is now intentionally pursuing closer collaboration with the private sector to accelerate the implementation of cutting-edge technologies.
The establishment of these agreements strengthens the technological presence of the U.S. in the field of military AI amid global competition, particularly from China, which is actively advancing its own AI militarization programs. At the same time, experts point out that commercial companies originally design algorithms for tasks related to user convenience, advertising, and services, rather than for decision-making in the heat of combat operations.
Questions regarding accountability for algorithmic errors, potential training biases, and the boundaries of acceptable AI use in military systems remain unresolved. Official Pentagon statements emphasize rigorous oversight and "lawful operational use," yet the high level of classification makes fully open verification and transparent public debate difficult to achieve.
Analysts note that such deals reflect a broader trend where the state can no longer rely exclusively on internal developments, leading it to delegate a portion of innovation to private labs and corporations. In turn, technology companies gain access to substantial government funding and datasets that would be inaccessible to them under normal circumstances.
Comparing a commercial AI that plots a smartphone route with the same class of algorithms used to control unmanned platforms or analyze intelligence data shows how drastically the stakes of model flaws and errors escalate. While the consequences of failure in civilian tasks are typically limited, errors in a military context could carry far more severe repercussions.
These steps by the U.S. are expected to accelerate AI integration in the military programs of other nations, including those already actively developing their own military AI systems. Meanwhile, scientists and human rights advocates continue to insist on clear rules and regulations for military AI applications, noting that deployment in practice is moving faster than the relevant norms and international agreements.
These agreements illustrate that the boundaries between civilian and military technologies are becoming increasingly blurred. The central question is not whether such systems will emerge, but how their use will be controlled and how society and regulators will manage to balance security interests with innovation and ethical standards.