Some recent articles on military issues give cause for concern. An article about the war in Ukraine describes "A New Model for Military Development," in which new strategies are developed on the battlefield itself, using, among other things, AI-driven navigation. According to Lockheed Martin, the goal is to "develop a capability, fly it the next day, and deliver it like mobile app updates." Although the benefits of being able to adapt quickly to new scenarios are obvious, the risks are less obvious but real nonetheless. Given AI's penchant for "hallucinating," and given the mixture of civilians and combatants in most war zones these days, the lack of time to test the reliability of these new weapons is cause for some concern. Though inconvenient, to be sure, a lengthy development process allows for appropriate testing.
A review by William Hartung of the book by Alex Karp, the CEO of the military tech firm Palantir, also raises concerns. In the review, titled "The New Age Militarists," Hartung criticizes Karp for writing to his shareholders that the rise of the West was not due to "the superiority of its ideas or values or religion…but rather by its superiority in applying organized violence."
Although Karp's opinion is that of one person, Hartung points out that "The grip of the military-tech center on the Trump administration is virtually unprecedented in the annals of influence-peddling." And amid all the cost-cutting going on, Senator Roger Wicker (R-MS) has pushed for a $150 billion increase in the Pentagon budget.
Finally, an article in MIT's Technology Review titled "OpenAI's New Defense Contract Completes Its Military Pivot" announces that AI will be deployed on the battlefield. The company states that it will help build AI models that "rapidly synthesize sensitive data, reduce the burden on human operators, and improve situational awareness…" This means that AI will be making decisions on the battlefield that were previously made by humans. Do we really want hallucination-prone AI making crucial decisions where civilians are involved? Civilians are already dying in large numbers; these systems would seem to raise the possibility of increasing that number.
Of course, it is entirely possible that these developments will enhance safety and accuracy on the battlefield. We just need assurances that this is so.
All of this calls for a serious review of how technology, including the various forms of artificial intelligence, is used in military actions. Until it was defunded in 1995, Congress had the Office of Technology Assessment, which was equipped for exactly such a review. That capacity should be reinstated in some form.
As always, let me hear from you. And share this with your friends and colleagues. My best, Alan