
Introduction to AI and Military Applications
The growing adoption of artificial intelligence (AI) across civilian and military sectors has raised serious concerns about its dual-use potential, particularly in military contexts such as ISTAR (Intelligence, Surveillance, Target Acquisition, and Reconnaissance). Recent AI advancements have made once-hypothetical risks a reality: foundation models created for civilian use are now being repurposed for military operations. More worryingly, this shift has largely proceeded without adequate policy discussion of the potential for civilian harm.
Focus on Narrow AI Risks
Traditional policy debates have centred around using AI to create CBRN (Chemical, Biological, Radiological, Nuclear) weapons. This has led to a disproportionate focus on a few extreme, hypothetical scenarios while neglecting the far-reaching, real-world implications of AI systems currently used in military contexts. Specifically, AI models in ISTAR operations represent immediate risks that could have life-or-death consequences for civilians. Policymakers should be paying more attention to how widely available commercial models might inadvertently leak personal information, contributing to adversaries’ military intelligence capabilities.
The Role of Personal Data in ISTAR
Personal information collected from data brokers or scraped from public platforms is often embedded in commercial AI models. This data can be misused for ISTAR military applications, including surveillance and targeting. Some defence contractors are already using AI trained on civilian data to improve battlefield intelligence, and this infusion of personal data into military systems amplifies the potential for mistakes with deadly consequences.
Failures in Current Policy Interventions
Current regulatory strategies, including compute thresholds for AI models and restrictions on the public release of model weights, are inadequate. Compute thresholds, devised to gate the most advanced general-purpose capabilities, don't align with the capabilities that matter for military AI systems, which can be built well below those thresholds. Moreover, techniques such as model extraction and model inversion attacks mean adversaries don't need access to a model's weights or underlying code to pull sensitive data out of it. This gap demands that we rethink how we regulate both the models themselves and the data they're trained on.
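To make the extraction point concrete, the toy sketch below uses plain Python with a fabricated "model" and a fabricated personal record; it is not any real system or attack tool. It shows how an adversary with nothing but query access to a next-character interface can recover a memorised record verbatim, which is why withholding model weights alone does not close this gap.

```python
# Toy sketch (no real system or dataset): data extraction through query access
# alone. The "model" is a tiny character-level n-gram table and the personal
# record below is fabricated, but the point carries over: an adversary who can
# only query a model, never see its weights, can still recover memorised data.
from collections import defaultdict, Counter

TRAINING_CORPUS = [
    "weather report for sector seven: clear skies expected",
    "contact: jane.doe@example.org, phone +1-555-0100, home address 12 elm st",
    "logistics memo: supply convoy departs at dawn",
]

def train(corpus, context_len=3):
    """Count which character follows each 3-character context in the corpus."""
    counts = defaultdict(Counter)
    for doc in corpus:
        padded = " " * context_len + doc
        for i in range(len(padded) - context_len):
            counts[padded[i:i + context_len]][padded[i + context_len]] += 1
    return counts

MODEL = train(TRAINING_CORPUS)

def query(text, context_len=3):
    """The only interface exposed to the adversary: most likely next character."""
    options = MODEL.get(text[-context_len:])
    return options.most_common(1)[0][0] if options else None

def extract(prompt, max_len=100):
    """Greedy decoding through the query interface, never touching the weights."""
    text = prompt
    while len(text) < max_len:
        nxt = query(text)
        if nxt is None:
            break
        text += nxt
    return text

# Recovers the fabricated personal record verbatim from queries alone:
print(extract("contact: "))
```

Real foundation models are vastly larger, but memorisation-and-extraction behaviour of this kind has been demonstrated against them too, which is what makes weight-release restrictions an incomplete control.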
Training Data as the Key Vulnerability
The risks posed by foundation models extend beyond the technology's computing power: central to any AI model's capabilities is its training data. The inappropriate use of personal data in foundation models built for commercial purposes presents an unacceptable risk when these models are extended into ISTAR military systems. The use of large datasets, often unchecked for sensitive personal information, creates opportunities for both malicious and accidental misuse in ways that traditional safeguards such as compute limits fail to address.
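One illustration of the missing safeguard is a pre-training screening pass that flags records containing obvious personal identifiers before they enter a corpus. The sketch below is a minimal assumption-laden example: the patterns and records are illustrative only, and regex matching on its own is nowhere near a complete personal-data detector.

```python
# Minimal sketch of pre-training PII screening, the kind of check the text
# argues large web-scraped corpora often skip. Patterns and records are
# illustrative only; production systems need far broader detectors.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def screen(records):
    """Split records into (clean, flagged) before they enter a training set."""
    clean, flagged = [], []
    for rec in records:
        hits = [name for name, pat in PII_PATTERNS.items() if pat.search(rec)]
        (flagged if hits else clean).append((rec, hits))
    return clean, flagged

records = [
    "convoy logistics summary for the quarterly report",
    "reach me at jane.doe@example.org or +1 555 0100",  # fabricated record
]
clean, flagged = screen(records)
print("clean:", [r for r, _ in clean])
print("flagged:", flagged)
```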
Recommendations for Future Interventions
Governments and institutions must reform policies to account for the dual-use risks of foundation models. Policymakers should prioritize stricter controls on personal data while ensuring that these models are traceable throughout their development lifecycle, so that vulnerabilities cannot creep into the data pipeline unnoticed. Given the significant risks involved, it may also be necessary to maintain separate, highly secure models designed specifically for military purposes rather than fine-tuning commercially available models.
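As a sketch of what lifecycle traceability could look like in practice, the snippet below records a content hash, source, and personal-data flag for every artefact entering a training pipeline, so a trained model can later be traced back to the exact data it saw. The field names and file paths are hypothetical assumptions for illustration, not a proposed standard.

```python
# Hedged sketch of dataset traceability: log a content hash, source, and
# personal-data flag for each artefact that enters the training pipeline.
# Field names and paths here are assumptions for illustration only.
import datetime
import hashlib
import json

def manifest_entry(path, source, contains_personal_data):
    """Return a provenance record for one training artefact."""
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return {
        "path": path,
        "sha256": digest,
        "source": source,
        "contains_personal_data": contains_personal_data,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# Hypothetical usage (file names are placeholders):
# entries = [manifest_entry("corpus/part-0001.txt", "licensed-news-archive", False)]
# with open("training_manifest.json", "w") as out:
#     json.dump(entries, out, indent=2)
```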
Conclusion: Preventing Future Risks
To mitigate AI’s dual-use risks in military contexts, policymakers must widen the scope of their interventions beyond CBRN weaponry and apply stringent controls to data. Addressing the use of personal information in AI models used for military intelligence is critical to preventing the proliferation of ISTAR capabilities that could harm civilians and erode global trust. Embedding traceability and data transparency in the creation of these models is vital to safeguarding both national security and civilian privacy.