AI and National Security

Context

  • Recent developments highlight the growing intersection between Artificial Intelligence (AI) and national security. Anthropic, an American AI company, has urged authorities to treat Chinese AI labs such as DeepSeek, MoonshotAI, and MiniMax as national security threats.
  • At the same time, AI models developed by American firms have reportedly been used by the U.S. military in its strikes on Iran, helping accelerate the “kill chain” from target identification through legal approval to the final strike.

Associated Issues Due to Intersection

  • AI Model Distillation and Strategic Competition
    • Chinese AI companies have been accused of distilling frontier AI models developed by American firms.
    • Distillation refers to the process by which a smaller or weaker (student) AI model is trained to imitate the outputs of a stronger (teacher) model, acquiring much of its capability at far lower cost.
    • Such practices raise concerns about technological diffusion and competitive advantage in the AI race.
  • Difficulty in Controlling AI Proliferation
    • Generative AI is often compared to nuclear technology, suggesting the need for strict non-proliferation measures.
    • However, the comparison is misleading because AI is a dual-use general-purpose technology, closer to semiconductors than nuclear weapons.
    • Unlike nuclear research driven by governments, AI innovation is largely conducted by private companies for civilian purposes.
    • Mathematical AI models are not scarce or easily traceable, making proliferation extremely difficult to control.
  • Ineffectiveness of Technology Restrictions
    • Restrictions on AI inputs such as semiconductors have often been circumvented or partially repealed.
    • The development of DeepSeek’s model at a fraction of the cost of frontier models demonstrates how innovation can bypass regulatory controls.
    • Distillation techniques create another pathway for technological diffusion, making restrictions even harder to enforce.
  • Ethical Concerns and Military Applications
    • Frontier AI models developed by companies such as Anthropic, OpenAI, Google, and xAI may be used in military systems for:
      • Surveillance operations
      • Cyber warfare
      • Lethal autonomous weapon systems
    • When Anthropic raised concerns about such uses, it risked losing defence contracts and being labelled a “supply chain risk”, while competitors like OpenAI accepted permissive military contracts.
    • This situation highlights a competitive “race to the bottom” among AI companies seeking government partnerships.
  • Talent Mobility and Knowledge Diffusion
    • AI talent is highly mobile, making restrictions on knowledge transfer difficult.
    • Many researchers working in Chinese AI laboratories were trained in U.S. universities or previously employed in American companies.
    • This global circulation of expertise makes technological containment strategies less effective.
  • Concentration of Power in the AI Industry
    • Calls for coordinated action across AI companies, cloud providers, and policymakers may strengthen the market dominance of a small group of technology companies.
    • Input-based restrictions risk consolidating technological power within a few firms, potentially harming innovation, scientific collaboration, and global economic development.
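The distillation process discussed above can be illustrated in a few lines: the student model is trained to match the teacher's softened output distribution rather than just its top answer. The sketch below is a minimal, hypothetical illustration of the core loss (the logits and temperature value are invented for the example), not any lab's actual training pipeline.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; a higher T softens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between the teacher's and student's softened outputs.

    Minimising this trains the student to imitate the teacher's full
    output distribution, which is the core of knowledge distillation.
    """
    p = softmax(teacher_logits, T)  # teacher's "soft targets"
    q = softmax(student_logits, T)  # student's current prediction
    return float(np.sum(p * np.log(p / q)))  # KL(p || q), always >= 0

# Hypothetical logits over four output classes
teacher = [4.0, 1.0, 0.5, 0.2]
student = [2.0, 1.5, 1.0, 0.8]
loss = distillation_loss(teacher, student)
```

A training loop would repeatedly adjust the student's parameters to drive this loss toward zero; when the student's outputs exactly match the teacher's, the loss is zero.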

Way Forward

  • Global Governance Frameworks: States should develop plurilateral agreements on responsible military use of AI technologies.
  • Human Oversight in Lethal Decisions: International norms must ensure meaningful human control over decisions involving lethal force.
  • Restrictions on Mass Surveillance: Agreements should include limits on the use of AI for large-scale civilian surveillance.
  • Auditable Technical Standards: Establish transparent and verifiable standards for AI systems used in defence operations.
  • Universal Applicability: Governance frameworks must apply across all states and actors to ensure effectiveness.
  • Balanced Innovation Policy: Policymakers should avoid overly restrictive technology controls that hinder innovation and international scientific cooperation.
