Generative AI for National Security

Project MinervAI

Despite the discourse around autonomous weapons and AI-enabled command-and-control systems, the national security apparatus still runs primarily on human-generated documents: policy memos, plans, orders, and intelligence reports. With the explosion of large language models (LLMs) like ChatGPT in the commercial sphere, the national security workforce may be poised for a revolution in knowledge work. Trained on massive text corpora, LLMs have demonstrated unexpected fluency in natural language and show promise for accelerating and enhancing writing and analysis. However, concerns remain around accuracy, security, and safety assurances. This project explores potential applications of LLMs and other generative AI to core national security functions such as policy development, intelligence analysis, planning, and operational reporting. It assesses use cases where LLMs may substantially increase productivity and quality of output while requiring only minimal human verification. Guiding questions include how generative AI could augment human expertise without achieving full automation, and how commercial advances can responsibly translate to high-stakes national security work that depends on trust and transparency. The project ultimately seeks to chart a path for the national security workforce to capitalize on AI's language breakthroughs while upholding its duties to the nation.

Team Members

Defense Innovation Scholar

David Vernal
