ChatGPT is an AI-powered chatbot developed by the US-based lab OpenAI, and it has garnered over 100 million active users since its launch in November 2022. While it has already made an impact across various fields, its potential effect on national security and defence remains unclear, as it has yet to replace humans in any meaningful capacity. Nonetheless, initial use cases and reactions to the tool suggest it already creates both benefits and risks for defence.
Built on GPT-3.5, a groundbreaking generative AI model, ChatGPT can converse with users and generate detailed, human-like responses to questions or prompts in various text formats. Across industries such as healthcare, real estate, public relations, marketing, customer service, and media, companies have started using generative AI tools for tasks ranging from scheduling appointments to writing articles.
Businesses large and small have long complained about time-consuming regulations in the defence sector, but smaller enterprises in particular often struggle to comply with the National Contract Regulation standards because they are unfamiliar with the processes and procedures of government contracting. AI tools can help alleviate this problem by assisting founders of small- and medium-sized businesses (SMBs) in getting proposals drafted and accepted.
The US Department of Defense has also recognised the potential of generative AI to speed up and simplify the federal acquisition process. This year, the DoD’s Chief Digital and AI Office announced that it is prototyping and testing an AI-powered contract-writing capability called “Acqbot.” The tool is intended to help contracting officers write contracts and manage the contract lifecycle, though it is still in development and requires substantial input data and human supervision.
Here in the UK, the Ministry of Defence (MoD) is exploring the use of AI and machine learning to support its operations. The Defence and Security Accelerator (DASA) has launched several initiatives to develop AI-powered solutions for military personnel, including one to improve situational awareness for pilots and another to detect and track hostile drones.
While tools like ChatGPT offer several benefits, they also pose a potential threat to national security by providing cybercriminals with an arsenal of potential use cases. ChatGPT is already being used by non-state threat actors, including script kiddies, hacktivists, and scammers, to engage in various forms of cybercrime. In the future, this technology could be harnessed by nation-state actors to conduct cyber espionage, information operations, and cyberattacks to increasingly devastating effect.
ChatGPT still has a long way to go before it can be relied upon for essential national security or defence contracting tasks. The information it produces is presented confidently yet is often unreliable without further verification. OpenAI’s FAQ page notes that ChatGPT sometimes produces incorrect or biased answers and has limited knowledge of anything that occurred after 2021. Some developer forums have even banned ChatGPT-generated answers because the tool often produces code with substantial errors, an obstacle for anyone using ChatGPT for either good or nefarious purposes.
Given these limitations, the current generation of AI tools will not revolutionise national security or government contracting overnight. Nevertheless, government contractors and the MoD workforce should develop a clearer understanding of the pros and cons that AI- and ML-based capabilities will bring to their industry in the coming years. Defence professionals who stay aware of AI advances such as ChatGPT will be best placed to take advantage of the technology’s benefits and to defend against its security risks.
(Header image is AI generated)