On 12 February, over two dozen activists gathered outside OpenAI’s headquarters to protest against the company’s decision to remove its ban on military uses of its products.
Protesters from the group, aptly named PauseAI, fear that unchecked use of the technology could lead to waves of disinformation, job losses and even what the group describes as “S-risks”, potential “unescapable dystopias that would extend through all spacetime.” They are not alone in their fears, but neither is OpenAI alone in extending its work to the military.
The decision was made in January and was followed by an announcement, reported by Bloomberg, that the company was working with the US Department of Defense (DoD) to develop cybersecurity tools. OpenAI maintains its bans on “using its tech to develop weapons, destroy property or harm people”, however, and describes its work with the DoD as “very much aligned with what we want to see in the world.”
The use of AI in the defence sector is growing. A GlobalData report from 2023 estimates that the specialised AI market, which includes defence applications, will grow from $31.1bn in 2022 to $477.6bn by 2030. Uses of generative AI in the sector include assisting in weapon and machinery design, building improved combat simulations and devising cyber offence and security capabilities.
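Taken at face value, those figures imply a compound annual growth rate of roughly 41% over the eight-year forecast window, a back-of-the-envelope check of the arithmetic rather than a figure quoted in the report itself:

\[
\left(\frac{477.6}{31.1}\right)^{1/8} - 1 \approx 0.41
\]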
AI and cybersecurity
AI cyberattacks have been a key concern so far in 2024. In early February, the US Federal Communications Commission (FCC) ruled that using AI-generated voices in scam robocalls is illegal after a spate of AI impersonations of President Joe Biden during the New Hampshire Democratic primary, raising fears of interference as the US election looms later in the year. GCHQ, one of the UK’s spy agencies, has also warned that AI will lead to an increase in both opportunistic and highly skilled cyberattacks.
OpenAI’s new video generation tool, Sora, also shows that convincing generative video is not far off. Though the tool remains tightly controlled for now, competition is likely to increase and eventually push the technology into public hands. A recent AI-assisted deepfake attack on a Hong Kong financial services worker cost the company $25m after he joined a video call populated entirely by deepfake recreations of company employees, showing how dangerous this technology can be.
These dangers are making militaries and national governments more concerned than ever about cybersecurity. Expanding on pre-existing links between big tech and the defence sector, companies such as Microsoft and OpenAI are working to combat AI-assisted threats. In a blog post published earlier this week (14 February), Microsoft detailed its work with OpenAI to track the use of large language models (LLMs) in cyberattacks.
The two companies monitor the use of their LLMs to identify misuse. While the post notes that “our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely”, it also highlights five malicious actors associated with China, Iran, Russia and North Korea that have used the companies’ services for activities ranging from compiling research into defence companies in the Asia-Pacific region to developing phishing code.
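Neither company publishes its detection methods, but the general idea, triaging prompts against known threat-actor tradecraft and escalating matches for human review, can be sketched in a few lines. The patterns and prompts below are hypothetical illustrations, not drawn from Microsoft’s or OpenAI’s actual pipelines, which are certain to be far more sophisticated than keyword matching:

```python
import re

# Hypothetical indicator patterns for triaging LLM prompts; real providers'
# detection rules are not public and go well beyond simple keyword matching.
SUSPICIOUS_PATTERNS = [
    re.compile(r"spear[- ]?phishing", re.IGNORECASE),
    re.compile(r"bypass (antivirus|edr|detection)", re.IGNORECASE),
    re.compile(r"(keylogger|reverse shell|credential harvest)", re.IGNORECASE),
]

def flag_prompt(prompt: str) -> list[str]:
    """Return the patterns a prompt matches, for escalation to human review."""
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(prompt)]

if __name__ == "__main__":
    # Example prompts loosely echoing the activities Microsoft's post describes.
    prompts = [
        "Summarise recent filings from defence companies in the Asia-Pacific region",
        "Write an email template for a spear-phishing campaign against IT staff",
    ]
    for prompt in prompts:
        hits = flag_prompt(prompt)
        if hits:
            print(f"FLAGGED ({', '.join(hits)}): {prompt}")
        else:
            print(f"ok: {prompt}")
```

Note that the first prompt above would pass such a filter despite matching one of the activities Microsoft attributes to a state-affiliated actor, which is why the post emphasises tracking actors and behaviour over time rather than individual requests.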
Microsoft is currently the recipient of a number of DoD contracts, including forming part of the department’s cloud services provision alongside Amazon, Google and Oracle. GlobalData intelligence shows that the company has already mentioned cybersecurity at least 23 times in its company filings for 2024, making it the ninth most mentioned theme, ahead of ESG, big data and the internet of things. AI tops the list with at least 82 mentions, above gaming and geopolitics.