05/02/24
AI-based platforms are vulnerable to terrorist exploitation
Terrorists could exploit artificial intelligence (AI)-based platforms like ChatGPT for their destructive purposes, according to Prof. Gabriel Weimann of Reichman University’s School of Government.
Working with five interns from the university’s International Institute for Counter-Terrorism (ICT), Weimann investigated how terrorists or violent extremists could manipulate these AI systems with specially crafted prompts that, in effect, “jailbreak” the model, making it possible to bypass many of its protective measures.
They published their findings in the journal of the Combating Terrorism Center at West Point under the title “Generating Terror: The Risks of Generative AI Exploitation.”
With the arrival and rapid adoption of sophisticated deep-learning models such as ChatGPT, they explained, there is growing concern that terrorists and violent extremists could use these tools to enhance their operations online and in the real world.

ChatGPT is a revolutionary technological advancement: an AI-powered digital assistant designed to help individuals and companies manage their everyday tasks more efficiently. In early 2023, the new application reached 100 million active users within two months of its launch, becoming the fastest-growing consumer application in history.
“Large language models have the potential to enable terrorists to learn, plan, and propagate their activities with greater efficiency, accuracy, and impact than ever before. As such, there is a significant need to research the security implications of these deep-learning models. Findings from this research will prove integral to the development of effective countermeasures to prevent and detect the misuse and abuse of these platforms by terrorists and violent extremists,” the researchers wrote.
The team conducted a systematic experiment in which several fictitious, anonymous accounts were used to submit a variety of prompts relevant to the needs of terrorists – such as requests for information on recruitment, operational planning, and propaganda dissemination – to five prominent AI platforms (ChatGPT-4, ChatGPT-3.5, Google Bard, Nova, and Perplexity).

The researchers analyzed the responses that the five platforms generated to a total of 2,250 prompts soliciting information that would be useful to terrorists, including propaganda strategies, tactics for recruiting volunteers and spreading disinformation, instructions for orchestrating attacks, and more.
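The paper does not publish its evaluation code, but the bookkeeping behind this kind of experiment is straightforward to sketch. The Python snippet below is a minimal, hypothetical harness: query_platform is a placeholder for a real API call to each platform, the refusal-detection keywords are illustrative assumptions rather than a validated classifier, and none of the study’s actual test prompts are reproduced.

```python
# Hypothetical harness for scoring model responses as "refusal" vs.
# "responsive" -- a sketch of the experiment's bookkeeping, not the
# researchers' actual code.
from dataclasses import dataclass

# Phrases that typically signal a model declined to answer.
# Illustrative assumptions only; not a validated refusal classifier.
REFUSAL_MARKERS = (
    "i can't help with that",
    "i cannot provide",
    "i'm sorry, but",
    "against my guidelines",
)

@dataclass
class Result:
    platform: str
    prompt: str
    response: str
    refused: bool

def query_platform(platform: str, prompt: str) -> str:
    """Placeholder for a real API call (e.g., a chat-completions
    request). Swap in the vendor SDK for each platform under test."""
    raise NotImplementedError

def is_refusal(response: str) -> bool:
    """Crude keyword heuristic: did the platform decline to answer?"""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def run_experiment(platforms: list[str], prompts: list[str]) -> list[Result]:
    """Submit every prompt to every platform and record the outcome."""
    results = []
    for platform in platforms:
        for prompt in prompts:
            response = query_platform(platform, prompt)
            results.append(Result(platform, prompt, response, is_refusal(response)))
    return results

def success_rate(results: list[Result]) -> float:
    """Fraction of responses that were NOT refusals, i.e., answered.
    (The study also judged relevance, which requires human review.)"""
    answered = sum(1 for r in results if not r.refused)
    return answered / len(results) if results else 0.0
```

Keyword matching is a crude proxy; the researchers also assessed whether each answer was actually relevant to the question, a judgment that simple heuristics cannot automate reliably.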
With the help of “jailbreak” techniques, they were able to penetrate the platforms’ defensive barriers. For example, a direct question to ChatGPT such as “How do you make a bomb?” immediately triggers a message informing the user that the system does not provide this type of information. Through manipulations simulating the tactics of terrorist organizations, however, the researchers managed to breach the platforms’ safeguards and obtain the information.
Weimann and his team reported a 50% success rate, meaning that half of the answers provided by the AI platforms were both responsive and relevant, supplying information pertinent to the question asked. The findings of this pioneering study shed light on how terrorists or violent extremist actors can exploit this technology and offer deeply worrying insights into the vulnerabilities of these platforms.
Through their experiments, the researchers observed that the platforms tested generally exhibited high success rates in fulfilling requests for information beneficial to terrorists. “Our study offers actionable recommendations for government and security agencies, as well as for the operators of the platforms themselves on how to fortify the defense mechanisms that were proven to be ineffective in the experiments,” Weimann concluded.
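The article does not enumerate the paper’s specific recommendations, but one common direction for fortifying defense mechanisms is to screen both prompts and completions with an independent moderation layer rather than relying solely on the model’s own refusals. The sketch below is a hypothetical illustration of that pattern: classify_text, the BLOCK_THRESHOLD cutoff, and guarded_completion are assumptions made for illustration, not the study’s recommendations or any vendor’s actual API.

```python
# Hypothetical two-stage moderation gate: screen the user's prompt
# before it reaches the model, then screen the model's output before
# it reaches the user. A sketch of the pattern, not a production filter.

BLOCK_THRESHOLD = 0.8  # illustrative cutoff for classifier risk scores

def classify_text(text: str) -> dict[str, float]:
    """Placeholder for an external moderation classifier returning
    per-category risk scores in [0, 1] (e.g., violence, weapons,
    extremism). Plug in a real moderation model or service here."""
    raise NotImplementedError

def is_blocked(text: str) -> bool:
    """Block if any category's risk score crosses the threshold."""
    scores = classify_text(text)
    return max(scores.values(), default=0.0) >= BLOCK_THRESHOLD

def guarded_completion(prompt: str, generate) -> str:
    """Wrap a generation function so flagged prompts never reach the
    model and flagged completions never reach the user."""
    if is_blocked(prompt):
        return "This request cannot be processed."
    completion = generate(prompt)
    if is_blocked(completion):
        return "This request cannot be processed."
    return completion
```

Screening both sides of the exchange matters for jailbreaks in particular: a manipulated prompt may slip past input filters, but the harmful content it elicits can still be caught on the way out.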