
Europol Research Shares How ChatGPT Might Be Used to Facilitate, Fight Crime

Large language models (LLMs) have entered the chat. None more so than ChatGPT, an LLM released by OpenAI in November 2022 to allow the public to become familiar with the program.

ChatGPT now has 100 million users, making it the fastest-growing consumer Internet application to date. While many users are exploring how to use ChatGPT—from writing cover letters to creating lesson plans to drafting shopping lists—law enforcement is also carefully tracking how the new artificial intelligence (AI) tool will impact its work.

Recently, Europol’s Innovation Lab organized workshops with subject matter experts in operational analysis, serious and organized crime, cybercrime, counterterrorism, and information technology to explore how criminals could abuse LLMs—specifically ChatGPT. The findings were then compiled in ChatGPT—The Impact of Large Language Models on Law Enforcement, a report published on Monday.

“The aim of this report is to raise awareness about the potential misuse of LLMs, to open a dialogue with Artificial Intelligence (AI) companies to help them build in better safeguards, and to promote the development of safe and trustworthy AI systems,” according to a Europol press release.

ChatGPT Basics

ChatGPT is an LLM—a type of AI system that can process, manipulate, and generate text. It requires training to work. ChatGPT was trained on 45 terabytes of text from the Internet, first through unsupervised learning (predicting missing words in a given text to learn the structure and patterns of human language) and then through reinforcement learning from human feedback (where human input helped ChatGPT adjust its parameters to perform better).
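For a sense of what that unsupervised objective looks like in practice, the sketch below trains a deliberately tiny language model to predict the next token in a sequence. Everything here (the toy model, the vocabulary size, the random stand-in corpus) is an illustrative assumption in Python with PyTorch, not a reflection of OpenAI's actual architecture or training data.

```python
# Illustrative sketch only: a tiny language model trained on the
# next-token prediction objective described above. The model, vocabulary,
# and random "corpus" are hypothetical stand-ins, not OpenAI's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB_SIZE = 1000   # toy vocabulary; production tokenizers use ~50,000 tokens
EMBED_DIM = 64
SEQ_LEN = 16

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.hidden = nn.Linear(EMBED_DIM, EMBED_DIM)
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, tokens):                  # tokens: (batch, seq)
        h = torch.relu(self.hidden(self.embed(tokens)))
        return self.head(h)                     # logits: (batch, seq, vocab)

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random token IDs standing in for a training corpus.
batch = torch.randint(0, VOCAB_SIZE, (8, SEQ_LEN + 1))
inputs, targets = batch[:, :-1], batch[:, 1:]   # each position predicts the next token

optimizer.zero_grad()
logits = model(inputs)
loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))
loss.backward()                                 # learn from prediction errors
optimizer.step()
print(f"next-token prediction loss: {loss.item():.3f}")
```

Reinforcement learning from human feedback then fine-tunes a model trained this way, rewarding the outputs that human reviewers rate highly.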

ChatGPT is currently on its fourth model—GPT-4—which can solve more advanced problems than previous iterations. Its training data, however, only extends to September 2021. It also sometimes provides answers that sound plausible but are inaccurate or outright wrong.

“GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its data cuts off (September 2021), and does not learn from its experience,” OpenAI wrote in a blog post announcing the new version. “It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obvious false statements from a user. And sometimes it can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces.”

In an interview with technology reporter Kara Swisher, OpenAI CEO Sam Altman said it was important to release ChatGPT to the public to allow society to adjust to the existence of these new tools.

“Now, the reason we’re doing this work is because we want to minimize those downsides while still letting society get the big upsides, and we think it’s very possible to do that,” Altman explained. “But it requires, in our belief, this continual deployment in the world, where you let people gradually get used to this technology, where you give institutions, regulators, policy-makers time to react to it, where you let people feel it, find the exploits, the creative energy the world will come up with—use cases we and all the red teamers we could hire would never imagine.”

OpenAI has also been adding safeguards to ChatGPT to prevent malicious use of the technology. For instance, OpenAI’s policies allow ChatGPT to assess text input for sexual, hateful, violent, or self-harm-promoting content and restrict its ability to respond to such prompts. This is not a failsafe method, however, since prompt engineering can be used to circumvent some of these policies.
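As a rough illustration of how this kind of input screening can work, the sketch below uses the moderation endpoint from the openai Python library (as it existed at the time of writing) to check a prompt before it is sent to a chat model. The wrapper function and control flow are assumptions for the example; they do not describe ChatGPT's internal safeguards.

```python
# Hedged sketch: screen a user prompt with OpenAI's moderation endpoint
# before passing it to a chat model. Illustrative flow only; this is not
# how ChatGPT implements its safeguards internally.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the moderation model flags the prompt."""
    response = openai.Moderation.create(input=prompt)
    result = response["results"][0]
    if result["flagged"]:
        # Flagged categories include "hate", "self-harm", "sexual", "violence".
        flagged = [name for name, hit in result["categories"].items() if hit]
        print(f"Prompt rejected; flagged categories: {flagged}")
        return False
    return True

if is_prompt_allowed("How do I reset a forgotten router password?"):
    print("Prompt passed moderation; safe to send to the chat model.")
```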

Europol Findings

Europol’s analysis of ChatGPT found that while the information that it provides is already available on the Internet, “the possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime.”

Europol researchers were especially interested in how ChatGPT can be used to enhance phishing by creating more authentic-seeming phishing and online fraud campaigns faster and at greater scale.

“ChatGPT may therefore offer criminals new opportunities, especially for crimes involving social engineering, given its abilities to respond to messages in context and adopt a specific writing style,” according to the report. “Additionally, various types of online fraud can be given added legitimacy by using ChatGPT to generate fake social media engagement, for instance to promote a fraudulent investment offer.”

Additionally, ChatGPT’s ability to produce authentic-sounding text quickly and at large scale could allow it to facilitate propaganda and disinformation campaigns with little effort from users.

“For instance, ChatGPT can be used to generate online propaganda on behalf of other actors to promote or defend certain views that have been debunked as disinformation or fake news,” the report explained.

The Europol researchers also highlighted concerns that ChatGPT could be used to enable those without technical knowledge to engage in cybercrime, such as creating phishing pages or malicious Visual Basic for Applications (VBA) scripts.

“This type of automated code generation is particularly useful for those criminal actors with little to no knowledge of coding and development,” according to the report. “Critically, the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing. If prompts are broken down into individual steps, it is trivial to bypass these safety measures.”

While these abilities exist today, the Europol researchers also highlighted concerns for the future. For instance, as LLMs become more advanced, they could be integrated with other AI services to create synthetic media—such as deepfakes.

“Other potential issues include the emergence of ‘dark LLMs,’ which may be hosted on the Dark Web to provide a chatbot without any safeguards, as well as LLMs that are trained on particular—perhaps particularly harmful—data,” the researchers explained. “Finally, there are uncertainties regarding how LLM services may process user data in the future—will conversations be stored and potentially expose sensitive personal information to unauthorized third parties? And if users are generating harmful content, should this be reported to law enforcement authorities?”

Armed with this knowledge of how criminals may begin to use ChatGPT, the Europol researchers wrote that it is critical for law enforcement and non-governmental organizations to be involved in creating new and enhanced safeguards to limit how LLMs can be used for illegal activity.

They also recommended that law enforcement officers begin developing skills to make the most of LLMs, including ChatGPT, to understand how they can be leveraged to build their knowledge, expand their expertise, and extract results. Law enforcement agencies may also want to explore creating customized LLMs for tailored purposes.

“As technology progresses, and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse, as well as to ensure potential benefits can be taken advantage of,” according to the report. “This report is a first exploration of this emerging field. Given the rapid pace of this technology, it remains critical that subject matter experts take this research further and dive deeper if they are to grasp its full potential.”

What ChatGPT Had to Say

After reviewing the Europol report, Security Management decided to ask ChatGPT itself what its impact might be on law enforcement in Europe.

ChatGPT highlighted some opportunities for law enforcement in Europe, such as enhanced language translation abilities for police officers who need to communicate with others who speak a different language (a common situation in Europe); predictive policing capabilities (to help law enforcement allocate resources appropriately); automated transcription of audio and video recordings; and improved intelligence gathering capabilities from social media.
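To make the translation example concrete, here is a minimal sketch using the openai Python library’s ChatCompletion API; the model choice, prompt wording, and helper function are illustrative assumptions, not a recommended deployment.

```python
# Hedged sketch of the translation use case: ask a chat model to translate
# text into a target language. Model and prompt are illustrative choices.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def translate(text: str, target_language: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Translate the user's message into {target_language}."},
            {"role": "user", "content": text},
        ],
    )
    return response["choices"][0]["message"]["content"]

# Example: a field report in French translated for an English-speaking officer.
print(translate("Le suspect a quitté la gare vers 22 heures.", "English"))
```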

It also said there were ethical considerations to take into account when using ChatGPT in a law enforcement capacity since it could exacerbate issues with bias and discrimination.

“For example, if the data used to train a large language model is biased in some way, such as containing more information about certain types of crimes or certain types of suspects, then the model may be more likely to produce biased outputs when used in real-world applications,” ChatGPT wrote. “This could have serious implications for law enforcement, as biased outputs could lead to unfair treatment of certain groups of people.”

ChatGPT went on to provide a specific scenario: an LLM used to predict where crime is likely to occur is trained on data biased against certain neighborhoods or demographics, so it produces predictions that unfairly target those groups for increased police presence.

“Similarly, if a large language model used to analyze social media data to gather intelligence on criminal activity is trained on data that is biased against certain types of people or activities, then the model may produce outputs that unfairly target those groups for surveillance or investigation,” ChatGPT added. “It's important for law enforcement agencies to be aware of these potential biases and to take steps to mitigate them. This might include using more diverse and representative data to train models, monitoring the outputs of models to ensure that they are not producing biased results, and implementing policies and procedures to ensure that models are being used fairly and ethically.”

ChatGPT declined to answer questions about how it might be used to facilitate or encourage crime, but did say that like any technology, LLMs can be used to generate fake news, spread disinformation, or create phishing emails.

It also explained that there are safeguards law enforcement can put in place to help limit the use of LLMs for criminal purposes. Many of these suggestions overlapped with the Europol researchers’ findings—including education and awareness about LLMs, how they can be used, and how to detect the content they create; using other AI technologies to detect and prevent the use of LLMs for criminal purposes; regulation and oversight of LLMs; and collaboration between tech companies, academia, and law enforcement agencies.

“Ultimately, preventing the use of large language models for criminal purposes will require a collaborative effort from law enforcement agencies, technology companies, policymakers, and the public,” ChatGPT said.
