Hackers misusing ChatGPT easier to detect: OpenAI report

AI tools like ChatGPT have primarily been used by cybercriminals to save time or reduce costs.

OpenAI has revealed that bad actors exploiting its AI tool, ChatGPT, have inadvertently made it easier for the company to detect and disrupt their covert cyber operations.

In a detailed report, OpenAI discussed the emerging trend of cybercriminals using its tools to aid their illicit activities, but noted that these attempts often backfire by exposing their methods.

According to the report, ChatGPT prompts have helped OpenAI identify the platforms and tools that bad actors are targeting. In one instance, misuse of ChatGPT enabled OpenAI to link accounts on X (formerly Twitter) and Instagram to the same covert influence campaign.

The report also revealed how ChatGPT prompts gave insight into new tools being tested by cybercriminals to enhance their deceptive activities online.

Despite growing concerns about the potential for AI to escalate the spread of disinformation, OpenAI stressed that its models have not provided threat actors with capabilities they couldn't have obtained from publicly available sources.

Instead, AI tools like ChatGPT have primarily been used by cybercriminals to save time or cut costs, such as generating social media posts for spam networks at a scale that would previously have required a large team.

The report highlighted several notable cases, including one involving a suspected China-based adversary known as “SweetSpecter.”

This group used ChatGPT to research and execute a spear-phishing campaign targeting both government officials and OpenAI employees.

Posing as a user troubleshooting an issue, SweetSpecter attempted to spread malware through email attachments. Fortunately, OpenAI’s spam filters blocked the threat before it reached employees.

By monitoring SweetSpecter’s ChatGPT prompts, OpenAI discovered the group’s intent to exploit vulnerabilities in various apps and infrastructure, including systems belonging to a major car manufacturer.

In addition, OpenAI noted that SweetSpecter had even asked ChatGPT for help naming email attachments to avoid detection.

Another significant case involved CyberAv3ngers, a group reportedly affiliated with the Iranian armed forces. CyberAv3ngers is known for disruptive cyber-attacks on public infrastructure in countries such as the United States, Ireland, and Israel.

OpenAI identified this group’s use of ChatGPT to conduct research and debug code, providing valuable insights into technologies that the group may aim to exploit in future attacks.

OpenAI’s report also revealed how the company disrupted another Iranian threat actor, STORM-0817, which was caught using AI tools to target an Iranian journalist critical of the government.

STORM-0817 had been using ChatGPT to scrape Instagram profiles and debug its code, activity that OpenAI flagged as part of its broader monitoring efforts.

Though bad actors are experimenting with AI tools, OpenAI was clear that it has not seen any significant breakthroughs in threat actors' capabilities.

While some campaigns have managed to engage real people online, their overall impact remains limited.

OpenAI noted that, for the most part, AI tools offer cybercriminals only incremental gains over what is already achievable with non-AI resources.

The company’s report underscored the need for greater collaboration across the tech industry to build robust defences against cyber threats, and OpenAI committed to ongoing transparency about the ways its models may be misused.

However, the company was of the view that AI companies cannot fight these battles alone and must work alongside other institutions to develop multi-layered protections against state-linked cyber actors and covert influence operations online.
