OpenAI's ChatGPT, which has surged in popularity over the last few months, is now being used by cyber criminals to create malware, threat intelligence company Check Point Research has found.
OpenAI has imposed restrictions on how the AI chatbot can be used, but posts on a dark web hacking forum reveal that these guardrails can still be circumvented to create malware.
Additionally, anonymous users on the forum have explained how to achieve this. As one put it, “the key to getting it to create what you want is by specifying what the program should do and what steps should be taken, consider it like writing pseudo-code for your comp[uter] sci[ence] class.”
Using this method, hackers can create a “python file stealer that searches for common file types” that automatically self-deletes once the files have been uploaded or an error is encountered while the program is running, a step “designed to remove any evidence of hacking”.
Another forum user shared their experience creating a dark web marketplace script, which can be used for various purposes including selling personal information obtained through data breaches, selling illegally obtained card details, or even selling crime-as-a-service products.
Forum users agreed that ChatGPT is a great way to "make money", with some claiming to earn more than US$1,000 per day. Forbes believes hackers did so by impersonating women to carry out social engineering attacks on vulnerable targets.
Cyber security experts had already told Cyber Security Hub that they forecast crime-as-a-service to be the top cyber security threat of 2023, and ChatGPT has accelerated that trend by making malware creation free.