ChatGPT, an AI tool, has gained a lot of attention in the tech community over the past several months. ChatGPT can be used to write reports, program code, and responses to inquiries. It now appears that fraudsters are employing it for nefarious purposes. A cybersecurity research group claims that malicious parties have used ChatGPT to create dangerous programs. On underground hacking forums, threat actors are developing infostealers and encryption tools, and facilitating fraud. Checkpoint Research reported three examples of ChatGPT use by cybercriminals.
For creating info stealers
Checkpoint Research claims that on December 29, 2022, a thread titled "ChatGPT - Benefits of Malware" surfaced on a well-known underground hacking forum. The thread's creator revealed that he was using ChatGPT to recreate malware strains and techniques documented in research papers and articles about common malware. According to Checkpoint Research, these posts appeared to show less technically skilled cybercriminals real-world examples of how to exploit ChatGPT for harmful purposes.
For creating multi-layered encryption tools
Using ChatGPT for fraud
Cybercriminals find ChatGPT appealing, according to Check Point's Threat Intelligence Group Manager Sergey Shykevich. There is recent evidence that hackers are beginning to use it to write malicious code. ChatGPT gives hackers a solid starting point, which may allow them to move more quickly. Just as ChatGPT can be used for good, such as helping engineers write code, it can also be used for malevolent purposes.