
FraudGPT: A New Dark Side AI Tool For Cyber Criminals

Cybercriminals have launched a new tool called FraudGPT that poses a serious threat to both individuals and businesses.

This black-hat tool is capable of facilitating social engineering and Business Email Compromise (BEC) attacks, making it a real cause for concern.

Recent activity on dark web forums shows the emergence of a new malicious AI tool dubbed FraudGPT, which has been in circulation since July 22, 2023.

According to a report shared by the Netenrich threat research team, cybercriminals are currently selling the tool on various dark web marketplaces and on Telegram.

FraudGPT: Dark Side AI Tool

The threat actors advertised that "FraudGPT craftiness would play a vital role in business email compromise (BEC) phishing campaigns on organizations."

With FraudGPT, attackers can craft convincing emails that tempt recipients into clicking a malicious link, increasing the likelihood that such attacks succeed.

The tool has been created solely for offensive purposes, and its operators are charging $200 per month or up to $1,700 per year.

The following are the offensive features of the tool:

  • Write malicious code
  • Create undetectable malware
  • Find non-VBV bins
  • Create phishing pages
  • Create hacking tools
  • Find groups, sites, markets
  • Write scam pages/letters
  • Find leaks, vulnerabilities
  • Learn to code/hack
  • Find cardable sites
  • Escrow available 24/7
  • 3,000+ confirmed sales/reviews

The individual responsible for FraudGPT created a Telegram channel a month prior to the release of the tool.

The seller claims to be a verified vendor on numerous underground dark web marketplaces, including EMPIRE, WHM, TORREZ, WORLD, ALPHABAY, and VERSUS.

Prior to FraudGPT, threat actors launched another tool dubbed WormGPT, which offers the following services:

  • Generate advanced phishing emails
  • Launch BEC attacks

WormGPT functions as an unrestricted variant of ChatGPT, lacking the ethical boundaries and limitations that ChatGPT enforces, and it highlights the significant risk posed by generative AI.

Shortly after its launch, WormGPT’s Telegram channel gained more than 5,000 active subscribers within a week, showing how quickly threat actors adopted the tool for illicit activities and attacks.

Recommendations

Defending against AI-driven BEC attacks demands a multi-layered strategy that blends technical controls with user awareness.

Below are the recommendations offered by cybersecurity analysts:

  • AI Detection Tools
  • Email Authentication Protocols (SPF, DKIM, and DMARC; a verification sketch follows this list)
  • User Training and Awareness
  • Email Filtering and Whitelisting
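As an illustration of the email authentication recommendation, the minimal Python sketch below (assuming the third-party dnspython package is installed) looks up a sender domain's published SPF and DMARC records. The domain name and helper function are hypothetical examples; a domain with no SPF record, or a DMARC policy of "p=none", is easier to spoof in BEC campaigns.

```python
# Minimal sketch, assuming the third-party "dnspython" package (pip install dnspython).
# It checks two of the email authentication protocols (alongside DKIM) recommended
# against BEC phishing: SPF and DMARC, both published as DNS TXT records.
import dns.resolver


def check_email_auth(domain: str) -> dict:
    """Return the published SPF and DMARC policies for a domain, if any."""
    results = {"spf": None, "dmarc": None}

    # SPF lives in an ordinary TXT record on the domain itself.
    try:
        for record in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(record.strings).decode()
            if txt.startswith("v=spf1"):
                results["spf"] = txt
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        pass

    # DMARC lives in a TXT record on the _dmarc subdomain.
    try:
        for record in dns.resolver.resolve(f"_dmarc.{domain}", "TXT"):
            txt = b"".join(record.strings).decode()
            if txt.startswith("v=DMARC1"):
                results["dmarc"] = txt
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        pass

    return results


if __name__ == "__main__":
    # "example.com" is a placeholder; check the domains that send you mail.
    print(check_email_auth("example.com"))
```

A gateway or mail filter can apply the same idea automatically, quarantining messages whose sender domains fail SPF or DMARC checks rather than relying on recipients to spot spoofed mail.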


Guru Baran

Guru is a former Security Engineer at Comodo Cybersecurity and a co-founder of Cyber Security News and GBHackers On Security.
