Meta’s latest security report sheds light on the growing threat of fake ChatGPT malware that is being distributed on its platform. The security team at Meta has warned that these imposters are being used to hack user accounts and take over business pages.
According to the Q1 security report, malware operators and spammers are capitalizing on high-engagement topics and trends that grab people’s attention. With AI chatbots like ChatGPT, Bing, and Bard being the latest tech trend, scammers are now tricking users into trying fake versions of these bots.
Since March, Meta security analysts have discovered about 10 types of malware posing as AI chatbot-related tools like ChatGPT. Some of these malware variants come in the form of web browser extensions and toolbars, while others are available through unofficial web stores. Even Facebook ads have been used to spread these fake ChatGPT scams, as reported by The Washington Post.
What makes matters worse is that some of these malicious ChatGPT tools use AI to pose as a legitimate chatbot. Meta has already blocked over 1,000 unique links to the discovered malware variants shared across its platforms. The company has also provided technical details on how scammers gain access to accounts, including hijacking logged-in sessions to maintain persistent access.
To help businesses that have been hacked or locked out of Facebook, Meta has introduced a new support flow to help them recover and regain access to their accounts. Business pages are usually compromised when malware targets the individual Facebook users who have access to them.
To counter these threats, Meta is rolling out new Meta work accounts that support organizations' existing single sign-on (SSO) credential services and don't link to a personal Facebook account. By migrating to these accounts, businesses can make it much more difficult for malware like the fake ChatGPT tools to attack.