Security experts have warned that Russian hackers on dark web forums are trying to bypass OpenAI's API restrictions to access ChatGPT for criminal purposes.

At the end of last November, the research organization OpenAI launched the first beta version of ChatGPT, a new chatbot that relies on artificial intelligence to hold conversations with humans, answer user questions creatively, and write articles on request.

Experts from Check Point Research observed several individuals discussing how stolen payment cards could be used to pay for upgraded OpenAI accounts, thereby circumventing the restrictions placed on free accounts.

Others have written blog posts on how to bypass OpenAI’s geo-controls, and still others are creating tutorials that explain how to use semi-legal online SMS services to sign up for ChatGPT.

Experts from the information-security firm published their findings in a report. “It’s not very difficult to bypass OpenAI’s measures to restrict certain countries from accessing its technologies, especially ChatGPT,” said Sergey Shykevich, manager of the company’s Threat Intelligence Group.

“Currently, we see Russian hackers already discussing and investigating how to bypass the geo-fencing to use ChatGPT for their malicious operations… Cybercriminals are getting more and more interested in ChatGPT, because the AI technology behind it can make a hacker more cost-effective at creating malware.”

Case in point: just last week, Check Point Research published a separate advisory highlighting how threat actors have already created malicious tools using ChatGPT, including layered encryption tools and dark web marketplace scripts.

Check Point is not the only company that believes ChatGPT can play a part in cybercrime; many experts have warned that the AI bot could be used by would-be cybercriminals to learn how to craft attacks and even to write ransomware.

Mike Hunt

A writer and reviewer with solid experience in the technology field, he worked for a long time at technology news sites and follows all news about mobile phones and modern technology. He works for us as a writer and reviewer. You can contact him via e-mail: [email protected]
