ChatGPT Hacked: Sensitive Details Of 100,000+ Users Being Sold On Dark Web; Maximum Users From India!


Shreya Bose

Jun 25, 2023


A report by Group-IB has revealed that the data of around 100,000 individuals was compromised after their ChatGPT account credentials were stolen.


Group-IB is a Singapore-based cyber technology company that claimed to have identified more than 100,000 stealer-infected devices in which ChatGPT credentials were saved.

India among worst affected

Notably, India witnessed the highest number of compromises, the report said.

Group-IB's Threat Intelligence unit revealed that India (12,632), Pakistan (9,217), and Brazil (6,531) were the countries with the highest numbers of affected users.

Data traded on dark web

By default, ChatGPT stores the history of user queries and AI responses.

Consequently, unauthorized access to ChatGPT accounts may expose confidential or sensitive information.

This can be exploited for targeted attacks against companies and their employees.

These stolen credentials were traded on dark web marketplaces over the past year. 

The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale. 

Info stealer

According to Group-IB’s latest findings, ChatGPT accounts have already gained significant popularity within underground communities.

Group-IB’s analysis of underground marketplaces revealed that the majority of logs containing ChatGPT accounts were harvested by the Raccoon info stealer.

Info stealers are a type of malware that collects credentials saved in browsers, bank card details, crypto wallet information, cookies, browsing history, and other data from infected computers.

It then sends all this data to the malware operator.

This type of malware infects as many computers as possible through phishing or other means in order to collect as much data as possible.

How companies get affected

Dmitry Shestakov, Head of Threat Intelligence at Group-IB, explained: “Many enterprises are integrating ChatGPT into their operational flow. 

Employees enter classified correspondences or use the bot to optimize proprietary code. 

Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”

Protective measures

To mitigate the risks associated with compromised ChatGPT accounts, users are advised to update their passwords regularly and enable two-factor authentication (2FA).

By enabling 2FA, users are required to provide an additional verification code, typically sent to their mobile devices, before accessing their ChatGPT accounts.
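For readers curious what that extra verification step looks like under the hood, here is a minimal sketch of time-based one-time password (TOTP) verification, the mechanism most authenticator apps use for 2FA. It relies on the third-party pyotp library; the secret and codes below are purely illustrative and are not tied to how ChatGPT itself implements 2FA.

```python
# pip install pyotp
import pyotp

# At 2FA enrolment, the service generates a per-user secret and shares it
# with the user's authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app derives a 6-digit code from the secret and the
# current time; the code changes every 30 seconds.
current_code = totp.now()
print("Code shown in the authenticator app:", current_code)

# At login, the service recomputes the expected code and compares it to
# what the user typed in, in addition to checking the password.
print(totp.verify(current_code))   # True: the correct code is accepted
print(totp.verify("123456"))       # (almost certainly) False: a guessed code is rejected
```

Because the code depends on a secret stored on the user's device and on the current time, a stolen password alone is no longer enough to log in.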

Companies can also utilise real-time threat intelligence to better understand the threat landscape and take proactive steps to mitigate its impact.

With that visibility, they can protect their assets and make informed decisions to strengthen their cybersecurity posture.

