According to WhatsApp’s latest monthly compliance report, it banned over 20 lakh accounts in India in August.
In all, 20,70,000 accounts were banned, and over 95% of these bans were for the unauthorised use of automated or bulk messaging.
The app received 420 grievance reports in August.
Of these, 222 were ban appeals, 105 pertained to account support, 42 to product support, 34 to other support, and 17 to safety. WhatsApp acted on 41 of these reports, either banning an account or restoring a previously banned one.
It said that in response to user complaints received via the grievance channel, it deploys tools and resources to check harmful behaviour.
WhatsApp reported that most users who reach out do so either to have their account restored after a ban or for product or account support.
Tools To Detect Misbehaviour
It said that it has “advanced capabilities to identify these accounts sending a high or abnormal rate of messages”.
These capabilities rely in part on available unencrypted information, including user reports, profile photos, and group photos and descriptions.
It also makes use of advanced AI tools and resources to detect and prevent abuse on its platform.
Abuse Detection Mechanism
In the report, it laid out three stages of an account's lifecycle at which it detects abuse: at registration, during messaging, and in response to negative feedback, which it receives in the form of user reports and blocks.
Then, its team of analysts “augments these systems to evaluate edge cases and help improve [its] effectiveness over time”.
Between June 16 and July 31, a span of 46 days, it banned over three million accounts. It took this action against those perpetrating online abuse and to ensure user safety. WhatsApp keeps a record of accounts with a disproportionately high rate of messages and bans those found to have abused the platform.
New IT Rules
The monthly compliance report is a requirement the government's new IT rules place on major social media platforms. The rules came into effect on May 26 and specifically apply to digital platforms with more than 5 million users.
In their reports, these platforms must publish details of complaints received and action taken.