This Safer Internet Day, it is apt to explore some AI-based applications that help women navigate the internet more securely by safeguarding them from bullying, unauthorized content sharing and the various forms of gendered abuse prevalent today.
Automated threat identification, prompt connection to support resources and tighter community moderation are crucial to a healthy digital ecosystem in which vulnerable voices feel heard.
Sarthak Dubey, Co-Founder, Mitigata: Smart Cyber Insurance said, “AI and ML can be used to identify threats, such as employing ML-based systems to detect phishing emails, identify hate speech and cyberbullying instances, and block potential threats. Additionally, AI can encrypt sensitive data, raising the barrier to unauthorized access by cybercriminals. It also plays a crucial role in monitoring data access and spotting unauthorized users, strengthening security measures. Leveraging the power of AI, harmful content on social media platforms can be filtered and law enforcement agencies can deploy resources more strategically, making the internet a much safer place for women.
Along with preemptive mitigation of threats, Personal Cyber Insurance also plays a key role here, as it covers aspects like cyberbullying/extortion, theft of funds, identity theft, etc., and pays for legal/defence costs along with the direct loss of funds. Now, if someone is harassed online, the SOP for the incident response has to be communicated and implemented smartly.”
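The ML-based detection Dubey describes typically rests on text classification. The following is a toy sketch of the idea, using a naive Bayes classifier trained on a handful of hypothetical example messages; real systems use far larger corpora, richer features and more robust models.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(samples):
    """samples: list of (text, label) pairs.
    Returns per-label word counts and per-label document counts."""
    counts = defaultdict(Counter)
    label_totals = Counter()
    for text, label in samples:
        counts[label].update(tokenize(text))
        label_totals[label] += 1
    return counts, label_totals

def classify(text, counts, label_totals):
    """Pick the label with the highest log-probability,
    using add-one (Laplace) smoothing for unseen words."""
    vocab = {w for c in counts.values() for w in c}
    total_docs = sum(label_totals.values())
    best_label, best_score = None, -math.inf
    for label, words in counts.items():
        score = math.log(label_totals[label] / total_docs)  # prior
        denom = sum(words.values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((words[w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training data, for illustration only.
samples = [
    ("verify your account password immediately", "phishing"),
    ("click this link to claim your prize", "phishing"),
    ("meeting notes attached for tomorrow", "legitimate"),
    ("lunch at noon works for me", "legitimate"),
]
counts, label_totals = train(samples)
print(classify("click to verify your password", counts, label_totals))
```

The same pipeline generalizes from phishing detection to hate-speech or cyberbullying detection: only the labels and training corpus change.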
With deepening digital penetration today, online risks remain disproportionately high for women navigating abuse, bullying and unauthorized content publication. Per Amnesty data, a staggering 41% of women in the EU report facing sexist harassment, while 84% have encountered technology-facilitated abuse from partners. In India, NCRB data records over 77,000 cyberstalking cases between 2018 and 2020 alone, highlighting the scale of threats limiting women's voice and freedoms.
Sameer Dhanrajani, CEO, 3AI said, “In the pursuit of helping women have a safer and more protected Internet usage, Artificial Intelligence tools have proven to be a revolutionary step. AI can analyse patterns to predict potential threats, enabling pre-emptive actions, as seen in applications that leverage AI for real-time monitoring, location tracking, and distress signal generation. The incorporation of AI in women’s online safety initiatives is paramount for creating a world where every woman feels secure and empowered. By leveraging the latest advancements in technology, we can collectively contribute to building a safer and more inclusive future. An AI and machine learning powered women’s safety app can help prevent online sexual harassment and malicious intent. The integration of AI tools online is being done with the objective of achieving 100% security for women, identifying cyber criminals, combating harmful online activities and providing prompt assistance to women.”
Automated assistance holds immense preventative potential, whether detecting unusual digital footprints that indicate personal harm or pre-empting the vengeful publication of sensitive materials shared without consent. Hence this Safer Internet Day warrants a greater spotlight on AI’s promise in upholding civil guardrails through analysis, instant coordination and proactive takedowns.
Ms. Aparna Acharekar, Co-founder, coto said, “The internet, and social media platforms in particular, can be brutal places, especially for women, given the huge amount of trolling, abuse and discrimination that women are subjected to. As per an online survey conducted by Microsoft in 2023, a shocking 69% of individuals globally have faced various online hazards, like fake news, harassment, cyberbullying, and even violent threats. Given the rise of AI, deepfake cases have only worsened the issue of internet safety and brought forth concerns related to cyber security. At coto, we promote a safe online space where women can have authentic and interesting conversations with each other without the fear of being judged or trolled. These brave discussions fearlessly delve into taboo subjects like menstrual health, careers, relationships, financial independence, and personal achievements, weaving a powerful narrative of empowerment. To promote the safety of our female users on the app, we deploy cutting-edge artificial intelligence algorithms to ensure that users on coto are verified with utmost accuracy. Additionally, we provide our users the option to stay anonymous and only allow verified women on the app.”
While no solution achieves perfectly safe outcomes, tightened community monitoring, emergency response triggers and recourse assistance promise progress toward gender-inclusive ecosystems. By computationally screening high-volume inputs while aiding vulnerable victims, emerging tools expand the support structures women frequently lack when negotiating online challenges. Replicating them at population scale remains key going forward.
Chris Boyd, Staff Research Engineer, Tenable said, “Safer Internet Day is a chance to reflect on attacks witnessed over the last 12 months, and consider what may be headed our way through the coming year. This year’s event looks closely at ways in which children can explore safely online, and how we can assist them in their digital journey.
“With more teens and young adults online than ever before, it’s crucial that we explain the risks posed by ransomware, bullying in online spaces, and the possibility of digital manipulation and misinformation aided and abetted by increasingly slick deepfake and AI technology.
“With a number of major elections coming up in 2024, the possibility of being duped by lies and faked video footage is stronger than ever. Consider that many adults continue to fall for so-called “cheapfakes” (crudely edited photographs and memes on social media), and that in many cases scammers don’t even need to reach for AI tools in the first place to achieve their objectives.
“This year’s electoral campaigns will be a heady mixture of malign interference campaigns, confidence tricks, and the ever-present threat of ransomware and malware groups waiting to potentially take sides with their own unique brand of electoral interference.
“If we don’t want to see a repeat of the social engineering and malware dispensing tactics used over the last few years targeting the next generation of netizens, it’s up to us to help and encourage educators and those with a direct involvement with children’s learning to offer workable solutions to keeping them from harm online for both Safer Internet Day and beyond.”
Here are a few AI tools that help women experience a safer internet:
StopNCII: A free global tool specifically designed to help people prevent the non-consensual sharing of intimate images online. It focuses on protecting victims of non-consensual intimate image (NCII) abuse, also known as revenge porn.
Noonlight: Uses AI to analyze user behavior and detect potential threats. It can send alerts to emergency contacts or authorities in case of unusual activity.
MySafeti: Provides real-time tracking and sends alerts to designated contacts when the user feels unsafe. It also has a feature that can automatically call for help if the user doesn’t respond to prompts.
Safetipin: Uses AI to identify and classify harassment in real-time, providing users with resources and support.
ZeroFOX: Uses AI to identify and remove harmful content, including cyberbullying and hate speech.
Hivemind: Uses AI to detect and flag abusive language in online communities.
DetectaText: Uses AI to identify and analyze online harassment, providing insights to help users stay safe.
Bumble Scanner: An open-source tool that helps users identify nude images or videos of themselves circulating online without their consent.
Tech Against Violence: A coalition of tech companies working together to develop and deploy tools to combat online violence against women.
Perspective API: This tool uses machine learning to identify and filter out toxic comments in online conversations. It can be integrated into various platforms to help prevent harassment and abuse.
Facebook Safety Check: Facebook’s Safety Check uses AI to detect and prioritize potentially harmful content. It helps in identifying and removing content that violates community standards, including harassment and bullying.
IBM Watson: This AI-powered tool uses machine learning to analyze security data and identify potential threats. While not specific to gender, it contributes to overall online safety by detecting and responding to cybersecurity issues.
Google Content Moderation: Google employs AI algorithms for content moderation on platforms like YouTube. The algorithms automatically flag and remove inappropriate content, helping create a safer environment for users.
Microsoft Content Moderator: This tool uses machine learning to detect and filter potentially offensive or unsafe content across various media types, including text, images, and videos.
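Of the tools above, Perspective API is the most developer-facing: a platform sends comment text to Google's Comment Analyzer endpoint and receives attribute scores such as TOXICITY back. Below is a minimal sketch of the request payload and score check; the endpoint URL and attribute names follow Google's public documentation, while the API key and the 0.8 threshold are placeholders chosen for illustration.

```python
import json

# Endpoint documented by Google; a real API key from a Google Cloud
# project is required (placeholder shown here).
PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    "?key=YOUR_API_KEY"
)

def build_request(comment_text, attributes=("TOXICITY",)):
    """Build the JSON body Perspective expects for one comment."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def is_toxic(response_body, threshold=0.8):
    """Read the summary TOXICITY score out of a Perspective response
    and compare it against a moderation threshold."""
    score = (response_body["attributeScores"]["TOXICITY"]
             ["summaryScore"]["value"])
    return score >= threshold

payload = build_request("You are a wonderful person")
print(json.dumps(payload))
```

A platform would POST this payload to `PERSPECTIVE_URL` and hide, queue for review, or discard any comment for which `is_toxic` returns true; tuning the threshold trades off over-blocking against missed abuse.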