The Unsettling Rise of AI-Powered "Undressing" Bots and the Fight Against Deepfake Exploitation
Artificial intelligence (AI) has transformed many aspects of daily life, but it has also created new avenues for harm. One such harm is the use of AI-powered bots to digitally remove clothing from images, a practice that is a significant driver of deepfake abuse and harassment.
These bots, found on platforms like Telegram, use AI image-synthesis techniques to analyze photos and generate realistic nude or partially undressed images of individuals without their consent. The source images are often taken from social media or private photos, and because generation is automated and the output is convincing, victims struggle to prove the images are fake once they spread online.
The misuse of these bots fuels sextortion and blackmail, digital harassment and cyberbullying, and the exploitation of minors. Abusers can fabricate nude images to threaten or coerce their targets, causing severe emotional distress and social harm. In some cases, images of minors have been manipulated and circulated, constituting serious abuse and raising urgent legal concerns.
While the underlying image-synthesis technology has legitimate applications, its deployment on Telegram and similar platforms often operates without oversight, consent mechanisms, or legal controls. The result is widespread misuse: individuals' images are digitally stripped of clothing and used to harass, exploit, or coerce them, perpetuating a cycle of deepfake abuse and online harm.
High-profile individuals, including celebrities and journalists, have been targeted with deepfake pornographic content. The ecosystem around these bots includes Telegram channels dedicated to sharing and "rating" the generated images. As of July 2020, one such bot was estimated to have been used to target at least 100,000 women.
Organizations such as Witness, the Cyber Civil Rights Initiative (CCRI), and Sensity AI are working to combat deepfake-driven misinformation and protect human rights. Their efforts include AI-powered tools to detect deepfakes and provenance technologies, such as blockchain-based registries, that create tamper-proof records of an image's origin.
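The provenance idea above can be illustrated with a minimal sketch: cryptographically hashing an image at its point of origin makes any later manipulation detectable, because the altered file no longer matches the registered hash. The function names and record structure below are hypothetical, not any organization's actual API; a real system (for example, a blockchain-anchored registry or a C2PA-style manifest) would also sign and timestamp the record through a trusted service.

```python
import hashlib
import time

def provenance_record(image_bytes: bytes, source: str) -> dict:
    """Create a minimal tamper-evident record for an image.

    Any later edit to the image changes its SHA-256 hash, so the
    modified file no longer matches the registered record.
    """
    return {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "source": source,
        "registered_at": time.time(),
    }

def matches_record(image_bytes: bytes, record: dict) -> bool:
    """Check whether an image is byte-identical to the registered original."""
    return hashlib.sha256(image_bytes).hexdigest() == record["sha256"]

# Example: a manipulated copy no longer matches the original's record.
original = b"...original image bytes..."
record = provenance_record(original, source="photographer-upload")
tampered = original + b"edited"
print(matches_record(original, record))   # True
print(matches_record(tampered, record))   # False
```

This only proves that a file differs from a registered original; detecting a deepfake with no registered source requires the separate AI-based detection tools these organizations are developing.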
As the use of AI-powered "undressing" bots continues to grow, so does the need for a collective effort to build a safer and more ethical digital landscape. Existing laws on harassment, defamation, and revenge porn should be updated to cover deepfake-related offenses, and many countries are drafting legislation that specifically criminalizes the non-consensual creation and distribution of deepfake pornography.
In summary, AI-powered Telegram bots are exacerbating deepfake abuse by making it easy to create and distribute realistic, non-consensual fake nudes, which contribute to sexual harassment, blackmail, and digital exploitation globally. It is crucial for all digital platforms to implement stricter measures to prevent the misuse of AI technology and ensure a safer online environment for everyone.
- The use of AI technology in generating realistic nude or partially undressed images without consent, as seen in Telegram bots, is significantly contributing to the growing issue of deepfake abuse and harassment.
- Organizations such as Witness, the Cyber Civil Rights Initiative (CCRI), and Sensity AI are developing solutions using AI and blockchain technology to combat deepfake misinformation and protect human rights.
- Existing laws related to harassment, defamation, and revenge porn need to be updated to encompass deepfake-related offenses, considering the growing issue of non-consensual deepfake pornography.
- To ensure a safer and more ethical digital landscape, all digital platforms, including Telegram, must implement stricter measures to prevent the misuse of AI technology, particularly where individuals' images are digitally stripped of clothing and used to harass, exploit, or coerce them.