Digital Violence and Image Manipulation: When Artificial Intelligence is Used Without Consent

A few weeks ago, a woman denounced on TikTok that someone had stolen photos from her social networks to create erotic content without her consent using Artificial Intelligence (AI). From a mysterious Instagram account, she explained, they sent her messages to extort her and spread images in which she recognized her face, but not her body. When her case went viral, other women came forward saying they had had similar experiences.

Until recently, specific computer skills were required to create realistic pornography or erotic content with AI. But there are now several tools –affordable and relatively easy to use– with which this type of content can be generated. Recently, actresses Emma Watson and Scarlett Johansson were victims of a deepfake –a generated video, image or sound that mimics a person's appearance and voice– in which their faces were used in an erotic video. The footage was shared on social media to promote an app that, for $8 a month, puts a person's face onto a video using AI.

According to The Washington Post, which cites experts on the subject, as this type of technology advances, incidents of harassment, extortion and digital gender-based violence are also likely to rise.


As of 2019, 96% of deepfakes online were pornographic in nature, according to an analysis by artificial intelligence firm DeepTrace Technologies, and most of the victims were women. In 2020, more than 15,000 deepfakes were identified online, according to research by cybersecurity firm Sensity.

“The fact that these tools are used to create nudes that attack women is a sign of how women are perceived and of the gender violence we continue to experience every day. AI does not create new problems. Now that we are entering this new cyberspace, the world of data science and AI, these problems come with us,” explains Valentina Muñoz, programmer and digital rights activist.

It must be emphasized, says Muñoz, that disseminating a sexual image without consent is gender-based violence even if the image is fake. “Women’s rights and privacy are violated and they continue to be sexualized. Until AI and all technologies are approached from a gender perspective, we will continue to have these issues,” she explains.

The president of the Association of Young Women for Ideas (AMUJI), Martina Figueroa, says this is an issue that must be closely monitored, that it changes and advances every day, and that there are no strong regulations to address it. “Many people use digital tools to commit acts of violence against other people; and sadly, women and girls are the most affected by digital violence,” she says.

According to a 2022 UN study carried out in 51 countries, 38% of women surveyed had experienced some form of online harassment or abuse, including the dissemination of non-consensual sexual content.

And how is it regulated?

Nathalie Walker, a lawyer and academic at Andrés Bello University, explains that the lack of regulation in this area generates uncertainty, because this technology develops very quickly and makes possible things that were unthinkable only a few years ago.

And this essential regulation, she explains, is not without difficulties. “Overly restrictive regulation can jeopardize innovation and, with it, stall industrial development and progress. On the other hand, technology companies have great economic power, which in many cases allows them to evade attempts at sectoral regulation,” she says. For the expert, a joint effort is necessary, because it is of little use for each country to create its own regulations in isolation.

In Chile, legislation is created and designed for physical environments, not virtual ones. And even though criminal law offers some answers to deal with these situations, there is still a long way to go.

“The criminal route is not the most common one, mainly because the victim of this kind of manipulation wants to stop the damage quickly: if there is a doctored photo or a fake video in which she supposedly appears, what she needs is for it to be removed from the site or platform and to stop circulating, so people don’t keep seeing it,” Walker says. “And to achieve that, we should not have to wait for a process that can take several years. For this reason, these types of cases have instead been redirected towards protection remedies (to prevent the images from continuing to circulate) and claims for compensation (to repair the damage),” she adds.

In general, governments in different countries work in collaboration with private organizations and technology companies to develop ethical standards and guidelines for the use of artificial intelligence, and thereby establish policies that address the negative impact of image manipulation.

For now, experts agree that it is not realistic to ask people to erase their digital presence to avoid being targeted, but they suggest not using or supporting platforms that host this type of content and raising awareness about “digital consent.”

Source: Latercera
