Science & Tech

AI is being used by pedophiles to transform well-known celebrities into children for sinister ends

Using artificial intelligence (AI), pedophiles are creating unsettling pictures of famous people as kids, according to the Internet Watch Foundation (IWF).

According to the charity’s most recent report, there is a growing problem of celebrities, including well-known female singers and film stars, being digitally de-aged to appear as children, with predators sharing the resulting images.

The disturbing use of artificial intelligence (AI) to produce hundreds of fake photos of actual victims of child abuse is another issue brought up by the IWF. These photos are then shared on the dark web.

Experts are concerned about the spread of AI systems that can generate images from written instructions. In response to this alarming trend, US Homeland Security Secretary Alejandro Mayorkas and Home Secretary Suella Braverman recently released a joint statement.


According to the IWF report, during a month-long surveillance of a darknet child abuse website, researchers found around 3,000 AI-generated images that are prohibited under UK law.

A new pattern surfaced: predators would take a single picture of a known victim of child abuse and use artificial intelligence (AI) to create many explicit images. The researchers discovered, for instance, a folder containing 501 images of a real-life child abuse victim who was between the ages of 9 and 10 when originally abused, together with a fine-tuned AI model file that could be used to generate further images of her.

A few of the AI-generated photos, which feature famous people as kids, are incredibly lifelike and might pass for real to untrained eyes.

These images are being used to normalize predatory behavior, and they also squander scarce law enforcement resources, as investigators must spend time determining whether images depict real children.

In advance of the UK government’s AI Summit, the IWF released its study to raise public awareness of the issue. In the course of its inquiry, the IWF examined 11,108 AI-generated images of child sexual abuse posted on a dark web forum. Of those, 2,978 were found to be unlawful in the UK.

Disturbingly, more than 1,900 of these images depicted children of primary school age, between seven and ten, underscoring how serious the issue is.


The IWF’s findings have confirmed early concerns about the misuse of AI to produce sexual images of children. The IWF’s chief executive, Susie Hargreaves, voiced her profound concern and emphasized how urgently the issue needs to be addressed.

The study emphasizes that AI-generated images have real-world repercussions: they not only encourage predatory behavior but also create difficulties for law enforcement. The situation is growing more complex as new types of violations emerge, such as the alteration of innocuous photographs to commit Category A offenses.

The IWF’s findings demonstrate how urgently more stringent regulations and global cooperation are needed to stop the use of AI in child exploitation and abuse.
