YouTube’s paedophile problem is only a small part of online child sexual abuse

by The Conversation / 06 March, 2019
By Belinda Winder and Hany Farid
Image/Evdokimov Maxim/Shutterstock

YouTube has, yet again, failed to protect children online.

Recent investigations by Wired and video blogger Matt Watson have alleged that paedophiles were using the site’s comments section to leave predatory messages on videos that feature or were uploaded by children, and to share links to child sexual abuse material.

In response to the investigations – and the threat of an advertiser boycott – YouTube has now said it will disable comments on videos containing young children. But sadly, this is not an isolated incident. In January 2019, it was alleged that Microsoft’s Bing search engine was surfacing and suggesting child sexual abuse material. And these kinds of incidents repeat similar problems that have occurred over the past five years.

The reality is that the internet has a systemic problem with child sexual abuse material. It isn’t confined to niche sites or the dark web, but is hiding in plain sight among content hosted and controlled by the tech giants. We must do more to protect children online, and that action has to go beyond tweaks to algorithms or turning off comments.

In 2016, the Internet Watch Foundation – a UK-based body that identifies and removes illegal content – tracked more than 57,000 web pages containing child sexual abuse images, an increase of 21% on the previous year. The US-based National Center for Missing and Exploited Children received more than 10 million reports of child sexual abuse content in 2017, an increase of 22% on the previous 12 months. It’s likely that these initiatives, while much needed, identify and remove only a small fraction of the content that is distributed online every day.

Images depicting child abuse that are posted online have a severe impact on the children involved for years or decades after the physical abuse has ended. These children have already been victimised, but research shows that the continued availability of their images online keeps the nightmare alive for the child, their family and friends. It can also significantly affect victims’ interactions with the internet for the rest of their lives.

Technology companies are uniquely positioned to act as gatekeepers by removing and reporting sexually explicit content uploaded to their services. So why don’t they do more to aggressively protect the millions of children around the world?

Removing illegal web pages isn’t enough. Image/Thomas Holt/Shutterstock

Even in the early days of the web, it was clear that services provided by technology companies were being used to spread child sexual abuse content. As early as 1995, the chatrooms of AOL – an early incarnation of social media – were allegedly used to share child abuse material. In response, AOL executives at the time claimed that they were doing their best to rein in abuses on their system, but that it was simply too large to manage. This is precisely the same excuse we hear more than two decades later from the titans of tech.

Between 2003 and 2008, despite repeated promises to act, major tech companies failed to develop or use technology that could find and remove illegal or harmful content, even though it violated their terms of service. Then in 2009, Microsoft worked with the National Center for Missing and Exploited Children and a team at Dartmouth College that included one of us (Hany Farid) to develop the technology PhotoDNA. This software quickly finds and removes known instances of child sexual abuse content as it is uploaded, and has been provided at no cost to technology companies participating in the initiative.
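PhotoDNA itself is proprietary, but the general workflow it represents can be sketched: compute a robust "fingerprint" of each uploaded image and compare it against a database of fingerprints of known abuse imagery. The sketch below is a stand-in under stated assumptions, not PhotoDNA's actual algorithm: it uses a simple 64-bit average hash built with the Pillow imaging library, and the database, threshold and function names (KNOWN_HASHES, screen_upload) are hypothetical.

```python
# Minimal sketch of hash-and-match screening at upload time.
# NOTE: this is NOT PhotoDNA. A simple 64-bit average hash stands in
# for PhotoDNA's robust hash, purely to illustrate the workflow:
# hash the upload, compare against a database of hashes of known
# illegal images, and flag near-matches for removal and reporting.
from PIL import Image  # requires the Pillow library

def average_hash(path, size=8):
    """Return a 64-bit perceptual hash: pixels brighter than the mean map to 1."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of known, verified abuse imagery
# (in practice such hash lists are supplied by bodies like NCMEC,
# never the raw images themselves).
KNOWN_HASHES = set()

def screen_upload(path, max_distance=5):
    """Return True if the upload matches known material and should be blocked and reported."""
    h = average_hash(path)
    return any(hamming(h, known) <= max_distance for known in KNOWN_HASHES)
```

The key design point is that matching tolerates small changes (recompression, resizing, minor edits) because the comparison is a distance between fingerprints rather than an exact byte match.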

After years of pressure, PhotoDNA is now used by many web services and networks. But technology firms have since failed to innovate further in response to an increasingly sophisticated criminal underworld. For example, despite foreseeing the rise in child abuse videos, tech firms haven’t yet deployed systems that can identify offending footage as PhotoDNA does for images.

These companies need to act more quickly to block and remove illegal images, as well as to respond to other activity that enables and encourages child exploitation. This means continually developing new technologies, but also fundamentally rethinking the perverse incentive of making money from user content, regardless of what that content actually is.

Standing in the way of control

However, a combination of financial, legal and philosophical issues stand in the way of tech firms reining in illegal activities on their services. In the first instance, removing content is in many cases simply bad for business because it reduces opportunities for advertising revenue and gathering user data (which can also be sold).

Meanwhile, the law often absolves tech firms of responsibility for the content they host. In the US, Section 230 of the Communications Decency Act gives tech firms broad immunity from prosecution for the illegal activities of their users. This immunity relies on categorising the likes of YouTube or Facebook as benign “platforms” as opposed to active “publishers”. The position in the EU is similar. What’s more, some tech companies believe that illegal activity is a state responsibility, rather than a corporate one.

Given the size, wealth and reach of the tech giants, these excuses don’t justify inaction. They need to proactively moderate content and remove illegal images that have been uploaded to their sites. They could and should help to inform research in this vital area of child safety, working with law enforcement and researchers to investigate and expose the scourge of online child abuse.

Advertisers can apply financial pressure to encourage sites to moderate and block illegal and abusive third-party content (as several companies have done following the latest failures on YouTube). But such boycotts rarely last. So if public pressure isn’t enough, government regulation that forces companies to comply with their own terms of service and local laws may be necessary.

Such regulation might be difficult to police, and it may have unintended consequences, such as making it harder for small companies to compete with the current giants of technology, or encouraging companies to overreact and become overly restrictive about permissible content. We would prefer, then, that technology companies harness their enormous wealth and resources and simply do the right thing.

By Belinda Winder, Professor of Forensic Psychology & Head of the Sexual Offences, Crime and Misconduct Research Unit, Nottingham Trent University and Hany Farid, Professor of Computer Science, Dartmouth College

This article is republished from The Conversation under a Creative Commons license. Read the original article.
