Despite its promises, YouTube has failed to take down hate-filled and extremist content in a timely manner.
In a heated exchange with another gamer, YouTube star PewDiePie shouted the n-word. An underage girl was forced to undress in front of a camera. A model described being bullied for having dark skin.
What’s the difference between these three YouTube videos? The first two earned ad money; the third did not.
Several months ago, YouTube vowed to demonetize videos with hate-filled content. Last month, the company also promised to introduce a system making it easier for content creators to contest demonetization decisions.
YouTube has yet to deliver on those promises.
While the company demonetizes not-for-profit projects, child exploitation videos continue to earn ad money.
In an exclusive report, The Sun found that the platform has placed ads next to child exploitation videos.
One video shows a young Russian girl forced to undress. An adult dressed in a killer clown costume then injects her with an oversized needle.
In another video, the clown continually drugs a child before tying him up.
According to The Sun, the clips were dramatized. Yet, they still appeared “extremely disturbing.”
Google took down the videos only after The Sun reported them.
Matan Uziel brought the videos to The Sun’s attention. He founded the Real Women Real Stories channel on YouTube. He designed the not-for-profit project to promote awareness of the “often unseen hardships women face in different professions and places.”
On Uziel’s channel, you can find videos of women discussing the hardships they face daily.
In one video, a black model speaks out against skin bleaching. The channel also features Mexican actress Kate del Castillo discussing physical abuse in marriage.
As part of its demonetization campaign, YouTube removed ads from Real Women Real Stories’ videos.
Uziel told The Sun Online,
“To say that sexual abuse and rape is ‘out’ is terrible for women who want to bring their stories forward and can’t because we simply don’t have the money.
“YouTube was our only source of income and now our channel is nearly dead.”
In March 2016, the channel earned an estimated $2,149. Last June, it received just $11.
In a blog post, a spokesman for Google wrote,
“To help creators better understand how they can make money from their stories, we have publicly available guidelines which explain the types of videos where we don’t allow ads to run.
“If a creator feels their videos should be monetized, we also provide a simple form for them to appeal the decision.”
It appears that YouTube has yet to follow through on this, at least with Matan Uziel.
You can flag terrorist and neo-Nazi content on the platform. Yet YouTube may not take it down, at least not right away.
Google’s General Counsel, Kent Walker, pledged that the company would take four steps to fight terrorism online.
Walker wrote,
“Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all. Google and YouTube are committed to being part of the solution.”
He also made a bold claim.
“We have used video analysis models to find and assess more than 50% of the terrorism-related content we have removed over the past six months.”
Yet, you can still find hundreds of extremist and neo-Nazi videos on the platform. That’s according to The Henry Jackson Society, a UK think tank.
Researchers found one video titled ‘Adolf Hitler Was Right.’ The video openly praised the Nazi leader and showed Jewish families being taken to concentration camps.
Researchers also discovered a video of a child singing in the background as footage exalted terrorism. Another showed a Muslim teenager being attacked by a man who called him “Isis scum.”
In addition, The Henry Jackson Society easily found Taliban propaganda on YouTube.
Even more troubling, users had already flagged many of the terrorist and neo-Nazi videos. The platform simply hadn’t taken most of them down.
Labour MP Yvette Cooper, chair of the Home Affairs Select Committee, commissioned the study. She denounced the content on the video platform and called the delay in removing offensive videos “unacceptable.”
Speaking with The Independent, Cooper said,
“Whether that’s Islamic extremism or far Right extremism, the reality is that this material is far too easy to access.
“We know social media can play a role in the radicalization of young people, drawing them in with twisted and warped ideology.
“YouTube have promised to do more, but they just aren’t moving fast enough.”
According to The Independent, the UK think tank came across sixty-one flagged far-right videos and sixty flagged Islamist videos. As of this story’s publication, YouTube had already removed dozens of the flagged videos.
Speaking with The Independent, Dr. Alan Mendoza, executive director of the Henry Jackson Society, underscored the severity of allowing terror and neo-Nazi content on the site. He said the internet had shaped how offenders engage with extremism.
“These ideologies can be freely disseminated and amplified online and there is room for improvement by technology firms to provide spaces to expose and debate their inconsistencies.”
So, should platforms like YouTube face penalties for failing to quickly take down extremist content? Yvette Cooper believes so. She called on the UK government to introduce “proper penalties and fines for social media companies who do not act swiftly enough to remove dangerous and illegal content.”
Cooper isn’t alone. The Policy Exchange, another UK think tank, found that over 75% of the British public want internet companies to do more “to find and delete content that could radicalize people.”
The Policy Exchange also found that people in the UK accessed jihadi content more frequently than elsewhere in Europe. Globally, the country ranked in fifth place behind Turkey, the US, Saudi Arabia, and Iraq.
YouTube defended itself against these claims. A spokesperson for the company said,
“Through new uses of technology, the majority of videos we removed for violent extremism over the past month were taken down before receiving a single human flag. We’re doing more every day to tackle these complex issues.”
The company also vowed to be “part of the solution” to extremism.
If only.
Image by The Israel Project (CC BY 2.0)
YouTube (Google) can do whatever the hell it wants. It’s their company.
Just don’t be hypocritical and call it a free speech platform.
musicoverlord.com
Hello, I am a reader of Digital Music News, and I just read your article called “Youtube Still Hosting Neo-Nazi, Terrorist, and Child Exploitation Videos”. I found it intriguing but felt compelled to respond. I am not a YouTuber myself and have no bias, but I couldn’t help noticing some striking logical errors that are fairly irreconcilable.
I am in no way in favor of extremist or exploitative content; nobody wants to stumble across or host anything like that. But the amount of content uploaded to YouTube by the minute is way beyond what we can actually comprehend. Hundreds of hours of video are uploaded to YouTube every single minute, making it completely impossible to review manually. Thus, they employ AI, filters, and even recruit volunteer viewers to police the content. It’s really not fair to blame YouTube as a platform for the things that slip through the cracks. If somebody decides to draw a swastika, you cannot blame the pen for facilitating the spread of hatred…
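To put that scale in rough numbers, here’s a quick back-of-envelope calculation, assuming the roughly 400 hours-per-minute upload rate that was widely cited around this time (an estimate I’m supplying, not a figure from the article):

```python
# Back-of-envelope math on why manual review can't keep up.
# Assumes ~400 hours of video uploaded per minute (a commonly
# cited figure from around this time; an estimate, not exact).

UPLOAD_HOURS_PER_MINUTE = 400

hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24
print(f"Video uploaded per day: {hours_per_day:,} hours")
# -> Video uploaded per day: 576,000 hours

# Reviewers watching 8 hours a day, in real time, no breaks:
reviewers_needed = hours_per_day // 8
print(f"Reviewers needed just to watch it all once: {reviewers_needed:,}")
# -> Reviewers needed just to watch it all once: 72,000
```

That’s tens of thousands of full-time staff just to see every upload once, before any judgment calls, appeals, or re-review.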
YouTube, a bottom-up, grassroots service, is designed as a platform open to anybody. It has given thousands and thousands of people the opportunity to completely change their lives, build a career organically, and help break down the cronyism of the corporate media giants that oligopolistically controlled entertainment for decades. With you being so deep in the music industry, I find it baffling that you seem unable to see the liberating power platforms like these give to common people, people who otherwise would never even be given a shot. Anybody familiar with the history of the music industry should understand how the elite gatekeepers have been able to exploit creatives. This has changed with the internet and places like Soundcloud, YouTube, etc. Artists have been able to launch their own careers all by themselves, creating art free from the shackles of corporate elites and connecting directly with fans in a way considered unthinkable just a couple of decades ago. So what does this have to do with your article, you ask?
Framing YouTube as a facilitator of hateful content is really damaging to bottom-up platforms. It would be as ridiculous as framing Soundcloud as a tool that spreads hate just because somebody uploaded a podcast espousing pro-terrorist ideas… This does nothing but scare advertisers away from the platform entirely, hurting small artists trying to do good work on a grassroots level. Even worse, the hysteria has led to AI-driven demonetization bots that are hitting TONS of creators, including the one you mention in your article. My point is that articles like these have led DIRECTLY to the crippling of small creators trying to make it without the help of music industry cronies.
Instead, it would be far more productive to fight the hysteria. Advocate recognition that the freedom offered by these open platforms has empowered creatives with opportunities unprecedented in entertainment. It would be, quite literally, technologically impossible to remove all nefarious content without also triggering false positives and harming genuine creators pushing the edge. Why don’t you change your article to “AI Technology Still Not Good Enough to Create Utopia! Get on It, Scientists!!!”? Given the amount of content uploaded hourly, I seriously do not understand how you can decry YouTube/Google for not being tough enough with their algorithms, and then, within the same article, decry that creators are being harmed by the very AI tech that you journalists pressured them to implement in the first place over this hysteria… These extremist videos are not rising to the front pages, they don’t amass millions of views, and they aren’t actually a prominent force in media; this is just fear-mongering. Frame these platforms as bottom-up platforms instead of hosts of extremism. Follow this example:
—A Jihadi video on YouTube with ~900 views gets a Pepsi ad placed on it.
OMG! DOES PEPSI SUPPORT TERRORISM?! Of course not; it’s an automated system run by YouTube. Nobody watching the video can rationally assume companies specifically endorse the videos on which their ads appear.
—Pepsi is still not happy. Advertisers demand the ability to choose whether or not their ads appear on content detected as controversial, political, or adult.
YouTube implements systems to automatically distinguish this type of content, a task impossible to do manually given the amount of content uploaded by the minute. But how can these AI systems distinguish between news coverage of jihadism, anti-jihadism advocacy, and pro-jihadism content? Well, until AI technology improves, they literally cannot. We do not yet have AI reliably capable of critical thinking, satire/sarcasm detection, or abstract thought. So what do they do? They have no choice but to categorize all of it as “political” or “controversial” or another vague term. They add a manual appeals process to overturn false positives, but given that the vast majority of a video’s views come when it is newest, by the time an appeal goes through, everyone who was going to watch the video may already have watched it.
So what can YouTube do?
1. Entice advertisers back by asserting that YouTube is NOT responsible for what a few bad actors choose to put on its site.
2. Stash the ad money from demonetized videos, and pay creators that full amount if the videos are determined to have been falsely hit (sketched in code below).
3. When advertisers buy ads on YouTube, the “do not place my ads on controversial videos” option is set to ON by default; change it to OFF by default, so creators aren’t missing out on ad money from companies that don’t care who sees their ads and haven’t specifically requested to avoid certain categories.
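To make points 2 and 3 concrete, here’s a rough sketch in code. Every name here is hypothetical; this is just an illustration of the proposal, not anything YouTube actually implements:

```python
# Hypothetical sketch of points 2 and 3 above. None of these names
# come from a real YouTube API; they only illustrate the proposal.

from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    flagged_controversial: bool = False
    escrowed_revenue: float = 0.0  # ad money held while flagged

@dataclass
class Advertiser:
    name: str
    # Point 3: avoiding "controversial" content is opt-in,
    # i.e. OFF by default, instead of ON by default.
    avoid_controversial: bool = False

def pay_creator(video: Video, amount: float) -> None:
    print(f"Paying creator of {video.video_id}: ${amount:.2f}")

def place_ad(advertiser: Advertiser, video: Video, revenue: float) -> None:
    if video.flagged_controversial:
        if advertiser.avoid_controversial:
            return  # this advertiser explicitly opted out of the category
        # Point 2: stash the money instead of zeroing it out.
        video.escrowed_revenue += revenue
    else:
        pay_creator(video, revenue)

def resolve_appeal(video: Video, false_positive: bool) -> None:
    if false_positive:
        # Creator recovers the full amount that accrued
        # while the appeal was pending.
        pay_creator(video, video.escrowed_revenue)
        video.flagged_controversial = False
    video.escrowed_revenue = 0.0  # forfeited if the flag stands

# Example: a falsely flagged video still collects escrowed revenue,
# then gets it all back on appeal.
pepsi = Advertiser("Pepsi")  # default: ads run anywhere
video = Video("abc123", flagged_controversial=True)
place_ad(pepsi, video, 1.50)
place_ad(pepsi, video, 2.25)
resolve_appeal(video, false_positive=True)  # pays out $3.75
```

The point of the escrow is simple: the slow appeals process stops costing creators anything, because the money accrues either way and is only released (or forfeited) once the appeal resolves.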
In conclusion, I think you are taking the wrong approach. If you want to support artists and creators, do not assassinate the very platforms they rely on by scaring off advertisers with fear-mongering; instead, promote ideas that fight extremism, and focus on the empowerment that tools of freedom bring to the disenfranchised. Otherwise, you’re just handing power back to the entertainment industry oligarchs that exploited so much talent in the past. Fight for art, not for corporatism.
Sincerely,