The world was transfixed when horrific images of Paris’ Notre-Dame cathedral engulfed in flames were broadcast across social media. News outlets in nearly every country live-streamed the devastating fire on platforms like Facebook, Twitter and YouTube.
However, those watching the live-streamed broadcast on YouTube saw something strange—an excerpt from an Encyclopedia Britannica entry about the September 11, 2001 attacks on the World Trade Center accompanied the footage. And while the text box may seem like a complete non sequitur, it was actually a disclaimer meant to discourage the spread of what YouTube’s algorithms mistakenly flagged as a “conspiracy theory” or “fake news.”
As it turned out, the columns of smoke rising from the Gothic cathedral bore a startling resemblance to those of 9/11, and YouTube’s anti-conspiracy software was quick to kick into gear.
YouTube later apologized for the disclaimer. According to Bloomberg, a company spokesman said:
“We are deeply saddened by the ongoing fire at the Notre-Dame cathedral … These panels are triggered algorithmically and our systems sometimes make the wrong call. We are disabling these panels for live streams related to the fire.”
The text boxes were introduced to YouTube last year in an announcement by Chief Executive Officer Susan Wojcicki, who touted the system, which relies on websites including Wikipedia, as a new tool to curb the virality of fake-news videos on the platform.
At the time, the CEO said:
“When there are videos that are focused around something that’s a conspiracy — and we’re using a list of well-known internet conspiracies from Wikipedia — then we will show a companion unit of information from Wikipedia showing that here is information about the event.”
Last year’s announcement came as YouTube, a division of Alphabet Inc.’s Google, came under fire for allowing its recommendation algorithm to direct traffic from so-called “moderate” mainstream content to videos on the fringes of political discourse: Holocaust-denial videos, 9/11 conspiracy theories, hoaxes, and a host of other content ranging from the misleading to the outright false, all seen as contributing to a growing culture of viral “fake news” on the web.
Yet the new system, which YouTube claims generates information boxes below videos “tens of millions” of times per week, has hardly satisfied critics of social media giants including Twitter, Facebook and Google.
Just last month, video footage of the attacks on two mosques in New Zealand spread across social media video-streaming platforms ranging from Twitter to YouTube and LiveLeak, prompting a frenzied effort to pull the footage from sites across the web.
The Wall Street Journal called the video’s rapid spread “a gruesome example of how social-media platforms can be used to spread terror despite heavy spending by their owners to contain it.”
Government officials in the United States and Europe also called on social media platforms to create new ways to halt the spread of “toxic videos” and “hate content,” while New Zealand’s authorities warned netizens that they may face up to ten years in prison for possessing the video.
However, as Monday’s mislabeling shows, attempts to introduce automated censorship or advisory filters can often produce false positives, removing or mislabeling decidedly non-“toxic” information that is of interest to the public.