Livestreamed Massacre: Tech’s Hard Lessons from Mass Murder
Today, mass shooters like the one behind the Buffalo, N.Y., supermarket attack don’t stop at planning their brutal assaults. They also craft marketing plans, streaming their massacres live on social platforms in hopes of fomenting more violence.
Sites like Twitter, Facebook, and now the game streaming platform Twitch have learned painful lessons from dealing with the violent videos that often accompany such shootings. But experts are calling for a broader discussion of live streams, including whether they should exist at all, given that once such videos go online, they are almost impossible to erase completely.
The self-proclaimed white supremacist gunman, who police say killed 10 people, most of them Black, had a GoPro camera mounted on his helmet on Saturday to stream his attack live on Twitch, the video game streaming platform that was also used by a gunman who killed two people at a synagogue in Halle, Germany, in 2019.
He had previously outlined his plan in a detailed and extensive series of online diary entries that were apparently posted publicly before the attack, though it is unclear how many people may have seen them. His goal was to inspire copycats and spread his racist beliefs. After all, he was a copycat himself.
He decided not to stream on Facebook, as another mass shooter did when he killed 51 people three years ago at two mosques in Christchurch, New Zealand. Unlike Twitch, Facebook requires users to sign up for an account to watch live streams.
However, not everything went according to plan. By most accounts, the platforms responded faster to stop the Buffalo video from spreading than they did after the Christchurch shooting in 2019, said Megan Squire, a senior fellow and technology expert at the Southern Poverty Law Center.
Another Twitch user who watched the live video probably flagged it to the attention of Twitch’s content moderators, she said, which would have helped Twitch cut the stream less than two minutes after the first gunfire, according to a company spokesperson. Twitch did not say how the video was flagged.
“In this case, they did pretty well,” Squire said. “The fact that the video is so hard to find right now is proof of that.”
In 2019, the Christchurch shooting was live-streamed on Facebook for 17 minutes and quickly spread to other platforms. This time, the platforms seemed to coordinate better overall, particularly in sharing digital “signatures” of the video that are used to detect and remove copies.
But platform algorithms can have a harder time identifying a copycat video if someone has edited it. That caused problems when some internet forum users remade the Buffalo video with twisted attempts at humor. Tech companies, Squire said, should use “more fancy algorithms” to detect those partial matches.
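The “signatures” and partial matches described above generally rely on perceptual hashing: a frame is reduced to a short bit fingerprint that changes little under small edits, so near-duplicates can be caught by comparing fingerprints rather than exact file hashes. The following is a minimal illustrative sketch of that idea, not any platform’s actual system; the function names, frame sizes, and thresholds are invented for the example.

```python
def average_hash(pixels):
    """Simple perceptual hash of a grayscale frame.

    pixels: 2D list of 0-255 grayscale values (e.g. a downscaled frame).
    Returns a bit string with '1' where a pixel is brighter than the
    frame's mean brightness, '0' otherwise.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Two tiny 4x4 "frames": the second is a lightly re-edited copy
# of the first (small brightness tweaks on every pixel).
original = [[ 10,  10, 200, 200],
            [ 10,  10, 200, 200],
            [200, 200,  10,  10],
            [200, 200,  10,  10]]
edited   = [[ 12,  11, 190, 205],
            [  9,  14, 198, 202],
            [195, 204,  12,   8],
            [201, 199,  11,  13]]

h_orig = average_hash(original)
h_edit = average_hash(edited)
dist = hamming_distance(h_orig, h_edit)

# An exact cryptographic hash of the edited file would no longer match,
# but the perceptual fingerprints stay close (small Hamming distance),
# flagging the clip as a likely near-duplicate.
print(dist)  # 0: the small edits didn't flip any fingerprint bits
```

Heavier edits, such as cropping, overlays, or re-cuts, push the distance up, which is why partial matches need the “more fancy algorithms” Squire describes rather than a single fixed threshold.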
“It seems darker and more cynical,” she said of the recent efforts to redistribute the video.
Twitch has over 2.5 million viewers at any given time; according to the company, about 8 million content creators stream video on the platform monthly. The site uses a combination of user reports, algorithms, and moderators to detect and remove violence on the platform. The company said it quickly removed the gunman’s stream, but did not share details about what happened on Saturday, including whether the stream was reported or how many people watched the attack live.
A Twitch spokesperson said the company shared the live stream with the Global Internet Forum to Counter Terrorism, a nonprofit group founded by tech companies to help others monitor their platforms for rebroadcasts. But clips of the video still made their way to other platforms, including the site Streamable, where they were available for millions of people to watch. A spokesperson for Hopin, the company that owns Streamable, said Monday that it was working to remove the videos and terminate the accounts of those who uploaded them.
Looking ahead, platforms could face moderation complications from a Texas law, reinstated last week by an appeals court, that bans large social media companies from “censoring” users’ viewpoints. The gunman “had a very specific stance,” and the law is vague enough to create a risk for platforms that moderate people like him, said Jeff Kosseff, an associate professor of cybersecurity law at the U.S. Naval Academy. “It really puts the thumb on the scale of keeping up harmful content,” he said.
Alexa Koenig, executive director of the Human Rights Center at the University of California, Berkeley, said there has been a shift in how tech companies respond to such events. In particular, Koenig noted, coordinating the companies to create fingerprint repositories for extremist videos so they can’t be re-uploaded to other platforms “has been an incredibly important development.”
A Twitch spokesperson said the company will review how it responded to the shooter’s live stream.
Experts suggest that sites like Twitch could exert more control over who can live stream and when, for example by building in broadcast delays, vetting users before allowing them to stream, and banning rule violators. Koenig said: “There should also be a general societal conversation about the usefulness of live streaming and when it is valuable, when it is not, and how we set safe standards around how it is used and what happens when you use it.”
Another option, of course, is to stop live streaming altogether. But that’s almost impossible to imagine, as tech companies rely on live streams to attract and engage users to make money.
Koenig said that freedom of speech is often the reason tech platforms give for allowing this form of technology, beyond the unspoken profit motive. But that has to be weighed “with rights to privacy and some other issues that arise in this case,” she said.