Could Facebook have known about the ominous threats a gunman sent by direct message before, according to Texas authorities, he massacred 19 children and two elementary school teachers? Could it have warned the authorities?
Texas Governor Greg Abbott revealed the online messages sent minutes before Wednesday’s attack, describing them as if they were posts shared with a wide audience. Facebook countered that the gunman had sent private one-on-one direct messages, not public posts, and that they were discovered only “after the terrible tragedy”.
The latest mass shootings in the US by active social media users may put more pressure on social media companies to increase their oversight of online communications, even as conservative politicians, Abbott among them, urge the same platforms to relax their restrictions on some forms of expression.
Facebook parent company Meta monitors people’s private messages for certain types of harmful content, such as links to malware or images of child sexual exploitation. Copies of known images can be detected through unique identifiers, a kind of digital fingerprint, which makes them relatively easy for automated systems to flag. Interpreting a string of threatening words, which may turn out to be a joke, satire, or song lyrics, is a much harder task for artificial intelligence systems.
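The fingerprint-matching idea can be sketched in a few lines. Production systems such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-encoding; the simplified version below uses a cryptographic digest, so it only catches byte-for-byte copies, but it shows why matching known images is cheap while interpreting text is not. The blocklist digest here is just the SHA-256 of the bytes `b"test"`, an invented stand-in for a real database entry:

```python
import hashlib

# Hypothetical database of digests of known prohibited images.
# Real systems use perceptual hashes that tolerate re-encoding;
# a cryptographic digest like SHA-256 only catches exact copies.
KNOWN_BAD_DIGESTS = {
    # sha256 of b"test", standing in for a real image's fingerprint
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_image(image_bytes: bytes) -> bool:
    """Return True if the content's digest matches a known identifier."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_DIGESTS

print(flag_image(b"test"))   # exact copy: flagged
print(flag_image(b"test!"))  # any alteration: missed
```

The asymmetry is the point: a lookup in a set of fingerprints is trivial for a computer, while deciding whether a sentence is a threat requires context no hash can encode.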
For example, Facebook could flag certain phrases like “going to kill” or “going to shoot.” But without context, something AI systems still struggle with, the company would be buried in false positives. So Facebook and other platforms rely on user reports to help catch threats, harassment, and other violations of the law or their own policies. As the latest shootings show, those reports often come too late or not at all.
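The false-positive problem is easy to reproduce. A naive keyword filter, sketched below with invented example messages, flags hyperbole and slang just as readily as a genuine threat:

```python
# Naive keyword filter: flags any message containing a watch-phrase.
# Illustrative only; the phrases and messages are invented examples.
WATCH_PHRASES = ("going to kill", "going to shoot")

def flag_message(text: str) -> bool:
    """Return True if the message contains any watch-phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in WATCH_PHRASES)

messages = [
    "I'm going to shoot hoops after school",  # benign: basketball
    "This exam is going to kill me",          # benign: hyperbole
    "I am going to shoot up the school",      # genuine threat
]

# All three messages trip the filter: without context, the
# system cannot separate figures of speech from real danger.
print([flag_message(m) for m in messages])  # [True, True, True]
```

At the scale of billions of daily messages, a filter that cannot tell the first two examples from the third would generate far more alerts than any review team could investigate.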
Even this kind of monitoring could soon become impossible, as Meta plans to roll out end-to-end encryption on its Facebook and Instagram messaging systems next year. With such encryption, no one but the sender and recipient, not even Meta, can read the messages. WhatsApp, also owned by Meta, already uses it.
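The reason the platform itself is locked out comes down to key agreement: the two endpoints derive a shared secret that never crosses the network. A toy Diffie-Hellman exchange, with a deliberately small prime and nothing like the real Signal protocol WhatsApp uses, illustrates the principle:

```python
import secrets

# Toy Diffie-Hellman key agreement. The prime is far too small
# for real security; modern messengers use elliptic curves and
# additional machinery. This only shows why a relaying server
# never learns the conversation key.
P = 4294967291  # largest prime below 2**32, demo use only
G = 5

alice_secret = secrets.randbelow(P - 2) + 1  # never leaves Alice's device
bob_secret = secrets.randbelow(P - 2) + 1    # never leaves Bob's device

# Only these public values travel through the server.
alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)

# Each side combines its own secret with the other's public value.
alice_key = pow(bob_public, alice_secret, P)
bob_key = pow(alice_public, bob_secret, P)

assert alice_key == bob_key  # identical key, never transmitted
```

The server sees `alice_public` and `bob_public` but, lacking either secret, cannot compute the shared key, which is why end-to-end encryption forecloses the kind of server-side message scanning described above.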
A recent report commissioned by Meta highlighted the benefits of such privacy but also pointed to some risks – including users who could abuse the encryption to sexually exploit children, facilitate human trafficking and spread hate speech.
Apple has long had end-to-end encryption on its messaging system. That has brought the iPhone maker into conflict with the Department of Justice over the privacy of messages. After the fatal shooting of three American sailors at a naval installation in December 2019, the Justice Department insisted that investigators needed access to data from two locked and encrypted iPhones that belonged to the alleged gunman, a Saudi aviation student.
But security experts have warned that such backdoors make encryption systems inherently insecure. The mere knowledge that a backdoor exists is enough to focus the world’s spies and criminals on finding the mathematical keys that unlock it. And once they do, everyone’s information is vulnerable to anyone holding the secret key.