International | Accidental cover-up

Social-media platforms are destroying evidence of war crimes

Content-moderation policies have led to evidence being erased, sometimes before it is ever seen

TECHNOLOGY HAS always mattered in the prosecution of war crimes. The Nazis who stood trial at Nuremberg were damned not only by war reporters’ photographs and films but also by their own typewriters and mimeographs. Forensic science and satellite imagery aided the prosecution of Rwandan and Yugoslav war criminals. In August Salim Jamil Ayyash was convicted in absentia by the Special Tribunal for Lebanon for his role in a bombing in 2005 that killed 22 people in Beirut, among them Rafik Hariri, the country’s former prime minister. Mr Ayyash was first identified by algorithmic analysis of mobile-phone metadata.

Social media opens a new frontier in such investigations. In 2016 a court in Frankfurt convicted a German national of war crimes after photos posted to Facebook showed him posing in Syria with the severed heads of enemy combatants impaled on metal poles. But social-media firms are in a tricky position. They are under pressure to protect users from horrific content and extremist propaganda, and keen to stay on the good side of governments. That leads them to adopt stringent content-moderation policies. Yet those policies have also led to the loss of evidence of human-rights violations. As a result, opportunities to bring the perpetrators of appalling atrocities to justice may be missed.

It is not hard to see why investigators have increasingly turned to social media to gather evidence. Conflict zones are difficult and dangerous to visit. Eyewitness reports are fallible and can be manipulated. Gathering information remotely lets investigators corroborate evidence or generate new leads. Fighters bragging about their exploits on Facebook might inadvertently give away their location through the metadata in their photos, landmarks in the background or even the weather. The same boast may give prosecutors evidence of intent, a necessary element of a successful prosecution.
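
To give a flavour of how such clues are extracted, the sketch below is a hypothetical illustration, not any investigator’s actual tooling. It reads the GPS coordinates that many smartphones embed in a photo’s EXIF metadata, assuming the Pillow imaging library; the file name and the printed coordinates are invented for the example. Most platforms strip this metadata on upload, which is one reason original files matter to investigators.

```python
# A minimal, hypothetical sketch of metadata-based geolocation, assuming
# the Pillow imaging library; "boast.jpg" is an invented file name.
from PIL import Image

GPS_IFD = 0x8825  # standard EXIF pointer to the GPS sub-directory


def to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) rationals to a signed float."""
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg


def gps_coords(path):
    """Return (latitude, longitude) from a photo's EXIF GPS tags, or None."""
    gps = Image.open(path).getexif().get_ifd(GPS_IFD)
    if not gps:
        return None
    # Tags 1-4 are GPSLatitudeRef, GPSLatitude, GPSLongitudeRef, GPSLongitude
    return to_degrees(gps[2], gps[1]), to_degrees(gps[4], gps[3])


print(gps_coords("boast.jpg"))  # e.g. (36.20, 37.13), a point near Aleppo
```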

User-generated evidence is especially useful for international bodies like the International Criminal Court (ICC) that do not necessarily have the ability to serve subpoenas or search warrants, or adequate funding to mount a thorough investigation. In 2017 the ICC issued an arrest warrant—the first based on social-media evidence—for Mahmoud al-Werfalli, a commander in the Al-Saiqa Brigade (a unit of the self-styled Libyan National Army controlled by Khalifa Haftar, a warlord from the east of the divided country), for his involvement in the murder of 33 people.

However, such evidence can be far from perfect. Those who record it on the ground often lack professional evidence-gathering experience, can be selective in what they film and put themselves at enormous risk. Evidence can be misattributed, staged or manipulated—a growing concern given the rise of deepfakes, highly plausible but fabricated audio and video clips created using machine learning. Wading through the mass of potential evidence constantly being uploaded requires time and resources. Its use in court is also new, so there is little precedent as to what judges will admit and what weight they will give it. And all of that assumes it is not destroyed before investigators can use it: a new report from Human Rights Watch, an advocacy group based in New York, alleges that social-media platforms are erasing evidence of atrocities from the internet. Despite these problems, user-generated contributions can provide otherwise unavailable material towards the gold standard for war-crimes investigations: the triad of physical, documentary and testimonial evidence.

Take the example of Bellingcat, an investigative-journalism site that used online materials to uncover the involvement of Russia’s 53rd air defence brigade in the shooting down of Malaysia Airlines flight MH17 over eastern Ukraine in 2014. When asked by prosecutors to provide this evidence, it found that much of it, hosted on Facebook, Twitter, YouTube and VKontakte, a Russian social-media site, had been taken down. It tenaciously scoured the internet for alternative copies, contributing to the prosecution in absentia of three Russians and a Ukrainian, whose trial opened in the Netherlands in March 2020. But the initial deletions endangered the investigation.

Such deletions are common. The Syrian Archive, a non-profit group that records and analyses evidence of human-rights violations in Syria, estimates that of the nearly 1.75m YouTube videos it had archived up to June 2020, 21% are no longer available. Almost 12% of the 1m or so tweets it logged have disappeared. Had the Syrian Archive not collected copies, this evidence might have been lost for ever. Human Rights Watch revisited the links to social-media evidence that it used in public reports between 2007 and 2020, most published in the last five years, and found that 11% of the sources it had cited as evidence of human-rights violations had disappeared.

Some of this is because users remove content themselves. But much is a result of platforms’ policies. Though they remove horrific content for good reasons—videos of beheadings and extremist propaganda are not what Twitter means by “see what’s happening in the world right now”—their methods are largely self-regulated and often ignore the content’s evidential value. They are also zealous from a desire to stay on governments’ good sides, fearing that failure to remove offensive or extreme content might invite more stringent regulation. And they have been stung before: Facebook came under scrutiny for taking over a year to remove material posted by Myanmar’s armed forces to foment genocidal rage against the Rohingya in 2017.

In that case, Facebook preserved much of the content it had removed, but it has not made it easy for investigators to get their hands on it. Gambia has brought a case against Myanmar to the International Court of Justice on behalf of the Organisation of Islamic Co-operation, a group of 57 mainly Muslim states. But the case has been delayed as Gambia tries to convince an American court to compel Facebook to release the content so it can be used as evidence.

It is unclear precisely what happens to content after it is removed from platforms. It is often retained for a time, though for how long varies with each platform’s terms of service and its legal obligations. But once content is out of public view, investigators have difficulty gaining access to it. And rules on data retention, such as privacy laws requiring that user data be deleted after a period, may lead to its permanent erasure.

As the use of algorithms to remove offending content has increased, the problem of lost evidence has got worse. In the summer of 2017 hundreds of thousands of videos from Syria were taken down from YouTube by a new algorithm that was unable to differentiate between material posted by ISIS and that posted by human-rights activists. After some negative press, YouTube restored many of the videos. Initially the heads of social-media companies were caught off guard, having never imagined that their platforms would be used for such purposes. Now they have little excuse. As one Google product manager puts it, Syria is “the first YouTube conflict”.

More and more, social-media companies are using algorithms that remove content before it ever reaches the public. A video that can be seen, even briefly, elicits a trail of comments, which survive even if it is later deleted. These can help investigators establish the network of people involved in an incident, according to Alexa Koenig of the Human Rights Center at the University of California, Berkeley, helping “to sketch out the who, what, where, when, why and how”. But posts taken down before publication leave no public trace. Of the content that Facebook removed for violating its community guidelines between January and March 2020, 93% was flagged by automated systems rather than human moderators. Of those posts, half were removed before any user saw them.

Social-media companies have tried to mitigate the problem on their own. In December 2016 Facebook, Microsoft, Twitter and YouTube set up a communal database, now run by the Global Internet Forum to Counter Terrorism (GIFCT), in which terrorist content is marked with a unique “hash”, a digital fingerprint that lets other platforms recognise copies. As of July 2020 it held over 300,000 hashes. But a hash does not lead to the automatic removal of content. That decision rests with individual platforms, and little is known about how they respond when content is flagged in the database. Numerous human-rights groups have complained about the lack of transparency. It is also unclear how effective the system is: if content is edited—the speed changed, the length altered—it can slip past the filters.
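
Why edited copies slip through can be seen in a toy sketch, below. It illustrates the general idea only, not GIFCT’s actual technology: an exact cryptographic hash changes completely if a single byte of a file is altered, whereas a perceptual hash of the kind real matching systems reportedly favour barely moves when an image is lightly edited.

```python
# A toy illustration, not GIFCT's actual system: exact hashes are brittle,
# perceptual hashes less so.
import hashlib


def sha256_hex(data: bytes) -> str:
    """Exact fingerprint: changes completely if a single byte changes."""
    return hashlib.sha256(data).hexdigest()


def average_hash(pixels: list[list[int]]) -> int:
    """Toy perceptual hash: one bit per pixel, set when the pixel is
    brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (p > mean)
    return bits


def hamming(a: int, b: int) -> int:
    """Count of differing bits; a small distance suggests the same image."""
    return bin(a ^ b).count("1")


original = [[10, 200], [30, 220]]    # stand-in 2x2 grayscale image
re_encoded = [[12, 198], [29, 221]]  # the "same" image after a light edit

print(sha256_hex(bytes(sum(original, []))))    # two utterly different
print(sha256_hex(bytes(sum(re_encoded, []))))  # cryptographic hashes...
print(hamming(average_hash(original), average_hash(re_encoded)))  # ...but 0
```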

A better solution would be for platforms to preserve deleted content that might have evidential value—or to pass it on to an independent archive, or to several. This information should be accessible only to those with a legitimate interest, argues Ms Koenig. An independent body would not only preserve potential evidence but could also help verify it, and ensure that it is collected and stored in ways that increase the chance of its being admitted into court and given weight as evidence. Privacy would still need respecting: social-media platforms must take care when sharing ordinary users’ data so as not to violate privacy statutes or cause harm to the users themselves. When those users have filmed a local army unit engaged in a massacre, say, the risks are that much greater.

Preliminary work is already being done. Experts are analysing potential models and drafting international protocols to improve how such evidence is collected and verified so that it can be used in prosecutions. But preserving potential evidence will require, above all, a commitment to do so from those in charge of social-media platforms.

In his final report on the Nuremberg trials, Telford Taylor, chief counsel for the prosecution, marvelled at the quantity of documentary evidence created by a “Teutonic penchant for making detailed records of events and conversations” and made available only by the “rapidity and completeness of Germany’s final military collapse”. Had the Nazis had more time, perhaps they could have erased the evidence of their misdeeds. Social media produces far more evidence than the Nazis ever did. But it can also destroy it at blitzkrieg speed.
