
Since the start of the Israel-Gaza conflict in early October, social media platforms have been inundated with misleading and intentionally falsified content surrounding Hamas’s attacks on Israeli civilians and Israel’s military response. Reports suggest that Hamas is leveraging the lax content moderation on sites like X and Telegram to promote violent content, gain support from extremists, and terrorize civilians. Other actors have spread unverified stories about Hamas fighters inflicting extreme violence on Israeli children and women that have been used to justify the toll of death and destruction that Israel’s military response to the Hamas attacks has unleashed on civilians.

The Israel-Gaza conflict is only the latest example of the misuse of social media to spread disinformation and threaten civilian safety in war. In Ethiopia, online hate speech calling for violence contributed to real-world bloodshed. Hate speech shared on Facebook in Myanmar that was amplified by Meta’s algorithms catalyzed violence, including the torture, rape, and murder of thousands of civilians. In Ukraine, Russian and Russian-affiliated actors sowed panic and tried to manipulate population movements through disinformation.

Misleading and fake online content is especially dangerous during war because civilians increasingly rely on social media as their first source of information about their surroundings and use it to make decisions about their safety.

Across online platforms, the algorithms that decide what content to show users regularly promote polarizing posts, including those containing disinformation, in order to maximize engagement. While most of these platforms have policies committing them to moderate and remove certain inauthentic or harmful content, investment in moderation is often limited and less thorough in languages other than English. Telegram has very few content moderation rules, and a recent report showed that children are increasingly at risk of exposure to graphic images of violence from Israel-Gaza, made viral by TikTok’s and Instagram’s algorithms. Since Elon Musk took over X, the platform has rolled back its content moderation efforts, firing staff and relying instead on AI and community crowdsourcing to flag and regulate harmful content.

In April, X also began allowing users to purchase a blue check mark for their accounts, a shift that may make it more difficult for users to identify trusted information sources, since the symbol previously indicated the authenticity of influential accounts. Accounts with blue check marks are among those responsible for spreading disinformation about developments in the Israel-Gaza war. The platform has also restricted access to the public datasets that allowed open-source investigators to monitor disinformation and analyze its harmful effects.

As evidence of the negative effects of disinformation has grown, states and international organizations have stepped up their efforts to curtail it through a variety of governance mechanisms. The UN Secretary-General’s New Agenda for Peace, released earlier this year, recognizes the online dissemination of misinformation, disinformation, and hate speech—and the corresponding failure of social media platforms to respect human rights standards in monitoring online harm—as one of the core threats contributing to global instability and insecurity. In response to this threat, Secretary-General Guterres launched a process for establishing a Code of Conduct to secure voluntary commitments from Member States and digital platforms.

International legal frameworks can also provide a basis for efforts to govern the spread of disinformation. While disinformation itself is not prohibited by International Humanitarian Law (IHL), parties to a conflict are prohibited from using disinformation in a way that violates other provisions of IHL, such as terrorizing the civilian population or inciting violence. An International Court of Justice case brought by the Gambia against Myanmar could establish precedent under international law for holding states accountable for online hate speech. However, there is not yet strong legal consensus or precedent on how some provisions of IHL should be interpreted when the threat to civilians stems from disinformation. Addressing the responsibility of non-state actors can be more challenging. Under customary international law, victims’ right to a remedy for violations of human rights and the laws of war includes restitution, compensation, rehabilitation, and guarantees of non-repetition, depending on the context.

In addition, online platforms have a corporate responsibility to protect human rights. The UN Guiding Principles on Business and Human Rights stipulate that, when business enterprises identify that they have caused or contributed to adverse impacts, they should provide remediation. In early 2023, over a dozen UN experts, including Special Rapporteurs, penned an open letter calling for stricter regulation of hate speech on online platforms. They emphasized how the International Convention on the Elimination of Racial Discrimination, the International Covenant on Civil and Political Rights, and the United Nations Guiding Principles on Business and Human Rights (UNGP) provide a clear framework for how businesses can center human rights, accountability, ethics, and transparency in their business models. In practice, civil society and lawyers have argued that large platforms like Meta should be held liable for wide-ranging reparations and restitution in Myanmar and Ethiopia.

The strongest international legal framework for regulating large platforms is the EU’s Digital Services Act (DSA), enacted in November 2022. Under the law, designated “Very Large Online Platforms” such as Meta, TikTok, X, and Snapchat that fail to comply with its requirements can incur fines worth 6 percent of their global turnover and be placed under enhanced supervision. Repeated serious breaches can result in a temporary ban on services in the EU. In mid-October, EU regulators opened inquiries under the DSA into X, Meta, and TikTok to investigate terrorism-related and other illegal videos and images on these platforms related to the Israel-Gaza war. In response, Meta and TikTok have set up command centers with Hebrew and Arabic speakers, removed hundreds of thousands of illegal posts, and promised to enhance safety features. The threat of heavy fines under the DSA has pushed these platforms into active efforts to remove violent content.

At the national level, states around the world have adopted or considered at least 70 laws to combat disinformation and regulate digital platforms since 2019. Some states are exploring regulatory measures that would require platforms to enhance operational transparency instead of regulating individual pieces of content. Typically, this would mean implementing community standards governing a company’s handling of disinformation and its management of personal data for microtargeting, as well as expanding access for researchers and others to platform-held data. But legislative approaches can also define a narrow scope of solutions for identifying and removing harmful content. Additionally, some of the legislation introduced by states is flawed and overbroad, and poses a threat to protected speech.

As the rapid proliferation of disinformation in the Israel-Gaza conflict and in other conflicts around the world demonstrates, additional international and national governance efforts are needed to regulate the spread of false and harmful content on social media.

The EU’s ability to exert pressure on social media companies to regulate harmful online content about the Israel-Gaza war indicates that the most effective way forward might be for states and regional blocs to implement laws modelled after the EU’s DSA.

If states devised stronger domestic standards on compensation and other remedies when online platforms are found to be violating human rights norms, these platforms would be incentivized to improve internal content moderation policies and practice.

Additional guidelines and voluntary commitments secured by the UN can also help foster positive norms. For example, the UN’s upcoming Summit of the Future in 2024 could contribute to stronger international consensus on how IHL should be interpreted and applied to the use of disinformation, and on comprehensive regulatory frameworks for combatting disinformation. The UN Secretary-General has already identified information integrity on digital platforms as one of the core agenda items to be discussed at the Summit. At the international level, the UN Human Rights Council (HRC) could use its Special Procedures to convene an independent group tasked with investigating and addressing the threat of disinformation on online platforms. Such a group could devise practical guidelines for social media companies on applying human rights principles to their content moderation policies, accountability mechanisms, and impact assessments. The UN HRC could also more routinely authorize fact-finding missions and investigations into cases where disinformation may have contributed to violence and violations.


Arnaaz Ameer, CIVIC Research Fellow 

Lauren Spink, CIVIC Senior Research Advisor
