by Jenny Domino
On Feb. 24, 2021, three weeks after Myanmar’s military (the Tatmadaw) staged the coup that changed the course of Myanmar’s future, Facebook announced it was banning all “remaining” military and military-controlled state and media entities from Facebook and Instagram, including ads from military-linked commercial entities. To this end, Facebook said it would use the United Nations Fact-Finding Mission on Myanmar’s (FFM) 2019 report on the military’s economic interests to identify the relevant commercial entities. Though Facebook had removed military accounts and pages in the past for their involvement in human rights violations – most notably, in 2018, the account of Senior-General Min Aung Hlaing, now chairperson of the State Administration Council – the company’s 2021 decision went much further by indefinitely suspending military and military-related accounts and pages regardless of content or behavior.
In other words, contrary to popular opinion, former President Trump’s account was not the first high-profile account to be indefinitely suspended by Facebook. Commander-in-Chief Min Aung Hlaing’s de-platforming was described as “unprecedented” in 2018, but outside of Myanmar watchers, it garnered little global attention, much less debate.
The 2021 de-platforming of the Tatmadaw offers a renewed opportunity to engage with how Facebook – and other powerful platforms – should do their part to deal with authoritarians and human rights-violating institutions like the military in Myanmar. Facebook’s decision to de-platform the Tatmadaw was the culmination of incremental steps the company took in response to the “emergency situation” unfolding in Myanmar since the coup. For example, on Feb. 11, Facebook decided to “significantly reduce” the distribution of false content emanating from military accounts and pages still operating on the platform, but stopped short of an immediate outright ban. And it had previously declined to ban the military’s presence on its platform altogether, despite the institution’s implication in the Rohingya human rights crisis. At each of these moments, Facebook took action too late, and too incrementally, to avert harm – harm that the platform knew was imminent and which its very design facilitated. Facebook’s history in Myanmar highlights the broader problems with content moderation in vulnerable contexts, and it should serve as a cautionary lesson to companies that wish to prevent their platforms from facilitating atrocities.
A Dance of De-Platforming and Platforming
The coup did not happen overnight. Experts observed that the groundwork had been carefully laid months in advance to delegitimize the results of Myanmar’s November election. The Tatmadaw’s social media presence formed a key part of the plan to control the narrative surrounding the poll, a narrative the Tatmadaw later invoked to justify the power grab.
Facebook’s latest de-platforming decision removed the infamous “Tatmadaw True News Information Team” page, the military’s official Facebook presence, and the account of Major General Zaw Min Tun, the military’s spokesperson. It also covered the MRTV and MRTV Live “news” pages. According to Facebook, these pages repeatedly violated its prohibitions on inciting violence and coordinating harm.
As was locally reported last year, the Tatmadaw set up the “True News” page in June 2020 to provide “accurate news” ahead of the November election. This detail is crucial for analyzing the present, as the military has alleged voter fraud as a pretext for the coup. The Tatmadaw has so far attempted to justify its actions – including the unlawful use of lethal force; arbitrary arrests of protesters, democratically elected leaders, journalists, and celebrities; the shutdown of independent media; and the dismissal of nationwide calls to respect the people’s vote – ironically under the guise of protecting “democracy.” The pages most recently banned by Facebook were used to disseminate the Tatmadaw’s false narrative of election fraud and enabled it to lay the groundwork for the coup.
One may wonder why the Tatmadaw True News Information Team and Zaw Min Tun were allowed on Facebook in the first place. Indeed, I asked this question here last year upon the creation of the page. To recall, Facebook banned Min Aung Hlaing and other generals in 2018 for their involvement in serious human rights violations in Myanmar. As found by the FFM then, the Rohingya and other ethnic minorities suffered the brunt of these violations, some of which constituted crimes under international law. As regards speech that could be expected to incite violence or discrimination, the FFM specifically found that Facebook, along with other forms of media, “enabled the spread of … hateful and divisive rhetoric” targeting the Rohingya in a country where, as the FFM observed, “Facebook is the Internet” (para. 1345). Given the platform’s dominance in the country, the FFM found it “unsurprising that propagators of hate speech resort[ed] to Facebook to wage hate campaigns, amplify their message, and reach new audiences.” The platform was also “widely used to spread misinformation … by government officials and the Tatmadaw” (para. 1346).
As I explained last year, Facebook attributed its 2018 de-platforming decision neither to the 2018 FFM report nor to any of its Community Standards, despite the latter supposedly being the governing law on the platform. Moreover, although select military officials were de-platformed, civilian government officials equally found by the FFM to have disseminated hate speech against the Rohingya were allowed to remain on the platform with apparently little to no consequence. More importantly for the present context, only select military accounts were permanently suspended rather than the military as a whole, without any explanation for this particular strategy. The Tatmadaw television network, Myawaddy, was in fact banned in 2018 but allowed to reappear until it was banned again in the wake of the coup. It was at least in part as a result of these gaps that the Tatmadaw was able to set up accounts such as the Tatmadaw True News Information Team. Even now, Facebook has inexplicably decided to allow at least 23 other pages and profiles “controlled and/or operated by the Tatmadaw” (without specifying which ones) to continue operating, opting only to significantly reduce the distribution of their content.
What Is Proportionate?
It is apparent by now that Facebook’s lack of clarity and consistency in its 2018 de-platforming decision has returned to haunt it in 2021. Both Zaw Min Tun and the Tatmadaw True News Information Team that Facebook platformed in 2020 figured prominently in the coup that has derailed Myanmar’s fragile path to democracy. And Zaw Min Tun remains the military junta’s spokesperson, now leading the Information Team of the State Administration Council.
As a non-State actor, Facebook has a corporate responsibility to respect human rights under the U.N. Guiding Principles on Business and Human Rights (UNGPs), which includes adherence to the International Covenant on Civil and Political Rights (ICCPR). Article 19 of the ICCPR requires that any measure limiting the right to freedom of expression satisfy the principles of necessity and proportionality. This means imposing the least intrusive measure necessary to achieve a legitimate aim when regulating expression. The legitimate aims are themselves narrow: the protection of national security, public health and morals, public order, and the rights of others. How these standards apply to social media platforms seeking to regulate users’ speech, including State actors’ speech, has generated robust debate, but the U.N. Special Rapporteur on freedom of expression notes that platforms have an arsenal of tools to proportionately address problematic content. De-platforming, or the permanent suspension of a user’s account, is the most extreme response.
In March this year, Facebook released its Corporate Human Rights Policy, wherein it formally committed to respect human rights as laid out in key international instruments. The non-profit BSR recommended adopting such a policy in 2018 in its human rights impact assessment of Facebook’s operations in Myanmar. An important component of this commitment is clarity on Facebook’s de-platforming approach to world leaders of illiberal and authoritarian regimes, as I initially raised here and here. Such a commitment also demands ongoing human rights due diligence that accounts for the wider history and context of the places where Facebook operates, rather than only the immediate circumstances surrounding a tragic event. This contextual familiarity is critical to inform questions of risk and to apply the standards of legitimacy, necessity, and proportionality, which are necessarily fact-based.
In the context of Myanmar, a comprehensive ban on military and related accounts appears to have been warranted for some time, given the well-documented and egregious violations with which these accounts have been associated. The FFM reports in 2018 and 2019, Facebook’s own de-platforming decision in 2018, years of widely documented human rights violations in Myanmar, the assortment of international legal proceedings concerning these human rights violations, the prevalence of military-controlled state media, the state of censorship in the country, and other considerations all support a blanket ban on military-linked accounts. Facebook has in fact been moderating Myanmar military-linked accounts under its Coordinated Inauthentic Behavior policy continuously since its initial 2018 actions.
Instead, Facebook’s decisions to (i) platform the Tatmadaw True News Information Team and Zaw Min Tun in 2020, (ii) belatedly reduce the distribution of military-related content ten days after the coup, and (iii) wait until the third week of the coup to indefinitely suspend military and related accounts do not add up to a sufficient response. Facebook justified the indefinite ban in the third week of the coup by invoking four factors:
The Tatmadaw’s history of exceptionally severe human rights abuses and the clear risk of future military-initiated violence in Myanmar.
The Tatmadaw’s history of on-platform content and behavior violations that led to us repeatedly enforcing our policies to protect our community.
Ongoing violations by the military and military-linked accounts and Pages since the February 1 coup, including efforts to reconstitute networks of Coordinated Inauthentic Behavior that we previously removed, and content that violates our violence and incitement and coordinating harm policies, which we removed.
The coup greatly increases the danger posed by the behaviors above, and the likelihood that online threats could lead to offline harm.
Factors 1 and 2 were true long before the coup, while Factors 3 and 4 were as true in the first week of the coup as in the third. This is also not the first time that the Tatmadaw has disregarded the people’s vote. Further, rumors of a coup spiked in late January this year, prompting diplomatic missions in Myanmar to release a joint statement urging the military to recognize the election results.
Facebook’s responses had also been partially preempted: By Feb. 24, Facebook and other social media platforms were already banned in Myanmar as part of the military junta’s series of network disruptions, which, since the coup on Feb. 1, has involved internet and mobile network shutdowns and social media and website bans. Despite the local social media ban, however, Facebook’s decision still carried weight, as many people within Myanmar continue to access the platform through virtual private networks (VPNs).
Overall, Facebook’s response pales in comparison to its relatively swift action to de-platform former President Trump soon after the U.S. Capitol riots and then refer the matter to the Facebook Oversight Board. Although abhorrent, the violence at the U.S. Capitol was mild compared to the scale of violence called for and facilitated by military-linked Facebook accounts in Myanmar. And the threat to democracy posed by the Jan. 6 insurrection was dwarfed by the actual overthrow of democracy on Feb. 1 in Myanmar, and by the international crimes committed several years earlier during the Rohingya crisis. These contrasts reveal a broader problem with Facebook’s approach to content moderation in the most fragile contexts.
A Global Conversation Centered On At-Risk Populations
Facebook’s inconsistent and often-belated de-platforming approach in Myanmar should invite deeper reflection on the parameters of social media access provided to world leaders of illiberal and authoritarian regimes. In its decision on the Trump ban, the Facebook Oversight Board made a policy recommendation to Facebook to “publicly explain” the applicable rules when imposing account-level sanctions against influential users, including its strikes and penalties process. Facebook should take up this recommendation and clarify how it enforces such policies abroad. This problem is also not unique to Facebook. Other platforms such as TikTok and YouTube have respectively moderated Tatmadaw soldiers and video channels for violent content, but have been vague about these content decisions.
Further, beyond formally committing to provide access to remedy in line with the UNGPs, social media companies should explore how various forms of remedy and reparation (including compensation, rehabilitation, and satisfaction in the form of public apologies, memorials, and truth-telling) ought to be made available to communities in Myanmar affected by the adverse human rights impacts that their technology or business operations have engendered. As suggested by Rohingya refugees in Bangladesh, this would include engaging with human rights victims, responding clearly and promptly to requests, providing free internet access to refugee camps, and using their influence to promote an open internet, especially in the region where the majority of their users are located.
The U.S. government can also play an important role by considering the global impact of domestic legislation applicable to American platforms before such companies are implicated in atrocities elsewhere. For instance, as I suggested here, talks of reforming Section 230 of the Communications Decency Act would benefit from discussions of how the safe harbor provision affects users in varying political contexts, which, in turn, can affect U.S. foreign policy.
As the world ruminates on the Facebook Oversight Board’s recent decision on Trump’s de-platforming, the international community must realize that other countries have needed this kind of intervention long before de-platforming became an issue in liberal democracies. As news from Myanmar continues to shock and inspire, it is time to center the lived experience of at-risk populations, caught between a rock and a hard place, in conceptualizing how online speech ought to be governed in an interconnected world. Let’s not wait for democracy – no matter how imperfect – to unravel before noticing the signs.