Internet shutdowns have emerged as an extreme, yet recurrent, practice to control online communication. In Africa, both autocratic and democratic governments have increasingly resorted to shutdowns in response to concerns about disinformation around elections or the potential for online hate speech to encourage violence. Partial or nationwide network disruptions, however, have also occurred at times when no threats seemed imminent, including during peaceful demonstrations and national exams.
Internet shutdowns appear disproportionate and abusive, especially from the perspective of citizens and end-users who are denied opportunities by a power that is arrogant as well as insecure or incompetent. When leaders who have long overstayed their time in office, such as Cameroon’s Paul Biya or Uganda’s Yoweri Museveni, assert their need and right to enforce suppressive measures to guarantee peaceful elections or prevent the threat of external interference, we see aging, despotic men clinging to power. But are all their claims illegitimate, mere cover-ups to retain control? What if these—and similar—arguments came not from them, but from more respected sources?
What if it were a respected leader such as Thomas Sankara who asserted the need for this kind of response? Sankara was a revolutionary and pan-Africanist who led Burkina Faso from 1983 until his assassination in 1987. Nigerian literary scholar Abiola Irele wrote that Sankara was “a leader with the genuine interest of the people at heart,” leading “a revolution in the true sense of the word.” His stature and commitment were recognized not only by his admirers, but also by his rivals, who saw how his style of leadership and commitment to socialism served as an inspiration to others on the continent. As a U.S. Embassy cable recognized, following his “example of simplicity, austerity, and honesty,” Burkina Faso had become “highly regarded for the lack of corruption in the government.”
Examining internet shutdowns through the life and thought of Sankara illuminates an often-overlooked aspect of these communication blocks: how these measures are a response to the overwhelming power of for-profit social media companies to enable unprecedented forms of interference with national politics—without taking responsibility for it.
This imbalance has glaringly emerged in whistleblowers’ leaks and revelations, which add to a growing body of evidence demonstrating Big Tech’s negligence and bias. Facebook whistleblower Frances Haugen has referred to her former employer’s strategy and behavior as hypocritical, expanding into new markets under the banner of “building community” and “bringing the world closer together.” In practice, social media companies have avoided taking responsibility and action when interactions between their platforms and local politics sowed and strengthened divisions and antagonism. Sankara would have called it a manifestation of imperialism—a term that has largely fallen out of fashion, but whose core tenets aptly describe the conduct of social media companies—which act in ways that seek to benefit the center of this power, disregarding the consequences on the peripheries.
The model of profitability for social media companies relies on attracting and keeping users’ attention, even when this means promoting vitriolic and polarizing content. Aware of this feature, but seeking to respond to waves of scandals and criticism, companies have invested in systems to remove hate speech and disinformation. But these efforts reflect deep inequalities and have been largely driven by financial incentives and disincentives.
The vast majority of content moderation activity focuses on rich markets, such as the United States or the European Union, that are in a position to force companies to act. There are a few exceptions, such as geopolitical events that are U.S. foreign policy priorities (Russia’s invasion of Ukraine, for example), or stories that galvanize global public opinion, such as the genocide against the Rohingya in Myanmar. But in 2020, 87 percent of the time allocated to training disinformation detection algorithms focused on English-language content, while only 9 percent of users were English speakers. For low-resource languages, including many spoken across Africa, the investment of resources and time can be measured in fractions of a percent. As a result, as Haugen has stressed, the most fragile countries end up using the least safe version of the platform: one with little to no content moderation.
These double standards in dealing with core and peripheral markets are also evident in how Big Tech companies openly interact with actors deemed powerful and resourceful. While Facebook has been forced to comply with Germany’s demanding and costly requests to remove content violating its national laws, it has largely dismissed demands originating from African leaders and legislators. This reflects another imbalance: policymakers and legislators in the Global North and the Global South differ sharply in their understanding of how Big Tech firms operate, and in the expertise and resources they can muster to engage and challenge these companies. Many European countries have specialized government departments monitoring online content and experienced lawyers ready to challenge companies. In September, the European Union opened an office in Silicon Valley with the explicit intention of extending the capacity of EU regulators to engage American social media companies, a benefit few African countries could afford.
An example of African countries’ struggle to engage with the rules set and implemented in California came just before the Ugandan elections in early 2021. The Uganda Communications Commission asked Google to take down seventeen YouTube accounts that it accused of inciting violence, compromising national security, and causing economic sabotage. Google declined, citing the lack of a court order. Nicholas Opiyo, a human rights lawyer, argued that the way the Ugandan government approached Google revealed a lack of understanding of how large social media companies operate and how content is assessed. He noted that the government cannot simply point to a statute and say the company is in violation of it. “Digital companies work on the basis of legitimate court orders,” he told The Observer. “In other words, there has to be due process to make the point of breach of the law. No digital company is going to take such a letter seriously. It will be put in the dustbin immediately.”
At the same time, during this preelection period, Facebook took down a number of progovernment pages for engaging in “coordinated inauthentic behavior,” despite allegations that the opposition was using similar tactics. The move was made on the recommendation of the Digital Forensic Research Lab, a nongovernmental organization focused on the opposition’s claims and concerns. The government saw Facebook’s action as biased and an unequal application of rules, arguing that the company was “tak[ing] sides” against the government. As Museveni argued, “We cannot tolerate the arrogance of anyone coming here to decide who is good and who is bad.” The internet was shut down during the election period, and Facebook remained banned for more than six months.
These arguments do not seek to justify or condone internet shutdowns. But recognizing shutdowns also as forms of contestation—rather than just abuses by despotic leaders—may open alternative avenues for responding to them. This is where we see the possibilities offered by a leader like Sankara. While many African leaders have—in the words of Cameroonian historian Achille Mbembe—adopted and fetishized the concept of the nation-state from colonial powers, and even borrowed terms “such as ‘national interest’, ‘risks’, ‘threats’ or ‘national security’ . . . [that] refer to a philosophy of movement and a philosophy of space entirely predicated on the existence of an enemy in a world of hostility,” this need not be the case. Rather, Mbembe suggests that African nations must abandon these concepts for “our own long held traditions of flexible, networked sovereignty.” Mbembe’s conclusions would align well with Sankara’s precepts.
It is within this world-of-hostility mindset that internet shutdowns are invoked by leaders as legitimate and proportionate responses, but it may be by relying on networked sovereignty that internet shutdowns are made redundant. Networked sovereignty has its roots in precolonial Africa, when long-distance trade was one of the drivers of cultural and political exchanges. Yet it is surprisingly akin to foundational ideas of the internet. Mbembe notes that at that time, these networks were more important than borders, and that what mattered most was the extent to which flows intersected with other flows.
As decolonization took root, newly independent African states were expected to exercise a monopoly on state functions almost immediately, once the colonial authorities transferred power to the local ruling elites. These leaders therefore leveraged media—including print, radio, and television—as tools for state and nation building, in order to generate a type of authority that could not be achieved during previous revolutions. Control of the media in the immediate postcolonial period combined authentic projects of community building, such as large-scale language and literacy projects, with self-serving tactics to retain power for the few.
Until recently, the ability of African governments to regulate media outlets so that they follow certain national standards had seemed within reach through coercion, cooptation, or negotiation (with the possible exception of some international broadcasters). However, social networking platforms, which are tremendously popular and evoke powerful imagery as tools for activism and contestation, have remained inaccessible to national authorities, thereby breaking this mechanism of control.
Sankara’s pan-Africanism and Mbembe’s image of Africa’s networked sovereignty could offer a stronger and longer-lasting response to this loss of control and deep inequality. Facebook and Google are betting on the exponential growth of data usage and production on the continent, financing two of the largest undersea cables on the coasts of Africa. As a result, greater coordination and solidarity among African leaders and collectives—of users, companies, and entrepreneurs—could force powerful tech actors to a single negotiating table. If the institutions of regional cooperation or the African Union were able to offer shared guidelines to counter online speech inciting violence, they could not only gain greater leverage with the tech giants, but also push back against members that claim internet shutdowns are the only means available to stop violent or destabilizing speech.