Internet and Democracy: Trump’s case and the consequences 

Antonio Spadaro, SJ

Communication / Published: 4 February 2021 / Last Updated: 17 March 2021



On January 20, 2021, Joe Biden was sworn into office as the 46th President of the United States of America and began moving into the White House. The transition of power from his predecessor was anything but easy. On January 6, supporters of Donald Trump had stormed Capitol Hill after he had incited his backers, especially his 88 million followers on Twitter, to take back a victory that, he said, had been “stolen.”

Shortly after, on January 8, Twitter decided to block the outgoing president’s account “due to the risk of further incitement to violence.” Facebook, Instagram, Twitch and Snapchat followed, suspending Trump’s accounts. Apple and Google removed access to Parler, a social networking app widely used by his supporters, and Amazon withdrew the hosting services on which the company relied.

This chain of decisions provoked divergent responses. On the one hand, there was outrage against corporate managers violating freedom of expression; on the other, the decision to silence Trump was greeted with relief, as if a bomb ready to explode had been defused just in time. The New York Times even published a long list of all Trump’s verbal attacks on social media from 2015 to the present.


* * *

It is very important to understand the meaning of what happened because it touches the relationship between technology and democracy, which is more fundamental than ever today. There are at least two issues to consider in order to begin to understand how it is possible that digital platforms could have silenced a democratically elected head of state, such as the president of the United States of America.

The first issue is an American law. Section 230 of the Communications Decency Act states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” So, in accord with this legal provision, social networks are not responsible for the content they help disseminate, whatever it may be. Yet their decision to intervene seemed to spring from a sense of responsibility.

The second consideration is that social media and social networks are private platforms that ask users to accept certain norms, particularly those about conduct that incites hatred. Whoever violates these rules – a private citizen or a head of state – can have their account removed from the platform. So, the networks’ management evaluates and judges what decision to make without having to answer to higher authorities, even if they are political and fully legitimate.

For a long time, the attitude of these social networks toward President Trump and his communications was ambiguous at best; only recently did they begin “flagging” posts that put into circulation content contrary to their rules and guidelines, because they were judged to be fake news or hate messages. On the other hand, Facebook, starting in 2016, introduced the “newsworthiness” exception, whereby the restrictive rules do not apply if the content in breach of them is deemed to be in the public interest, particularly if it is circulated by politicians. Then there is the case of Steve Bannon, who was never removed from Facebook despite the fact that he called for the beheading of two senior U.S. government officials on air! This is the same Bannon that Trump pardoned a few hours before leaving the White House. Trump’s social media fell silent only once he was on his way out of office, electorally defeated. Too late for many.

What Jack Dorsey, Twitter’s CEO, wrote is interesting: “I do not celebrate or feel pride in our having to ban @realDonaldTrump from Twitter, or how we got here.” And he clarified that the choice was made due to “an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety.” That said, Dorsey continued, “I feel a ban is a failure of ours” in the goal “to promote healthy conversation.” These actions, he admitted, set “a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.”

* * *

Were they right, then, to ban Trump on social media? What was at stake? What Dorsey wrote is really important. Clearly and lucidly he identified the critical issue of a situation that cannot be easily resolved: private companies today exercise a real and strong power over a part of the global public conversation and over the way democracy is lived and expressed.

Today, online conversation via social networks has significant political weight. On the one hand, the ability of citizens to participate and express their opinions is growing: citizenship today must also be digital. On the other hand, the possibility of manipulating public opinion is also growing, thanks to the astute use of data and algorithms, and the possibility of inciting hatred and spreading false news.

The censorship of Donald Trump has highlighted that the digital environment today is a private sphere where the rules of the owners of the communication platforms apply. In this specific case, this seems to have protected people from further incitement to violence. But the problem remains: Who decides? And when can intervention be triggered? Currently, the private rules of the contract apply. And what is the boundary between the application of rules and the censorship mechanism?

Technology has brought about profound changes in our social and political life. We must take note of this. A strong realization of this came on December 15, 2020, when the European Commission presented its proposal for the Digital Services Act.

At a time when digital platforms perform an important public service with democratic relevance, they require a social conscience (along with an urgent effort at improved digital education) and a consequent political decision: they cannot be left free to self-regulate with private rules and secret algorithms. What is needed is transparency, forms of protection and vigilance, together with an awareness of the business models of the platforms, which simultaneously control infrastructure, content, users and the advertising market.

The fate of our societies depends on it.