This episode explores challenges generated by the digital age and their impact on freedom of expression in an unequal world. What threats does disinformation pose to democracy? Why are minorities unable to exercise their right to free expression equally in the digital space? And how can big data and tech corporations be subject to accountability and regulation?
Guests featured in this episode:
Irene Khan, the first woman ever to hold the mandate of UN Special Rapporteur on Freedom of Opinion and Expression.
She is also a Distinguished Fellow and Research Associate at the Graduate Institute's Albert Hirschman Centre on Democracy.
Previously, Irene Khan was Secretary-General of Amnesty International (2001-2009) and Director-General of the International Development Law Organization (2010-2019).
Democracy in Question? is brought to you by:
• Central European University: CEU
• The Albert Hirschman Centre on Democracy in Geneva: AHCD
• The Podcast Company: Novel
Glossary for this episode:
Who is Maria Ressa?
(00:05:22 or p. 2 in the transcript)
Maria Ressa, in full Maria Angelita Ressa, is a Filipino-American journalist who, through Rappler, the Manila-based digital media company for investigative journalism that she cofounded, became known for detailing the weaponization of social media and for exposing government corruption and human rights violations. Her reporting led to a backlash from the Philippine government, and Ressa, who holds dual citizenship, became an international symbol of the fight for freedom of the press in hostile circumstances. With Russian journalist Dmitry Muratov, she was awarded the 2021 Nobel Peace Prize, cited for using “freedom of expression to expose abuse of power, use of violence, and growing authoritarianism in her native country.” Source
What was the insurrection of Capitol Hill in 2021?
(00:06:02 or p. 2 in the transcript)
The United States Capitol attack of 2021 was the storming of the United States Capitol on January 6, 2021, by a mob of supporters of Republican President Donald J. Trump. The attack disrupted a joint session of Congress convened to certify the results of the presidential election of 2020, which Trump had lost to his Democratic opponent, Joe Biden. Because its object was to prevent a legitimate president-elect from assuming office, the attack was widely regarded as an insurrection or attempted coup d’état. The FBI and other law-enforcement agencies also considered it an act of domestic terrorism. For having given a speech before the attack in which he encouraged a large crowd of his supporters near the White House to march to the Capitol and violently resist Congress’s certification of Biden’s victory (which many in the crowd then did), Trump was impeached by the Democratic-led House of Representatives for “incitement of insurrection” (he was subsequently acquitted by the Senate). Source
What is the Facebook Oversight Board?
(00:18:49 or p.4 in the transcript)
The Oversight Board was created to help Facebook answer some of the most difficult questions around freedom of expression online: what to take down, what to leave up, and why.
The board uses its independent judgment to support people’s right to free expression and ensure those rights are being adequately respected. The board’s decisions to uphold or reverse Facebook’s content decisions will be binding, meaning Facebook will have to implement them, unless doing so could violate the law.
The purpose of the board is to promote free expression by making principled, independent decisions regarding content on Facebook and Instagram and by issuing recommendations on the relevant Facebook company content policy.
When fully staffed, the board will consist of 40 members from around the world who represent a diverse set of disciplines and backgrounds. These members will be empowered to select content cases for review and to uphold or reverse Facebook’s content decisions. The board is not designed to be a simple extension of Facebook’s existing content review process. Rather, it will review a select number of highly emblematic cases and determine if decisions were made in accordance with Facebook’s stated values and policies. Source
What is a platform law?
(00:21:06 or p.5 in the transcript)
The internet would seem to be an ideal platform for fostering norm diversity. The very structure of the internet resists centralized governance, while the opportunities it provides for the “long tail” of expression mean that even voices with extremely small audiences can find a home. In reality, however, the governance of online speech looks much more monolithic. This is largely a result of private “lawmaking” activity by internet intermediaries. Increasingly, social media companies like Facebook and Twitter are developing what David Kaye, former UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, has called “platform law.” Through a combination of community standards, contract, technological design, and case-specific practice, social media companies are developing “Facebook law” and “Twitter law,” displacing the laws of national jurisdictions. Source
SR: Welcome to "Democracy in Question," the podcast series that explores the challenges democracies are facing around the world today. I'm Shalini Randeria, Rector and President of Central European University in Vienna and Senior Fellow at the Albert Hirschman Centre on Democracy at the Graduate Institute in Geneva. This is the final episode of season three of "Democracy in Question," and of course, we'll continue with season four in a couple of weeks. My guest today is Irene Khan, who has been the UN Special Rapporteur on freedom of opinion and expression since August 2020.
The first woman ever to hold this mandate, she's an internationally recognized advocate for human rights, gender equality and social justice. She was Secretary-General of Amnesty International from 2001 to 2009, and Director-General of the International Development Law Organization from 2010 to 2019. She has also been consulting editor of the "Daily Star," Bangladesh's largest English-language newspaper. My conversation with Irene Khan focuses on the challenges generated by the digital age and their impact on freedom of expression in a highly unequal world. What kinds of threats does disinformation pose to democracy? Why are minorities unable to exercise their right to free expression equally in the digital space? Are the structures of inequality that silence the voices of women reproduced within cyberspace too? What role do big data and big tech corporations play in enabling or curtailing freedom of expression? And how can their enormous power be subject to accountability and regulation? These are some of the issues that I discuss in this episode with the UN Special Rapporteur, Irene Khan. They have also been the subjects of her widely read reports. So Irene, a warm welcome to you, and thanks for making the time for the podcast today.
IK: Well, thank you very much, Shalini, for giving me this opportunity to talk to you about what I consider, and what I think many people consider, to be some of the biggest challenges but also opportunities of our time with digital technology.
SR: So, let's go straight, Irene, to the heart of the digital revolution, which is turning out not to be such a panacea after all. Not only does the digital divide continue to separate us into the haves and have-nots of the new information age, but social media-based mass communication also presents us with unprecedented challenges. Your mandate as the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression has, of course, been crucial in drawing attention to some of the key challenges the digital age presents. Disinformation and the gender digital divide are two of the major issues that you have engaged with. So, let's start with the question of disinformation, and perhaps also link it to the question of media capture. On the one hand, we are seeing the ownership of media becoming ever more concentrated and difficult to regulate; on the other hand, we have seen an amplification of disinformation using digital technologies. I'll come to disinformation in a moment, but could you first share some of your insights, some of your experiences, in dealing with the whole topic of media capture?
IK: You have made a very interesting point, Shalini. You know, the biggest weapon against disinformation is actually more factual, well-balanced information delivered through an independent, diverse, pluralistic media. And that is why it is very disturbing that just as we confront these problems of disinformation, the space for press freedom, freedom of the media, independence of the media, and pluralism in the media sector is actually shrinking, for a whole range of reasons. One of which, of course, is the ownership of media, more and more concentrated in a few hands. Sometimes these hands are pretty close to the ruling class, to the political party in power. So that's extremely disturbing. And what both of these trends - disinformation as a result of digital technology on the one hand, and the shrinking space for independent, diverse media voices on the other - actually mean is that people no longer trust the information that they receive. The integrity of information is challenged. We no longer share the same facts. And as you know, Maria Ressa, who received the Nobel Peace Prize last year, has made it very clear: "If we don't have a shared understanding of truth, then that is the death knell of democracy." We see it in the U.S., for example, with the "big steal" on one side and the "big lie" on the other in democratic elections. That's why I worry about what is happening to legacy media at the same time as digital technology is expanding the space for other forms of information.
SR: So let me pick up on the point, Irene, which you just made about the U.S., because it's a year since the insurrection at the Capitol in Washington, D.C. in January 2021. And what has come to light in the meanwhile is the astonishing impact of conspiracy theories, largely disseminated through online forums and groups, among sections of the American population who still refuse to believe that Trump lost the election. The pernicious impact of this kind of disinformation on democracy affects us all, whether in the U.S. or in other parts of the world, where that kind of information has fueled violence against ethnic minorities, religious minorities, and so on. What are some of the key countermeasures to disinformation that you have been able to identify? And I ask myself, can the international human rights framework on which you have worked so much earlier provide a template for formulating some of these countermeasures?
IK: Let me start by pointing out, Shalini, as you know very well, that disinformation is not a new phenomenon. In my report to the Human Rights Council, the first report that the human rights system has produced on disinformation, I actually begin with the story, 2,000 years ago, of how Octavian used a vicious disinformation campaign to discredit Mark Antony and become the first Roman emperor, Augustus. What has happened 2,000 years later is that digital technology has opened new pathways to create, disseminate, and enormously amplify the spread of disinformation in ways beyond our imagination - polarizing societies, endangering democratic elections, discrediting science and the COVID response, for instance. It also attacks people: journalists, human rights defenders, feminist activists, dissidents and so on.
Now, at the same time, we shouldn't forget that these same social media platforms have also created a huge opportunity for us to communicate, whether it's family chats, political dialogue, or the organization of groups - women's groups, for example, feminist groups, LGBTIQ groups, minorities - who are not welcome in the public space in the real world. That is why, when we seek to introduce restrictions to deal with disinformation, we have to be very careful that we do not at the same time close off the taps of free speech. We should not throw the baby out with the bathwater. So the challenge, I think, for governments is not just to use regulation but to make it “smart regulation”. “Smart regulation” means encouraging these platforms to be transparent about their content moderation policies and about the algorithms that they use, which are actually at the bottom of the problem. It also means that when the companies are producing these products, they need to ensure that they have actually done risk assessments to see what the adverse impacts are, and that they provide remedies for those whose posts are taken down.
And on the side of governments, I think it's extremely important to ensure data protection, because what is driving the social media platforms is, of course, their ad-driven business model, and that is based on collecting our data. The longer we stay engaged on the platforms, the more data the companies get, and the more they use that data to drive their business. And therefore, data protection is extremely important. But we also need to look at means other than regulation, because regulation will only take us so far.
What we need to look at also is: how do we put users more at the center of it? For me, platform users are not just customers or consumers of information; they are actually rights holders, and therefore they need to have much more choice. They need to have digital and media literacy so that they build their own resilience, so that they know how to manage this new information ecosystem that's emerging. And the next pillar, I would say, is to create an environment for diverse and reliable information. That means ensuring that governments themselves increase their own transparency so that people can trust public information. It means encouraging an environment for a pluralistic, free, independent media, free of political influence, so people can actually have diverse sources of reliable information. So, I think it's around those four pillars. One is smart regulation, the second is responsible behavior by these companies, the third is a user-centered approach, and the fourth is an environment where press freedom is respected. That's the way forward to deal with disinformation and slowly build back the public trust that has been lost.
SR: So, I think you make a very important point about over-regulation by states, which could, of course, be much more detrimental. On the other hand, self-regulation by corporations has not been adequate either. And I'll come to that in a moment. But before I do, Irene, there is another point you mentioned in passing earlier that I think we should dwell on for a minute: the whole question of gender. There is an ambivalence to social media. On the one hand, these platforms have enabled a lot of women's voices to be amplified and diffused, and they have given new space to minority groups and LGBTQ rights groups. On the other hand, we have also seen a lot of hate speech directed particularly against women and minority groups. We've seen a lot of misogyny, and we've seen many human rights defenders being targeted, especially on unregulated social media. And we have seen how big tech companies have become quite complicit with governments in authoritarian contexts, coming together to pose serious threats to freedom of expression. So, let's talk about the gendered nature of some of these threats first, and then we'll come to the whole question of surveillance and then regulation.
IK: I think you have raised a very important issue: gender justice and freedom of expression in the digital age. That was the subject of my first report to the UN General Assembly last year in October. And, as we all know, expression is not free for many women and gender-nonconforming individuals in the real world. That is why, for them, the digital space has offered opportunities that they never had before to organize and to share ideas - especially, I would say, for feminist activists or LGBTQI groups. But at the same time, that same space has been polluted with all the prejudices and biases that exist in the real world against women and others. And we have seen a spike in online violence, gendered hate speech, and disinformation, especially targeting women journalists, women human rights defenders, and women politicians. The objective is quite clear: to push them out of the digital space.
And in today's world, if you are a public figure and you are not safe in the digital space, and so you get off the digital space, you disappear. And that, of course, reduces diversity. It affects democracy, because a very important part of our society is excluded. That's extremely dangerous.
Now, at the same time, we have to ensure not that the space is shut down, not that we shut down the digital platforms with so much regulation, but that we make those platforms safe. And that can be done. There has been some effort made by some of the platforms, but many more suggestions are coming up for how one can, for example, have buttons by which you can shut off some of this horrible speech, or by which you can file complaints. And of course, most of these platforms have systems for dealing with complaints and so on. But, you know, they haven't actually been geared towards handling online violence. Online gender-based violence is an interesting phenomenon in many ways. It's different from the way in which violence takes place in the real world. But it is relatively easy to detect online, and companies need to be more vigilant. They need to introduce instruments for dealing with it, and they need to empower the women and the LGBTQI activists online to be able to use those instruments to protect themselves.
SR: So, let's talk about the role companies are playing at the moment in regulating the system as a whole. Companies themselves want to regulate the system, but they also want, in a sense, to frame what one could call platform law. Much more than the state, it is the companies which say: we will be able to self-monitor and self-regulate, and we will also provide the legal framework, which should be binding on us - though more in the sense of best practices than anything that can really be sanctioned by authorities, which we do not yet have, because we would probably need a transnational institutional framework for that. One case we have is Facebook, which has instituted an Oversight Board. Could you say something about the experiences, the dilemmas, the challenges of self-regulation by companies? Because if we don't want to leave it entirely up to the state, which, as we've seen, can immediately shade into censorship, and we don't have another regulatory authority, at the moment we can only leave it to the companies. Or how else could we regulate the digital sphere and make the large corporations accountable?
IK: I would say, Shalini, of course, you're absolutely right. There isn't a global set of rules. There isn't a global governance system for social media at the moment, and one is unlikely to emerge in the near future because governments have different points of view on it. So, what we have, on the one hand, is state regulation - sometimes good, sometimes not so good, because it goes too far in interfering with freedom of expression - and then we have self-regulation. In some countries, for example the United States, self-regulation really is all that exists, and that is also not enough, as we know, because companies, being companies, are driven by profit, not necessarily by the public interest. And there is a big danger there, in that they are not necessarily living up to their human rights responsibilities. Under the UN Guiding Principles, companies have a responsibility to uphold human rights through due diligence, risk assessment and, as I said earlier, being transparent and open about what they are doing. Now, under a lot of pressure, heavy public criticism and, to some extent, fear of over-regulation or any regulation by governments, some companies like Facebook have tried to show that they themselves are acting to improve their content moderation, to avoid breaching human rights standards and other standards, and at times their own policies.
So, Facebook has created this very interesting Oversight Board, which I would call a kind of “supreme court in the digital world”, in the metaverse of Facebook. It has been created independently of Facebook through a foundation. It is composed of very well-known, highly reputed experts: international law experts, journalists - for example, the former editor of "The Guardian" - a former prime minister of Denmark, and very well-known human rights activists, for example from Human Rights Watch. They are members of this Oversight Board. Their responsibility is to look into complaints that users are making about the way in which Facebook operates its content moderation policies and to give judgments. Out of several hundred thousand such complaints, they have so far selected about two dozen or so. And interestingly enough, they are actually applying human rights standards, as well as Facebook's own policy guidelines, to determine whether Facebook is applying its content moderation policies well. And these judgments that are coming through offer very interesting interpretations of international law - very high-quality, well-thought-out judgments, I would say. What is unclear at the moment is to what extent Facebook will actually follow these decisions. We will have to see how that plays out in Facebook's policies; the Oversight Board has been in operation for only about a year now. But at the same time, it raises some very interesting questions about global governance.
We have normally seen global rules being set and applied by governments through, for example, international courts, such as the European Court of Human Rights or the Inter-American Court of Human Rights. We've seen standards being set by national courts. But here we actually have a body set up by a private entity, a company, that is using the same international standards to make judgments - very well-thought-out judgments - making them public, and getting views from a multi-stakeholder constituency through consultations. What we're actually seeing is what is called “platform law” being interpreted. So, are we going to have a dual legal universe, one created at the virtual level by the private sector, and another by states and state-owned entities at the real level? What will happen if national courts begin to be guided by the virtual world? Are we losing the nature, you know, of the universe that we've seen created by a state-dominated system? Are these early signs of the private sector taking over judicial functions? These are all very interesting questions that I think scholars need to look at.
SR: So, let me turn to one other aspect, which you briefly mentioned earlier, and that is the question of surveillance. I think there are two aspects to this. On the one hand, there is the monopolization of the digital world by a few big tech companies and their collusion with government agencies for the purposes of monitoring and surveillance of citizens, which has impinged on privacy and on the freedom of expression and movement of citizens. Edward Snowden's revelations about the global surveillance system led by the American National Security Agency first made us aware of these dangers, I think. But the recent disclosures by the Facebook whistleblower have shown the pervasiveness of the digital surveillance that tech companies themselves have deployed to track and predict the behavior of their users on an everyday basis. Such manipulative practices are, in a sense, less visible than Chinese-style crackdowns on freedom of expression.
So the question here is: what kinds of mechanisms, what kinds of legal norms, can be used to address this question of surveillance - both surveillance by companies of their users, who, as you have very rightly pointed out, should be seen not simply as paying customers but as rights bearers, and, on the other hand, the kind of surveillance that this entire technology has opened up for governments to use, hand in hand with companies that are subservient to the dictates of authoritarian regimes?
IK: Again, a very interesting challenge of our times, surveillance. Now, there are two types of surveillance here. One is what I would call data surveillance of our behavior on social media, which the social media platforms carry out all the time, collecting data about how and what we are watching on social media. From that, they seek to predict our behavior, and they actually drive us into rabbit holes of information, disinformation and so on, creating a whole range of other problems. What is happening there is actually a violation of our freedom of opinion.
Now, freedom of opinion under international law is an absolute right. There can be no restriction by any authorities, and effectively what international human rights law is doing is protecting our thought processes. When we express our thoughts, they can be regulated to some extent, for example through defamation laws, or on grounds of public order or security. But what we think in our minds, as long as we do not express it, is protected fully. Yet today, social media companies, by following our behavior through algorithms, are effectively manipulating our thought processes without our consent, knowledge or control. This is re-education - digital re-education, digital surveillance at an unthinkable level. This is an issue that has not been examined enough, and I think it's a very serious issue. The way to protect ourselves from that is through data protection. And governments have a huge responsibility there to control, through data protection laws, what companies are doing. That's one type of surveillance.
But there is another type of very dangerous surveillance that we see every day. For example, "The Guardian" reported on the electronic surveillance to which Human Rights Watch's researcher on Lebanon was subjected through her phone. This is the Pegasus spyware, as it's called - technology developed by the company NSO, which is being sold commercially to a large number of governments. At least NSO says they're selling it only to governments, but, obviously, a huge number of journalists' phones have now been infected by this spyware, along with those of thousands of others: dissidents, human rights defenders, and activists who speak out critically against governments. And what we see here is, again, rather like the social media platforms, a weak regulatory environment at national and international levels on the one hand, and on the other, an advance of technology without control, and public-private collaboration that is undermining human rights. Among the rights affected is freedom of expression: if journalists' phones are being surveilled, it has a chilling effect on their sources. No one is going to speak to the journalist again.
There are, of course, direct threats, because that information, that surveillance, is then used to arrest people. Women journalists in Bahrain whose phones have been infected have complained that not only have they been subject to pressure from governments, but in their own homes, in their societies, in their families, they feel that they are being watched, that their social behavior is being monitored. We need regulation - regulation of trade and export controls on these kinds of surveillance technology. We also need, at the national level, stronger legal systems to protect people from this kind of interference by private actors colluding with state actors. But when the state actors are part of the collusion, it becomes extremely difficult to do much at the national level. There's not much remedy left if your government is surveilling you and you don't have a strong judiciary to stand up to the government.
And so, I think the international community also has to look at these areas and set some rules, and I hope that the UN Human Rights Council, for example, will be looking at it. Because through both social media surveillance and these spywares, what we think, what we do, and what we say is being controlled in unacceptable ways, in violation of international human rights.
SR: So Irene, would multi-stakeholder regulation be one way to go?
IK: Absolutely. I think it has to be - it's actually the only way to go forward, because we cannot trust states entirely; they have a vested interest here. We cannot trust companies entirely, because they are profit-driven. And yet there is no representation of the users. So, I think what we need to look at more and more in this new world of regulating and managing technology is multi-stakeholder oversight. We need more multi-stakeholder oversight bodies, where civil society, governments, and companies can collaborate to ensure a safe space for free speech.
SR: Thank you very much, Irene, for these insights into the new challenges we are facing, the problems raised by the use of digital technologies and by the concentration of media ownership and power. But as you also very interestingly pointed out, disinformation has been with us for centuries; it's only the new technological challenges that we need to face up to now. So, thanks so much for this wide-ranging conversation, and thanks for being with me today.
IK: Thank you. It's been a pleasure.
SR: So, what we have heard is a very nuanced view of how the space for freedom of the media and freedom of opinion and expression is shrinking, with the whole independent, diverse, plural media landscape coming under pressure through media capture on the one hand and through disinformation, which is polarizing our societies, on the other. A shared understanding of truth then becomes a victim of the public distrust that arises as a result of this disinformation and media capture.
A shared understanding of truth is, however, key to functioning democracies, as we have heard. Smart regulation could be one answer: not leaving regulation entirely to governments, because this can slide very quickly into censorship, but also not leaving it entirely to the self-regulation of corporations themselves. We lack a global governance system for the digital sphere, and we are unlikely to get one. Self-regulation by big tech corporations seems to be all we have on the cards at the moment. But this is bringing into being a new legal universe, a universe of private international law, whose relationship to national state law is, at present, unclear.
What we have very clearly heard from Irene Khan is the importance of international human rights law in this context, as one of the major instruments that can protect us both against the kinds of surveillance that digital corporations have put in place and against the kinds that governments and corporations together are using against citizens.
This was the 10th and final episode of season three. Thank you very much for being with us today. We will resume with season four of "Democracy in Question" after a short break of a few weeks. Please go back and listen to any episode you might have missed. And of course, let your friends know about this podcast if you're enjoying it. You can stay in touch with the work of the Central European University at www.ceu.edu and the Albert Hirschman Centre on Democracy at www.graduateinstitute.ch/democracy.