IGF 2023 – Day 4 – Open Forum #139 Non-regulatory approaches to the digital public debate

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> JUAN CARLOS LARA:  I think it is now time to start.  It is the moment we will begin this panel session right here.  Welcome, everyone who is attending this on the final day of the IGF 2023.  This is Open Forum #139, non‑regulatory approaches to the digital public debate.  Are we going to speak Spanish?

OK, cool. 

So welcome to this session.  This is the final day of this year's IGF.  It is a pleasure to be with you all.  First of all, I want to thank the organizers of this event, representing the office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights of the Organisation of American States.  Thanks also to the representatives of Sweden and the European Court of Human Rights that have supported the proposal for this session, and also to the Foundation for the Freedom of the Press in Colombia and the Centre for Studies on Freedom of Expression and Access to Information in Argentina. 

Second of all, I will introduce myself.  My name is Juan Carlos Lara.  I work for Derechos Digitales, a Civil Society organisation working on the intersection of Human Rights and digital technologies in Latin America.  I am coming from the city of Santiago in Chile, and my colleagues are scattered throughout the Latin America region.  Our concern as an organisation is how digital technologies can be used for the exercise of Human Rights, as well as how they can be a threat to Human Rights when they are regulated or misused by actors both private and public. 

Finally, I am going to briefly introduce the panelists by name.  They will be introducing themselves when it is time for their own interventions.  We are accompanied at this hour online by Mr. Pedro Vaca, the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights of the Organisation of American States.  Here on‑site we have Ana Cristina Ruelas, a senior programme specialist at the Freedom of Expression and Safety of Journalists Section at UNESCO, the United Nations Educational, Scientific and Cultural Organisation; Chantal Joris, legal officer at Article 19, the international Human Rights organisation working to protect and promote the right to freedom of expression; and Ramiro Álvarez Ugarte, the deputy director at the Centre for Studies on Freedom of Expression in Argentina.  Thank you all once again for attending this, and thank you to the panelists, who will be speaking in turn in a few minutes. 

The rules of this panel are as follows.  We will begin with a brief overview of the situation that has motivated this discussion, on what the digital public debate landscape is and what the challenges to Human Rights are with regard to online expression.  After that, each speaker will have 10 minutes for their interventions.  After that, if time allows, we will have a second round of reactions and participation, hopefully with audience interventions, mediated by the moderators here on‑site and also online. 

The guiding question that will open this discussion concerns the possibilities of non‑regulatory approaches: where they can succeed and the challenges they present.  But to introduce the subject, a few words from the moderator here.  We understand that in the intricate terrain of the digital public debate, we have faced for a long time a series of challenges to Human Rights that have been compounded, reinforced, and in some cases worsened by events around the world.  The failure of both private tech companies and states to fully comply with their Human Rights obligations has had profound consequences, affecting democratic institutions, Human Rights, and the rule of law.  Against the background of global and local crises in terms of war, disease, authoritarian rule, and Human Rights abuses that happen both offline and online, we are faced with challenges to Human Rights that are often addressed, or attempted to be addressed, through regulatory responses.  But because of the presence and importance of private actors, this always entails an interaction with companies that often have more power or more resources than many states. 

Over time, we have witnessed the far‑reaching impact of online violence, discrimination, and disinformation in the digital public debate, issues that have cast shadows over the virtual landscape, leading to harm, especially against marginalised and vulnerable communities and groups.  What was once a space promising diverse voices and perspectives has seen the development of hostile communicative environments, particularly for traditionally discriminated groups.  Furthermore, discourse has become polarised, distorting the conversations around essential matters and eroding trust in authoritative sources such as academia, traditional media, and public health authorities. 

To address these challenges, some regulatory proposals have come to the forefront at a global scale.  We have seen efforts by international organisations to provide guidelines, to provide guidance for regulatory responses.  We have seen that regional blocs have also reacted with their own concerns.  Many of these intricate systems have aimed to tackle diverse but interconnected issues, including competition, data protection, interoperability, transparency, and due diligence in the digital public sphere.  And while these efforts are critical for responsible behaviour online and for protecting Human Rights, they also introduce complex questions and concerns that demand careful consideration: about the balance of rights, about the roles of states, about jurisdictional issues, and about the enforceability of the provisions that are created. 

One of the pivotal questions that emerges is related to the fragmentation of the Internet.  While regulation is essential for safeguarding Human Rights, it is vital that these regulations do not inadvertently infringe on freedom of expression, privacy, and other Human Rights, so striking a delicate balance in the digital world is a formidable challenge. 

Notably, in many regions regulatory debates have been in their infancy or have been completely absent, especially in the majority world.  In this context, several principles and the application of international Human Rights law have played a crucial role in guiding the behaviour of companies that mediate online communications.  These principles have provided valuable guidance for alternative frameworks, but their effectiveness is a matter of discussion and debate. 

So in response to this debate, we are going to speak this morning about what these challenges are.  Since we have seen the advance of a global trend toward regulating platforms and the Internet in general as a path to address the growing threats to Human Rights, what are the limitations of these proposals?

If they have limited effects and, in some cases, can present tensions with the balance of Human Rights, what other policies, what other institutional and legal frameworks have been implemented, or can be implemented, globally or regionally to protect freedom of expression online and make for a diverse, equal, fair, nondiscriminatory, and democratic online public debate?

The first word is going to be to Mr. Pedro Vaca, the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights.  Pedro, please go ahead. 

>> PEDRO VACA:  Thank you.  Good morning there.  I hope you're having a great IGF this year.  Thank you very much.  Firstly, I would like to highlight that in the Americas, we have identified that the current dynamics of freedom of expression and the Internet are characterized by at least three aspects.  The first one is the deterioration of the public debate.  The second is the need to make processes, criteria, and mechanisms for Internet content governance compatible with democratic and Human Rights standards.  And third, the lack of access to connectivity and digital literacy to enhance civic skills online.  This is closely related to dynamics of violence, disinformation, inequalities in the opportunities for participation in the public debate, and the viralisation of extremist content.  We understand that fostering diverse and reliable information and free, independent, and diverse media, while addressing disinformation, violence, and Human Rights violations, requires multidimensional and multistakeholder responses that are grounded in the full range of Human Rights. 

As people worldwide increasingly rely on the Internet to connect, learn, and consume news, it is imperative to develop connectivity, and access to the Internet is an indispensable enabler of a broad range of Human Rights, including access to information.  An open, free, global, interoperable, reliable, and secure Internet for all, one that facilitates individuals' enjoyment of their rights, including freedom of expression, opinion, and peaceful assembly, is only possible if we have more people accessing and sharing information online. 

Additionally, in the informational scenario of media and digital communication, citizens and consumers should be given tools to help them assess the origin and likely veracity of news stories they read online.  Since it is relatively easy to access and spread information in this environment, malicious actors manipulate it to distort the public debate.  In this sense, critical digital literacy aims to empower users to consume content critically as a prerequisite for online engagement, by identifying issues of bias, prejudice, and misrepresentation.  Critical literacy, however, should also be about understanding the position of media technology in society.  This goes beyond understanding digital media content to include knowledge of the wider socioeconomic structures within which digital technologies are embedded. 

Here, we have a few questions.  How are social media platforms funded?  For instance, what is the role of advertisement?  To what extent is content free or regulated?

Given their importance for the exercise of rights in the digital age, digital media and information literacy programmes should be considered an integral part of education efforts.  The promotion of digital media and information literacy must form part of a broader commitment by states, and by business entities, to respect, protect, and fulfill Human Rights.  Likewise, initiatives to promote journalism are key in facing informational manipulation and distortion, which requires states and private actors to promote the diversity of digital and nondigital media. 

On the other hand, the role of public officials in the public debate must be highlighted.  It should be recalled that state actors must preserve the balance and conditions for the exercise of the rights of access to information and freedom of expression.  Therefore, such actors should not use public resources to finance content on sites, applications, or platforms that spread deceitful and violent content, should not promote or encourage stigmatisation, and should promote the protection of users against online violence. 

The state has a positive role in creating an enabling environment for freedom of expression and equality, while recognising that this brings potential for abuse.  In this sense, in the Americas, we have a recent example in Colombia of a decision by the Constitutional Court that urged political parties to adopt guidelines in their codes of ethics to sanction acts of incitement to online violence.  In this decision, the Court recalled the obligation of the State to educate about the seriousness of online violence and online gender violence and to implement measures to prevent, investigate, punish, and repel it.  The Court also insisted that political actors, parties, and movements, due to their importance in the democratic regime, are obliged to promote, respect, and defend Human Rights as a duty that must be reflected in their actions and in their attitudes.  Additionally, the Court ruled that the State should adopt the necessary measures to establish a training plan for members and affiliates of political parties and movements on gender perspective and online violence against women. 

Considering that unlawful and violent narratives can be amplified by State actors, paid content by State actors should follow specific criteria in the advertising market.  Any paid contracting for content by State actors or candidates must be reported, through active transparency on the government or political party portals, regarding the value of the contract, the contracted company, the form of contracting, the content resource distribution mechanisms, the audience segmentation criteria, and the number of exhibitions. 

On the other hand, to make business activity compatible with Human Rights, the office of the Special Rapporteur reiterates that Internet intermediaries are responsible for respecting the Human Rights of users.  In this sense, they should:  First, refrain from infringing Human Rights and address negative consequences on such rights in which they have some participation, which implies taking appropriate measures to prevent, mitigate and, where appropriate, remedy them.  Second, try to prevent or mitigate negative consequences on Human Rights directly related to operations, products, or services provided through their business relationships, even when they have not contributed to generating them.  Third, adopt a public commitment at the highest level regarding respect for the Human Rights of users, one that is duly reflected in operational policies and procedures.  And fourth, carry out due diligence activities that identify and explain the actual and potential impacts of their activities on Human Rights, which are also called impact assessments, in particular by periodically carrying out analyses of the risks and effects of their operations. 

In conclusion, to wrap up, the challenges facing the digital public debate require a multidimensional approach.  Soft law, as was stated before, education, self-regulation, and legal mechanisms can together create a framework to mitigate the harms we face online.  Let us strive for a digital space where freedom of expression and the protection of Human Rights are promoted, fostering a society that values inclusivity, diversity, and respect for all.  Thank you. 

>> JUAN CARLOS LARA:  Thank you very much, Mr. Pedro Vaca.  Thank you for those remarks, and thank you also for starting this conversation by addressing the need for a multidimensional approach.  This is not necessarily a discussion of regulatory versus non‑regulatory measures, but apparently of different types of measures at the same time.  We will now listen to the rest of our panelists, beginning, of course, with our second onsite participant here, Ms. Ana Cristina Ruelas, at the Freedom of Expression and Safety of Journalists Section at UNESCO.  You have 10 minutes.

>> ANA CRISTINA RUELAS:  Thank you.  Thank you very much.  It is an honour to share this panel with you, Pedro.  Good to see you.  So as Pedro said, we need a holistic approach to try to deal with this phenomenon.  UNESCO tries to foster public debate through education measures that I will not speak about, because this is not my area of expertise, but there's a lot of work done with teachers, with educators, to target potentially harmful content and harmful content online.  There's specific work being done to develop resilience in different communities, primarily in four countries, Bosnia, Indonesia, Colombia, and Kenya, through the Social Media for Peace project, which is funded by the European Union and aims to create media and information literacy measures, but also to develop a way of understanding how content moderation is happening in those different countries and how the different contexts and related issues allow this harmful content to be spread. 

And there's another action that has been happening that relates to capacity‑building for different stakeholders, such as judges, parliamentarians, and regulators, in order to understand that when addressing potentially harmful content, there's a need to safeguard freedom of expression, access to information, and diverse cultural content.  And there's work done also through the cultural sector in order to understand the impacts of harmful content on artistic freedoms and cultural expressions, such as indigenous expressions. 

And the last thing, which I think is also important, is that we also have another action that is related to policy advice, guiding member states in the process of acknowledging that the governance of digital platforms requires, as Pedro mentioned, safeguarding freedom of expression, access to information, and diverse cultural content, while addressing the phenomena of disinformation, hate speech, conspiracy theories, and propaganda. 

So in this session, I will focus on two main specific projects that UNESCO has been putting forward lately.  I will start with the Social Media for Peace project.  This project, as I said, has started in four different countries and allows us to understand what is happening with content moderation, how it is affecting different communities, and also how a nonregulatory approach can be successful when it is holistic, combined with other different types of solutions.  So the first thing we learned within the Social Media for Peace project is that context matters.  This means that when it comes to content moderation, language cannot just be left aside.  There are specific languages in different regions that are important to understand in order to address content moderation issues, and this is not happening in many countries, or in many of the countries that we're working on, which are also, specifically, countries that are in crisis or that are emerging from crisis. 

The second thing that we found important is that, despite acknowledging the crises, the lack of knowledge of the contextual nuances that the platforms should understand, and the problems that hateful content can create in the online world, there's a problem of not considering these countries a priority and then not providing enough funding for the development of content moderation measures.  So companies give specific priority to those countries that have global impact or that represent an important market share, and in the countries where this is not happening, they are not putting sufficient budget toward them, and then this is increasing and creating more problems. 

The Social Media for Peace project also found that when dealing with these problems, the most important thing to bear in mind is to have the capacity for dialogue between the different stakeholders, acknowledging that in conflict zones, there are many issues that were happening in the offline world that have to be considered in the online world.  That's why due diligence from the platforms is very important: understanding the context, having the possibility to develop risk assessments, and identifying the specific mitigation measures that they have to put in place in order to reduce the specific risks based on the context is very, very important. 

But while doing this work, and I want to say this, there were two main approaches.  The first one is faith in the companies to tune their economic interests, in how content moderation is done, to the public interest of informing people and reducing the impact of this content, which many times is also monetised through advertising, as has already been mentioned.  So that's the first question.  Are we keeping faith in shifting the companies' economic interests toward the public interest?  Many people in these countries still believe that this can be one of the approaches to push companies to increase their budgets in order to do better content moderation, and then have a safer space. 

Then there's another approach, which mainly comes from the states, and which Pedro has already commented on: trying to reduce this phenomenon with bad regulation, regulation that does not safeguard freedom of expression, that criminalises the user, that does not touch the companies, and that considers the user solely responsible for harmful content.  And that is another approach. 

And then UNESCO, after the work that has been done since the Social Media for Peace project started, is saying, OK, we are now acknowledging that these are the two different approaches.  What we need also is to start a debate that allows us to understand if it's possible to balance freedom of expression, access to information, and access to diverse cultural content while dealing with potential harm from content such as disinformation, hate speech, and conspiracy theories.  And while holding this debate, UNESCO started a consultation that led to more than 10,000 comments that came from the engagement of people from around 134 countries.  And what we learned is that when governance systems are transparent, have checks and balances put in place, align content creation and moderation with the UN principles of Human Rights, are accessible and inclusive of diverse expertise, and actually bear in mind the promotion of cultural content, then they can be a game‑changer.  So that's why UNESCO started developing these guidelines for the governance of digital platforms, which on the one hand recognize the state's responsibility for enabling a freedom of expression environment and, as Pedro has mentioned, have specific requirements for the governments to commit not only to freedom of expression online but also to all of their duties in respecting and promoting freedom of expression offline. 

And the second thing is that UNESCO acknowledged that creating a governance system requires the acknowledgment that any regulatory measure, which has to be coherent and comprehensive with the different kinds of regulatory arrangements, should come through a multistakeholder approach.  This means there is not only statutory regulation that depends on states and companies; there should be active participation of other stakeholders in the whole of the regulatory process, meaning the development of the regulation, its implementation, and its evaluation. 

The third thing that the guidelines state is that companies have to comply with five key principles.  The first is due diligence, which specifically states that companies have to develop different Human Rights risk assessments when they are developing new operations, enhancing new cooperations, creating new ownerships, or developing new products.  They have to do it prior to an electoral cycle, which is very important considering, for instance, that 2024 is a super election year and at least three quarters of the population that is able to vote will go to the polls in 2024.  They should also develop a Human Rights assessment when it comes to crises, emergencies, and armed conflicts, and they have to understand the different risks that the content the companies host poses to specific communities, such as journalists, environmental defenders, artists, or other vulnerable and marginalised communities. 

The second principle is transparency.  I don't have to go very deep into it.  The third is accountability.  The fourth is user empowerment, which means that within the governance system there should be specific programmes developed for media and information literacy.  And the fifth is the alignment of all actions with the UN Guiding Principles.  So this is the work that has been done so far.  We definitely believe, as Pedro said, and we state, that this is a holistic approach and that no action should stand alone, because if they don't come together with many other actions that relate to, yes, education, to, yes, the creation of communities, yes, (?) then these different phenomena will not be targeted.  Thank you. 

>> MODERATOR:  Thank you very much, Ana Cristina, for that extremely informative intervention on all of the initiatives that UNESCO is carrying out, including trying to provide guidance on regulation for governments in a manner that has included many rounds of consultations and a broad discussion, as you mentioned, with thousands of comments from the world over.  This, of course, as you have been mentioning, also enriches the learning inside the organisation itself on how to address many of these issues from the perspective of freedom of expression, access to information, and access to diverse cultural content, which I think is a key factor in all of this and one that is sometimes not explicitly addressed, so thank you very much for that.  Chantal, can you now please tell us your own view on these subjects?

>> CHANTAL JORIS:  Can you hear me?  Thank you very much.  I will try not to repeat too many points that have been made by the first two interveners, which are obviously excellent and extremely relevant: that we need to look at the whole toolbox, the regulatory and the non‑regulatory approaches.  Perhaps just very briefly, I think this discussion is very important because we do agree that with many of the proposals that we've seen, or legislation that has been adopted recently seeking to regulate platforms, there is indeed a danger that these will do more harm than good.  They talk a lot about holding platforms accountable, but very often what they do is not focused on the business model of the platforms, on the data tracking, on the advertising model; rather, they push the platforms to exert more control over user speech, so the focus goes from the platforms' own systems to the speech of users.  It is critical that any regulatory framework that has this strong an impact on freedom of expression is seriously grounded, that it is evidence‑based and, of course, grounded in the principles of legality, legitimacy, necessity, and proportionality, as Article 19 of the ICCPR requires.  This is also why, working more or less globally, what sort of solutions we think would be appropriate depends on the jurisdiction.  Although in principle we think sound regulatory frameworks should be in place, with many governments we would not advocate for passing legislation that would control platforms, because we do feel that it would not be a regulatory proposal respectful of freedom of expression but would instead give the government more options to control online speech.  Article 19 has also long advocated that it's extremely important to take the competition angle as well, because there are very few dominant players in this field.  
They are gatekeepers of these markets, and they are also really gatekeepers of our freedom of expression online.  We do strongly believe that decentralisation can, per se, have a positive effect on freedom of expression: more healthy competition, more empowerment for users.  For example, if a user thinks, I do not want to be on a certain platform because I do not think they respect privacy enough, and this is important to me, they should be able to leave that platform and still be connected, for example, to the contacts and family that wish to remain on that platform. 

As has been mentioned, the UN Guiding Principles can be a very important tool; they are, of course, an essential tool that we advocate for platforms to take into consideration all over the world.  So whether we have a good regulation in place, a bad regulation in place, or no regulation in place at all, that should always be the basic benchmark against which they operate, and a lot has been said about them, so I won't go into details. 

Also, since we're talking about the risks of the different approaches, we think that if we take the approach that enabling responses are also at the centre of this discussion, then the risks to freedom of expression are much more limited.  This is also linked to another observation we made.  Often we find that the discussions seem to say that the social media platforms are the cause of the problems, and we do not deny that they have exacerbated certain societal tensions and increased polarisation.  There's no question about it.  There is enough evidence that this is happening.  At the same time, we think it's essential to look at the root causes, for example, of disinformation, of hate speech, of online gender-based violence, and this may include certain regulation of the platforms' business model, but it also needs to look at very different areas outside the specific digital space. 

So for example, Article 19 published, now a couple of years ago, a toolkit on hate speech setting out what those different approaches need to look like, where we again need to look at regulatory and non‑regulatory responses, such as antidiscrimination legislation.  Public officials, as Pedro mentioned, should not themselves engage in stigmatising discourse and should counter such discourse when they encounter it; public officials need to receive equality training; and an independent and diverse media environment is needed.  All these aspects are obviously key to ensure that we have, offline so to speak, an environment that is inclusive and that will not translate into more extreme speech online.  And, of course, civic space, a strong civic space and strong Civil Society initiatives, are also a key component of that.  Also, to follow up on what Ana Cristina said, Article 19 is a partner of UNESCO when it comes to the Social Media for Peace project, and there have been a number of research reports, as Ana Cristina alluded to, that have really found the failings of the platforms in, again, sufficiently taking into account the contextual elements.  It starts with Human Rights teams that are not in place for many countries, so Civil Society in many countries don't have anyone to call at Meta, for example, if they say there's a video that needs to be taken down, or we see there is an election coming, or we see there's a crisis developing offline and online.  There's not anyone they might be able to talk to or who would be responsive.  That is obviously very important.  An additional problematic element is the use of automated content moderation tools.  While we recognise that content moderation obviously cannot happen only through human reviewers, it's also true that many of these tools are not sophisticated enough, and might never be, to really make a proper assessment of some very complex categories of speech.  
Even for a court, it can be very complex to make a judgment on, you know, was there really hate speech?

Was there the intention to incite hatred?

Was there disinformation? Was there an intent to publish false information and disseminate it?  Was there an intent to cause harm?

Obviously, doing this moderation at scale can present very serious challenges, and we always call for more human reviewers who are native in the languages that they moderate.  More local Civil Society organisations need to have direct, meaningful access to the platforms, because we also know that there have been these trusted partner programmes which have not always been very satisfactory, to say it mildly, and Civil Society has often found them a waste of time and resources, with limited impact. 

Because I know we are far advanced in time, I want to make a final reflection.  An interesting trend we are seeing now, which is non‑regulatory but also based on regulation, is the strategic litigation increasingly brought against online platforms.  Some very prominent recent examples are the US Supreme Court cases in which families of victims of terrorist attacks in Turkey and in France filed suits against Twitter and Google, saying that their systems failed in a way that enabled terrorist content to spread online and, in a sense, aided and abetted these terrorist organisations.  We have also had litigation in Kenya over the violent content that was spread in Ethiopia and moderated from Kenya, and strategic litigation has been brought over the failings in Myanmar.  That in itself, from our perspective, has some challenges, because from a freedom of expression perspective organisations have always said it is essential that platforms remain largely immune from liability for the content they host.  But at the same time, of course, there needs to be platform accountability, and there need to be remedies if they infringe on the Human Rights of the actors or affected communities in the respective countries. 

So here is where it will depend on how this litigation is brought.  We do not want to see courts saying that, after all, you need to be held liable for hosting terrorist content because it has led to a terrorist attack.  At the same time, it can be very interesting if we see more litigation that focuses on remedies for failures to conduct Human Rights impact assessments, to take Human Rights due diligence measures, and to implement mitigation measures properly.  So I do think it is a trend; it gets a lot of publicity, so there are reputational costs for the platforms, and it could also be pressure on them to essentially get their act together.  Thank you. 

>> MODERATOR:  Thank you very much, Chantal, for offering so many different pathways towards what we expect to see but is so difficult to achieve, which is accountability for the platforms.  That speaks to the role they have in exacerbating social problems even though, according to some views, they might not be creating them. 

So now, Ramir, your turn.  What frameworks have been implemented, or can be implemented, beyond just the regulatory ones to address the problems that we have with online speech?

>> RAMIR ALVAREZ UGUARTE:  Thank you very much.  I should introduce myself: I am Ramir Alvarez Uguarte, deputy director of the Centre for Studies on Freedom of Expression.  I don't want to be too repetitive of things that have already been said, so let me just offer a diagnosis of where we are, and also highlight a few tensions that I think underlie our discussion and have not yet been resolved.  It seems we are in an interim: the old has not yet died, and the new has not yet been born.  So we are at that moment in between the old and the new, and those are always interesting times to be in, and also challenging ones. 

I think we are clearly moving towards a regulatory moment, so in a way the question posed in this panel is somewhat in tension with where the world is going.  I agree with everything you just said, and I agree that regulatory and non‑regulatory measures are important and should take place at the same time.  But I think we are moving towards a regulatory moment.  The DSA in Europe is obviously what will most likely become a model that expands across the globe.  We have already seen copycat bills ‑‑ not legislation, since they have not been adopted yet ‑‑ presented in Congresses in Latin America.  Legislators in other countries look at the DSA and copy its language and some of its provisions, and that is a process in and of itself full of challenges.  We have also seen calls to revise Section 230 in the United States; because of Congress and its gridlock, it is difficult to imagine a comprehensive review of Section 230 happening anytime soon, but we have seen state‑level legislation passed imposing obligations on platforms.  We have already seen strategic litigation against companies, but not in the direction that you mentioned ‑‑ in the opposite direction.  For instance, the jawboning cases, which basically say that the kind of relationship the federal government has established with companies in the US violates the First Amendment.  In a way, litigation cuts both ways: it could be litigation that questions companies for failing to live up to Human Rights standards, but it could also be litigation against companies for violating the First Amendment, in the case of the United States.  So I think that's where we're going, and it will be interesting to see how we get there. 

In terms of alternatives, of course, the InterAmerican Commission has supported non‑regulatory approaches for a long time.  I was part of the 2019 process of discussing the guidelines to combat disinformation in the electoral context, and the main outcome of that was to support non‑regulatory measures.  So I'm not going to repeat what you just said, but literacy, of course, is incredibly important.  I would like to highlight, though, that literacy initiatives are in a way a bet on an old principle that was very cherished in the Human Rights and freedom of expression field: that, to an extent, it is our responsibility as democratic citizens to figure out what is fake and what is not.  The Internet makes it more difficult to exercise that responsibility, but I would underscore that those kinds of initiatives are a bet on that old principle.  We have not yet renounced it. 

And, of course, all kinds of counter‑speech measures are obviously attractive: they are not threatening from a Human Rights point of view, they are fairly easy to implement, and they are apparently quite successful.  What I have seen most successfully deployed is counter speech to combat disinformation in the context of elections in Latin America.  But again, calls for regulation have been happening.  Latin America has been very strongly supporting the kind of regulation that on paper looks very good and respectful of Human Rights standards; the same goes for the UNESCO guidelines.  Of course, the risk involved in these initiatives is something Chantal already mentioned: even legislation that is good on paper could do more harm than good, and I think this has to do with the lack, in many countries, of the institutional infrastructure necessary to adopt these kinds of regulations.  That is obviously a concern for activists, but as I said before, I think we're moving in the right direction, and we will have to deal with that as the time comes.  I'm pretty sure that in the next couple of years we will see legislation passed outside of the European Union, and we will have challenges in that sense. 

Now, I would like to highlight a couple of underlying tensions to close my remarks.  For instance, we have been discussing the importance of decentralization, and I would also agree with Chantal about the importance of antitrust legislation, which for practical reasons will happen where the corporations are incorporated, or in places where they have an important market presence and where the institutional infrastructure necessary to move such a process forward exists.  There is ongoing litigation in the United States against Google, and at the same time there are investigations in the European Union.  It is hard to imagine Latin American countries moving in that direction, but I think that's important. 

Now, it seems to me this is in tension with the framing of the DSA, or the framing of the regulations being proposed, because to an extent those kinds of regulations depend on a few powerful intermediaries.  If we were to break them all apart and have an Internet that is extremely decentralized, as it was towards the end of the 1990s and the beginning of the 2000s, I don't know how that would be compatible with increasing control, even control exercised in a way that is respectful of Human Rights.  If we have a truly decentralized web in which people get to choose, a lot of people will choose hateful content; a lot of people will choose and engage with discriminatory content; and if it is truly decentralized, there will be no way of controlling that.  So I think that is an underlying tension that to an extent speaks to a really deep and profound disagreement in the Human Rights field about what kind of future we imagine as desirable.  This is something that is there, underlying, and I think we should discuss it as openly as we can. 

You know, are we aiming to support freedom of expression in the form we affirmed it through the 20th century, where we largely relied on gatekeepers to keep speech in check?

Or are we embracing the decentralized promise of the Internet of the late 1990s, which means a lot of speech that is really problematic and harmful?  I think there is still a lot to figure out in terms of evidence: for a lot of speech called harmful, we just don't have enough evidence that it is actually that harmful.  But that underlying tension is there, and I think we should keep it in mind and discuss it more openly.  Thank you. 

>> JUAN CARLOS LARA:  Thank you, Ramir, for your sobering remarks and also for highlighting one of the trends we see towards regulation, even though we can discuss other forms of addressing some of these challenges. 

So I want to first check whether we have hands in the room that would like to pose any questions.  Otherwise, we will start to close this panel, since time is running out.  I see no hands, so I would like to pose a question myself to the panel, beginning with Pedro ‑‑ I don't know if you are there.  It will be a rapid round: if there is a future in which regulation will come, what is one challenge and one opportunity that we may find in non‑regulatory approaches that can be taken today, as soon as possible, among non‑governmental actors, in order to provide for the Internet that we all want and for the platform responsibility with Human Rights that we would expect?  We will go in the same order in which this panel began, with up to two minutes each.  Please, Pedro, you go first. 

>> PEDRO VACA:  Thank you, Juan Carlos, and let me just thank the whole panel for this amazing conversation.  The challenge ‑‑ I think the challenge that we have faced is the lack of capacity in a lot of member states.

I mean, we cover the Americas.  We monitor 30 different countries, and at this moment, October 2023, we do not have enough capacity, or even knowledge, among member states to be part of the conversation.  So I think we have to develop contact points at the foreign affairs ministries (audio cutting out)

we only have powerful countries with the capacity to then we have opportunities to deal with each other. 

And the opportunity ‑‑ I think this is why I highlighted the Constitutional Court of Colombia.  We can put all our efforts into the user and the consequences for the user, or we can also prioritise the role of public servants and political leaders.  I mean, if you have xenophobia or racism in a city, you have a problem.  If you have leaders that incentivize xenophobia and discrimination, you have a bigger problem.  As points of reference for society, democracies could and should better frame what is and what is not allowed at that level of representation.

I mean, the scope of freedom of expression for people who want to become political actors, or to participate in the political sphere, is limited if you compare it with ordinary citizens, and on that specific opportunity we have a lot of InterAmerican and international standards, so it is something that is not even soft law: you have rulings of the InterAmerican Court to support that.

>> MODERATOR:  Thank you, Pedro.  I will ask the rest of the panelists as well, first Ana Christina: one challenge, one opportunity.  (Off microphone)

>> ANA CHRISTINA RUELES:  The challenge is that the discussions focus a lot on what legislation will look like and not on the second step of the process, implementation.  I have been saying this in the different forums I have participated in: many regulators have the idea that once legislation is passed, no one cares about it anymore and they are left alone.  And as mentioned, there are many regulatory authorities that do not know how to deal with this issue and are not used to talking with Civil Society, so we need to break that tension and be able to create a conversation among them.  So that will be another opportunity. 

And an opportunity also is that, since the companies are all based in the same countries, what we see is that stakeholders in different countries and different regions ‑‑ for instance, in Africa, with the African Union ‑‑ are coming together, saying: OK, the companies don't care about any one of our countries per se; they don't have a specific interest in country X; but they do care if we ask together.  They are getting together with Civil Society, with electoral management bodies, with the African Union; they are coming together with the different stakeholders to go before the companies and say: this is what we need and this is what we want.  That said, this creates a great opportunity, because among 40 countries you have countries that actually believe that a Human Rights‑based approach is the way to go and other countries that do not.  So there is a balancing process, but for me it's a great opportunity. 

>> JUAN CARLOS LARA:  Thank you very much.  Chantal? 

>> CHANTAL JORIS:  Thank you very much.  In terms of challenges: society tends to move slowly, regulators tend to move slowly, and technology doesn't, so we are in this constant race to catch up.  There are a lot of initiatives in the European Union alone, for example: the AI Act, the Digital Markets Act, the Digital Services Act, the political advertising regulation.  It is a challenge even for Civil Society organisations already active in this field to keep up with everything and cover everything.  Not to mention, there are a lot of Civil Society actors that are very much impacted by what's happening in the digital space but are not necessarily experts in it: they are experts in, for example, women's rights, but not in content moderation, and those are quite technical subjects.  So I think this is one of the main challenges: the expertise and the capacity that it requires. 

In terms of opportunities, we do feel there is more recognition from some of the platforms and some of the regulators that Civil Society are experts in many of the issues they are dealing with as well.  There are more consultation processes, and the opinions of Civil Society are more taken into account, which is another point.  So we do feel there is more appetite from platforms and regulators to engage with us, but at the same time we don't want this to happen in a way where they outsource their own responsibility and say: we don't need to deal with the Human Rights aspects; Civil Society will do the work for us.

>> JUAN CARLOS LARA:  Thank you very much.  Ramir, you have the last word. 

>> RAMIR ALVAREZ UGUARTE:  Very quickly, I would say the following.  I think one of the biggest challenges is that to move forward with regulation or non‑regulatory measures, we generally have to do it in a context of deep polarisation, and that is always very difficult.  But at the same time, I think that context offers an opportunity, because in most democracies around the world there is a need to rebuild the public sphere and civic discourse.  There is a need to start talking to each other in a way that is respectful.  And even though that is difficult, precisely because of polarisation, that underlying need is still an opportunity, and we should take advantage of it.

>> JUAN CARLOS LARA:  Thank you very much.  And with that, our time is up.  Thank you very much to my fantastic panelists and everyone who has attended this session, and have a nice rest of your IGF.  Take care, everyone.

[applause]