IGF 2022 Day 3 Town Hall #52 Infrastructure-Level Content Interventions and Human Rights

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> JILLIAN YORK: Okay. Does that mean that we're ready?

>> Yes, we can see you now. Welcome to the room. Yeah, I'm turning it over to you.

>> JILLIAN YORK: Fantastic. Thank you so much. Thank you, everyone, who's there. I can sort of see you. Wish that we were able to be there in person.

So today, we're going to be speaking about Infrastructure‑Level Content Interventions and Human Rights. We have a number of panelists. I'll allow each to introduce themselves as we begin to not take up too much time.

I'm going to set up this conversation by identifying what we're talking about when we talk about the stack. We're going to be using that term quite a bit throughout this conversation. Let me just set the scene.

So most users and policymakers are familiar with all the social media platforms that we use: Facebook, Twitter, YouTube. Of course, those services don't make up the Internet. In fact, online communication and commerce depend on a wide range of service providers: Internet service providers, telcos, domain name registrars, Amazon Web Services, certificate authorities, payment processors, email, and so on and so forth. Together, that's what we talk about when we talk about the tech stack. It's a term borrowed from computer science and software engineering. Here, we're using it to describe the Internet as we know it: basically, the various players that make the modern web possible.
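(To make that framing concrete, here is a rough sketch, in Python, of the kinds of providers just listed, each paired with the sort of content intervention it can make. The pairings are illustrative only and simply echo examples raised later in this session; they are not an exhaustive or authoritative taxonomy.)

```python
# A rough, illustrative map of "the stack": each layer paired with the kind of
# content intervention discussed in this session. Not a complete taxonomy.
STACK_LAYERS = [
    ("Social media platform",        "remove or demote individual posts and accounts"),
    ("Payment processor",            "stop processing payments for a site or organisation"),
    ("CDN / DDoS protection",        "withdraw security and delivery services from a site"),
    ("Hosting / cloud (e.g. AWS)",   "take an entire site or service offline"),
    ("Certificate authority",        "revoke or refuse TLS certificates"),
    ("Domain registrar / DNS",       "suspend a domain or block its resolution"),
    ("ISP / telco",                  "block, throttle, or shut down connectivity"),
]

if __name__ == "__main__":
    for layer, intervention in STACK_LAYERS:
        print(f"{layer:28s} -> {intervention}")
```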

And so we will talk about a range of providers throughout this conversation. We're not going to define infrastructure precisely today. I think this is still an open conversation for many of us.

So I'm going to end the setting of the scene there and turn to my panelists. Today we have Corynne McSherry, Emma Llanso, Agustina Campos.

Let me kick off, when we talk about interventions, that's what this panel is about, what exactly are we talking about? Whoever would like to take this. I think Corynne, perhaps.

>> CORYNNE MCSHERRY: Jillian, you're cruel.

>> JILLIAN YORK: It's very early where you are. I can turn it to someone else if you want to get warmed up.

>> CORYNNE MCSHERRY: I'll give you an example of what we're talking about and why we worry about it. Infrastructure‑level interventions can include Internet shutdowns, which is a pretty dramatic example. One of the things we've seen the most is interventions that, at least at EFF, we think of as financial censorship‑type interventions. These are interventions where payment processors, on their own or under pressure from governments or private parties, will stop processing payments for a particular organisation. That can be mission critical depending on what the organisation is. If you're a non‑profit and depend entirely on donations and your payments aren't getting processed, that's it. You cannot exist anymore. It's very difficult for you.

And at my organisation, we started paying attention to this I think even 11 years ago. This is not a new phenomenon. It's just an accelerating one. We started paying particular attention to it when Visa and MasterCard stopped processing payments for Wikileaks under pressure from the United States Government as well as some private parties. And this was in response to some of the work that Wikileaks was doing that the U.S. Government in particular wasn't happy about. Still isn't happy about. One of the ways of pressuring them was to say, okay, you can't get funding anymore. Therefore, you can't exist anymore. That's a sort of basic kind of intervention that we've seen more and more, and I think at this point it's accelerating around the world.

>> JILLIAN YORK: Let me follow that up with another question. Any of you are welcome to take this. Feel free to jump in. So we got examples like Wikileaks historically and a lot of examples around this that concern what many of us would think of as problematic actors. Yet at the same time, I know from my experience there are users all over the world who are affected by this. So give us a little bit more. Why should we be concerned about content‑based interventions beyond platforms? Who are they impacting? Who else?

>> EMMA LLANSO: Yeah, I'm happy to jump in. My name is Emma Llanso. I'm the Director of the Free Expression Project. I'm based in Washington, D.C. Wish I could be with you all there at IGF in person.

So I think there are a lot of different kinds of intermediaries and technical infrastructure that might get invoked in doing content blocking. One thing that we have seen experimented with and pursued in different countries around the world including in the U.S. back in the mid 2000s is IP address blocking or trying to leverage the domain name system to block access to websites. So this is an idea of rather than going to the person who has created a website that might have illegal material on it, trying to go to other technical support that they absolutely depend on. You know, whatever domain name provider issues their domain name. Whatever ISP assigns their IP address. And telling those intermediaries who really don't have that connection to the content, who don't know kind of either where it came from, necessarily, or have any context behind what may make it illegal or what may make it actually lawful and misunderstood in some context. These intermediaries can really be a lot more sort of arm’s length from the actual content at issue. They often don't really have that many incentives to fight for the content. If they think that there is an overbroad request or request that's going beyond the boundaries of the law or what might be against their own terms of service. It was often easier for intermediaries throughout the stack to say, look, this content is causing us trouble. It's problematic. We don't really know about it. Don't know the background of it. We're just going to block it. We're just going to interfere with people's access to it. So that the people telling us that it's a problem go away. Or so that they tell us that this is not ‑‑ they stop bothering us about it.

Often, any individual post or website really doesn't matter very much to companies' bottom lines. They can block this content and not suffer any negative business impacts. And on the other side, they face potentially significant reputational risks for continued kind of fighting on behalf of content or continuing to try to keep content online.

One example I can give, actually, from the United States was a case that the Centre for Democracy and Technology was involved in back in 2005, before my time at the organisation. One of the states in the United States passed a law allowing a government body to issue IP address blocking orders and domain name blocking orders to ISPs. And it was framed around one of the most sympathetic and important issues, online child safety, and reducing and removing child sexual abuse imagery from the Internet. So the core goal and aim of the procedure and the law was very understandable. The technique they were trying to use was incredibly overbroad.

So in the court case of CDT v. Pappert, we demonstrated that the kind of blocking happening under this law was not only going after websites that were allegedly hosting child sexual abuse imagery but also taking down everything from people's personal websites where they were posting their resumes to the websites of small businesses like dentist offices. The way IP address blocking was being used was sweeping in a really broad set of other websites. It was a technique that was very difficult to tailor to one particular website, let alone one particular post or set of posts on a site. So that's just one illustration of how some of these technical intervention measures can really be overbroad, and not something that can be tailored, and how the impact on other speech that gets swept up in these approaches can be pretty significant.
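(As a rough illustration of the overbreadth problem described above: with name-based virtual hosting, many unrelated websites can sit behind a single IP address, so an order to block that one address blocks all of them. The minimal Python sketch below groups domains by the address they resolve to; the domain names are placeholders, not sites from the case.)

```python
# A minimal sketch of why IP-address blocking is overbroad: group domains by
# the IP address they resolve to. Any address shared by several names means
# that blocking it would cut off every co-hosted site at once.
import socket
import sys
from collections import defaultdict

def group_by_ip(domains):
    """Resolve each domain and group the names by the IP address they share."""
    by_ip = defaultdict(list)
    for name in domains:
        try:
            ip = socket.gethostbyname(name)
        except socket.gaierror:
            continue  # skip names that do not resolve
        by_ip[ip].append(name)
    return by_ip

if __name__ == "__main__":
    # Pass any domains on the command line; these defaults are placeholders.
    domains = sys.argv[1:] or ["example.com", "example.net", "example.org"]
    for ip, names in group_by_ip(domains).items():
        print(f"{ip}: {', '.join(names)}")
        if len(names) > 1:
            print(f"  -> blocking {ip} would also cut off: {', '.join(names[1:])}")
```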

>> JILLIAN YORK: Thank you for that, Emma. Yeah, sounds like the technique, we're talking a lot about the technique here. Are there other examples anyone else would like to bring up? Otherwise, I got more questions for you. Feel free to jump in, though.

>> AGUSTINA CAMPOS: Hello, everyone. I'm Agustina Campos. I direct the Centre for Studies on Freedom of Expression. We're a regional centre based within the law school in Buenos Aires. So good to be here and be able to share this space with you.

So I think there are a lot of examples. We could spend the entire day here just mentioning different examples from around the world. I guess what I wanted to highlight was an issue that comes up in many of the examples we've seen, at least from the Latin American region, which is that a lot of the time these measures are jurisdictionally overreaching. And because of where the intervention happens in the infrastructure stack, there are a lot of examples, particularly when we think about the payment processors that Corynne was mentioning, where the measure does not only have an impact within the jurisdiction of a country or of a specific location, but actually has an impact outside of that jurisdiction. And this is particularly relevant when we think about interventions that may be mandated by courts, or by different national authorities of different countries.

A lot of times, we have these decisions that are usually preliminary decisions, taken before there's a ruling on the final outcome of any case. They're taken to prevent damages. And many times without realizing that the impact of the individual case, and the individual measure adopted for the individual case, will have far‑reaching effects very much beyond the borders of the municipality or the city or the province or even the state where the order is being issued.

And the example that I can cite is the set of decisions regarding Uber, which is really interesting. There were a number of mandates to block payment processors linked to the company. In places where there was a contention about the legality of the functioning of this company within the jurisdiction of the country, a lot of different courts in different places around the region ordered payment processors to stop processing payments for these companies. So at the end of the day, credit cards from certain countries cannot be used to pay these companies even outside the borders of the countries where the decision was ordered.

And many times, there's no recourse to those decisions. And no one to answer questions about the depth, the scope, the reasons behind that overreaching effect.

>> JILLIAN YORK: Thank you. Thank you. That's exactly what I was going to ask next, which is looking at this as a problem with a lot of collateral effects. Both in terms of jurisdiction and in terms of, you know, unintended consequences. Is there anything, or what can speakers or users do if they find themselves on the wrong side of a content intervention on one of these ‑‑ or with one of these companies or platforms? Are there any appeals processes available? Or, you know, what else can you tell me there?

>> CORYNNE MCSHERRY: Yeah, I can speak to that. That's the problem. Very often, some of the people who are affected don't have an option. They may not have a relationship to the company that's doing the intervening at all. That's one problem. Also depending on the service, the service in question may have very little reason or motivation to be careful or even to have an appeals process.

Another thing that can happen, which is why we worry about these things, is that sometimes things will simply disappear from the Internet and users don't know that they're not there. You may not know what you don't know. You may not realize something has been taken down and is no longer available. There isn't necessarily a notice left behind to tell you, like, there used to be a website here. Now there is not. It had X content.

We were talking a little bit about how these kinds of interventions can often be overbroad. It's also very often the case that there isn't any obvious appeals process. So maybe you can appeal by making noise on social media, and maybe the company will pay attention to you, depending on whether you have power. But if you don't have power and don't have, you know, access to broad influence of some other kind, you don't have a ton of options, as a speaker, as a user, as an audience, as a person who wants to access content, to be influential. Much less as a speaker, but also as someone who might be interested in being an audience for that speech. This is the reason why it's very worrisome.

One other thing I want to flag, we're using the term, content intervention. That's deliberate. At the platform level, we talk a lot about content moderation. And usually what we're referring to there is that platforms can ‑‑ they're not very good at moderating often, either. And make a lot of mistakes. They have relatively or comparatively precise tools for targeting particular speech. Sometimes the largest ones have appeals mechanisms. They're janky. They're inadequate. But they exist.

At other levels of the stack, the ‑‑ it's not really moderation. They don't have a lot of choices. They have very blunt tools. They don't have appeals processes. They sort of intervene. It's an on/off switch which can make it very, very difficult or makes the collateral damage of the decisions that they make much more significant.

I should have said, by the way, my name is Corynne McSherry. I'm with the Electronic Frontier Foundation. I'm the Legal Director there. So ‑‑

>> JILLIAN YORK: Thank you, Corynne. That's an excellent point as well. I want to come back to something I think Emma had mentioned. I think we should dig down deeper into this. Where do states come into all of this? I know when we're talking about content moderation, we've seen a lot of troubling interventions. Content moderation on speech. User‑generated content platforms. Social media, more or less. We've seen troubling interventions in the past few years. I think we're going to see a lot more. Where do states come into this equation and what can we do there?

>> EMMA LLANSO: I'm happy to chime in. I wasn't sure if Agustina wanted to come in on another point.

>> JILLIAN YORK: Sorry if I missed a point. I apologize.

>> AGUSTINA CAMPOS: No, no worries. Thanks, Emma. It was very subtle. No. I just wanted to dig a little further on the notice thing that Corynne was mentioning. A lot of the thinking around content intervention by these other companies treats the problems that we see as easy to resolve. But they most often are not that easy to resolve. It's not just a question of setting up an appeals process or mandating notice.

And related to what you were mentioning just now, many times what we've seen are state interventions that are confidential, per se. Where the state mandates that there not be a notice. And this is particularly problematic when we think about content moderation on any platform. But especially problematic when we project that intervention to other pieces of the stack. Like Corynne said, web pages, entire sites just disappear. And you don't know what you don't know, basically. And you can't find what you don't know is there. In any other alternative way.

So it's a lot of times not that easy to resolve.

And the second point is a lot of times you don't know whose intervention it was in the first place. There can be a lot of confusion. One example that I've been thinking about a lot lately is a Colombian constitutional case that is being argued that deals with access to information. They're basically asking the Technology Authority of the country to be clear, investigate, and provide information as to who are the companies involved, what happened, and what was the role of the state in content disappearing during the protests in Colombia a year ago. These were parallel to the Palestinian protests that the Oversight Board addressed for Facebook. And there you had content moderation being done on Instagram. You had problems with telcos and signal. You had alleged Internet shutdowns. And there were a number of allegations related to signal interventions by the state at different points.

So you have a number of different companies involved in this. And the purpose of the case is precisely that you should be able to investigate and tell us what happened, what it was, how this worked. Bottom line, there were allegations of censorship, but people didn't know where to point them. They were pointing them everywhere. I think it's a pretty good example of how in one given situation, it can be really hard to see what exactly is going on, or who is intervening, when you have a number of different actors that you don't know about facilitating your Internet experience.

>> JILLIAN YORK: Thank you. Thank you very much. No, that's very helpful.

Emma, did you want to add to that?

>> EMMA LLANSO: Yes. Yeah. Commenting on that and the question you asked about the role of the law and states in this. Because I think, as Agustina is pointing out, when we think about what might happen between a user posting content and a user trying to access that content online, there are so many different intermediaries. Any one of them might be intervening in some way. Frankly, no one's doing it great, even if we're thinking about the content moderation that happens at the application layer. So you end up in a situation where it can be a big mess and, as Agustina is pointing out, really difficult to understand who you could potentially hold accountable for actually taking action against speech in violation of your rights, or for doing some kind of investigation or monitoring of your communications that might violate your rights to privacy. So I think this is part of where we are often pushing for and trying to have more clarity on what are the specific intermediaries that are making an intervention. That's going to be really crucial to ultimately, you know, not just figuring out what's going on in the moment but to really vindicating our rights, whether that's in court or pushing with our legislatures to try to clarify what is and isn't acceptable for different kinds of intermediaries to do in ways that might affect our human rights.

And another just thought on this whole question of, you know, trying to understand what intermediaries are even intervening in the first place. A lot of good examples in this come from all the debates that happened worldwide around net neutrality in particular. The questions about whether, by law, ISPs in particular should ever be allowed to do content blocking or filtering or throttling. And that last one, in particular, can be incredibly difficult for the end user to notice. If you're going to a website or social media service, and you're trying to load a video, somebody has posted here's a video of the protests that were happening last night, or here's a video of my really cute cat, and the video's not loading. You don't necessarily know: is that because the website isn't working, or is it because deeper into the stack your ISP or your access provider has decided that's too much data to transmit over the network right now, or we're not sure about the copyright behind that video, and so we're just going to make sure that it kind of never loads for you. But in a way that's totally not transparent to the user. That's the kind of thing that also interferes with people understanding just what's happening to them online, and with trying to advocate for things like net neutrality laws that say ISPs aren't allowed to throttle or aren't allowed to engage in that kind of selective delivery of content.
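(As a rough illustration of why throttling is so hard for an end user to pin down: distinguishing a slow website from a throttled connection takes deliberate measurement, for example comparing transfer rates from a suspected endpoint against a control. The minimal sketch below assumes you substitute your own URLs; the ones shown are placeholders, and a real measurement would need repeated runs, comparable file sizes, and care about caching and server-side variation.)

```python
# A minimal sketch of comparing download throughput from two endpoints, to
# illustrate what it would take for a user to even notice selective throttling.
import time
import urllib.request

def throughput(url, max_bytes=1_000_000):
    """Download up to max_bytes from url and return observed bytes per second."""
    start = time.monotonic()
    received = 0
    with urllib.request.urlopen(url, timeout=30) as resp:
        while received < max_bytes:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            received += len(chunk)
    elapsed = time.monotonic() - start
    return received / elapsed if elapsed > 0 else 0.0

if __name__ == "__main__":
    control = "https://example.com/"  # placeholder control endpoint
    suspect = "https://example.org/"  # placeholder suspected-throttled endpoint
    for label, url in [("control", control), ("suspect", suspect)]:
        print(f"{label}: {throughput(url):,.0f} bytes/sec")
```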

And I think copyright and questions about Internet access providers have been a big source of a lot of some of the kind of legal proposals, at least. Or some different legal action that has happened around the world. In the United States, about I guess ten years ago, we had the big fight around the SOPA and PIPA laws that were looking to try to create a new law in the United States to leverage the domain name system in the service of controlling access to ‑‑ material that infringes copyright. It turned into an enormous debate in the U.S. and really got attention from around the world because it was just squarely putting this question on the table of very powerful interests in the copyright industry wanting to find ways to ensure that their rights and copyrights were protected. It created security issues and vulnerabilities. Created significant privacy concerns. Also raised a lot of free expression concerns around just normalizing this mechanism of using the domain name system as a way to decide not just individual videos or posts or pieces of content were inaccessible but entire domains could be kind of yanked from circulation through a legal process.

So that legislation was ultimately defeated. We've seen other things ‑‑ this was ten years ago. There was the graduated response or "three strikes" policy in France, where the French Government was looking at a way to basically give people notices and some warnings and, ultimately, to terminate their Internet access if there were enough allegations that they were using their Internet access at home or at a business to engage in copyright infringement.

We've also seen, for example, in the United Kingdom, laws around requiring Internet Service Providers to filter out by default websites that lead to pornography. Lawful content, by and large. The claim is not that the information is, per se, illegal for anyone to access. In fact, it's lawful for adults to access. It's about child protection: wanting the default to be that people cannot access pornographic websites and would need to call their ISP to say please unblock these websites so I can access them. That has chilling effects on people's willingness to call up an ISP and say please give me access to the pornography. It's another example of how some different regulators and legislators around the world have already definitely been looking at how they can leverage different parts of the stack, especially this concept of network‑level filtering, as a way to do content control when they can't get to the content host or the people at the end points who are actually putting that content online and who could be more directly responsible for it.

>> JILLIAN YORK: Yeah, I'm really glad you raised these examples. My next question was going to be about the key design principles of the open Internet. I'm glad you mentioned SOPA and PIPA. I'm going to turn to Corynne.

>> CORYNNE MCSHERRY: We talked about how the blocking can be overbroad. We talked a little bit about the problems of notice. But a related problem is that, let's say you had all the notice in the world. Do you have a choice? Depending on the kind of service that you're talking about, it may be that ‑‑ this is why net neutrality is often, you know, a very compelling argument for many people. It's because in many places around the world, if your service provider, your ISP, chooses to block X, Y, Z site or terminate your account, et cetera, if it chooses to intervene based on content, you can't just vote with your feet, as we say. You don't have a choice, another ISP you can turn to. So I think that's an important part of this: even if you know that blocking is happening, for most users there's not a lot you can do about it. Other than maybe try to make noise or, as Emma said, call up your service provider and say, hi, I'd like access to this content. Which most people are probably not going to do. Or there might be a situation where we've got, say, Amazon blocking many Iranian users because of their particular interpretation of U.S. sanctions, which is not necessarily legally required, but it's Amazon's interpretation. So AWS is cutting off many Iranian users. Well, they don't have a lot of ‑‑ I mean, there are some choices. But there's a competition problem here. There are some companies that have particular power and influence. The choices they make are tremendously influential. For many users, they may not be in a position where they can exercise other options, either because they're not technically able to or otherwise. So, again, it's not just notice. It's also what can you do about it?

And at the sort of beyond the platform layer, your options may be quite limited.

>> JILLIAN YORK: That's a fantastic point. Actually, I'd love to just throw one question that just came to mind at all of you. Has the pandemic exacerbated any of this, do you think? Are there examples, specifically, from the past few years that we should be concerned about?

>> CORYNNE MCSHERRY: One that I think about is the ways in which, you know, during the pandemic, for education and for conferences like this, what are we using? We're all using Zoom. We're all using YouTube video services. I think that there's a way in which these ‑‑ there are many different kinds of services, so there is sort of competition in that sense. But it's also true that some of them, you know, are more or less janky than others. We sort of may depend on them.

There have been instances where Zoom and YouTube have made decisions that they won't even host particular kinds of speech. They won't allow speakers, some speakers, to use their services. This is also based on, I think, a flawed interpretation of sanctions laws, or based on pressure that they may get from private parties. As a result, as a practical matter, it means some kinds of speech aren't going to happen.

Again, I think Zoom would think of itself as more of a platform than something that's in the stack. For me, we have a situation where basic educational services depend on a conferencing service. You know, that feels pretty critical. Obviously, those services become critical. And when they make choices about what kinds of speech they're going to allow via their service ‑‑ sort of if the telephone company said, okay, only some people get to be on this conference call, I think we would think that is pretty problematic and unacceptable. To me in the pandemic, the choices that Zoom makes, for example, about what content it's going to host or not, is essentially the same thing.

>> JILLIAN YORK: Thank you, no, that's really helpful. It sounds like a lot of this ‑‑ that's why we view this as the stack. There are different services at different layers. Ultimately, some are more critical than others. It may be contextual.

Is there anything you'd like to add, Agustina, Emma?

>> AGUSTINA CAMPOS: Building on what Corynne was saying. So many areas of our life have been moved online and depend today on any number of different platforms, companies, and actors that facilitate our everyday connections. And I'm thinking about regulation initiatives and how they are thinking about content intervention in many places in the world. And there are very few of those proposals that are actually thinking about, one, distinguishing these kinds of platforms, companies, and intermediaries from other platforms and intermediaries. Most of the time, I don't think our regulators or legislators are thinking about these actors that we're talking about. However, a lot of the legislation that is being thought of and prepared on intermediaries will, or might, apply in very different ways to these many different actors.

The other thought that jumped to my mind as I was listening to Corynne was how these intermediaries that we're talking about a lot of times are not public facing in the same way a social network may be. So you don't have the same experience when content is moderated on an online platform that is public and available, where you have a public or semi‑public conversation, versus content interventions within platforms that are not public facing, where you alone, the user, can see or detect that something's wrong. Which curtails the possibility of others noticing it for you. Going back to: you don't know what you don't know. A lot of times, other people are identifying content moderation mistakes. Organisations can be monitoring this in public‑facing platforms or intermediaries. It's quite hard when the platforms or intermediaries are not public‑facing. What happens when content disappears? What if your cloud‑computing system is denying service, and all your files are there? You're alone, or more alone than with public‑facing companies, in facing the intermediary, or the state that is behind the intermediary. Because let's be honest, this happens a lot.

>> JILLIAN YORK: No, thank you for that. I'm recognizing we're getting close to the 20‑minute marker. We have a couple things to share. I want to ask one final question before we share something and then turn it over to all of you in the audience for questions. Which is: if companies aren't going to refuse orders, if they are going to respond to public pressure or to governments, particularly to unlawful orders, of course, what can companies do to ensure that users are at least protected and made aware of what's happening? What are your recommendations?

>> EMMA LLANSO: Yeah, I'm happy to jump in there because we've spent a lot of time talking about the significant risks and concerns. And there are very many. I do think we have to acknowledge ‑‑ as I think a couple people posted in the chat already, there are a lot of infrastructure providers that are already making at least ad hoc decisions or case‑by‑case decisions and intervening in different ways. And kind of taking action.

The elephant in the room, I think, is Cloudflare and the actions they've taken against Kiwi Farms, a horrible site filled with anti‑trans and hurtful material. There was an enormous campaign from a lot of activists around the world to say this website is so incredibly toxic, it shouldn't be available to anyone on the web, and Cloudflare should intervene and withdraw the technical services it was making available to Kiwi Farms. They took action there. The action that they took definitely raises a lot of the concerns we've been discussing on this panel.

You know, you can understand why they would intervene. Technical intermediaries do have leeway to make these decisions and develop things that are effectively content policies or user terms of use. They can say certain material might be over the line, or just something they don't want to associate their company with.

When that happens, I think for all the reasons that we've been talking about, the kinds of recommendations around transparency and prior notice to users of what the rules and standards actually are become really crucial.

There's a big risk in any kind of content moderation, let alone infrastructure‑level content intervention, of arbitrary and ad hoc decision‑making. And that kind of system or that kind of policy for any company is vulnerable to abuse. Vulnerable to different people in the company making semi‑random decisions. Vulnerable to being leveraged by government actors or targeted by advocacy campaigns in ways that aren't necessarily principled or consistent or predictable to users who maybe are trying to decide what kind of infrastructure providers they want to associate with.

So I think it's important for any company throughout the stack that is thinking about taking interventions against content to actually, before doing so, or before those moments of crisis where they're facing some really strong pressure from a government or a vocal and passionate advocacy campaign, have thought through what their standards are. What are the lines that a site or service might cross? Articulate those. Articulate them publicly. Actually think them through internally at the company so that when the time comes it's not a rushed and ad hoc decision, but there's actually some justification for what they're doing. Otherwise, my concern is that taking action against some websites or some content for undefined reasons leaves that same intermediary open to enormous amounts of pressure to just keep going. And that the ratchet of censorship only ever turns in one direction. And the same intermediaries will just be pressured on more and more different types of content in ways that, again, for all the reasons we described, would probably end up having really significant impacts on lawful, protected speech that users are trying to share over their services.

>> JILLIAN YORK: Absolutely. So consistency is key. I think from my observation, we've certainly seen more transparency or at least communication to the public around some of these pretty heinous examples than we have from the other examples from elsewhere in the world.

Agustina, Corynne, is there anything you want to add there? Otherwise, I'll make a small statement and turn it over.

>> MALLORY KNODEL: Jillian, can I make a comment? I've been quiet. You've made a case for why this is so important. I think what matters a lot is that even after the decision is made, which I think is where a lot of the panel focuses, there are good ways and bad ways to do that. Also because I think one of the things we want to avoid is, like, the collateral damage to the Internet or to users or to situations outside the current jurisdiction that aren't actually part of the blocking or filtering effort.

So to that end, there's actually a really useful RFC. This is an IETF document written by a former CDTer, Alissa Cooper and many different people from ISOC and so on that is focused around technical considerations for actually how to do it. It gives you a rundown of different ways of filtering and blocking all down through the network. In a very helpful but pretty densely technical way. I think the reason why they did this is to actually just try to help companies understand, like, okay, you've decided you're going to block something. Here's the best way and the least damaging way to do so.

I think it could also, perhaps, if there was a bridging document or some kind of translation effort between some of the really important policy considerations that you're bringing forward and this technical document, that could also help governments make the right decision when they're trying to define how some of this should be done and not, you know, create a situation in which companies are forced to overblock or do collateral damage to the Internet.

I just wanted to share that as I think something that, you know, should be brought into the decision. Because whether or not we agree, again, with the decision‑making around if something should be blocked or not, it could also be something as innocuous as spam or malware. Things that companies are definitely blocking and we want them to continue blocking. Maybe that doesn't extend to, like, image sharing and things, which I would agree with. But nonetheless, that could be potentially useful for this conversation.

Oh, yeah, I put it in the chat as well. For people in the room who don't have the chat link, it's RFC 7754.
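(RFC 7754 surveys blocking and filtering techniques at several points in the network. As a toy illustration of one of them, resolver-based DNS blocking, the sketch below shows a filtering resolver that consults a blocklist before answering, so everything under a blocked domain becomes unreachable for users of that resolver. The blocklist entries are placeholders, and this is not code from the RFC itself.)

```python
# A toy sketch of resolver-side DNS blocking: check a blocklist before
# resolving, so any name at or under a blocked domain fails to resolve.
import socket

BLOCKLIST = {"blocked.example"}  # placeholder entries, not real targets

def is_blocked(hostname: str) -> bool:
    """True if hostname equals or falls under any blocklisted domain."""
    labels = hostname.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

def resolve(hostname: str) -> str:
    """Resolve hostname unless policy blocks it, mimicking a filtering resolver."""
    if is_blocked(hostname):
        raise PermissionError(f"{hostname} is blocked by resolver policy")
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    for name in ("example.com", "blocked.example", "sub.blocked.example"):
        try:
            print(name, "->", resolve(name))
        except PermissionError as err:
            print(name, "->", err)
```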

>> JILLIAN YORK: Thank you very much for that. I appreciate it. I apologize for missing your hand there.

Are there any quick closing thoughts from other panelists? Or may I make a small announcement then go on to our Q&A?

Okay. Excellent. Well, there will be time for closing thoughts. Oh, sorry, did you have a hand? I apologize. It's hard to see. Okay. Great. Excellent. There will be time for closing thoughts at the end of the Q&A.

So what has this been leading up to? We do have something to share with you today. That is a website, a Civil Society statement. I'm dropping it in the chat now. That a number of us have worked on over the past year, including a number of others who could not be here with us today either in person or virtually, but whose contributions are ‑‑ that we're very grateful for. That was a tough sentence. And this statement has 57 signatories from all over the world. You can see them on the website. The website is also available in a number of languages. So that's protectthestack.org for those who can't see the chat. It's out today in Arabic, French, Spanish, German, Hebrew, Hindi, Portuguese, and English, of course. We welcome your feedback. You're welcome to get in touch with us. Feel free to email EFF about this. We can put you in touch with the right person. Yeah, happy to share that with you today. Take a peek. And happy to turn it over to Q&A.

>> CORYNNE MCSHERRY: Just to add on to that quickly for those in the room who can't get quick access to the statement, to fill out a little more of what it's doing: a number of different Civil Society organisations have come together to sort of explain, for lawmakers and for companies and for ordinary folks, basically a lot of the considerations that we discussed here, and to urge lawmakers and to urge companies to resist the urge to intervene, and to explain why. It will vary depending on the service. But it's explaining, trying to give more detail into, the collateral damage that we have been seeing, so that people can have a sense of why this is a bad idea, why companies should be resisting pressure to intervene, and why lawmakers and governments should not be putting that pressure on companies and others. Hit pause before you decide the next thing to do is pressure companies up and down the stack to take down content. Just to flesh out a little bit more what we're doing here.

>> JILLIAN YORK: Thank you for that.

>> CORYNNE MCSHERRY: Apologize for interfering with the Q&A. I'd totally love to talk to the audience.

>> JILLIAN YORK: Not at all. Thank you very much for elaborating further on that. It had not occurred to me that there might be folks who can't get to the link from there. Do we have any questions? I can't see the room. I think Mallory can. There's also the opportunity to ask questions in the chat as well.

>> MALLORY KNODEL: There's a question, go ahead.

>> Hi, I'm Masayuki from Japan. Let me share a recent story with you. I think it was two months ago or something. There was one of the biggest adult content online shops in Japan, and MasterCard ‑‑ they couldn't accept MasterCard anymore. The rumor has it that MasterCard thinks they are selling child pornography. Actually, they're selling Japanese manga or anime that is being treated as child pornography, which it's not, because these are, after all, cartoons.

So I think, as many panelists have already noted, it's problematic when (?) these judgments are not transparent in this form of content moderation. It's a very big problem.

Also, I'd like to point out that getting around both DNS blocking and payment regulation or restriction is technically possible even now. Remember that cryptocurrency was originally developed for that reason. Also there are a lot of new DNSes.

So the reason currently no one uses them ‑‑ or only a small number of people use them ‑‑ is because they're inconvenient. If there's a strong demand or no other way, I'm pretty sure those technologies will become very developed and easy to use.

So I think content moderation is not really separate. You can use it, but maybe, how can I say, more bad actors will flourish. Thank you very much.

>> JILLIAN YORK: Thank you very much for that comment. Yeah, I think EFF just recently became aware of that case. My recollection is that it's U.S. credit card companies and PayPal ‑‑ U.S. payment processors ‑‑ that are potentially making these demands upon Japanese websites. Yeah. Sorry.

Is there anyone who'd like to comment on that? A broader comment here.

>> EMMA LLANSO: Just to chime in and say that I hadn't heard of this particular example, but unfortunately, it doesn't surprise me. We've seen a lot of this kind of looping in of lawful sexually‑oriented content under the banner of calling it child pornography or saying that it's sex trafficking. And we've seen that kind of pressure targeted at some of the same payment processors in the U.S. and other jurisdictions around the world. It's also, I think, a really important signal that as we think about how to advocate to these intermediaries with a campaign like Protect the Stack, we need to realize that the same intermediaries, like a MasterCard or Visa, are facing pressures from so many different jurisdictions all at once. And that we also need to be really coordinated as Civil Society to even understand what all these different campaigns or pressures are. Because, you know, it's very hard to convince a credit card company to stop doing certain kinds of blocking of payment transactions when they're hearing from regulators across 50 different countries that they need to take stronger action against child pornography.

So not disputing the goal of taking stronger action against child pornography. I think a case like the one that the speaker in the room raised really kind of points to the lack of nuance or the lack of understanding of context that would help a company like MasterCard understand that there's not actually unlawful content being served on the site. And so that as much as they want to take action against child pornography, what they're doing in this particular case is probably overbroad.

>> JILLIAN YORK: Thank you for that, Emma. I see we've got another question. It's in the chat, but I'm also happy to read it out in case not everyone can see it. Should we have a global proportionality and transparency framework or code of conduct for infrastructure providers at all levels of the stack, governing how such measures should be used, given that content intervention by these providers is expected to increase rather than disappear completely? And it mentions that that is particularly relevant since most infrastructure providers are U.S.‑based. Oh, and sorry, one more addition to that: if so, what form would that take and how would the multistakeholder community get involved?

>> EMMA LLANSO: Yeah, I'm also ‑‑ I'm happy to chime in on this one. And I think that's a really important question, and one that is really good to discuss at something like IGF. You know, I think we are already starting to see some parts of the technical community work out their own frameworks. So a collection of DNS providers put together ‑‑ I forget exactly what the name is, I need to pull it up ‑‑ the DNS Abuse Framework, where collectively, I think, over a dozen different DNS providers talked through what their policies and practices and sort of standards are, and when they might intervene and when they don't.

And my understanding was that that was developed as a, like, industry or DNS service‑only conversation. I don't actually know how much consultation there was. It wasn't the broadly public multistakeholder, you know, totally out in the open kind of conversation that I think you're sort of alluding to in your comment.

And this is not necessarily a critique of that framework. But just to note that I think it reflects that segment of the infrastructure community. Realize that they, themselves, needed and wanted some guidelines and some structures and some policies to follow. So they went and they made some for themselves. That's really understandable. But, you know, I agree with the question that if these kinds of standards are going to be developed, having them done in a multistakeholder fashion where we can actually really think through all of these different kinds of examples, all of the different scenarios and situations that are happening in so many different parts of the world, what kinds of pressures are more relevant to different parts of infrastructure providers? Is it social pressure? Is it financial pressure from, say, advertisers that don't want you to process their payments if you're also processing payments for this other website? Is it government pressure, primarily? Getting all of that context in place is going to be really important. I think that could be best achieved through an open multistakeholder process.

>> CORYNNE MCSHERRY: So I think that's true. However, part of why it's particularly difficult is that if you want to have a multistakeholder process and you really want it to be global, if you're going to involve Civil Society, you've got to figure out how to make that participation meaningful and influential, and how you're going to pay for it, just at a resource level.

One of the things we've seen, one of the reasons I'm excited about the Protect the Stack statement, is that it's signed on to by so many international groups. People outside of the United States. Outside of the EU. From the majority world countries. Civil Society representatives from those regions. Because they experience these kinds of interventions differently. And so, really, a full multistakeholder process involves getting a lot of different kinds of voices.

And as we all know, as many of us know, getting that kind of Civil Society participation isn't easy because people are busy. And they can't afford it. Or, you know, they don't have the resources necessarily to participate in endless ‑‑ feels like endless multistakeholder processes.

ICANN, in particular, can be very, very challenging for Civil Society to engage with.

So I think part of what we need is a real commitment, if it's a multistakeholder process, to one where Civil Society genuinely can be influential, so we don't feel like it's a waste of our time. If we can't have that, maybe the better default would be to push for not doing those interventions at all, as opposed to having a framework for doing them: how do we limit the interventions as much as possible, given the difficulty of having that kind of multistakeholder conversation?

>> MALLORY KNODEL: We have a question in the room and I want to put myself on the stack with a question.

>> Thanks. I just had a look at the ‑‑ because this case started interesting me. The Japanese case of the fictional child pornography. And so I started to think: when should MasterCard ban or stop providing a payment avenue if there are many countries that find it illegal? And there's a good list of quite powerful countries here that do find it illegal. So, I mean, at what point is it okay for MasterCard to do this?

>> JILLIAN YORK: Would anyone like to take that one? I know we've only got a couple minutes left.

>> CORYNNE MCSHERRY: I'll get us started. I think that if a company receives a lawful order that may put them in a position where it's difficult to do something other than obey the order from a court, but I think the particular concern that I have, anyway, is not so much when they're doing it in response to an actual order. I mean, that has its own problems. At least, hopefully, there's a court overseeing something. It's when companies are taking these steps sort of voluntarily on their own in response to more subtle pressure. I think it, at a minimum, the first step should be you don't need to intervene unless you actually have a legal requirement to intervene as opposed to doing so voluntarily. Like, that seems to be sort of your minimum standard.

Again, when we're talking about something that otherwise appears to be lawful content.

>> EMMA LLANSO: I'll just chime in to point folks to one resource you might be interested in checking out. The Internet and Jurisdiction Network tackled a version of this question in trying to look at cross‑border actions against content, or actions against content that have cross‑border implications. Trying to identify, you know, are there areas where there's enough global normative coherence around what kind of content is illegal and/or should be prohibited. I can say from having participated in the working group and conversations, it's difficult to find areas where it really feels like there's global consensus that certain content should be blocked. I would say that child sexual abuse imagery is one of those few areas. But I think what we're seeing here in this question of the website in Japan is that even with CSAI, there are edge cases, or areas where there's less consensus, for example, around illustrations that don't actually involve an actual child to produce them.

So I think this question is difficult enough when we're talking about it at the content moderation and application layer: when is there agreement that certain content is, and ought to be, just forbidden worldwide? I think the ability to find that consensus gets really hard as we look further down the stack, and, as we've been discussing, the potential impacts of those decisions have potentially even broader ramifications. So I think it's a very difficult line to try to walk.

>> JILLIAN YORK: Thank you, Emma. Mallory, I know you had a question as well. I'm also aware of time. You're in the room so you can let us know if we have time for this. If so, please feel free to jump in.

>> MALLORY KNODEL: Really quick, then we can clear out. It's similar to the previous question. Do you feel like there needs to be more of a line drawn between content moderation of end-user-generated content versus the kinds of interventions where it's more of a contractual relationship? Like, one of the reasons why there's always going to need to be a DNS abuse framework is because those are contracts. You have a contract with somebody for their domain. If they're abusing it in very illegal ways, you have to not have that contract anymore.

It also goes for advertising versus user‑generated posts. Or, and I guess this speaks then to the credit card thing, like, where there's a contract, there is some legal arrangement. And I do feel like there are different interventions for those cases than there would be, for example, for social media posts, which I would definitely agree your framework makes sense for ‑‑ the latter case.

>> AGUSTINA CAMPOS: Just, if I may, I think your question is really interesting, as well as the prior question. I think it goes back to an issue of scale from a freedom of expression point of view. A lot of times, what payment processors or other companies that are higher in the stack are doing is basically blocking content based on a prior judgment that that content doesn't conform to a certain rule. And if we look at it from a contractual perspective, you're looking for illegal content, basically, and you should be looking for illegal content only. And not necessarily this category of harmful but legal kinds of content. Which is really a lot of the examples that we have seen: from self‑regulation policies, from companies taking the initiative to do this thing, it's content that is harmful but perfectly legal. It's presumably legal content that they are taking down.

And the question from a freedom of expression point of view is: who's better suited to make that judgment? And who's better suited to offer a proportional and necessary response, reparation, or sanction? I think most of the cases that we cited point to potentially very disproportionate sanctions, responses, and actions. And really high costs for users to move services, or no alternative at all to these services that are higher in the stack.

The other thing that I think is particularly relevant for this conversation is that the self‑regulation framework is being heavily questioned around the globe. And although I think it's really useful that we build a common stance, I still think we need to push harder on the limits for states to order or incentivize this kind of content intervention. Because I think that's where we are shaping the incentives for self‑regulation right now. We're debating with regulators, with legislators, with drafters, what the incentives for the next couple of years will be for this self‑regulation that companies are doing.

So my recommendation would be to look at that. To make concrete recommendations for states on how to approach this issue.

>> MALLORY KNODEL: Thank you so much. We have to close out in the room. Jillian, I'll let you close the room.

>> JILLIAN YORK: Thank you so much. Thanks for staying the extra time. Thanks to all the panelists. This was fantastic. For everyone who asked a question or participated, feel free to check out the site. Feel free to get in touch. Thank you very much.