IGF 2019 – Day 3 – Raum V – WS #150 Tackling Hate Speech: A Multi-Stakeholder Responsibility

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR: Good morning, ladies and gentlemen, and welcome to this workshop on tackling online hate speech, a multistakeholder responsibility. My name is Sabrina Vorbau. I'm very delighted to welcome you to this workshop, which is organized by the Insafe network across Europe in cooperation with the Federal Ministry of Justice and Consumer Protection in Germany. In this regard, I would like to thank Dr. Alexander for the great collaboration in preparing this workshop. The workshop will kick off with a panel discussion of about 30 minutes. In this regard, I'm delighted to have been joined on stage by Mr. Thomas Blöink, Ms. Sabine Frank, and Dr. Marc Jan Eumann from the state media authority, vice chairman of the Commission for the Protection of Minors in the Media and coordinator of the Safer Internet Centre in Germany. You will see on the slide that we were supposed to be joined by Mr. David Kaye. Unfortunately, due to a last minute emergency, he is not able to join us, but he obviously sends his warm regards.

Following this panel discussion, you will be able to discuss amongst yourselves, as we will host four different table discussions with different stakeholders in the field to discuss different aspects and ways to tackle hate speech online. For now, I'm happy to hand over to Dr. Marc Jan Eumann.

>> MARC JAN EUMANN: Thank you, Sabrina. Ambassadors, panelists, dear colleagues from around the world, and the audience at this early hour at the IGF in Berlin: a warm welcome to you all.

With the organization of people into groups and the connection of people on networks, online hate speech is increasing tremendously in an often uncontrolled, unregulated environment. Freedom of speech is one of the principal pillars of a modern and open society, and I think we all agree that it is one of the biggest and greatest treasures, which we all should work to protect and guarantee. However, to avoid the disruption and impact hate speech can clearly inflict on our societal cohesion, clear guidance and action are needed.

More than 3,000 participants are coming together here to speak up, to meet, and to exchange views on the online society. The global multistakeholder approach of the IGF is unique and guarantees the diversity we are living in.

Hate speech is like a cancer in this colorful environment. It is a threat to democracy. How we can tackle hate speech is a big question. Who is, and who should be, responsible for counteracting online hate?

The issue is huge and media literacy plays a prominent role, but my message is short and simple: I think we need both, regulation and education. Mr. Blöink worked on the Network Enforcement Act, and a whole package of measures against hate speech was adopted by the cabinet just a month ago. He will give us an update on the Network Enforcement Act.

I may add that we at the state media authority of Rheinland-Pfalz are not only advocating for young people but are also pursuing an initiative that means prosecuting as well as deleting. I'm very delighted that you joined us on this panel.

And Mr. Chan-jo Jun was the attorney at law in one of the most prominent cases of recent times, when you represented a client against Facebook. This case attracted high public attention. We are very happy to have you here, and I'm looking forward to hearing about this case. From a neighboring country, from Vienna, I would like to welcome Ingrid Brodnig. Your country already wrote history with a recent decision of the European Court of Justice. You've just told us about the problems of actually dealing with that decision of the European Court of Justice, so we're very interested to hear about this in a moment.

So, you are somehow per definition a very special expert, and we are grateful to have you here. And last but not least, and very close to me, Sabine Frank from Google, who is also here to tell us what you would like to share about the responsibility of one of the major tech companies.

I would also like to welcome our table leaders who will actively engage in our discussions: Barbara from Connect, and Kathrin and Joao from the youth ambassadors, who are very much welcome. Ricardo Campos from Brazil has just told me about his soft-law approach to this issue, and Sofia Rasgado from the Portuguese Safer Internet Centre. Let's welcome them.

>> SABRINA VORBAU: Thank you very much for these words. As you said, online hate speech is a growing phenomenon, even though hate speech as such isn't something new. If we look back in history, it has always existed, but obviously with the growth of the internet and the latest technologies emerging, it is much more vile, present, and widespread. In order to take appropriate measures, in June 2017 the German government put the Network Enforcement Act in place, which requires social networks with more than 2 million registered users in Germany to take down obviously illegal content within 24 hours of notification. Where the illegality is not obvious, the provider normally has up to seven days to decide on the case. Mr. Blöink, could you please tell us a little bit more about the Network Enforcement Act and also what it has achieved in the past few years?

>> THOMAS BLÖINK: Well, thank you very much, first of all, for the invitation to contribute to this interesting discussion. Starting off directly with the network enforcement law would, I think, fall a little bit short. I would like to start with the general picture, something you already described, but I think it's always good to remind ourselves what kind of atmosphere we are in at the moment. If you look around and if you look at online discussions, we have to state, first of all, that online discussions are key for democracy. They are very good for getting people into political and social discussions. But online discussions may also become aggressive, abusive, and hateful.

Now, this is something I think we have to accept as a starting point for all the measures we have to think about and consider, be they regulations, soft-law approaches, or the discussions we have to put in place.

And I think everybody in this room, myself included, can suffer as a result of their opinions, social background, skin color, religion, or gender; everything is possible as grounds for discrimination in online discussions.

That was the starting point when we looked at any kind of regulations which we might put into place.

As I said before, I think freedom of speech is the cornerstone of every democracy. But freedom of speech ends where criminal law steps into place.

And I think that was the fine line we had to draw when we looked at new regulation in Germany. The starting point of the concept of the network enforcement law, as I would roughly call it in English, was that social networks have to assume greater responsibility. That's the one side. But that's not the only side. On the other hand, state authorities also have to take more responsibility to ensure that victims are protected and that there is effective enforcement.

Now, looking at the bigger picture, let's say the international picture, we have to note two developments. One is at the UN level, and I think that is one of the places where we should always keep an eye on what is going on. This year, the UN Secretary-General, António Guterres, presented a strategy and plan of action on hate speech.

At the G7 level, the French presidency made a proposal for a charter for a free, open, and safe internet, which was designed not only by governments but also by the networks and by civil society organizations.

This is important, I think, to set the picture for any kind of discussion about regulation. You see that something is growing, something is moving, and our aim for the network enforcement law was to be part of this bigger picture, which is, I would say, a multifunctional framework which has to be set up.

Now I come to my point. It was important, I think, for me to describe this picture of where we started off with the Network Enforcement Act.

Now, what is in the act?  First of all, it applies to large social networks, offering a broad range of topics with more than 2 million registered users in Germany.  That's the starting point.

And the second one, which is, I think, one of the most important starting points, is that we're dealing with criminal content. We're not dealing with every kind of discrimination in online discussions. We're dealing with criminal law.

And the question was: is it possible that in online discussions there are online hate crimes which are not prosecuted? And I think that was the line where we said, yes, let's concentrate on the criminal law area.

So, what are the offenses? There is a set of rules which is the basis for any kind of compliance system that has to be put in place by the networks. Some examples are incitement to crime, support for criminal and terrorist organizations, incitement to hatred, and the dissemination of child pornography. What are the consequences under this regime? Say we are a big social network.

There is some content which may be criminal. First of all, these large networks must have some kind of reporting procedure, so there should be a possibility to raise a complaint with the social network. The social network then has to look at this complaint and decide within 24 hours if the content is manifestly illegal under German law. All other illegal content has to be dealt with within seven days, and even beyond that there are very difficult cases where the social network has more time to look at the content. This is not to dilute our aim to fight criminal content. This is just to find a fine balance between freedom of speech and the question of criminal prosecution: how do we get criminal content out of discussions?

And the need to get this balance right was also the reason why we introduced some transparency requirements in the law.

So, the social networks which fall under the law have to report on a six-month basis, in transparency reports, about what is going on with the complaints: how many complaints have been raised, and what has been done with them.

Now, as an example, I had a look at the late 2018 reports. Facebook reported 1,460 complaints; 377 cases were deleted. Now, this is probably a very low number; I'll come back to that one. YouTube, for example, had 250,000 complaints, and those complaints led to about 50,000 deletions, which is a rate of about 20 percent.

And Twitter has a very comparable number of complaints, although they led to an average of only about 9 percent deletions.

Now, this whole compliance system which has to be put in place is backed by a sanction system. The Federal Office of Justice has been enforcing this. They have brought one compliance case against Facebook, which is basically about the question of how many complaints have to be reported. A fine has been imposed and is now under court review.

Now, what about the further developments, which will lead to changes to the network enforcement law? The first one is that we have the Audiovisual Media Services Directive, which covers media-sharing platforms. It introduced a right to re-examination and to out-of-court dispute resolution, which has to be put in place anyway. The second thing is that we have to look at what kind of experience we have had with the Network Enforcement Act, and there will probably be additional changes as well.

The second, bigger project is what has been mentioned before: the bigger package passed by the cabinet to fight right-wing extremism. We had two very prominent cases where the internet played a not very good role, I think. One is the attack on a regional politician who was murdered in his garden, after there had been hateful agitation against him on the internet. The second one is the well-known attack on a synagogue, where there was an antisemitic manifesto and the perpetrator broadcast the whole thing online.

Now, these are just examples, but they are the reason why the cabinet decided we had to increase the role of social networks in prosecution. So, the aim is to make the social networks, in specific cases, report automatically to the prosecution that there has been a deletion of criminal content. Just to reiterate: it is criminal content, which has to be prosecuted, and that information should go to the prosecution side. We don't only want to delete, we also want to prosecute, and I think that's something which is missing at the moment.

Victims probably see that the content is deleted, but there's not really a prosecution in place. That is something we want to change. That's, I think, enough, and already much too long. Thank you.

>> SABRINA VORBAU: Thank you very much for this very elaborate explanation of these very important laws. So, we have heard a bit on the policy side, and now I would like to go to Mr. Jun, who obviously works in the legal field. You were involved in a case that was quite famous, at least in Germany, because you defended a refugee who took a selfie with our chancellor, Angela Merkel, and then received quite a lot of hate messages.

>> CHAN-JO JUN: I would love to. Thank you. I run a law firm with ‑‑ in our practice, we represent victims of hate speech but also users who were victims of overblocking.

When I first used the internet in the early 90s, when I was 17 years old or something like that, the biggest annoyances of the internet were probably copyright infringements and spam, and both did not bother me very much at that time.

And I thought, well, it's good to have all these liberties, and governments should stay out of it because they will not understand it anyway. Well, 25 years later, something has happened in between, and I'll skip to that. A young man contacted me and asked whether I would help him, because he had become famous after taking a picture with Angela Merkel in 2015, and his picture had since been used for every terrorist attack or crime committed by a refugee. Because people were saying: it was not just any refugee who committed the crime, it was that refugee who took the selfie with Angela Merkel, and what an idiot she was to take a picture with this person.

So, he feared for his life, because he had repeatedly been pictured as a terrorist or murderer, as someone who had tried to set a homeless person on fire in a subway station here in Berlin. He was working at McDonald's; he could not go into hiding. Whenever he went outside, he put a scarf around his face because he was afraid of being attacked.

And because he found out that this would not stop, he was one of the victims who was actually willing to go to court with the case, because it had not happened just once but many, many times. He had been accused of setting bombs, starting fires, carrying out attacks; always, it was his selfie that was pictured. But whenever he reported the picture to Facebook, they would do nothing. Facebook said, this does not violate our community standards. And when we took the case, we thought, well, this is an easy case. This is a clear case. We don't have to argue about freedom of speech, because this is a factual statement that someone made in writing. I have the picture with me. Fortunately, thanks to the NetzDG, it's not online anymore. It reads ‑‑ (speaking German)

Homeless person set on fire; Merkel took a selfie with the perpetrator. And when we reported that to Facebook, the law was on our side, because libel, slander, defamation is a criminal offense. But Facebook said it does not violate our community standards, and they were right. Until today, defamation is not covered by Facebook's standards. And yes, we're talking about today. Community standards are different for Google; I always mention Google to explain to people that it could be different if we wanted it to be. But Facebook made a decision. They said, well, what did we do about nudity? We outlaw it, because it will not improve our reach; we would lose users. But as for hate speech and defamation, they made the decision to keep it allowed on Facebook unless certain protected groups were affected, and in this case, the community standards were not engaged. So we had to take the case to court, and Anas filed a preliminary injunction. That's what it looked like, that one over there. And what came out after that, in 2017, was that we lost the case. The court said, well, Facebook said it would need a wonder machine to track hate speech, to find this picture everywhere, because there are so many new users and postings. The judges said: we are not experts in technology. Maybe it's true, maybe it's not. We cannot decide the case. Let's hand it back to the lawmakers, and they should give us a law to tell us what to do with these cases.

And they did. Just a few weeks after that, the Ministry of Justice introduced the first draft of the Network Enforcement Act, which was heavily criticized at that time, because people were afraid of overblocking and for freedom of speech, and that's something we have to talk about.

But for Anas, this was the first chance to get this picture taken off the networks, through the Network Enforcement Act. If I report this picture through the regular reporting systems of Facebook, it will still remain online, but if I report it under the NetzDG, then it will be taken down.

So, community standards are different from law, because platforms make their own decisions on what they want to have online. That is why I believe we should not leave it to the communities or platforms themselves.

We have regulations and laws for good reasons. For good reason, we decided to protect human dignity and to protect minorities, and these interests may not be the same as the platforms' interests.

So, I believe lawmakers and regulators should play a more active role. Now, that's not what I said; that was a quote by Mark Zuckerberg. He handed the mandate over to the lawmakers, and here we are. In the second case, which I would like to go over briefly, and I think I'm probably over my time anyway, we represented people affected by overblocking, whose accounts were blocked for 30 days because they commented on a post in a way which Facebook considered to be a violation of its community standards, even though courts said it was perfectly legal. Facebook fought vigorously to defend its content decision, and it took us one year to get a court decision finding that the comment was not illegal but was covered by freedom of speech. So we could say we were successful in the end, but it took us one year, and that's just incredible, because no one would ever go this way again, investing all these resources to have an account reopened after one year. Of course, she had already been released from Facebook prison after 30 days, so it was just a matter of principle in the end.

So, what we do need in a second or third version of the Network Enforcement Act is better protection for users to claim their right to freedom of speech.

This is something that was criticized in 2017, but the overblocking was not caused by the NetzDG. It did not cause the problem, but it could. Thank you very much.

>> SABRINA VORBAU: Thank you very much, Mr. Jun, for sharing a bit of the practice and also highlighting the thin line online between what is hate speech and what is freedom of expression. We'd now like to go to Ms. Brodnig, who is a journalist but also knows the policy side and different cases, and who looks into topics of online ethics. Based on the two previous statements, maybe you can elaborate from your side.

>> INGRID BRODNIG: Let me start with the example of a journalist called Colina. In a Facebook message, a man wrote to her; I have translated it roughly. He was very upset, and he wrote: I wish you would be attacked on your way home by a horde of wild Africans, then you will understand what's going on. Or maybe we will be rid of you, which would be even better.

The post was much longer than that, but you get the point. So, she was threatened with rape, and this user was wishing she would die. And she did something most people who get such messages don't do: she wrote back, she got the address of this person, and she visited him.

So, she was standing in front of this nice little family home on the outskirts of Vienna. It was a middle-class home, very beautiful, idyllic. And there was this man, a very ordinary Austrian, who had written this and who was clearly upset. And they found out he was upset because he had read a news story claiming that some Africans had raped a woman in Vienna, that the police had covered it up, and that media organizations were not allowed to write about it.

And I guess you're already thinking it: this story was made up. It was a fake story. It was completely wrong. But he believed it, and this is a rare case where somebody really visited a person and could show: come on, this is not true. And the good thing is, he believed her. But, you know, we cannot visit every person at home, and we don't even know everybody who reads such things. And what you see is that it's often not the traditional far-right people, who would have produced hate speech earlier on; it's often ordinary citizens who get more and more aggravated.

They are aggravated, for example, by reading such stories. And I think we don't just need to talk about takedown, like how to take down certain words or how to punish people for illegal posts.

I think we also need to talk about the mechanisms of the platforms, because, for example, we know that anger-inducing content gets better numbers. Angry people click more. There was a good study by a political scientist in the United States who published political ads on Facebook, and he saw that political ads which induced anger got more than double the number of clicks of other political ads.

So, it's a good strategy to post anger-inducing content online; people will click, comment, like. There's a danger that today's algorithms are built in a way that makes such content even more successful. Because, for example, we know that on Facebook, when your post has many likes and many comments, it has a higher chance of reaching a lot of people. The algorithm will probably show it to more people.

I mean, the same question arises regarding YouTube, for example, which also has important algorithms deciding which video to show you next or which videos to show on the start page.

So, we also need to talk about the mechanisms there, and the biggest problem here, besides the criminal law aspect, I think, is that we are lacking transparency regarding the inner logic of the most powerful platforms we have ever had in the history of mankind.

So, this is the one side. I think we should also see hate speech not just as something you shouldn't write. Hate speech is a tool to suppress other people: to suppress the views, the visibility, and the opinions of other people.

For example, we see that everyone, every man, every woman, can be the victim of an abusive comment. It can happen to everybody. But there are factors which make it more likely that you will get the really severe forms of online abuse.

We see that, for example, there is a gender difference. The Pew Research Center found that four in ten Americans say they have experienced online abuse, but when you look at the more severe forms, when you ask people whether the online abuse they last received was severe, women are twice as likely to say: yes, it was severe.

When you look at sexist abuse and sexual harassment, young women are twice as likely to be victims as young men, and so we face the real danger that certain groups within our society are being silenced.

So, this is not just a fear or a feeling. We have numbers backing that up, or at least numbers that suggest there might be something to it.

Just one last study. Amnesty International in 2017 asked women in eight countries whether they had experienced online abuse, and those women who said yes, that they had experienced online abuse, were also asked whether it had consequences for them.

And one third of the women who had experienced online abuse said that they have become more reluctant to talk about certain issues. And I think we must understand that online abuse, and especially hate speech, is a method to silence certain groups with aggressive means. For decades now, we have been fighting for gender equality and for women being able to voice their opinion publicly without being afraid of being called out, intimidated, or getting rape threats.

And these are democratic values we fought for, and there's a real danger that we have a new media ecosystem in which such threats and intimidating comments, which are sometimes illegal, are the new normal, and that we are actually taking steps back as a society.

>> SABRINA VORBAU: Thank you very much. You will have the opportunity to ask questions later on. I would now like to go to Sabine. We have heard, of course, that social media companies and internet service providers need to take more action. So, we're obviously very excited to hear your thoughts, and maybe you can also respond to the network enforcement law and give your opinion. Thank you.

>> SABINE FRANK: I am supposed to address all of this in five minutes, so thank you for having me here and inviting me. I'm very excited to be here. There have been so many points raised that it's actually hard to address all of them, but I'll try my best. Starting off with responsibility: I think we're all sitting here in this room because we understand that there is a real issue at hand, and we all have our share of responsibility to take. For platforms, and I speak just about YouTube, it's also a matter of scale and size.

So, the number might not be new to you: we have 400 hours of new video content every minute. Sorry, it's even 500 now: 500 hours of video content every minute. From the removal side of things, we know that less than 1 percent of this content is actually either illegal or in violation of our community guidelines.

That is not to say that there is no issue and that we don't need to take up responsibility and scale up our operations. We have done so in the past, and we will continue to do so. I'm just saying that we have to keep this in perspective. To Ingrid's point: it's not just that voices are suppressed. I think platforms give people who have been suppressed in the past the means to make themselves heard, much more than 15 years back, so I think we need to take both sides into consideration.

So, how do we address the topic of responsibility? We have four pillars under which we do this, and the four pillars all start with an R: remove, raise, reduce, and reward.

So, let me very briefly go through them, especially the first one, as it has the most connection to our topic today. Remove obviously concerns content we find either violating our community guidelines or violating local law.

How do we find out? We have flagging systems in place. You're hopefully aware that next to each video, next to each piece of content, you have the ability to flag it. A lot of people actually use this, which is very good. Then we have trusted flaggers. These are renowned experts in certain fields across the globe who have specialized tools to give us information about content they see as violating our community guidelines, but they also give us information on substance, on subject matter, so it's very important that we maintain a steady dialogue with them, because we need to learn. We are not subject matter experts in all these abusive behaviors and types of content, so this is very important.

And the third topic to be named is machine learning. We have invested a lot in machine learning technologies to help us find problematic and illegal content. This works especially well in the areas of spam and child abuse content, and we have made great progress with terrorism. But it is not a magic bullet for all content, especially in the more nuanced cases like hate speech. It's very complicated, and we are not there yet where machine learning can help us detect such content so that, after reviewing it, we can remove it.

For other areas, it's very successful. Coming to enforcement: we have people across the globe who review and remove content. Just to give you statistics: in Q2 this year, we removed 9 million videos. 78 percent of those videos had been flagged by machine learning technology, and in 81 percent of these cases, no human view had been on the video before it was removed.

I think this shows where technology can actually help. We also removed 4 million channels and 530 million comments.

So, there's a hugely upscaled process we have in place, and I think this is part of what we do. But we also, and this is something that has been addressed, act not only on our community guidelines; we respect local law as well.

The NetzDG is the example here in Germany, but it's not just in Germany; we respect local law across the globe wherever we operate. We have invested huge engineering resources to improve the legal complaints form, which we have always had, so you could always report illegal content and we reviewed it under local law; this is not new.

We just brought these two flows together: the flagging flow and the legal removal flow. We actually have an updated statistic. We have received 300,000 complaints so far; it's not publicized yet, so it's very new to you as well. 300,000 complaints under the NetzDG, so legal complaints. 77 percent of these have not led to removal, so you also see that we have to take into account that a lot of people just use it: they just click and file complaints. So it's not that easy to just say, well, there has been a click, just remove. No, there are trained people who actually check whether the content violates local law. In 23 percent of the cases, the content has actually been removed, and I think that shows that we do pay great respect to national law.

If I may, I have two more comments to make, and then maybe I'll also briefly tell you about the other three Rs, to feed the discussions.

Because I think it's important not just to reduce or remove content, but also to raise content. A lot of the hate speech and disinformation discussions can only be countered if we give more visibility to authoritative sources, so this is what we have strived for, and we are making efforts to do better.

Then, reduce visibility. Some of the hate speech content that is flagged to us is not illegal, but we find it disturbing anyway, so here we can actually reduce its visibility. I'm happy to talk more about this.

The fourth one is reward. Unlike what a lot of people say, it is not in our interest to hand out big chunks of money to people through advertisements on harassing videos.

So, it's not in our interest, and we apply even stricter regulations to rewarding and monetization: even stricter mechanisms for people who want to monetize content.

Lastly, on the new ideas the German government has: I would very much welcome an intensive evaluation of the current law before we break new ground. Self-regulation is one of the key ideas the law introduced to tackle hate speech. We filed an application to set up a self-regulation body together with Facebook almost a year back. It's not yet up and running, because the Federal Office of Justice has not approved it so far, so there might be a chance to improve the pace of these mechanisms.

On the other point, improving law enforcement: we've always said that this is a very good idea. It's not enough to just take down content; we actually need improved law enforcement action. But I think we need to look very carefully into the mechanisms here. I told you the numbers: if we were to forward all the content we deem to be illegal to law enforcement, then law enforcement would have maybe a hundred thousand cases just from YouTube. Is that something they can actually handle, and won't we have a totally different discussion a year from now if this were in place? I don't think this is actually what is intended, so I think we need to think about good mechanisms by which we can help law enforcement take action while, on the other hand, safeguarding user rights and privacy rights at the same time.

So, I will stop here.  Looking forward to the discussion.  Thank you.

>> SABRINA VORBAU: Thank you very much, Sabine. It's obviously great to hear that such good progress has been made. But, as you said, it's not only about takedown, and we have heard the key word before, which is education. We would like to invite you now to discuss amongst yourselves. You will be able to choose between four different group discussions. Here, you can see on the slide what will be discussed in the different groups.

So, the first group will talk about children's rights: how are they protected, and is the voice of children and young people heard? I'm very delighted that this group discussion will be facilitated by two of our Better Internet for Kids youth ambassadors, Kathrin and Joao, and I invite them to give a quick wave from the back; if you would like to discuss with them, they're right over there. Then we have another group who will look at what more can be done in terms of media literacy and education, and this will be facilitated by my colleague, Sofia Rasgado from the Portuguese Safer Internet Centre, who sits over there. A third group will look into the role of the internet platforms, and I'm very delighted that we have Ricardo Campos from the University of Frankfurt, who also works in São Paulo for Lawgorithm; he sits over here. The final group looks into cooperation at national and regional but also international level to counteract hate speech, and we have Avalina sitting over there.

You now have the chance to discuss for 20 to 25 minutes; then we will come back in plenary, where you will have the opportunity to ask our panelists questions and also give feedback on the sessions. I wish you a very nice group discussion and invite you all to stand up and find a table facilitator. Thank you.

(breakout session)

>> SABRINA VORBAU: Ladies and gentlemen. Ladies and gentlemen. Sorry, ladies and gentlemen, to disturb you, but I'm afraid we have to come back to the plenary to bring this workshop to a conclusion. Obviously, we hope that you will continue this discussion; all our panelists and table facilitators will, of course, be at the IGF today. We would now like to hear back from the table discussions, and a rotating microphone will go around for the table facilitators to briefly summarize their discussions.

Okay, we have been joined on stage by our youth ambassadors, Kathrin and Joao. Your table discussion revolved around the topic of children's rights and what more can be done to ensure that the voices of children and young people are heard when it comes to online safety.

So, please fire away.

>> KATHRIN: Yes. Is it working? Okay, yes, thank you. We had a really nice discussion about a lot of different things concerning children online. We started with a really interesting point: that we talk a lot about the symptoms and not really about the exact problem in society. But I think we came to the conclusion that it's really hard to totally change society, so we tried to come up with good, concrete ideas. The basic point was that a lot of things come down to community guidelines, and that all different parts of society have to work together on the platforms.

>> JOAO PEDRO: Thank you for this, Kathrin. My name is Joao Pedro. I'll be focusing on the recommendations and the actions that we discussed. First of all, related to the symptoms, I think it's important that we start connecting the dots. What does this mean? That we need concrete approaches, for instance community guidelines on the platforms that young children are using. Why? Because sometimes it's actually not clear whether we are providing or promoting the good content or not, and we talk about risks but sometimes not about the opportunities. And the great space to speak about both risks and opportunities is when we think about the parents' relationship with their children.

Another point that was mentioned was gaming content: sometimes negative content is being perpetuated and influenced by example. What does this mean? For instance, influencers doing things that are actually not correct or right.

What can we do about this? That was actually the last point we discussed, and something that came up was reaching out to the influencers themselves and trying to make them aware of the responsibility they have in the digital sphere.

Also, a very interesting point would be to have a kind of age rating system for the content being circulated on the platforms.

>> KATHRIN: Just one more thing. We were talking about children online and education in schools, and about not only educating the children but also the parents. I think that's something a lot of us talk about a lot of the time, that we try to wrap our heads around how to do that, and that it's also about children teaching their parents and older people as well. So, that's it. Yeah.

>> SABRINA VORBAU: Thank you very much. I think that's a good point and also a good bridge to the second group, which talked about the importance of media education and what can be done to support more children and young people.

So, Sofia, please come up and give us a short summary of your group discussion.

>> SOFIA RASGADO: Thank you, Sabrina. To sum up the discussion we had in our group: the conclusion is that it should be mandatory to have digital literacy in the school curriculum, also including an ethical dimension and human rights. Then we also talked about how to reach adults on these questions of media and digital literacy, and that this is not easy. As the network of Safer Internet Centres, we know it's not easy to reach the parents, and the example was given that they are working on reaching adults through the families.

We also talked about how to reach other people and vulnerable people. That's not easy, and we don't have a conclusion on that, I'm sorry.

What more did we talk about? It's also important not only to focus on media and digital literacy, but also to take into account and teach children critical thinking. I think that was ‑‑

>> SABRINA VORBAU: Thank you very much. Don't worry about not finding any concrete solution; that's why we're all here to discuss. Thank you very much. For the third group: we have heard before that internet platforms should take more action, and this group was facilitated by Ricardo. Please come on stage and give us a short summary of the discussion within your group.

>> RICARDO CAMPOS: So, I think it would not be fair to summarize the whole discussion here, because we had really different points of view, raised by different people from around the world. But I would like to focus on the two main questions that we discussed. One was raised by an Italian colleague, an IT lawyer, about terms and conditions. The comment to the colleague was that if we only look at the self-regulation side, maybe we lose important points, for example, and so on. And I think that is maybe the most important message of the new regulation from Germany, because it establishes, in some way, a new equilibrium between the self-regulation side and the state side, and that is an example for us, for instance in Latin America; that is the message we are trying to bring back home from Germany.

So: an equilibrium between platforms, states, and the public interest.

Another question was raised about freedom of expression, and above all the problem of compliance and accountability: how can platforms be in compliance with local laws and traditions, and not only with the understanding of free expression coming from the U.S.? Those were the two points I wanted to focus on.

>> SABRINA VORBAU: Thank you very much, Ricardo. As you mentioned already, it is obviously a multistakeholder approach, and this is why we're all here at the IGF, especially when we want to counteract hate speech. This is what the final group looked at: what more can we do at national and international level to counteract hate speech? Please join us on stage for some short takeaways from your group discussion.

>> Thank you, and thank you all for joining me in this group. It was a very broad group with very broad input, so I can only share some spotlights from our discussion. One thing that was brought forward very strongly was that we need an exchange of experience and of good and also bad practices between parliamentarians, and that at the international level. A lot is moving forward with legal action, and having that shared as a broader resource, together with the experience and the review of it, is a very important aspect that was mentioned several times. The same goes for the legal system, the developing case law, and also the qualification of staff: this is how we can move forward, something we can work on where collaboration and exchange at an international level make very much sense.

The participants also mentioned the need to move toward an international standard, shared common guidelines that make it easier to interact with and build accountability towards the big platforms, which act on a global level as well. But we were also very much aligned that the goal is not to harmonize everything into one international set of laws, but to build on shared principles, a shared basis, and then still allow for national and regional diversity. Granularity was a word used that I liked a lot. Then, of course, there is the exchange on how gender-based violence, which is such an important aspect of hate speech, can actually be addressed in legal terms. This is really groundwork; in many cases we don't have legal standards and first have to develop them to allow us to counter it. And then self-regulation, co-regulation, and all these cooperative aspects need to happen in a multistakeholder way.

So, no single stakeholder group can solve this on their own. Finally, one last thing, and of course that's something that I really love to hear:

there is a lot of groundwork, or reality-check knowledge, in the hands of civil society. A lot of civil society groups are working with the ones affected by hate speech and can transfer this knowledge to the other stakeholders, and that's why we all agree that civil society is very important in this process.

>> SABRINA VORBAU: Thank you very much to all the table facilitators. As we're coming to the end of this workshop, we would also like to give you the opportunity to ask questions or share any feedback with the panelists and facilitators. If you have a question, please raise your hand and we will set you up with a microphone.

>> Thank you very much. University of Graz. I fully understand that it's extremely difficult to deal with content in this amount. But I miss transparency regarding who the cleaners are, who is doing this cleaning, and on what principles. Are they doing it taking into account that those affected must have remedies? If you are affected by such blocking, how can you appeal, or how can you get such a decision reviewed if you think it is not correct? Thank you very much.

>> I guess that question is for me.

>> I guess so.

>> SABINE FRANK: So, thank you for the question. And you're right, it's not easy to do all of this across so many different jurisdictions. I think we have to differentiate between our legal operations and our community guidelines operations, and I can refer you to the NetzDG transparency report, because we explain there, and will do so in even more detail in the next one coming out early next year, who the people looking at and evaluating the content are. We go to great lengths to educate these people with external law experts, with law professors, so they're really well versed in German law, because this is what they do.

Your second question was: what happens if your content gets removed? You get a notification that your content has been removed, and in that notification you have a direct link to say: you made a mistake, and this is the reason why I believe you made a mistake. Then we will revisit the claim.

So, there's a very easy-to-use process, because we acknowledge that, given the number of decisions we have to take, mistakes can and will be made.

So, obviously, we have no intention of removing content that is legal, but at the same time we want to remove illegal content very quickly. I get this tension.

But you have means to counter a decision.

>> Thank you very much, Sabine. Ms. Brodnig, I think you wanted to add something.

>> INGRID BRODNIG: I just wanted to add that the professor here is from Austria, and we don't have this situation in Austria. The question for every country in the world, or for the European Union, is: what kind of transparency, what kind of numbers regarding content moderation have to be published? When you're not Germany, you don't have such numbers, at least not the same numbers.

>> SABINE FRANK: Well, we have a global transparency report outlining what kind of content is removed, what complaints we received, what kind of moderators we have, and also the requests we get from law enforcement and how we handle them. So this is not true, actually. Please look at the global transparency report and come back to me with questions.

>> If I may, from a European point of view, it's not good to have different laws in different countries. So, I'm very much in favor. Right now is the right time to talk to members of the newly elected Commission about the essential needs for the Digital Services Act.

>> SABRINA VORBAU: Thank you very much. There was one more question just there.

>> Hello, I'm from Hong Kong. I just came in from Hong Kong. I just wanted to bring up an example from my group, about a boy and some friends of his. The other person is just stating his own opinion. So it's really just his own words; maybe he's not really trying to attack the boy, he's just trying to express a different view, not trying to attack them.

So, is this also a kind of hate speech, or is it just a boy stating a different opinion, or the boy's own problem? Thank you.

>> SABRINA VORBAU: Thank you for your question. I think this relates to what we explained earlier in the presentation. We have a microphone there; maybe someone would just like to give some feedback.

>> There is a line between criminal content and harmful content. And I think what we will learn in the future is that a lot of the things that are harmful, which we discussed today, are in the end maybe not criminal. So, bullying can be harmful, but it may not be punishable under criminal law. We know from a lot of cases, actually, that the Network Enforcement Act will only protect us from the most severe criminal cases, while other things will have to be dealt with not by lawyers.

>> SABRINA VORBAU: Thank you very much. We are really, really running out of time, but maybe one final question. It had better be a very important one.

(laughter)

So, pressure is on you.

>> Hello, my name is Howard, studying in the UK, currently working on a study in Europe looking at the governance of online hate speech. My question is for Sabine. You mentioned that Facebook and Google in the last year have been applying to set up a self-regulatory institution under the NetzDG framework, and you mentioned that this is taking some time. Without throwing anyone under the bus: is the reason for the slowness that they're taking extra care in assessing your application, or, in your opinion, is the bar being set too high?

>> SABINE FRANK: The question probably goes to Mr. Blöink rather than me; I think he's better placed to answer this. I think it's a matter of prioritization. The organization that applied is an established self-regulation body here in Germany.

They have been accredited by the Commission for the Protection of Minors in the Media for youth protection purposes and have operated under that framework for more than ten years, so they're well versed in what they do. They have been through the same process before, so they know what they're doing. I hope we see action very soon, but maybe we will hear a more positive answer now.

>> THOMAS BLÖINK: No definitive answer, but it's not always the case that when you apply for something, everything is granted immediately. Sometimes authorities think that not all the conditions set by law are fulfilled. So we have to get into discussions, which I think took place. But I think this will hopefully soon come to an end, to a decision. Yes.

If I may, just a word on the Digital Services Act. I'm very European; I've worked on European matters for a couple of years. But in this area of hate speech and hate crime, I would be a little bit more reluctant about any kind of harmonization. Harmonization is good if we look at compliance, frameworks, et cetera, which could all be the same. But when it comes to speech and the question of what is legal or illegal, I think that's a different matter, and we have to ensure that member states at least remain in a position to have some kind of power to fight illegal content, even if we have some kind of harmonized framework.

So, I think harmonization is good, but this is a very special area where we should be very careful if we only look at where a company has its seat in Europe.

So, I'm a little bit more reluctant on that one. Just as a final word.

>> SABRINA VORBAU: Thank you very much. Originally, I had planned to ask all of you to give one takeaway from the session, but I think there are so many. So, coming to a conclusion: thank you very much for joining us today. We obviously hope this discussion will continue; this is what we're all here for. Please come and find us in the IGF village at the Insafe booth. We will be there until Friday, so you can continue this conversation with us. Thank you very much, and have a pleasant day here at the IGF. Thank you.

(applause)

>> Just give me the opportunity to thank Sabrina for leading us through this session. Thank you very much for safely guiding us through a very important topic. Thank you very much.

>> Thank you.

(applause)