IGF 2017 - Day 1 - Room XXIII - NRI Collaborative Session: Fake News, Disinformation, Misinformation: Challenges for Internet Governance


The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 



>> CHAIRMAN: Folks, we're going to get started, but I'm going to invite you to take any other conversations -- (banging gavel) we're going to get started.  I'm going to invite you to take other conversations outside of the room.  Because this is being transcribed and extraneous noise will interfere with our ability to have a good transcript.

And we really want your close attention and engagement in this.  But I'm also going to invite you to consider moving up closer so that the panelists can see you well when we go to the interactive session.

For those of you in the very back, please consider coming up here and filling in some of these other seats.  That would be fantastic.  Can we invite all of you folks not sitting in front of a microphone to come sit at the table, so when we get to the interactive section, we can make sure if you want to ask a question or make a comment that you can as well.  We'll wait one more minute while one of our panelists come in.  Thank you so much for that.

>> CHAIRMAN: Good morning, everybody, and welcome to this NRI session on fake news.  It's a great topic and it's great to see so many people here.  We have a wonderful panel ready for you.  I would like to introduce myself first.  My name is Nick Wenban-Smith, I'm the secretariat for the UK IGF and I'll be one of the moderators for today's session.

I will try to speak clearly and slowly, and to give everybody on the panel a good opportunity to express their perspectives on the fake news phenomenon from all around the globe.  And we're really privileged to have a good diversity of national and regional Internet governance fora to shed light on this.  Without much further ado, I will ask each panelist to introduce themselves and say a bit about themselves.  Then we'll have a session where we hear perspectives from around the world, then an opportunity to discuss themes of commonality and difference, and then a wider Q&A session.  We want that to be properly interactive and to get some really good questions in from the room, before finally trying to see, maybe this is optimistic, but we'll try to find some solutions and where to go next on this thorny topic.

>> MARILYN CADE: If I might just make an introductory statement first?  Before we go to introductions, my name is Marilyn Cade, I'm going to be co-moderating.  I just wanted to make an introductory statement about what this session is and what it isn't.

Nick mentioned we are an NRI-to-NRI collaborative session.  The national and regional IGF initiatives are organic, bottom-up, consensus-based activities that are driven out of the community that organizes them.

They have no official relationship to the global IGF.  They have a voluntary relationship with it.  There are now over 101; there were only 57 two years ago.  And we are doing eight sessions where NRIs are collaborating with each other.  There will be several other fake news workshops later in the week, and you'll probably see many of these folks participating there.  But for this session, it's the national and regional IGFs collaborating with each other in talking about this.  So I just wanted to mention that so that later, when you go to a workshop, you'll hear very different people speaking and very different perspectives.  But you may see some of us here as well.  Thanks.

>> CHAIRMAN: Thanks.  Sometimes we need a good amount of order.  Starting next to Marilyn, if you could start to introduce yourselves please and state your name and affiliation.  Thank you.

>> ABDUL-HAKEEM AJIJOLA: Good morning, my name is Abdul-Hakeem Ajijola.  I'm from Nigeria.  My background is that I once worked for the Nigerian national security adviser to the president.  I also do quite a bit of work with the Nigeria Internet Governance Forum, and this topic is of particular interest to me because I do a lot of work on Islamic cooperation and on countering extremism and violence online.

Both in terms of that and in terms of what we do in Nigeria, I take a lot of interest in looking at the weaponization of fake news and hate speech.  Thank you.

>> CHAIRMAN: Thank you, Nigeria.

>> CROATIA: Hello, everyone.  I come from the Croatian IGF.  I'm a professor at the Faculty of Law of the University of Zagreb.  I deal with topics of the regulation of the Internet: be it fake news, be it cybercrime, and all of the topics related to the regulation of the Internet.

>> JUAN LOPEZ: Hi, everyone.  My name is Juan Lopez from Colombia.  I work on expanding civil rights on the Internet and in digital environments.

>> USA: Hi, I'm a visiting fellow at the American Enterprise Institute in Washington, D.C., where I focus on the digital economy and I run the Internet global strategy program.

>> NETHERLANDS: Hi, everyone.  I'm from the Netherlands and I helped organize the local IGF.  And (?) organized the session about fake news.

>> NIGERIA: Good morning, everyone.  My name is (?).  I'm from Nigeria and I'll be monitoring the online participants.

>> JULIAN CASASBUENAS: Good morning, I'm Julian Casasbuenas, and I will be supporting the remote participation.

>> CHAIRMAN: Thank you very much for those introductions.  First of all, can we just hear very briefly, three minutes, more or less, feeding back on what happens in the fake news topic area for your IGF.  I think we'll start in the same order.  Going down the row from the right, please.

>> NIGERIA: Thank you.  Basically, when we held our own discussions in Nigeria, we tended to focus a little more on what to do, as opposed to what the problem is.  But we did discuss what I call the whys.  And I would like to run through a couple of the whys in our case.

First of all, we believe that fake news, hate speech and the like actually undermine democracy.  We have seen this in other countries and we know that in our particular situation, this is exacerbated by the (?) chauvinists that we have in our country.  What we find is that hate speech in particular, which is underpinned by fake news, actually exacerbates the various social and political divides.

Interestingly, in a country like ours, where we believe ourselves to be quite religious, we found it very ironic that basically all major religions view peddling of fake news as a sin.  One would call it slander.  For some reason, people still do it, because it's online.

One of the other areas that we were particularly concerned about was the loss of trust in institutions.  As a developing country, we are very concerned about confidence in our institutions, not just by our citizenry, but also by foreign investors.  So many times, when some of these false pieces of information come out, it causes a lot of problems.

And then a very unique kind of fake news that we find in our part of the world, for example, is somebody will spread a rumor that a particular food product has pork.  And especially for example, the Muslim community, that's a big no-no.  So we found that there was a lot of that.

But having said that, we have elections coming up in 2019.  One of the great concerns we have is: is there really any need for a country that doesn't like us to hack our military?  Why not just hack the commander in chief by hacking the voters, who choose the commander in chief?  I think there's something in that.

But I'd like to dwell on solutions when it comes down to me again.

>> CHAIRMAN: Thank you very much.  That's Nigeria.  Perhaps the originator of the Internet scams and famous (?)


I think we've all had e-mails from the Nigerian princes.


Thank you.  Move on to Croatia, please.

>> CROATIA: OK.  The main topic of our IGF was media literacy and fake news, because we thought that the best way to tackle fake news is through programs of media literacy.

During the session, we discussed the problems that we have in Croatia.  One of them is a lack of trust in traditional media, especially in public service broadcasters, because there is a lack of professional standards in journalism.  Because of that, people have a huge distrust of the traditional media, especially young people.

And they inform themselves mainly through Facebook, because more than half of our population is on Facebook.  That is also a problem for the traditional media, because they have to chase clicks, and they do not uphold their standards.

And also one of the main conclusions of our session was that we have to introduce media literacy programs in our schools, because in Croatia, we don't have any.

And we should also include in the literacy program something called news literacy: showing young people how to distinguish fake news from real news, and how to distinguish fake media from real media.  Thank you.

>> CHAIRMAN: Thank you, Croatia.  Colombia, please.

>> COLOMBIA: Hi.  In Colombia, we as a civil society organization were in charge of organizing this topic of fake news.

So we tried to bring speakers from different stakeholders and also to create gender balance.  We invited some NGOs working in the area of fake news.

And they tried to show different solutions or projects for analyzing fake news.  One of the most interesting was made by the press, which created a system to receive fake news and fact-check the fake news circulating on WhatsApp.  In Colombia, WhatsApp is a very important channel for spreading all this fake news.  And it's really impressive, because WhatsApp is a really opaque system.

And it's really difficult to bring this logic into the current economy.  So, yeah, it was really interesting.  The other point is that the government is fighting against child pornography, cyber-bullying and extortion with different campaigns of awareness.

And they are trying to use a new campaign called "Let's Lower the Tone" to try to soften the political discussions in our country.

It's a really difficult issue to speak about politics in our country.  So, yeah.  I think it's the basis of the discussion.

>> CHAIRMAN: Thank you very much, Colombia.  USA?

>> USA: We had our meeting in July, so we were really just coming around to what had happened in our election cycle.  We were lucky because we were able to have Dana Priest from the Washington Post as the moderator of our panel.  We had Vint Cerf, well known in this area and currently an evangelist at Google, and Craig Newmark, the founder of Craigslist and the Craig Newmark Foundation, come in and talk about how they saw this, and what they had envisioned when they were starting their thoughts about the Internet.

Vint had some very interesting things to say.  In the beginning, he knew that openness was crucial, because it allowed permissionless innovation, but he hadn't thought about what would happen with political speech.  He was processing this himself.  He had a concern about what to do if anyone were going to develop an unprecedented social control mechanism.  He had recently talked to Henry Kissinger, and Henry was concerned that the Internet was training people to be satisfied with too little information.

Craig Newmark's key point was that he was very interested in getting proactive about bringing fact-checkers online.  We had a really interesting dialogue, and I'll be fascinated, because I know Nigeria is going to come back to solutions.  We spent more time on real navel-gazing about where we were, but we'll talk about solutions as well.

>> CHAIRMAN: USA, another country claiming to be an originator of the topic in the first place.  Thank you.


>> NETHERLANDS: We organized our IGF in October, and we had two sessions.  One was with students, and the other was not only for students, though students participated there too.  We had about 100 people, from industry, politicians, social organizations and scientists.  That was a really mixed group.

We organized sessions about different topics, of which one was fake news.  First, a speaker introduced the topic and started the discussion.  Then we broke into smaller groups and really started to dive into what the solutions could be, and how we are already dealing with this issue.

A few solutions that were raised were blockchain and maybe issuing penalties.  We'll talk about this later in this session.

I think that's a short (?) about our IGF.

>> CHAIRMAN: Thank you very much.  So that's quite a nice introductory sort of feedback of what happened at each of the NRI organization meetings.

So just drawing together some of the things of commonality there.  We've heard quite a lot about social media and particularly because this is now sort of an unregulated, new borderless form where the traditional controls on editorial quality, content, and fact-checking don't apply.

So that's obviously a big area.  From the UK perspective, we had Facebook at our IGF.  And actually, they were pretty frank about the countermeasures you can realistically put into place.  We also heard a bit about elections.

And, dare I say, maintaining moderator neutrality, we could extend that to referendums and the impacts that can be had.  That seems to come across in several parts of the world.

I'm quite interested, because no one has yet mentioned what the definition of fake news actually is.  Is it deliberate misinformation, versus just carelessness, versus parody?  And I don't know whether that was actually a topic of any of your meetings as well.

So I'd be interested in these three areas: social media, the definition of fake news, and the impact on electoral democracy.  Could you expand a bit on those topics and say whether we can draw anything together across those areas?  It would be logical to start in the same order again.  Thank you.

>> NIGERIA: We basically just looked at the dictionary definitions in our thinking about fake news: basically false or hoax news purporting to be real news.

Again, given our own environment, we were really very concerned about the issue of hate speech, which is an attack on a person or group based on their race.  In our case, it would be based on tribe.

And there's been quite a lot of that going on in our country.  We have certain movements.  I guess, like most countries, there are (?).  There are two areas, however, of great concern and a lot of debate that we really are not able to resolve right now.

Maybe it's something that can be addressed by some of the people in the audience.  The first one was the issue of the impact of artificial intelligence.  In our main meeting, we actually watched a small video that demonstrated how artificial intelligence can take a one- or two-minute speech of a politician, in this case the former president of the U.S., and basically have him say something totally different.

Of course, in a country like ours, with a relatively low literacy rate, when someone says, "I saw him, I heard him say so," that can cause a lot of problems for us.

The second big issue that we really didn't resolve, and it's something I throw open to the audience as well, is the issue of censorship.  Do we really want, with all due respect, a large American global media company censoring speech in Nigeria?  What about our national sovereignty?

But at the same time, do we really want the Nigerian government censoring our speech?  So like I said, these are the kinds of things that we've been very concerned about.  And I guess I would throw those open to the audience to also think about and ponder as we continue our discussion.  Thank you.

>> CHAIRMAN: I think those sounded like rhetorical questions to me.  We'll have a session for Q&A, I promise you, shortly.  Thank you.

Maybe the USA will respond when we get around to them, in terms of the domination by the USA of pretty much the whole of the rest of the world, and what they're going to do to keep them out.  Let's move on.

>> CROATIA: First, I have to say that, as a law professor, I have to make sure we distinguish illegal content from fake news.  Illegal content, like hate speech, is something that is regulated in most of the countries.  So we have to distinguish that from something called fake news, although I generally don't like this term, fake news, because it gives a negative connotation to all of the news.

And politicians also use the term to dismiss certain information that they don't like.  So journalists try to use something like false news or some different terms.

The closest definition of fake news would be a particularly prevalent form of disinformation, because there is an attempt to deceive the reader.  It actually uses the language of prominent news outlets, sometimes faking the web pages of prominent news outlets.  But then there is also the question of legality, because of intellectual property.

So it can also be treated as illegal content.  Alongside fake news, in my opinion, we also have rumors, false claims, false advertising, which is also very important, and conclusions that are not based on evidence.

All those categories can be treated as something like fake news, false news, or whatever we call it.

But one point I would like to make here is: from where do the main initiatives come when we are talking about fighting fake news?  From certain governments and politicians, from media outlets and journalists, and from online platforms.

When we look at history, governments have often used various forms of propaganda, which is fake news, and they have lost their monopoly on that.

Journalists are trying to compete with social media platforms right now, and they are not upholding professional standards, because they are chasing clicks for the revenue.  So they also have lots of interest in fighting fake news.

And when we are talking about big online platforms, up until now there was talk of free media, a market of information, deregulation, the idea that the market would regulate itself.  Now, especially in the information market, we actually have in some ways unregulated monopolies.  You have five companies that are actually monopolies, in that sense.

And the problem for them is that fake news uses the same concept and same business model as the companies are using for the online marketing and creating their revenue.  So these are actually the sources of the problems.

And then there is a question at the end.  Who is to blame?  Who are we fighting with?

So maybe we'll answer those questions later in our discussion.  Thank you.

>> CHAIRMAN: That's really interesting.  I just wonder whether you drew any distinction.  You talk about chasing clicks.  I wonder if there's any distinction between chasing clicks for financial gain, which could be a civil if not a criminal offense, versus deliberate misinformation in order to change an election outcome or to do something much more maliciously incentivized than pure click-chasing.

>> CROATIA: There are different motivations for everyone.  You can have a motivation of propaganda, you can have monetization, and you can have motivation in (?).  Those are different motivations for disseminating fake news.  When you talk about journalists chasing clicks, they have different motivations.  They can do it because they like a certain political party.  But they can do it, and often do it today, to gain more readers and to get more revenue from the market.

>> CHAIRMAN: Thank you.  Colombia.

>> COLOMBIA: So first of all, I should say I'm a historian, so I tend to bring in historical problems and social concerns.  When we talk about fake news, it is treated as a new concept from just two years ago.

But I don't see it like that, because I think misinformation has existed throughout the whole of history.  It tends to be presented as a problem that has been emerging in some countries in the north.

In the global south, fake news has always spread through different media, and that carries over to elections.

We had a referendum on the peace agreement, and it's very interesting to analyze how the fake news about the peace agreement spread through the radio in really (?) areas and rural areas, where you have radio owned by the state and by the government.

It's really interesting to try to analyze the problem of fake news in those terms.  And if we think about social media, it's also very interesting that the brains behind fake news on social media may be the politicians.

The politicians are fighting to define the memory of the conflict.  So they are spreading fake news, trying to say that this politician was part of the guerrillas or that politician is part of a far-right movement.  So it's really interesting to see how the politicians are spreading fake news to define the memory of the conflict, which is quite important in a country that is going through a peace process.

So, yep.  I think those are the things I wanted to say.

>> CHAIRMAN: Thank you.  Well expressed.  History is written by the victors, right?


Anyway, governments also losing their monopoly on media and fake news outlets is an interesting viewpoint.

>> USA: Again, IGF-USA was in July, and the plenary session was entitled (?).  At this point we were still focusing on a lot of the technical aspects of what had happened, and whether there was a way to manage it.  One of the things that Craig Newmark mentioned is that he thought there should be a taxonomy created for news that would help you decide what was fake and not fake.

He thought there were going to be tags for satire, parody, opinion pieces and factual pieces, but noted that The Onion would be much less funny if you believed at first that it was real news.  We had Ambassador Karen Kornbluh, from the OECD, who talked about microtargeting, which makes it much easier to disseminate this information because you don't understand how it's getting to really very specific targets.

We also had Amie Stepanovich, who is (?), and she really wanted to focus on consuming media mindfully.  We were just starting to make the shift to: OK, what do we want to do about this?  Because of the nature of who we had on the panel, we were much more focused on the technical element.  Since then we have had congressional hearings, and one of the key things that has come up is this thorny question: are these companies that were spreading a lot of this information actually media companies, and what is their level of responsibility?  So that was something that we were just starting to move into in July.

>> CHAIRMAN: So (?) should come with a full warning.  A joke.

>> NETHERLANDS: I don't have that much to add, but I want to tell you that at the (?) IGF, we mainly focused on journalists and the spreading of fake news, and most of the time on children and how we could solve it with education.  So not only the technical part, but how we can use education to prevent fake news from spreading and to increase awareness.  I think we will talk about solutions.

>> CHAIRMAN: At the end of the session.  I do want to save a bit of time to have a bit of an exploration on the solutions and really what we can take away from this session collectively in terms of ideas and what could be solutions for the communities.

I think that's a really nice opportunity now to widen it out into audience participation.  Ma'am, I don't know whether you could help me with questions.  When you ask a question, can you please state your name for the record.  We just want some quick questions, really to explore.

>> I'm going to run this in a very organized fashion.  Here's what I want you to do.  Waving at me doesn't help.


I want everybody who wants to make a comment or ask a question to raise their hand.  And then I'm going to go one from this column, one from this column, one from this column and then come back again.  OK?  Sound fair?

OK.  Everybody who wants to speak.  And online as well, yes.

>> We're going to go from here first to online.  First of all, I want to get a count quickly, how many of you want to make a comment or ask a question?

One, two, three, four, five, six, seven, eight.  OK.  So here's what we're going to do.  We're going to see if we have any remote questions and then we're going to start in reverse order.  One, one, one.

OK.  Do we have any remotes?

>> CHAIRMAN: We have a question from Marquim, from a European NGO.  He says: I have two questions for the panel.  The first is whether they think that fake news, and the fact that many people fall prey to it, is completely disconnected from the context framing it, the growing inequalities.  Political scientists have shown, from the Nazi party's rise to power in 1930s Germany, that there were fact-checkers who deconstructed the Nazi propaganda as fake news.

In that dark economic and social context, it had little impact.

The second question is whether the participants don't believe it is better to expose people to a wider diversity of news sources via algorithms, instead of trying to fact-check everything.

Fact-checking carries a heavy risk.  For instance, the claim that Iraq held weapons of mass destruction was fake news by any reasonable standard.

>> CHAIRMAN: OK.  So if I heard that right, there's a question in two parts.  Firstly: even with fact-checking, does it even matter, because people will decide what they want to decide?

Secondly, one around technical solutions and their role, because I think part of the problem with fake news is just the sheer scale.  So if you could automate it or build in some form of technical solution, then maybe that's an interesting line of discussion.

We're going to go from the other end of the table first this time.  So we'll start off with the Netherlands.

>> NETHERLANDS: The way I heard the question.  The questions were quite long.  Makes it quite --

>> CHAIRMAN: The questions were quite long.  Difficult for the panelists.  But if I heard the first question right, it was that even with fact-checking, you don't necessarily prevent the impact you're trying to prevent, because you've got to take the context into account: as with Nazi Germany, the political, social and economic realities within which this stuff is being presented.  That is obviously pre-social media, but even in that sort of situation, fact-checking didn't actually make any difference.

And secondly, it was around other technical solutions, where you can address fake news in a scalable way using artificial intelligence or some other technical means, and whether that came up in your Dutch IGF meeting.

>> We didn't really talk about this at the Dutch session.  However, we did talk about the education part.  Of course, you can't prevent it.  Fake news will still be possible even if we have some solutions.

What we came up with was a quality check.

(?) This could be an option to at least take away some -- yeah, websites that only spread fake news.  But I agree.  You can't really make sure that no fake news will spread at all.

>> The Ambassador mentioned we have to push back on the narrative that design decisions about the Internet -- simply because they are.

On the algorithm question, how you manage that becomes a challenge.  Leading back to the first question: are these media companies and their algorithms designed to make an economic impact, or are they there solely to be a media news source?  There's a definite variation on that.

>> In my opinion, there is no technical solution to the problem of fake news.  I think it is a question of education, first of all.  And the second one is of controlling the press.  The real press.

Because in my country, it's very interesting how fake news is spreading through the real press, which receives these WhatsApp messages, thinks they are real facts, and spreads all this fake information.

So it's important to take into account this situation.

>> Like my predecessor, I don't think that we currently have the technology that is able to solve our problem with fake news, because artificial intelligence is not there yet, especially when we talk about context.  As previously said, there's a problem with satire.  Recently in Croatia we had a similar problem regarding misinformation and disinformation, where both left- and right-wing parties and NGOs were banned by Facebook's algorithms, including some satire sites.  So that is a problem.

The other problem is that when we think about regulation, we have to be careful about giving more power to the already powerful.  That is a huge problem, and we have to take into account human rights like freedom of speech.  And we can't solely rely on Facebook or Google, which could, in a certain way, impede our freedom of speech.

So I think that also when governments are talking about regulation, like in Germany, with the NetzDG that will be enforced from the first of January, I think, that is a huge problem.

>> First of all, I think that general socioeconomic frustrations do tend to exacerbate some of the challenges and increase the spread of fake news.

However, from our experience, it's not necessarily people at a lower socioeconomic level.  People from high socioeconomic status, people who are well educated: they all spread it.  We all spread it, let me put it like that.

So there is no specific kind of individual, let me say, that falls victim to this.  But generally, in terms of the issue of fact-checking, I think all of us are guilty to some degree of being simply too lazy to fact-check.  And nation states, of course, have been guilty of that as well.

Are there technical solutions?  Well, I'm sure there are.  For me, from the global south, are those technical solutions affordable?  This is one of the questions my government was asked, and others.

And then the second question that arises, and this is to echo the algorithm issue, is: whose perspectives are the algorithms actually reflecting?  If I may, I'll give you a very specific example.

Some time ago I was in a similar discussion with a gentleman from Facebook.  And basically he said Facebook does not allow certain posts in which Muslims criticize Islam, but Facebook has no problem with posts that criticize the Prophet of Islam.  For anybody who knows the Muslim world, it's the exact opposite.

You can criticize the religion all you wish.  You can criticize the appearance of the religion all you wish.  But we have seen in certain countries in the Muslim world that some kinds of criticism of the Prophet as a person, or maybe even of the Koran, have led to riots in the streets.  I'm not trying to justify it, I'm just trying to show that in this particular instance, they got it backwards.

>> CHAIRMAN: If there are any more online questions, then just wave at me and I'll try to get them in after we've had some inside the house.

>> We are talking about the definition of (?) and about algorithms for fake news.  And we have heard that Facebook participated in the UK IGF.  I am sure that Facebook has a definition.


>> Curiously, that question is for me.


I can't speak for Facebook.  But I can tell you a little bit about the discussion.  And the discussion, quite extensively, did focus on what fake news is, and what the distinction is between just sort of clickbait, where you're trying to get clicks to ridiculous stories -- the top ten things you must know before Christmas, or whatever -- versus something more targeted.  I think we're talking about targeted state intervention here.

And certainly, Facebook is putting a huge amount of resources into this in terms of actual fact-checking.  But they did not disclose a huge amount about their algorithm and automatic filtering.  They obviously do have some.

But they are approaching it along the lines that it's actually very hard to distinguish some of these things through machine learning, and you actually do need people to make these sorts of judgments.  And their response seems to be more of a manual process, whereby you try to react with (?) desks staffed by humans checking them.  Because only an informed human can make those judgments quickly.

>> We had two over here.  You were first, I think.  No, he was.  I know.  Please.

>> Thank you very much.  I'm from the Republic of Congo.  And I work for the NGO Congo chapter.

My question goes to Mr. Abdul.  I want to focus on the region and society.

The real question was: should it be only the companies that shape society?  According to your experience -- you have been (?) -- what can be a good approach?  Because when you see the problem, like in my community.

(?) major, no one talks about that, because we know it is used for propaganda.

And if you start to bring counter-information, and somebody else brings information coming from anywhere, people will probably believe that, because they don't have the source of the information.

So what is really your point of view on this?

>> Having worked for the state, I don't always have as much confidence as maybe some others would.  Because I think sometimes states fail to understand the difference between regime stability and national stability.  They are not necessarily the same thing.

But clearly, the influence of the major multinationals is a significant concern.

And possibly -- I'm just throwing this out now -- possibly this is an opportunity for Civil Society to engage.  For example, we've seen examples from the U.S. where you have FactCheck.org.  You have Snopes.  They do not belong, as far as I know, to a major multinational, nor do they belong to government.

Frankly, many of them are voluntary.  So that might be a model to look at.

But I think also that there is a need for some government input.  And again, I'd like to look at two models.

One is the rumor control web page of the U.S. FEMA.  Basically, after a hurricane, there are a number of rumors.  And FEMA, in this case a government entity, would say no, we're not coming to your village tomorrow to give you cash.

I think there is a definite role for government.  But I think when you're dealing with issues that border on propaganda, then there's a major opportunity for Civil Society and the NGO community to actually develop these kinds of websites and the mechanisms that go into debunking what I call "social rumors."  So I think there's a logical division of labor.

>> CHAIRMAN: Thank you.  Are we ready for the next question?

>> We're going to go back behind you.  We'll come back to you the next round.  Sorry.  Yes.  And then we're going to go back over here to you.  So let's get going here.

>> Thank you.  I also have a two-part question.  The first question: there's evidence that the audience for fake news -- the people who read it and spread it -- and the audience for fact-checking don't overlap.

Do you think it's technically feasible, one, and two, desirable if it is technically feasible, to sort of expose the first audience to the fact that a certain piece of information or news has been debunked?

And the second is, I think, a wider question about the interplay between polarization and fake news, and whether you think there are maybe technical ways to address that.

>> So I'm going to establish a rule that we have one question at a time.


To be fair.  And until you give your name, they can't answer.

>> CHAIRMAN: And also helpful if you're directing the question towards any panelist initially.

>> It's sort of a general question, actually.  To some extent, the question is: has this been discussed at all in any of your IGF meetings?  My name is Stefan, a researcher at the (?).

>> CHAIRMAN: Which question was it?  There's one question, right?

>> One question.  So do you think it's helpful and feasible, helpful, to sort of try to expose the people who read and spread the fake news in the first place, on social media particularly, to the fact that it's been debunked afterwards by fact-checkers?

>> CHAIRMAN: So who wants to answer that one first?

My own perspective is that there are lots of situations -- for example, say, an election -- where there are quite strict rules in most jurisdictions around funding and transparency.  When a statement is made in an electoral context, it's supposed to be verified, and there are limits to how much can be spent on it.

One of the problems, certainly, with the sort of social media type of thing is that you don't necessarily know who is sponsoring the content and what their motivations are.  Transparency, in some contexts, does seem to be desirable, is my personal view.  But how do you disaggregate that from making fun of politicians?  There are many good opportunities in the UK at the moment to make fun of the politicians and their various shortcomings.

So it's difficult, and certainly the UK perspective and our IGF was that it was very difficult to automate.

>> I'd like to just comment, first of all, that I think your perception of no overlap is an assumption, not a fact.  I think it's a big assumption on your part.

I'm not particularly sure I agree with it.  I think also that it's very important to always endeavor to shine the light of truth.  As I say, it sets you free.

I think one of the things we found in Nigeria is that sometimes when you have people who propagate, or rather, when you have people that come up with some of these things -- we found, in a couple of instances, that those people are actually called in by the victim politician, if I may say that.  Usually it's a high-level politician.  And that person says, listen, this thing that you've said is not true.

First of all, the propagator knows that you know them, and that tends to discourage them.  Also, sometimes simply having a picture of the propagator with that politician, which then circulates, undermines the propagator.

And this is a lesson, I think, from the U.S. elections.  You also need to use unlikely avenues to refute rumors.  I'm specifically talking about your political opposition.  In a past U.S. election, a specific instance was when a lady stood up at a John McCain campaign rally and basically said a whole bunch of things about Obama.  And John McCain said, listen, that's not true.  You know.

He is what he says he is.  Let's talk about the issues.  Now, it's very difficult to get your opposition to come to your defense like that.  But one of the things the NIGF, the Nigeria Internet Governance Forum, has tried to do is encourage our political class: listen, your opposition might be the victim today.

And if you don't stand up and quash this issue, you might be the victim tomorrow.  So it is actually in the political class's own interest to try to kill or undermine or deal with this kind of situation.

>> CHAIRMAN: Next question, please.

>> I'm Federico from the Caribbean IGF.

And I want to say that your assumption about (?) I think is a problem.  Yes.

Because we want to -- we face the same issues as Nigeria.  But we have to be realistic about our societies and where we come from.  And that situation actually proves something for me, because it's the same situation we have.  This whole phenomenon comes from our political class.

Longstanding parties have used what we have to spread what is today -- because of the American intervention on this -- called fake news, which we've known in Africa for the longest time.

And that's why in Africa, in Libya and southern Africa, we sort of generalize: this is nothing new.  This is created in societies where there has been no level of trust in government for the longest time.

So you can't just -- and (?) you make light of the issue of socioeconomic issues playing a role in this.  I think this is a big issue, because of the low levels of trust in government service delivery and governments' engagement with citizens.  In such environments, it's easy to believe bad things about politicians.  It's easy to believe bad things about other powerful actors in society.  We can't minimize that.  We're coming from societies where there are low levels of trust, and it's proven.

Just as what you call his assumption is also starting to be proven by research on what has happened.  Whereas you made a statement about what happened in Nigeria and interventions from political actors.

What does the Nigerian research say?  That guy is talking about actual research.  So before you debunk, let's do an evidence-based -- let's base our debunking on evidence.  Because otherwise that's also fake news, right?

>> I think a couple of things.  First of all, it's this assumption that there is no overlap.  That is what I was referring to, because from my experience, there is overlap.  Do I have specific research?  No.

Nor do I know about the credibility of the research you say you have.

But I think also, when you say one makes light, I specifically mentioned that socioeconomic frustrations have an impact.  That's what I said.

But what I was trying to qualify was that just because somebody is from a particular socioeconomic category, or educational status, does not automatically mean they believe or not believe fake news.

Again, from my understanding, from what I've seen, some of the people you'd least expect to believe it -- whether by education, exposure, or socioeconomic income -- actually believe the fake news.  That's really what I was trying to get at.

>> So we're going to go here.  And before you speak, I just want to do one more quick count.  Because we're going to be running out of time shortly.  We're going to go here.  I have one more, and is there anyone else in line?

OK.  So remote.  Here, here, and here.

>> CHAIRMAN: For the purposes of time, we probably won't get all of the panelists to answer all of the questions.

>> First question.  That spread fake news, (?)

>> That sounds like a really good game show.


I guess it goes to the level of being able to manage it -- if it's not done through anonymity, if you can verify that that's the person spreading fake news.

>> More a comment.  It's the same person that asked the question.  OK.  And he's saying --

>> CHAIRMAN: So here's the question and comment.

>> OK.  It's a comment from Marko from Belgium.  OK.

He's saying, also relevant to the current discussion: can we think about alternative business models to support media besides advertising?  For instance, some (?) meant to do just that.  That is a comment.  I don't know.  Maybe you want to amplify it.

>> CHAIRMAN: I think it's the same person who already asked the question.  So we should move on to questions inside the room.

>> JOANNA BRYSON: OK.  Thank you.  My name is Joanna Bryson, I'm a professor in artificial intelligence at the University of Bath, and I'm here because they want me to talk about policies for smart machines on Wednesday.  I want to comment on a couple of the things you've said that I think are very important.  First of all, AI forgery is a thing.  We can forge handwriting, speech, video with lips moving to match the speech.

We need to think about policies for dealing with communicating this fact to our people.  Secondly, parody is actually not a problem whatsoever.

I strongly recommend you look at a conference called Text as Data.  It's by political scientists working with computer scientists, and they're really good at dealing with this kind of thing.  Parody often signals itself by a twist in the last sentence, and they find no difficulty in pulling parody out.  Some of the confusion and conflict is about what fake news really is.  Is it the stuff being generated by algorithms, or is it the stuff that the real politicians are telling the real newspapers and getting in there?  Some of the things you've been saying combine these two things.  That's a real continuum, like our African colleagues are telling us.  There is no simple line between those.  To confirm what you said, we have to have real people -- not because people are magic, but because multiple people work better than one algorithm.  Any one algorithm or one person, you can learn your way around.

You can use Facebook and have 20 million people click and see what they click on.  The real problem with fake news is that it's not trying to reflect the real world, so it's easy for it to optimize to that algorithm.  That's what's going on with that.

And politicians are using that to get elected.  So you can't discriminate fake news from real politicians.

>> I'm going to make a quick responsive comment before we go to the next question.  That is a clear advertisement for all of you to pay attention to the workshop she's speaking at.


Please. Please.

>> You're saying that AI can distinguish parody.  The thing is, you have to take the language into account.  Croatian is a very specific language, and there are no AI tools for it that distinguish news from fake news, and it would be very expensive to develop them.  When you're talking about debunking and removing fake news from English-based sites, that's OK.  I'm aware of those tools; they're working a lot on those tools.  But in a specific language region, that is a huge problem.

>> I think we should take that off-forum and get that going.  Yeah.

>> I was going to say, a lot of the Leave campaign in the UK referendum was based on the claim that the UK would be 350 million pounds a week better off.  At the time it was clearly not true, but it was still propagated and is still a headline figure.  Even if you know it's not true, the fact that people say it -- sort of like the Obama not being born in the U.S. thing -- you can debunk it or contradict it at the time, but the message sometimes lands in receptive minds, and they want to believe it anyway.

So I don't know whether there's ever going to be a solution to that.  People are just quite complicated.

>> Thank you.  Media issues and digital rights.  Before asking my question -- whoever is interested can step in to answer it -- I would like to stress two points that we discussed.  Fake news is not a novelty in the global south.

We have a private oligopoly that was already very used to lying, and we didn't have alternative means of communication to counter it.

The second is the dangerous thing about giving more power to the already powerful platforms.  They are already very powerful in disseminating content and deciding who sees what through their algorithms.  It's important to be aware of this when talking about fake news.

And the question that, in my view, lies behind this debate, the debate on fake news, is: who is the person allowed to say what is true and what is not?  Who is the owner of the truth?

I think that when we talk about solutions, this is the question that we have to keep in mind.  So it's much more, in my view, about transparency and media literacy than about criminal solutions or content-removal solutions.  That leads to what I want to hear from the panel: what are the best solutions to deal with the question of who owns the truth in our current context?

>> CHAIRMAN: Are there any questions after this?  Just trying to manage the time here, because we want to leave a bit of time at the end to sum up.  There's one, maybe two more questions and that's it, please.

>> I completely agree with you on everything.  I think it's very interesting to analyze how we are expecting these huge tech giants to solve the problem of the truth, and we are tending to create a referee of the truth.  I don't think that is the correct way to solve the problem.

And in my opinion, there is no easy solution for this problem, at least not a complete one.  And when you talk about the owner of the truth, I think that the issue of transparency is quite important, and also the multi-stakeholder perspective.  It's the work of all of us to try to fight against the problem of fake news.

I'm talking about the NGOs, the press, the government, academia, and individuals.  We are citizens.  We need to be the fact-checkers of these social media.

>> CHAIRMAN: Thanks.  Next question.

>> My name is Larry, and I'm CEO of ConnectSafely.org.  We just published a booklet called Media Literacy and Fake News.  I might add it also involves emotional intelligence.  Because the reality is there will always be hucksters and pranksters and demagogues and Donald Trumps and Vladimir Putins and others who want to lie to people.

But the thing is to have the critical thinking skills to process that information, and also the emotional intelligence to not be misled and whipped into a frenzy simply because someone is confirming a bias of yours.  It's really a broader issue.  I wish technology could solve the problem, and I obviously applaud any serious efforts to do that with technology.  But at the end of the day, it's education, emotional literacy, and media literacy.

>> I agree with you, because actually, when we talk about fake news, one of the biggest things about fake news is that it appeals to the emotions of the readers.  That is its core mechanism.  So, just saying.

>> CHAIRMAN: Thank you.  So I think we've got just two more minutes of questions, and then we're going to have a little summing up.  So last question, please.

>> Thank you.  My name is Julia, I'm from Brazil also, and I'm a law student.  And one of my main concerns regarding fake news is simply removing the content.  Because if you just remove it and pretend it never happened, people will still be disinformed.

So it's also very important to talk about the responsibility of big companies and newspapers to actually own their mistakes and say, yes, that was wrong, so we'd like to correct it.  Because it's actually very important to spread the truth about what they reported.

So I don't know if you have any.

>> CHAIRMAN: I heard that as more of a comment than a question.  You can publish something and remove it, but that doesn't remedy the fact that the false news was put out in the first place.

I want to ask each of the panelists to talk for a minute or so about possible solutions from the regional and international perspectives.

And I think possibly we can start with (?), where this was a big focus of their IGF meeting.

>> OK.  As I said, the way we are trying to solve the problem of fake news is a multi-stakeholder solution.  So we have, for example, the press; it's really interesting how they are working as fact-checkers on all this WhatsApp information that I have been talking about -- this WhatsApp spreading.

And also how the government is trying to avoid polarization in the country, taking into account the problem of the peace process.

So it's interesting that the government is creating literacy solutions to try to avoid polarization and violence in the media when talking about political situations like the peace process.

So yeah.  I think -- we are working on fact-checking also.

So we are trying to help the press to analyze the spreading on social media, and also to analyze the specific politicians in our country who have been creating news that could be really dangerous for specific persons.

In our country, you can be marked as a guerrilla, as part of the guerrilla, or as part of the army, or as having killed someone.  They create fake news about it.

It's a huge danger for the person that is targeted by this fake news.  You could be killed at any moment.  So it's important to be really aware of the movements of these politicians that are spreading fake news.

>> CHAIRMAN: I'll keep them on their toes and pick them at random, USA next.

>> USA: We stressed the important role of keeping the Internet free and open, but also the need to be careful about keeping truth in journalism.  Craig Newmark commented that he's trying to find a free-market solution to fact-checking; he's working on that.  And then Vince finished up with "trust and verify."  So I think that echoes things we're all hearing on the panel.

>> CHAIRMAN: Thank you.  Croatia next.

(No audio).

>> CROATIA: I think it's a huge problem, because it will definitely affect the Internet in a huge way in the future.  I think the recent FCC decision proves that, because the FCC discussion on net neutrality was also affected by posts of fake news.  Because of that, we have to take care of the younger population through media literacy -- news literacy especially -- to give them the tools to understand the news and to distinguish real news from fake news.

And I have to say also, I'm aware first-hand that Google and Facebook are doing a lot in regard to this issue, because it directly influences their business model, as I said before, and they're trying to find focused solutions for it.

But to the question that was asked, or the comment: we have to be very careful in the future when we remove content.  That is a huge problem regarding Human Rights.  Who is removing the content, and why is the content being removed?  Those issues can be raised in a legal sense.  So that is also a huge problem for the future.

>> CHAIRMAN: Thank you.  We go to the Netherlands next.

>> NETHERLANDS: So one of the solutions we talked about was a quality mark.  This way users can see which websites are trustworthy.  In the audience, I believe there was a question about exposing the fake websites; this is sort of the other way round -- marking websites as trustworthy and making sure they are spread more.

The other solution we talked about is more social: education.  We have to teach kids at a young age how to notice fake news, what to do with it, and how to make sure they use trustworthy sources.  And then another solution: paying for media.  Because there's a lot of competition for journalists.

So it could be an option to have more paid platforms, so that quality stays good.  Blockchain was also mentioned, but we believe it is too expensive to implement, so we didn't really focus on a blockchain solution.

And we also briefly talked about artificial intelligence.  We believe it's not ready to implement yet.  These are the things we talked about in the Netherlands.

>> CHAIRMAN:   Thank you.  That just leaves Nigeria.

>> NIGERIA: First of all, in Nigeria we came up with a kind of solution framework, and the framework basically consists of four key components.  The first one is really that we need a mechanism to identify the sources of fake news and hate speech, in our case.  And part of that process, we felt, was to develop some kind of early warning system that would triage the messages as they come in -- basically some kind of categorization and monitoring system.  Once you get that kind of information, there are ways you can actually legally take down fake social media accounts.

The second part of the framework was to actively discredit, discourage, derail or diminish the sources of fake news.

One of the ways to do this was to drown it in factual narratives.  We also needed to look for ways to reach out to those who generate or propagate those kinds of messages, as I mentioned earlier.  And of course, we also talked about leveraging unlikely avenues for refuting such rumors.

The third pillar of what we looked at was basically the drowning of fake news and hate speech with (?) narratives.  And here we had three subcomponents, divided into some tasks.

First of all, at the individual level, I think there was a need, or we identified there was a need for each and every one of us at the individual level to be a bit more skeptical.  To question some of these narratives when we see them.

The second part was that we saw an opportunity for Civil Society to basically set up some kind of credible validation website or mechanism.  And again, this speaks to frankly the challenges of trust towards governments.

Having said that, yes, we looked at some of the rumor control web pages and felt, on the more technical level, that the rumor control web pages were a useful model.

And we actually looked very closely at a model that we learned of from Malaysia.

In terms of, for lack of a better description, some kind of offensive operation.

But you need to take what we would call a whole-of-society approach -- in the case of government, a whole-of-government approach.  So it's not about a particular government department.

And basically, in the Malaysian model that we studied, you have some kind of group of people who basically interact, for example, on various WhatsApp chat sites, and then send information that really doesn't add up to some kind of coordinating clearing site.

In the Malaysian model, it's actually under their communications commission.

And basically what they do is they send this fake news or alleged fake news to an institutional expert.

So for example, we had a very strong circulation of information that HIV-tainted blood was put in some kind of soft drink, and therefore, don't buy the soft drink.

So the institutional expert -- say, the Ministry of Health -- would within 24 hours publish on their website, for example, that the virus can't live more than a few seconds outside the body.

The idea was for other websites, particularly in government but also outside of government, to then point people who wanted clarification to those links, and also to send out some kind of advisory notices.  I think that addresses one of the issues raised from Brazil about clarifying and not just removing it.

And then the other NGOs and the general public.  This was the kind of model, the framework, that we really have advised our government to work towards.

>> CHAIRMAN: Thank you very much.  To sort of summarize the themes I've captured coming out of the panelists: it's reassuringly old school in the age of the Internet.

So critical thinking skills, literacy, transparency, quality and thoughtfulness of approach when writing journalism.  Being able to call out something for being fake when it is that.

And I suppose I'll add my own one: a bit of a sense of humor about some of the sorts of things you do see on the Internet.  Some of it's absolutely brilliant.  As the Australian comedian said, if we couldn't mock ourselves, we might miss some of the best jokes.

A huge thank you to everybody on the panel, and to everybody for contributing, and to my co-moderators and remote participants.  Everything has gone extremely well.  Thank you so much.  A round of applause, please.


>> Now I'm going to make a closing invitation to remind all of you: there are eight NRI-to-NRI collaborative sessions this week.  You've just been in one.  I hope you'll look at the rest of them and perhaps show up at some of them as well.  You'll see different NRIs participating in many cases.  We have about 30 of the 101 national and regional NRIs here at the IGF and actively participating.  I see some of their members actually in this audience.

(Session concluded at 1:17 p.m.)