IGF 2023 - Day 0 - High Level Panel II - Evolving Trends in Mis- & Dis-Information

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> DEBORAH STEELE: Ladies and Gentlemen, we are about to begin the next session.  So, if you would take your seats, please.

Hello, and welcome to High-Level Session II, Evolving Trends in Misinformation and Disinformation.  I'm Deborah Steele, the Director of News at the Asia Pacific Broadcasting Union, based in Kuala Lumpur, and former news editor at the Australian Broadcasting Corporation.

We will continue on. 

Misinformation is the unintentional spread of inaccurate information shared by people who are unaware they are passing on falsehoods.

Disinformation is deliberately falsified content that aims to deceive.  It is a deliberate act.

This year the enormity of the challenge in responding to both has skyrocketed.  Generative AI has transformed what is possible and the scale of risk.

For decades, AI has automated tasks, such as detecting and completing patterns, classifying data, honing analytics and detecting fraud.  But generative AI is a new paradigm.  It can re-version, summarize and create new content.  Some of it serves a very valid and beneficial purpose.  But some is synthetic content.

Synthetic content refers to the artificial production, manipulation and modification of media for the purpose of misleading people or changing an original meaning.  Examples we have seen already include an image of a bomb blast on the grounds of the Pentagon, a video of the Ukrainian president telling Ukrainian troops to surrender, video of Donald Trump in an orange prisoner jumpsuit, celebrity porn videos, and pictures of Pope Francis wearing a Balenciaga puffer jacket.  All of these were, of course, fake.  But they looked real.

As a result, there's growing concern about the potential use of generative AI to spread false narratives, to spread lies and disinformation.

There are new developments in generative AI almost every day.  And predicted timelines have been smashed.  In some cases, innovations that were predicted to take five years took less than six months.

And so, we can expect a massive surge in online content, in what has been described as a tidal wave of sludge.  This is happening at a time when information consumption continues to be highly polarized.  This is exacerbated by the way in which online algorithms determine what you see: content that aligns with your pre-existing beliefs, limiting exposure to other perspectives, thereby reducing critical thinking and reinforcing echo chambers.

Most people don't realize their feeds are determined by algorithms, and they don't even realize they are in an echo chamber.  The echo chamber effect not only amplifies the spread of misinformation, it also makes it difficult to engage in constructive conversations based on accurate information.

All of this at a time when, as we heard in the first session, public trust in institutions is sliding in many countries.  And this is why we are here today to get the balance right between the opportunities presented by new technologies and platforms and the need to limit risks.  Addressing misinformation and disinformation requires a multipronged approach, from improving media literacy to making a political commitment to protect the integrity of information-sharing systems.

It requires regulatory measures and technological interventions, including authentication tools.

So, now to our panel.  And first of all, let me say, we have an apology from Salima Bah, the Minister of Communication, Technology and Innovation in Sierra Leone.  Unfortunately, she is unable to join us today.  So, she has sent her apologies.

To the rest of our panel, Mr. Tatsuhiko Yamamoto, Professor at Keio University Law School.

Ms. Vera Jourova, European Commission Vice-President for Values and Transparency, whose work includes ensuring democratic systems are protected from external interference.

Ms. Maria Ressa, 2021 Nobel Peace Prize Laureate.

Ms. Randi Michel, Director of Technology and Democracy at the White House National Security Council.

And my fellow Australian Mr. Nic Suzor, member of the Meta Oversight Board, the body that people can appeal to if they disagree with decisions Meta makes about content on Facebook or Instagram.

So, let's start with our first question, Mr. Tatsuhiko Yamamoto, Professor at the Keio University Law School.

Advancements in generative AI are producing new information with a higher degree of complexity.  What's the impact on disinformation and misinformation?

>> TATSUHIKO YAMAMOTO: Thank you very much.  I just want to speak in Japanese because this is my language.  Generative AI is a poisonous apple, that is how I call it.  Generative AI content has some biases or mistakes or wrong information; in other words, poisons are embedded.  However, on the other hand, it's smooth and it's very tasty, tastes like something that a human prepared for you.  So, you can keep on eating, one apple after another.

So, I would say this is very close to a very tasty, poisonous apple.

Now, this poison, however, is eventually going to undermine our cognitive process, so that we won't be able to make independent decisions for ourselves.

And also, if people are poisoned, and what those who are poisoned say, or the output of generative AI which contains poison, is used to teach AI as training data, then the poison would actually infect all of society.  So, that's a concern that I have.

And also, generative AI can produce misinformation or disinformation in very large quantities instantaneously, so an information tsunami, or a tsunami of disinformation, can be created by generative AI.  Our challenges are therefore becoming very complicated.  If this spreads across the entire society, then what I would call a collective hallucination can happen.

>> DEBORAH STEELE: Maria, to you.

>> MARIA RESSA: Thank you, I completely agree, but I would push it one step further, which is, not only do we lose free will, which is what our colleague said, we -- it essentially hacks our biology.  It weaponizes our emotion.

Let me first start with the first time that happened, which is social media, right?  And what was weaponized at that point in time, with our first human contact with AI, was our fear, our anger, our hatred; tribalism is the code word that we use, but it's us against them, or radicalization.  The terrorism of al-Qaeda seeped through our society.  How did they radicalize people?  How did a person, an Indonesian, become a suicide bomber because of that?  In many ways, fear, anger, hate, tribalism, all of these things that separate a person from their family and their community, this is, actually, what was weaponized.

So, that's the first one.  Generative AI, which was released in November 2022, goes a step further.  And this goes hand in hand with the U.S. surgeon general's report in May where, finally, the harm to children was brought up publicly after so many studies had shown it.  But here's the part that I thought was fascinating: the epidemic of loneliness.

So, how will it hack our biology this time around?  Remember the first time: fear, anger and hate.  The second time, with generative AI, it is going to be loneliness, that seed of loneliness that is in each of us.  And you have seen this now: from November until today, we have seen people commit suicide.  We have seen crimes committed.  There are lawsuits out.  And there's still impunity in terms of protection for the public.

It's very easy to think this person, this AI, generative AI, is real.  And at 2:00 in the morning, when you are being -- when you are looking at it, you turn to it.  Some of the startups, actually, say, here.  This is your friend.  Your friend that will be a better friend to you than anyone else.  This is dangerous.  And not only is it dangerous individually, I think it's dangerous for our society.

Having said that, I am not a tech Luddite.  I love technology and we were one of the first adopters in our country, but I think we need to see right now this is a moment that is critical for the world, and we must take the right steps forward.

>> DEBORAH STEELE: Thank you, Maria.

Ms. Vera Jourova, what are your thoughts on the impact of generative AI on disinformation and misinformation?

>> VERA JOUROVA: Thank you very much.  Can you hear me?

>> DEBORAH STEELE: Yes.

>> VERA JOUROVA: Well, I represent here the EU, the European Union legislature, so maybe I will not try to enrich the fantastic analysis which we had from the Professor and from Maria, and will rather share with you how the EU is doing in regulating the space.

First of all, what we see is that generative AI plays an increasing role in the context of disinformation.  How do we tackle the issue of disinformation?  You said misinformation is unintentional spread.  We speak about intentional production of disinformation.  And this is our definition of the disinformation which is dangerous and where we need to regulate: it is when the disinformation is being produced in a coordinated manner with the intention to do harm to the society.  And the harm we define as harm to the security of the society and to electoral processes.  So, to elections.

So, we can imagine that the combination of disinformation, as this intention to do harm, with AI, and especially generative AI, is a dangerous cocktail to drink.  That's why we are now reacting to it by including a new chapter in the AI Act, which we are now finalizing in the legislative process in the EU, where we want to introduce several principles which have to be maintained.

First of all, the AI must not start to govern the people.  The people always have to govern the AI.  Which means that human beings are in control at the beginning of the development of the technologies and over the life of the technologies, having the chance to look into the algorithms and to guarantee that dangerous technologies or uses of technologies will be stopped, as well as at the end.  So, we have three categories of AI: low risk, medium risk and high risk.  And especially for the high risks, we want this increased control.

Also, we have several use cases which are unacceptable under the EU legislation.

Coming back to generative AI, we believe that at this stage it's very important to say that the rights which human beings have developed for themselves, for ourselves, like the freedom of speech, the copyright, and I could continue, must remain for the real people; we must not give these rights to the robots.  I'm simplifying horribly.  But it is very, very important in our philosophy that what belongs to human beings has to belong to human beings.

Coming back, and I will stop here, to our AI Act, where we are adding now the chapter on generative AI: we have a very strong plan or vision that the users have to be informed that what they see and what they read is the production of AI.  So, we insist on labeling of such texts and images and also some watermarking of the deepfakes, so that people immediately see that this is the case.

And also, deepfakes used in, for instance, electoral campaigns, in case they have the potential to radicalize the society or heavily manipulate the voters' preferences, should, in my view, be removed or stopped.  But this is still in the making.  We are now discussing with the legislators.

>> DEBORAH STEELE: Thank you, thank you.

Ms. Randi Michel, director of technology and democracy at the White House National Security Council, what are your thoughts on this?

>> RANDI MICHEL: First, I want to thank the organizers of IGF and of this timely panel for having me here today.  It's an honour to share the stage with such distinguished panelists.

I want to thank all of you for being here to discuss such an important issue.

Generative AI technologies lower the cost and increase the speed and scale with which bad actors can generate and spread mis and disinformation including deepfakes and other forms of synthetic material.

And as AI generated content becomes more realistic, this content can threaten human rights, democratic institutions, national security, trust and public safety, especially for women and girls, LGBTQ+ communities, and other marginalized groups.

For example, inauthentic content or even claims of inauthentic content in the context of elections may erode trust in democratic institutions and put the credibility of electoral processes at risk.

In fact, just recently in Slovakia, a deepfake appeared to portray audio of a candidate discussing how to rig the election.  While the media was able to fact check the recording, the audio was released during a 48-hour media moratorium ahead of the election, presenting unique challenges to address the synthetic content.

In response to these evolving risks around the world, the U.S. government is working to increase transparency and awareness of synthetic content.  The Biden/Harris Administration has secured voluntary commitments from 15 leading AI companies to advance responsible innovation and protect people's rights and safety.

This includes a commitment from these companies to develop mechanisms that enable users to understand if audio or visual content is AI generated, including authenticating content and tracking its provenance, labeling AI generated content, or both, for AI generated media.

Building off of that, we are working to develop guidelines, tools and practices to authenticate digital content and detect synthetic content.  We are working to build the capacity to rapidly identify AI developed or manipulated content while at the same time sufficiently labeling our own government produced content so that the public, including the media, can identify whether content that claims to be coming from the U.S. government is, in fact, authentic.

These measures include, for example, digital signatures, watermarking and other labeling techniques.
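To make the digital-signature idea concrete, here is a minimal sketch, not any agency's actual tooling: a publisher signs a content hash together with a disclosure label, and anyone holding the public key can detect tampering with either.  The metadata fields and the "example.gov" source are illustrative assumptions; the sketch uses the Ed25519 primitives from the widely used Python cryptography library.

```python
# Minimal provenance-signing sketch. Illustrative only: the metadata
# fields and workflow are assumptions, not an actual government scheme.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher (e.g., a press office) holds the private key; the
# public key is distributed out of band so anyone can verify.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_content(content: bytes, source: str, ai_generated: bool) -> dict:
    """Bind a disclosure label to a content hash and sign both."""
    record = {
        "source": source,
        "ai_generated": ai_generated,  # the disclosure label
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": private_key.sign(payload)}

def verify_content(bundle: dict) -> bool:
    """Return True only if neither the label nor the hash was altered."""
    payload = json.dumps(bundle["record"], sort_keys=True).encode()
    try:
        public_key.verify(bundle["signature"], payload)
        return True
    except InvalidSignature:
        return False

bundle = sign_content(b"Official statement ...", "example.gov", ai_generated=False)
print(verify_content(bundle))            # True: provenance intact
bundle["record"]["ai_generated"] = True  # tamper with the label...
print(verify_content(bundle))            # ...and verification fails: False
```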

Together these efforts will help us implement safeguards around AI generated media that could mislead or harm people.  But the U.S. government alone cannot protect people from the risks and harms posed by synthetic or manipulated media.  We need to work together with our partners and allies to ensure that these safeguards are implemented around the globe, and it was great to hear my colleagues from the European Commission speak about this as well.

We hope that other nations will join us in these efforts to establish robust content authentication and provenance practices.  By working together, we are hoping to establish a global norm that enables the public to effectively identify and trace authentic government-produced content and identify artificial intelligence generated or manipulated content.

And most importantly, we are prioritizing engagement with civil society, academia, standard setting bodies and the private sector on this issue.  Forums like today's are an important opportunity to bring together a wide range of stakeholders.  President Biden and Vice President Harris have repeatedly convened top AI experts, researchers, consumer protection advocates, labour and civil society organizations on this topic, and we look forward to many more conversations to come.  Thank you.

>> DEBORAH STEELE: Thank you very much.

Nic Suzor, what's your perspective as a member of the Meta Oversight Board?

>> NIC SUZOR: Thank you.  And thanks to the panelists so far for articulating a really quite concerning set of issues that are seriously challenging our existing responses to content moderation, but also the authentication, the flow, the trust of media now in our ecosystem.

I want to start by apologizing.  I know we are only two or three hours into day 0, but I am going to make two points that I hope will provide a little bit more technical detail, but also a more pointed set of challenges for us.  It's easy, I think, to talk about AI and generative AI at a high level.

What gets really complicated is as soon as we start to unpack the types of responses that we have spoken about, the types of content that we have spoken about, the types of relationships that we have spoken about.

So, the first point is mis and disinformation.  We -- the introduction to the panel provides a way of thinking about these two things as separate concepts, that disinformation is intentional and misinformation is not.

Now, that distinction, you know, it makes sense when you are thinking about rules and punitive regimes and attributing fault to people for spreading intentional disinformation.

But when you look at how disinformation spreads, it spreads through the actions of malicious actors, but also mainstream media, social media platforms that optimize for engagement, and ordinary users who are just participating in the debates of the day, and they all play a huge role in enabling harmful, false material to circulate.

So, it's really hard, first off, to make that distinction.

The second is: what even is synthetic media?  And this is really hard, because we can talk about labeling.  We can talk about the importance of people being made aware of what is generated by AI and of how AI systems prioritize and shape the way that information is presented to them.

But when I write an email, I am relying really heavily on auto complete.  When I take a photo, I use a lot of post processing if I'm removing someone's face who I no longer want to be associated with before I post it.

It's not very long before we live in a world where all media fits that description.  All media is manipulated.  So, there is an internal limit, I think, to where we can get with approaches that focus on labeling and authenticity, when we have to accept that the innovations that we have seen over the last year with generative AI, a lot of them, are here to stay.

People are going to find really cool, interesting, useful uses for them.  So, we need to make sure that we are not confusing the issues when we start to figure out what sort of solutions might work in one context, and making sure that they are appropriate for the digital age.  That's going to be tough.

>> DEBORAH STEELE: Wonderful.  Thank you.

Just at this point, I would like to remind everyone that the number that you see in the top of your screen is the time allocated to speakers.  So, if you are wondering why they are not expanding further on some points, it's because of the time keeping that we are trying to do to manage this discussion.

Moving on now to the next question and to you, Mr. Yamamoto.  Public concerns over misinformation have long existed, coming into view more recently in the context of political campaigns and disinformation.  Where are we now in combating misinformation and disinformation online?  What are the lessons learned so far?

>> TATSUHIKO YAMAMOTO: Thank you very much.  I believe there are two directions that are now beginning to appear.  One is that the issue has become more serious than before.  And the second is that we are beginning to see lights of hope.  So, there are, actually, different directions.  But the issue has become serious because the attention economy, which is the business model of the platforms, has become so predominant.  I'm sure you know about the attention economy, but it means that we are in this flood of information; against the amount of information that is supplied to us, the time that we can devote to consuming information is very limited and, therefore, capturing that limited attention has become a business.

So, robbing or stealing the engagement and the eyes and ears of the user has become the priority, and so we have algorithms and recommendation systems that are more and more sophisticated at stealing users' time, and that has led to echo chambers and filter bubbles, and this is leading to the amplification of misinformation and disinformation.

And under the attention economy, fake news or disinformation can win higher engagement, which means it can win more profits and, therefore, tends to be disseminated more.  And that's really aggravating the situation.
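As a hypothetical sketch of that structural point, with invented scores rather than any platform's real algorithm: if a feed is ranked purely by predicted engagement, and outrage reliably earns higher engagement, the ranking amplifies falsehoods without anyone ever deciding to promote them.

```python
# Hypothetical feed ranker with invented scores: the structure shows why
# pure engagement ranking favors emotive falsehoods over fact checks.
posts = [
    {"text": "City council publishes annual budget report", "predicted_engagement": 0.04},
    {"text": "Fact check: viral claim about vaccines is false", "predicted_engagement": 0.07},
    {"text": "OUTRAGE: 'they' are secretly poisoning the water!", "predicted_engagement": 0.31},
    {"text": "Local school wins the science fair", "predicted_engagement": 0.05},
]

# The attention-economy objective: maximize expected time-on-platform,
# so show the highest predicted-engagement items first.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for rank, post in enumerate(feed, start=1):
    print(rank, post["text"])
# The fabricated outrage post tops the feed and the fact check sinks,
# not because anyone chose lies, but because engagement is the only signal.
```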

On the other hand, we do see signs of hope, and that is that awareness of this issue of information pollution is now shared, with collaboration and cooperation among the public and the private sectors.

(Technical difficulties.)

>> The OP, the Originator Profile.  OP stands for Originator Profile, which means placing an Originator Profile, OP, on authenticated data that is generated by authenticated organizations.

So that sort of technological standard must be discussed at the global level and also literacy education will be very important.  I believe this will become critical if we want to tackle the structural issue.  Thank you.

>> DEBORAH STEELE: Thank you very much.  The issues of media literacy and as was mentioned in the first panel this morning, the importance of public understanding of these issues are key to helping to reduce the risk.

Nic, from your point of view, what are the lessons learned so far?

>> NIC SUZOR: I think -- well, it's been a busy few years for mis and disinformation, from election interference and into a global pandemic.  There is a lot of pressure around the world for platforms and tech companies to come up with things that actually work.

The challenge is, it's really hard to know what actually works.  One of the things that I think is most pressing is that we still don't really have a lot of research on things like inoculation and media literacy, for example, that we were talking about.

Absolutely, increasing people's ability to detect, understand and correctly interpret the media that they are presented with is a really important goal.  During the global pandemic, we saw a couple of things that I think you could draw out as lessons.  One, tech companies can do something, right?  For a long time, tech companies have said that they don't want to get involved in questions of truth, in questions of content and opinion.

The urgency of responding to disinformation in the context of the COVID-19 pandemic made it impossible for tech companies to continue down that road.

So, I think we are at an interesting point now where it's clear that tech companies do have a role, but also that that role is highly controversial, and the requirement for tech companies to make decisions about how people talk, particularly as you go into more of the disinformation side of that spectrum, is incredibly complicated.

Tech companies have become good at using, say, spam reduction techniques and so on to recognize coordinated inauthentic behavior.  But what to do about people spreading falsehoods?  That's a social question.  And we don't know really what the answer is.

I want to put in a plug.  On Tuesday, the Oversight Board is going to announce that we are taking a case about a post by a Facebook user who posted a digitally altered video of the U.S. President Biden, and this video has been around for a while.  We will have a public comment process, because this is really a conversation that we have to work out together.  It cannot be a conversation that is left to tech companies and to technical solutions, because the problems ultimately are social problems.

So, I really want to encourage people here, if you can, to help us out, because I think the board is an exciting way that we can bring some of these issues to light and raise the sophistication of that conversation.

I would really appreciate the input from those of you in the room, and certainly those on the panel, if you can help us figure it out.  But there are big outstanding social questions about where we draw the limits between parody and satire, between acceptable and unacceptable imitation, between acceptable and unacceptable speech on private platforms, particularly in election contexts, where that's so important, not just in the U.S., but around the world.

>> DEBORAH STEELE: I'm sure you will get lots of ideas from your other panelists shortly.

Moving on, women and girls, refugees, racial and ethnic minorities and LGBTQ+ people usually bear the brunt of harm caused by online disinformation and misinformation intended to target them.  And just a couple of weeks ago we saw the case of Spanish schoolgirls being misrepresented in pornographic videos.

Given current trends, what can we do to protect and empower these communities, and what tools can they use to protect themselves?

Vera, would you like to go first on this one?

>> VERA JOUROVA: I have so many things to say on the previous topic.  But I will be disciplined and answer what you ask about.

Well, with social media and the internet, we, unfortunately, see an incredible increase in shameful practices.  And they have the usual victims: women, especially women in the public space, politicians, judges, the leaders of NGOs.  When they open their mouths on the internet, they are immediate targets of horrible attacks.

LGBTQ people, of course, every kind of minority, everyone who is different (different from whom, different from what, answer that yourself) are the immediate targets.

And when we see such a level of aggression, of course the law has to react.  And in the EU, we have very strict rules on what's illegal, and these are aggressive and very often illegal attacks.  It's not about satirical or offensive content.  No, these are messages and reactions inciting violence.  That is illegal in the EU.

After the horrible experience of the previous century, when the Holocaust started with words and the reaction of the society was passive, we have this experience, and we have the legislation in force for the offline world which says which content is illegal, including hatred against individuals or groups of citizens.

So, our mantra in the EU is that what is illegal offline has to be treated as illegal online as well.  And here comes what the Professor mentioned, how did you say it?  The attention economy.  I would call it dirty business.  Of course, for those who are running the algorithms, the big tech, for years they were making big money on hatred and alarming news and apocalyptic visions and, of course, dangerous information.

And in the EU, we sat around the table with the big tech, and we discussed that they cannot continue like that.  And for some period of time, we had a code of conduct against hate speech and a code of practice against disinformation.  And now we have the legally binding Digital Services Act, which says this: you have to stop making money on illegal content and on disinformation which is dangerous for the European societies.  And it will be under an enforcement structure.  There are penalties.  So, we mean it seriously in the EU that we cannot continue just passively watching what's happening online against the targeted groups.

Last word on women.  For me, the Digital Services Act is not enough.  So, I proposed the first ever European directive on violence against women, which contains a very strong chapter on digital violence.  And it is now in the legislative process.  And I believe that once the law enforcement authorities, like police and prosecutors in the EU Member States, have these laws in their hands, they will better enforce the rules which should protect women and everybody else who is targeted.

>> DEBORAH STEELE: Thank you.

>> TATSUHIKO YAMAMOTO: Yes.  Thank you.  I would like to return to the term attention economy that I used earlier.  Under this business model, hatred and also fear, as well as anger, I think, can gain more engagement.  Therefore, in that sense, under the attention economy, the communities that you referred to are placed in a very vulnerable situation.  And I think that was the case in the physical, real world, and now it will become more serious in the online world.  When hate speech and disinformation and misinformation are combined, they can become a very formidable force.  And when it is focused on one individual, then I think speedy moderation would be the solution.

But for such acts addressed to a community, there are several solutions.  One would be to have a human rights organization that has the trust of the world which, upon receiving reports from such individuals, can start fact finding and, as a reputed organization, issue articles on the fact-finding results concerning such mis- and disinformation.

And these fact-checking articles must be read, must be read.  So, for that, the platform companies should be engaged, I think, to enlist their cooperation so that we can distribute these articles.

And it will be the responsibility of these platform companies to provide fact checking articles on misinformation and disinformation to the community and then also to feature them prominently.  And I think that will be their responsibility, I believe.

Now, if I may cite the case of Yahoo Japan, which is a news platform here in Japan: they work with the Japan Fact-check Center, which was established last year, and they will also be sharing articles issued by that centre.  So, international fact-checking organizations and the enlisting of media platform companies, I think, would be critical.

>> DEBORAH STEELE: Terrific.  Thank you very much.

Maria, your thoughts, please.

>> MARIA RESSA: So many.  I mean, let me pick up, you know, from Professor Yamamoto.  Let me pull up some of the stuff, and I will hit my three minutes.

First, when he says structural design: it is the design, it is by design, meant to spread lies faster than the facts, right?  In 2018, MIT said lies spread six times faster.  But now, in the new world that we have today, it's probably significantly worse, because all of those safety measures have been rolled back.  Twitter is now X, Facebook is now Meta; they have changed their names, but it's gotten worse.

So, that's the first step.  And Vera talked a little about the victims of it, that this is illegal in the real world, the online violence is real world violence, because your mind, your person on the cell phone is the same as the person walking in the real world.  There's only one person that's being influenced by all of this.

And I think the key part to this, let me say, with all respect to Nic, I have many friends on the Oversight Board, but I think what we have seen from the platforms is the three Ds: deny, deflect, and that leads to delay, which means more money, right?  If more lies spread than facts, you don't have facts, you can't have truth, you can't have trust.  This is part of the reason it is significantly worse, and that's only the first-generation contact.

Finally, with what Randi said, voluntary is nice.  We tried it in 2016 through 2018.  And it didn't work.  So, the question is, what can we do differently today?  Because the harms are going to be significantly worse.

And, again, if you don't have facts, you don't have truth, you don't have trust, you don't have integrity of elections.  And if you don't have integrity of elections, you get statistics like what we have right now.  V-Dem in Sweden matches Freedom House in the United States, but V-Dem pointed out that last year 60% of the world was under authoritarian rule.  This year that number went up to 72%.  2024 will be the tipping point for the world.

Thank you, EU, for the laws that are coming out.  But I joke and say, it is the race of the turtles.  While the technology is coming out, every two weeks, right, it's agile development.  We must move faster.

I sit on the Leadership Panel of the Internet Governance Forum, so you can see how pushy I can get.  We must move faster.  And we cannot rely on the tech companies alone, because their motive is profit.

We must move.  And, again, thank you.  The EU is putting in place things that, frankly, are still too late.  2024, look at the elections.  Taiwan in January.  It's getting clobbered by disinformation from China right now.

In February, Indonesia, the world's largest Muslim population, the son-in-law of the former president is the front runner.  So, you may have a repeat of the Philippines.  If elections were held today, you have the EU coming up, maybe Canada, the UK.  You have, of course, the United States.  Anyway, I can't list all of them.  It's just we are there, folks.  This is it.  We are looking at the abyss.

And that's not to mention the answer to the question, which is: those who are harmed the most are the most vulnerable, the ones where institutions are weaker, the Global South, the ones who are first responders.  Last, and I'm 42 seconds over: misinformation is a term that the tech lobby would like you to use.  But you can definitely test for disinformation.

We are only a news organization, but we can tell the networks; we call them recidivist networks, as you would in counterterrorism.  You know who they are.  You can pull them down.  And that just means you will make a little less money.

>> DEBORAH STEELE: Thank you. 

Nic, you are next.  And just a reminder, the focus for this discussion -- we move on to regulatory measures further on, but the focus for this is what can people in minority groups, in groups that are discriminated against, what can they do to help protect themselves?

>> NIC SUZOR: So, I'm going to try to be brief and reclaim some of my time that I went over in the last two responses and I will stick to the question.

I think the first thing we need to do, if we are thinking about where next, is acknowledge, recognize, that power matters; that any machine learning technology that learns from an existing system of hierarchy, a world that is unequal, will likely perpetuate and, if it's useful, often exacerbate those inequalities.

So, with that in mind, I think when we are talking about responses, when we are talking about how we can help marginalized users and vulnerable communities, one of the things that often gets lost in this debate is that acknowledgment that it matters.  It matters where the speakers are, who the speakers are, what sort of power they have, what sort of networks they have.  It matters who the targets are, whether there is a continual existing risk of exacerbating violence.  Context matters.

But this is something that's been very, very difficult for tech companies, I think, to grapple with.  When tech companies provide error rates for their machine learning classifiers, for example, they are only high-level figures.  And you can say, we can classify hateful content with 98% accuracy and we have removed 100 million pieces of content before anyone even sees them.

That, actually, doesn't help address this major problem.  If power matters, then there is -- it is incumbent on the tech companies to do more to proactively look at how their tools are being used against and by marginalized communities.
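A toy illustration of this point, with entirely invented audit numbers: a classifier can report impressive aggregate accuracy while the rate at which it wrongly removes legitimate speech falls far more heavily on a marginalized community, a gap the single headline figure never reveals.

```python
# Toy audit with invented data: aggregate accuracy hides very different
# error rates across communities.
from collections import defaultdict

# (group, actually_hateful, classifier_flagged): hypothetical audit sample
samples = (
    [("majority", False, False)] * 960 + [("majority", False, True)] * 10 +
    [("majority", True, True)] * 25 + [("majority", True, False)] * 5 +
    [("marginalized", False, False)] * 70 + [("marginalized", False, True)] * 20 +
    [("marginalized", True, True)] * 8 + [("marginalized", True, False)] * 2
)

correct = sum(1 for _, truth, flagged in samples if truth == flagged)
print(f"aggregate accuracy: {correct / len(samples):.1%}")  # ~96.6%, looks great

false_pos = defaultdict(int)   # legitimate posts wrongly flagged, per group
legitimate = defaultdict(int)  # all legitimate posts, per group
for group, truth, flagged in samples:
    if not truth:
        legitimate[group] += 1
        false_pos[group] += flagged

for group in legitimate:
    rate = false_pos[group] / legitimate[group]
    print(f"{group}: false-positive rate {rate:.1%}")
# majority ~1.0%, marginalized ~22.2%: one headline number hides the gap.
```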

This is one of the things that when we keep talking in terms that are neutral, when we disregard the power, we lose on this point.

I think it's incredibly important that we continue.  We know that vulnerable people experience a greater proportion of abuse and misinformation.  We know that people from marginalized backgrounds have a harder time seeking a remedy and dealing with it.

Given those things, why aren't we doing more, proactively, to ensure that the systems that we are building are built with historical inequality in mind, and have built-in safeguards against perpetuating hierarchy?

I think it's easy to get distracted on tools that people can use to help structure their own experience.  I think autonomy is incredibly important.  But I don't want to see our solutions focused on putting the burden back on the people who are already marginalized.  We need to build that in.

>> DEBORAH STEELE: Thank you.

Vera Jourova, what are some of the regulatory challenges to addressing these issues?

>> VERA JOUROVA: I might simplify it in only one sentence: the cure must not be worse than the disease.  Which means that when we regulate the online space in the EU, we always have in mind that we have to keep the freedom of speech as the main principle.  It must not happen that, in the near future, the uncomfortable opinions of others become disinformation from the point of view of those who have the power at that moment.  This must not happen.

But I want to say, and I always start with this when speaking about disinformation: it has always been with us.  But with the existence of the internet and social media, it's being spread at supersonic speed.  And the intensity and the massive impact are really the matter of concern, especially now, when we are close to the war in Ukraine; the EU space is flooded with Russian propaganda.  And it's always the same narrative: the aggressor says that he is the victim and the victim is the aggressor.  We will see a lot of it now regarding the horrible war starting in Israel.  We will see a similar narrative.

So, I want to say that the disinformation serving the aggressive regimes, it's something we have to pay attention to.

Coming back to the time of peace, regardless of the wars: we started in the EU to apply the code of practice against disinformation, which is a voluntary commitment of mainly big tech, which cooperate with us on increasing fact checking in all our Member States.  We are still criticizing that they are not doing enough in the smaller states, with their languages.  You mentioned, Madam Director, the Slovak elections; this was exactly a case where we saw insufficient fact checking.

Next, demonetizing: the code of practice contains rules on how to deprive the producers of disinformation of their financial sources.  So, we engage the advertising industry.

Two more things.  Next to the code of practice, which is about -- mainly about the cooperation with the platforms, we are strengthening the independence and power of the media in the EU, which is key, because we want the people to have the places where they can get the facts.  We are not touching opinions.  We want the facts to be delivered to the people so that they can make their autonomous free choice.

If the people want to believe, sorry, stupidities, it is their right, but I think it's our obligation to enable them to have the facts available.  That's why we are supporting independent media.

And last, but not least, it's what the Professor mentioned, media literacy, but that's a long-distance run.  I think we cannot simply wait for the society to become resilient and ready for the everyday confrontation with disinformation, but we have to also work in this direction.  We have a lot of funding in the EU and a lot of projects with Member States.  But I know I am beyond time.  Thank you.

>> DEBORAH STEELE: Maria Ressa, your thoughts on the regulatory challenges.

>> MARIA RESSA: Yes.  I think the first is that as the first panel pointed out, everything is about data today, right?  Content is what we gravitate to, because that's what we were used to in the old world.  Really what we need is something that the DSA will now give us, realtime flow of data.  And that data then will be able to show you the patterns and the trends.

Once you have that, then you will be able to see the harms, right?  Which, frankly, that data is available to the tech company.  So, once there is oversight, you know, which is also what you do with news organizations when we used to be the gatekeepers, then civil society can come in and hold power to account.

So, the problem is not freedom of speech.  Anyone can say anything on these platforms.  The problem is freedom of reach.  And I am quoting a comedian, Sacha Baron Cohen, from years ago, where he said it is the distribution model that is the problem, right?  Who says lies should spread faster than facts?  Why do fact checks not spread at all?  And in order to get them out to you, we have to create a whole civil society influencer marketing campaign for facts, and all of that then gives more money to the surveillance capitalism model.

So, you go back to that design.  I, actually, was at the Vatican with 30 Nobel Laureates, and we told the Catholic church, isn't lying against one of the Ten Commandments?  Sorry.  I am going to try to joke because it's really difficult stuff.

Anyway, distribution.  And then finally, the last part, in terms of a regulatory framework, right?  I never use the word misinformation.  I use disinformation, as does Vera, right?  We agree on this, because it is insidious manipulation.  In the Nobel lecture in 2021, I called it toxic sludge, because it is laced with fear, anger and hate, and it will be worse with generative AI.  Because in order to fight that AI arms race, you are going to need AI to fight AI.  It is beyond our comprehension.  We can talk about this more later, in terms of how generative AI is now exponential compared to what came before.

What needs to happen is really simple.  There's a huge lobby trying to convince you it is difficult.  You don't need to know how to build an airplane to regulate the airline industry.  You don't need to know how to build this hall in order to put in place safety measures, to make sure the building doesn't fall down around us.

So, it's a simple thing.  We need transparency, which is what the DSA is looking to give.  I would suggest we move it beyond just academics, because civil society journalists, this is where we will look for accountability.

And then the second thing is, how do we define safety in the public sphere?  Because think about it like this.  It's almost like during the COVID, if there were no regulations in place for pharmaceutical companies, I gave vaccine A to that side of the room and then this side of the room I gave vaccine B.  There's no law against trying it out in the public sphere in real time, right?

Oh, vaccine A people, so sorry, you died.  We have these examples globally.  It's dangerous.  A toaster, Cambridge Analytica said this, a toaster, in order to get into your home, has more safety regulations than what we carry with us everywhere we go.

So, stop the impunity.  Transparency, accountability, that means roll out the code, but if there's a harm that happens, you are accountable.

>> DEBORAH STEELE: Thank you.

Randi Michel, how do we become better producers and consumers, better prosumers?

>> RANDI MICHEL: I think the important thing here is that in today's complex and rapidly evolving technological environment, as Maria just said, transparency is key.  The public needs to be given the tools to understand what information is authentic and what is synthetic.

And on this, I want to make three key points.  First, I agree with Nic that technological solutions are not everything.  They are not a panacea, but I think it's really important to remember that they are a key element of this equation.  And in that discussion, I think differentiating between falsehoods and synthetic content is really important.

While falsehoods are a public conversation, as you alluded to, synthetic content is something that we can identify and label, and granted, there is still conversation needed about how exactly to define synthetic content, but that's something we can work towards.

The Biden/Harris Administration believes that companies that are developing emerging technologies like AI have a responsibility to ensure that their products are safe, secure and trustworthy, and the voluntary commitments are a key part of that.

The second point I want to make is in response to your point about voluntary commitments.  I certainly agree that they alone are not sufficient.  And that's why the Biden/Harris Administration is currently developing an executive order that will ensure the federal government is doing everything in its power to advance responsible AI and manage its risks to individuals and society, and the administration will also pursue bipartisan legislation to help America lead the way in responsible innovation while mitigating the risks and harms posed by this technology.

And the third point I want to make is regarding the bottom-up approach that Nic was alluding to, which is the second component after the technological solutions: the civil society aspect, the empowering communities component.

Building resilience requires a bottom-up approach.  The U.S. Agency for International Development and the State Department announced the Promoting Information Integrity and Resilience Initiative, which will enhance technical assistance and capacity building for local civil society organizations, media outlets and governments around the world, providing the tools and training to develop effective communication and response measures.

These three things together are key, and combined they are necessary to build the resilience throughout the world that we need to address these kinds of threats.

Governments need to do our part to ensure the public can verify our authoritative government information, while at the same time the dynamic digital ecosystem will require continuously evolving technology, regulations and international norms.  It's incumbent on all of us, governments, civil society, and the private sector, to work together to make sure that AI expands rather than diminishes access to authentic information.

>> DEBORAH STEELE: Maria, how do we become better producers and consumers in this age?

>> MARIA RESSA: I think this is, again, a whole of society approach, right?  So, and let's start with our industry, the news industry.  We've lost our gatekeeping powers in many ways.  And I would say that began in 2014.  And one of the things we can do is begin to look at elections.  Not as a horse race, but as critical to the survival of democracy, right?  Critical to the survival of values.

So, I would say the first -- I mean, Rappler is working, for example, with OpenAI and working out how we can use OpenAI.  I have a team with them in San Francisco now looking at how we can use this new technology for the public sphere.  How can we make it safer for democracy?

The second part is that we need to tell our people how we are being manipulated and we don't do it enough.  We are fan girls and fan boys of the technology.  We need to get beyond that and look at the entire impact.  We are at a different point in society.  And, frankly, at a different point from when the Internet Governance Forum was created many years ago.  This is an inflection point.

The third part would be more transparency, more accountability.  We have a lot of values now that have come out: the Blueprint for an AI Bill of Rights from the White House that was done by Alondra Nelson; we have the OECD; and we will be unveiling #theinternetwewant from the Leadership Panel of the Internet Governance Forum.

The values are there.  We know where we need to go.  Now we got to get off our butts and operationalize this.  Thank you, EU, for moving forward.  But we have to do a lot more and it's great to hear what you said.

Oh, my God, I have 56 seconds left!  I'm sorry.  I went over a little before.  But I would say the last part is: civil society, where are you?  We need to move from being users to citizens.  We need to go back and define what civic engagement means in the age of exponential lies.

I know this firsthand.  You know, I have been targeted by an average of 90 hate messages per hour.  In order to keep doing my job, I had to be ready to go to jail for more than a century.  I have to have Supreme Court approval to be here in front of you.

But the time is now.  We need a whole of society approach.  And we need men and women who are going to stand up for the values we believe in.

>> DEBORAH STEELE: Thank you.

Tatsuhiko Yamamoto, your thoughts on how we can be better producers and consumers.

>> TATSUHIKO YAMAMOTO: Yes.  Thank you.  Well, as has been mentioned by many speakers before, literacy is critical.  And if I may use a metaphor of eating or ingesting information: you must be more aware of how you ingest information.  And I often talk about information health.  I am promoting this concept of information health.  Because when you eat food, people are more and more aware of what they ingest, what they eat.  We want to trace back who produced it, with what materials, through what process, the ingredients.  We are increasingly aware of the safety of the food that we eat.  Maybe 50 or 100 years ago, we were not so aware of such consequences of food, but with literacy, we have become more and more aware of the safety of the food that we eat.

So, we must take a balanced diet, and that's something that we have learned over the years.  And the same concept, I think, should also be adopted in the field of information consumption, that is, information health, because I believe that we are eating a lot of data that is laced with chemical additives.

So, the concept of information health: who, using what materials, produced this data, through what processes?  Did they use generative AI?  I think we have to be more aware of this.

Also, I talked about the filter bubble and the fact that, rather than eating a biased diet of data, we must eat a balanced diet of data.  So, literacy is very important.

Now, I am the head of an ICT literacy subcommittee, a panel of the MIC, the Ministry of Internal Affairs and Communications.  We must communicate this kind of approach.  And I also would like to take this concept of ICT literacy and information health to the WHO for discussion, if I can.

So, I believe that we need to become increasingly aware of our health as we are exposed to such harmful data.

And once we can proliferate this kind of an approach, then we will be able to counter tech companies who are attentive only to disseminating harmful information for their profit, and we will be able to, of course, make them exit the market, which will increase our transparency and I believe that by so doing, we will be able to criticize such companies publicly, and that would lead to their restructuring or structural change.

So, I think we need literacy and also the concept of data or information health to become better prosumers.

>> DEBORAH STEELE: Thank you very much.

An excellent concept, and I think we can all see the value of that proceeding.

It's time now for our ministerial high-level respondents.  I would like to invite Mr. Nezar Patria, the Vice Minister of Communication and Information Technology from Indonesia.  Thank you.

>> NEZAR PATRIA: Thank you.

Thank you.  Excellencies, distinguished speakers and delegates, Ladies and Gentlemen, good afternoon.  First of all, thank you for having me as part of this panel.  Indonesia is very glad to be able to share our story, in the hope of advancing our collective effort to counter disinformation and misinformation.

As we all know, with increased internet usage, the distribution of false information to misdirect public opinion is also increasing, especially with the emergence of generative AI.

In 2021, the central statistical agency reported that 62% of Indonesian internet users reported seeing online content that they believed to be false or abusive.

Many of them also doubt the accuracy of news they read on social media.  This is, of course, concerning, since it might polarize our society, although Indonesia is noted as one of the countries that believe AI might bring a positive impact to their livelihoods.

Responding to such a situation, as one of the world's largest democracies, in Southeast Asia, Indonesia has been very active in promoting efforts to counter misinformation and disinformation.

Through the Ministry of Communication and Informatics, at the national level we have developed a comprehensive strategy to counter misinformation and disinformation by establishing a national digital literacy movement.

This works through digital literacy at the upstream level, debunking hoaxes at the intermediate level, as well as supporting law enforcement activities at the downstream level.

At the regional level, ASEAN has adopted a guideline on the management of government information in combating fake news and disinformation in the media.  This framework acts as a roadmap for the Member States to better identify, respond to and prevent the spread of false information.

Against such a backdrop, I stress the importance of countering misinformation and disinformation through the following measures.  First, we need our society to be more digitally literate, especially to be able to do prebunking of disinformation.  Only in this way can we better equip people not to fall victim to false information.

We must not, I emphasize, we must not let the bleak history of when our society fell for false information on vaccines and COVID-19 happen again.

Second, admitting that our digital ecosystem can no longer rely on the economics that incentivize the spread of misinformation and disinformation, we must do better in developing governance that incentivizes productive, meaningful and accurate information.

Last, but not least, we shall intensify our cooperative efforts to further technological adoption that is useful for countering disinformation and misinformation, especially investing in emerging technologies such as generative AI, which accelerates the generation of synthetic information in the digital ecosystem.

Indonesia believes that, through collaboration, we can nurture our society to be more resilient to emerging technologies that might threaten our society and well-being.  Thank you.

>> DEBORAH STEELE: Thank you very much.

Next, Mr. Paul Ash, Prime Minister's Special Representative on Cyber and Digital in New Zealand.  Thank you.

>> PAUL ASH: Thank you for the opportunity to be here today.

I don't have a prepared statement.  What I would like to do is just reflect a little of what I think I have just heard, and posit some ideas about where we might go next.

We have heard from the panel today some significant issues about the timing that we have to grapple with: while mis- and disinformation, however you define them, were previously an issue, the rise of new AI systems has really sped that issue up, and I think there's a really important takeaway for us there in terms of how we focus on finding solutions together.

The second thing we have heard is the impact, the disproportionate impact on affected communities, particularly women and girls, particularly members of the LGBTQ+ community, particularly refugees and migrant communities.

And we have heard the importance of involving them in solutions from the very beginning of the process, difficult though it can sometimes be to create participatory environments that work well for them.

I think we heard a clarion call from Maria Ressa and from others about the need for urgency.  If the statistic that 72% of the world is now under authoritarian rule is right, that means that there are 44 states that are not.  And of those, many are challenged at the moment by the impacts of disinformation and influence operations on their institutions and societies.

That means we are starting from a really difficult place to start with.

And I think we heard a range of different solutions posited from regulation, as the Vice President outlined, through to societal solutions, literacy, et cetera.

But what I also think we heard was that none of those in and of themselves is going to be sufficient to solve this problem.  We are going to need, I'm sure my Swiss friends will like this, a Swiss cheese of solutions that enables us through layers to solve the problem.

Where does that lead to in terms of next steps?  The first thing that leaps out to me is the need for pace.  We do not have time to admire this problem.  We, actually, need action on it quickly and carefully.

The second, and it came out from this panel but also elsewhere, is the need for influence, for leaders who will speak up for the values that we share and that are under threat as a consequence of disinformation.

We also heard of the need for whole-of-society solutions in which citizens can participate, where governments can work alongside industry, civil society, and the technical community.  The phrase "civil society" is actually quite pertinent as this conversation works through from what is largely a government-centric perspective.

We heard about the need for that conversation to be based on common values and principles; international human rights law needs to be at the foundation of that.  So, too, does a free, open and secure internet, though there are many more adjectives we need to stick in front of the word internet to make that work.

And finally, we heard about the need to operationalize those values in a construct that actually works.

We have some experience of that in New Zealand, with what happened after the attacks in March 2019.  And I am delighted to see a full panel of supporters in front of me, those who helped stand with us against the scourge of violent extremism and terrorism and who sought a truly multistakeholder response, one that brought all of those parties together and continues to do so in trying to find solutions.

And I think that's probably the lesson I take away from the panel I have heard and from my own experience in this area.

This challenge of information integrity, and I use that phrase instead of disinformation and misinformation, this challenge of having confidence in the environment in which we operate, is perhaps the most urgent issue facing us amongst the stack of issues involved in the discussions around frontier models, generative AI, foundation models, whatever you wish to call them.

Because, if it's not handled correctly and well, it has the potential to undermine the very institutions that would enable us to govern all of the other issues that we are going to need to grapple with in the AI environment.  And, if it's not handled well, it has the ability to undermine the human rights principles that underpin everything we do.

And, as Maria Ressa and others have so eloquently articulated, there is a pathway from information integrity issues to radicalization, to violent extremism and terrorism.  Those issues are linked in an age of algorithmic amplification.

I guess the one thing I would say here in closing, and it's something that's come through from this panel very loudly, and I expect we will hear it from civil society over the next two or three days, recalls the advice of one of my mentors as we were doing the Call work.  The solution to this will not be entirely generated by states.  It needs to be a solution that industry participates in because, actually, longer term, beyond the next quarterly reports, they have as much of a stake in thriving democracies underpinned by human rights as anybody else.  The same goes for civil society, coming from a different perspective.

So, I would say: as we tackle this together, let's think really hard about the institutions we have and whether they are fit for purpose.  Our experience of multilateralism is that it often moves much slower than technology, and we need to find a way to create truly collaborative, multistakeholder institutions that build from the ground up.  Thank you very much for the expertise and wisdom of this panel; I think it has distilled some things we can take forward from here.

>> DEBORAH STEELE: Excellency, thank you very much.  We are going to have time now for a two-minute summary from each of our panelists, two minutes only.  Eyes on the clock.  Nic, would you like to begin?

>> NIC SUZOR: Let's go.

I think we might be behind our quota on multistakeholderism: multistakeholderism, multistakeholderism, multistakeholderism.  This is really important, because there's a tendency to look for a single solution here, and there is no single solution.  There are limits to what governments can achieve.  It's not possible to make all of the harmful content that we are talking about today illegal.  Much of the harmful false content that is spread, the content that leads to bad outcomes, is perfectly lawful.

So, as much as we would like, governments can't be the sole arbiters here.  Governments are also constrained because sometimes they are the worst actors.  That 72% statistic sounds terrifying.  I am very concerned about requests for removal and censorship of content that come from state actors trying to keep hold of their power.  I think that's incredibly difficult, and there's a responsibility on private actors and on civil society to resist accordingly.

There are limits to what the private sector can do as well; I certainly don't trust the private sector either.  We have seen some progress from tech companies, but we also have to be really careful about the other components.

I don't trust mainstream media to have the answer to this either.  Most of the disinformation that we see is picked up in a cycle of amplification by both mainstream media and social media.  That makes it really difficult, and it means we have to work together.

>> DEBORAH STEELE: Thank you.

Randi Michel.

>> RANDI MICHEL: Thank you.  I would like to make four quick points to summarize what I said earlier.

First is the importance of governments implementing authentication and provenance measures.  We hope to collaborate with the governments represented here in this room and around the world on building out a global norm on that issue.

Second is the key role that technology companies play in providing transparency to their users, which we have tried to encourage with the voluntary commitments.

Third is the importance of engaging civil society, and the need for multistakeholder engagement, as mentioned previously.

And fourth, and this is an issue that I think we haven't talked about enough today, so I really want to emphasize it: the need to ensure that these efforts to advance transparency do not evolve into censorship or infringement on internet freedom.  The best way to address disinformation is not by limiting content, but by disseminating accurate information.

The growth of generative AI does not change that.  We look forward to continuing to work with civil society, the private sector and governments from around the world, from developing and developed countries alike to address these evolving challenges in a way that upholds human rights and democratic freedoms.  Thank you very much.

>> DEBORAH STEELE: Thank you.

Maria.

>> MARIA RESSA: So, I will also do three points for the two minutes that I have. 

So, the first: I have been critical of tech companies, but we work with every one of the tech companies out there, because they must be at the table.  They are the only ones who have the power right now to stop the harms.  They are choosing not to, and this is where I will use an ASEAN phrase: please, tech companies, look at your business model and moderate your greed, because it is about enlightened self-interest.  You do not want democracy to die.  You do not want to harm people.  It is not up to governments.

And the second is to the governments.  Governments are late to the game, but that's also partly because we all had to figure out what the tech was doing, right?  We were among those who drank the Kool-Aid, you know?  Rappler was created on Facebook.  If Facebook had had better search, I might not have even created the website.

But having said that, governments now know the problem.  Please, work faster.  In terms of speed and pace, you are talking about agile development, which rolls out code every two weeks.  So, we must come up with systems of governance that can address the code, or you stop the actual rollout of the code before it moves into business, or you make the companies responsible for what they roll out.

And the third one is to citizens.  This is it for us.  The journalists are holding the line, but we can't do this without you.  So, move from users to active citizens.  This is the time.

>> DEBORAH STEELE: Vera.

>> VERA JOUROVA: Yeah.  Thank you.  Three comments.  Hopefully I will manage in two minutes.

First of all, we, the democracies of the world, have to work together, because if democracies are not the rule makers, they will become the rule takers, and I think we will fail at something absolutely critical.

That's why I am also happy that we can work within the G7 on the AI code of conduct and other things.

Second, I spoke here about regulation, and Maria Ressa repeated that we are still too slow.  Maybe in the EU we are a bit faster than elsewhere.

But it must not be top-down only.  That's why, in the EU, we also believe very strongly in the involvement of civil society, of demanding citizens who do not want to be manipulated.  We believe in strong media.  We believe in the engagement of the academic sphere, because we need to understand what's happening, to analyze it, and to take well-informed political decisions.

Last thing: consumers versus citizens.  I worked as a commissioner for five years to protect consumers, and it was mainly about protecting, sorry, I have to say it, the stomach and health, making sure people were not poisoned.  We did not invest so much in keeping people's hearts and brains and souls from being poisoned.  And I think that now it's time to do something more with the citizens and for the citizens.  That's why we are also preparing the regulation on transparency of political advertising online ahead of the elections, because we need citizens to understand what's happening, not to get manipulated, and not to turn from individual citizens who should have free choice into an easy-to-manipulate crowd.  Because that could happen very easily online.

Thank you.

>> DEBORAH STEELE: Thank you.

Would you like to share your summary?

>> TATSUHIKO YAMAMOTO: Thank you very much.  Thank you for making me a part of this.  Disinformation is becoming very serious, but on the other hand we see some hope, and I was able to mention that hope today.  I think we are on the same page in our understanding, and everyone knows that we have to take action, so there is a consensus on this stage.

And also practices: we have to operationalize this, to take action immediately.  This is very important.  For that to happen, we need international collaboration and international dialogue like the one we are having now.  And platform companies, tech companies, need to be at the table too.  This is quite important.  Platforms are expanding, becoming gigantic and very powerful.  I am a researcher on the constitution.  A constitution exists to control the power of government, but constitutionalism is now emerging as a word applied more broadly.  In other words, not just governments but tech companies also need to be managed or controlled by some piece of legislation.  The government of one country cannot confront the platform companies on its own at this point in time.  Therefore, an international framework is needed so that we can have a dialogue with platform companies.  That is what I believe.

Also, one thing we have to focus on is structure: the structure of the attention economy.  If you look only at each and every piece of the phenomenon you encounter, you can't really bring any solution to the total system.  Therefore, you need to look at the ecosystem.

Bringing a solution to that is quite important, and quite difficult, because we are going to have to make a new culture.  But I have hope that we can do so together.  So, we would like to continue the discussion and exchange opinions going forward.  Thank you very much.

>> DEBORAH STEELE: Thank you.  And to complete Maria's list of elections in the next year: the world's three biggest democracies are going to the polls.  Indonesia, India and the United States all have elections in the next 12 months.

Please join me in thanking our exceptional panel, Nic Suzor from the Meta Oversight Board.

Maria Ressa, 2021 Nobel Peace Prize Laureate.

Ms. Vera Jourova, from the European Commission.

Ms. Randi Michel from the White House National Security Council.

And Mr. Tatsuhiko Yamamoto, from Keio University.  Thank you very much.

(Applause)

>> DEBORAH STEELE: If you would join me for an official photograph, now is the time.