IGF 2025 – Day 2 – Workshop room 5 – Open Forum #16 AI and Disinformation Countering the Threats to Democratic Dialogue( FINISHED)

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> IRENA GUIDIKOVA: Good afternoon, everyone. Welcome to the IGF Open Forum AI and Disinformation: Countering the Threats to Democratic Dialogue, organised by the Council of Europe.

My name is Irena Guidikova, Head of Democratic Institutions and Freedoms Department, Directorate for Democracy, Council of Europe, and I will be moderating this panel.

I would like, immediately, to thank my colleagues, Giulia Lucchese and Evangelia Vasalou, who is online, for helping me put together this panel and who will be producing the report and helping with this session.

We're going to be dealing with what is facing all societies today, not just democratic societies. I'm personally concerned about democratic societies, in the first place.

Artificial intelligence generating and spreading this information, but we will hopefully discuss the roles AI can play in limiting the spread of this information.

Combatting this information is the top priority for the Council of Europe as a human rights organisation.

For those not familiar with the Council of Europe, especially those coming from other countries is the older brother of the European Union, an organise of four to six member states with a particular focus on human rights, democracy, and the rule of law. We call it Europe's watchdog, democracy and human rights watchdog.

As such, of course, we are extremely concerned with the phenomenon of this threat of disinformation today.

The Council of Europe is always on the forefront of how technological development impacts our societies and the rights‑based ecosystem that we have created.

And this is why the Council of Europe prepared and opened for significant and notification last year the first international treaty on AI and its impact on democracy, human rights, and rule of law.

And now we are in the process of developing sector guidelines and also supporting member states and implementing specific standards in different areas, including in the area of freedom of expression.

The Council of Europe has been at the forefront of analysing AI‑generated disinformation and the rights for rights‑based and pluralism ‑‑ ecosystem.

We have uploaded, as background to the session, a copy of this, in case you still want an analog copy of the information of the guidelines.

This note offers practical guidance, really very, very specific and detailed pointers of how states and other partners in this democratic system we're trying to protect.

Stakeholders can fight this information in a rights‑compliant manner. Now, it's a soft instrument. This is not a treaty, but it has collective wisdom of our member states and experts.

Some are sitting around here.

It's useful and organised around three pillars, the main things suggested here. And, mind you, in the time when it was actually developed and written, AI was yet not that prevent and prominent. So it's not so much AI‑focused, but it lays out the foundation of main actions at the policy level that can be undertaken as a part of, hopefully, comprehensive strategies to try and limit disinformation, if not completely annihilate it.

Platforms are urged to include fact‑checking into the content systems. Unfortunately, we've seen in the past few months that the platforms have been disengaging from that commitment. And that's an issue in itself.

Platform design is the other pillar of this information‑strategic approach. The guidance note advocates for human rights by design and safety‑by‑design principles. These are keywords we've been hearing throughout the IGF and in previous editions. These are the basic principles of society we would like to see.

In order to favour nuanced approaches to content moderation and content ranking preferable to blunt takedown approaches, and, perhaps, we can explore how AI can help us achieve a nuanced content‑moderation approach.

And the third pillar of that approach is user empowerment.

I particularly see that your empowerment is becoming more and more prominent as a tool, as a strategy, as a way to fight the content of this misinformation. And that includes all kinds of initiatives at local level, community‑based, and local collaborations between civil society, platform, media, and users.

There's also technological users to empower our users.

I'm sure our panellists will tell us about them.

Following this note, we're still working on the role of AI in creating a resilient and fact‑based approach.

We're working on things that will be more practice‑oriented. We support through field projects in which we work directly with our member states.

Just to introduce the panel ‑‑ and I will give the floor to our first panellist in a minute.

Our thinking in the moment ‑‑ and these are the areas that we're really looking into, in more detail, as policy strategies ‑‑ how to resource public service media. It's always been the cornerstone of truth‑based and quality information system, and they are threatened.

So we need to find ways of strengthening public service media but also to enhance the capabilities of the regulatory authorities, the mandates, the independence to mitigate the rapidly evolving digital environment.

Another line of thought is how to demonetise the disinformation economy to cut off the financial incentives that helped to amplify this information content.

And then, indeed, another topic that's been very much on the surface here, how to enforce correlation and strengthen regulation. There are dissenting voices against regulation, but the Council of Europe is moving toward a full ‑‑ curation is a must.

Finally, finally, investing in the resilience of users. We're talking a lot and there's a lot of debate in public research about the supply side of this information. How do we ensure the content produced and visible there is less disinforming or less harmful?

But what about the demand side. I'm putting this in inverted commas because the demand is not necessarily explicit or willing, but the use of information. What do we do about users and how do we make sure they make use and demand and go for quality content, even when it's there.

Our speakers, first, David Caswell, will tell us what is the state of the art?

What can we do about the problem? It's also part of the solution.

We have an amazing panel here for you.

Without further ado, I will introduce David, who is a product developer, consultant and researcher of ‑‑ and David is also a member of the Council of Europe and part of the expert committee on the notes of Generative AI and freedom of expression. That's forthcoming.

Keep an eye on the website. You will be informed about it by the end of the year.

David, please share your perspectives about the AI information, challenges, and hopefully some solutions.

>> DAVID CASWELL: Certainly. And thank you for the introduction.

Yeah, before I start, I just want to make it clear that I'm not an expert on disinformation. My expertise is around AI in news and journalism and civic information. So I focus less on the social media side of things than I do on news, but I think a lot of this applies either way.

Before kind of getting into what AI is doing in the space of disinformation, I want to just talk a little bit about what was happening before AI came along. So, if you can imagine, before ChatGPT and most of the ecosystem now.

The big thing that kind of changed and made this last 10 or 15 years of disinformation and misinformation activity ‑‑ the change that made that necessary was we changed it to a few‑to‑many shape to a many‑to‑many shape. So we went to where only a few entities could speak to an audience.

This was the change in the distribution of content, the distribution of information. This is the cascade that caused the change of activity over the last 15 years, including around disinformation and misinformation.

Again, before AI ‑‑ or generally before AI, it's kind of looking at how the response to that era of kind of disinformation went. I would suggest that it hasn't gone well. I would suggest there are not people on any side of the arguments that would suggest that it's been very success and worked well.

Just to go through some of the things that I think people think about here or perceive.

One is that it was generally ineffective. There was questions around scale here. The scale on the Internet and social media, things like fact‑checking provide a tiny drop in this vast ocean of information.

I think there's a certain alarmism around it. There's a lot of research coming out on this with different ways of looking at it, but, essentially, the net of it is that the concern around these things seems to be restricted to a relatively small portion of the population.

Most people think it's less of an issue.

I think there's a sense or a perception that there's a certain self‑interest around misinformation or disinformation activities, and that has to do with this overlap of journalism. So there's a sense of that as journalism was diminished and its power reduced in the Internet era, a lot of that activity kind of went over to the disinformation/misinformation space.

On political side, there's a left‑coding bias to the space. This is certainly what Mark Zuckerberg used as his justification for turning off the fact‑checking activity at Meta.

I think there's another politicalisation that's happening here. It's the elite politicalisation ‑‑ that's easy for me to say. The thing that's happening here ‑‑ and there's a good Hugo ‑‑ on this.

There's a reason why this is happening. It's fooling half of the population. I think that's been an issue.

I think there's an issue of most of this being anecdotal. Not just anecdotal but focus on the images, individual facts and documents, these kind of things. It's not the processes and the systems that do this.

These are just kind of my views, but I think it's general perception in a lot of parts of society and putting ‑‑ is not successful.

In the AI era, the thing that's changed now with ChatGPT ‑‑ it's been building long before that, but, roughly, since ChatGPT, this transition of creation of news and journalism to an automated form of news and journalism.

So this is quite profound because news and journalism and the creation of knowledge generally was one of the last kind of handmade ‑‑ I think there's a risk of accelerating shared narratives.

This has, obviously, been a building issue since the Internet came along, basically. But it's important to keep in mind that this is not just kind of a news or journalism kind of concern. This is happening in all knowledge‑producing activities. It's happening in scientific publishing and academic publishing. It's happening in government intelligence servicing and knowledge management. The fundamental mechanisms behind this are really kind of broad.

I think there's a second risk that's not perceived very widely about the ability with AI to extend disinformation beyond individual artifacts, like deepfakes or individual facts, to entire narratives that extend over many, many documents and images and videos and media artifacts and that extend over long periods of time, you know, days, weeks, months, years.

An example of this in the manual space is this ‑‑ programme that are people operating on narratives across the world. With AI, we're entering a space where that can be more accessible to agents and governments and other actors.

I think there's automated and personal persuasion at school. Think of this as grooming at scale. Those are words that are used.

There are papers that have come out on this recently. One is not a paper because they had an issue in the data collection, and so it wasn't an official paper, but, generally, it seems that these AI chat bots are already substantially more persuasive than humans.

You can operate these individually in a personalised way across the population. I think that's a new and significant risk.

I think there's another deep risk that is really underappreciated here, which is that as we start the use these models as core intelligence for our societies, that there are biases in these models.

And that's not ‑‑ you know, we talk a lot about AI bias in models now, and that's very true. There's bias in the training data. There's attempts to resolve this in things like system prompts or the reinforcement learning‑from‑human feedback that helps to train these models.

But even more fundamental than that, we're starting to see intentions to place deep biases into the model.

So this tweet here, this is Elon Musk who built this large language model. Grok. It was a truth‑seeking language model. It's been too truth‑seeking for his space. So he's been getting into this argument with Grok for a while. You ask Grok a question, and he doesn't like the answer and so on.

So he's just recently announced that they're going to use Grok to basically rebuild the archive on which they train the next version of Grok.

So they're going to write a new history of humanity and use that to train Grok.

You know, that's the example of building a large language model and using traits that are a risk.

So I think if you kind of go down all the way to the foundations or the first principles here, there's a new deep need or core requirement in the information ecosystem. This is articulated very well in this book by Yuval Harari.

We've depended for 400 years on these mechanisms, like the scientific method or journalism or government intelligence or like the courts. And these mechanisms have two characteristics. They're truth‑seeking, and they're also self‑correcting. So they have internal biases that move them towards the truth.

I think there's actually a third requirement in there that maybe wasn't apparent until these large language models got going. This is just a personal interpretation.

I think our methodology, or our mechanisms, they need to be specifically referenceable and verifiable and persistent in an archive, all the things that large language models are not.

Just to kind of turn to the opportunities here because the news is not all bad. The scale of opportunities, I think, is of the same order as the scale of the risks. There are some real opportunities.

I think one opportunity is this possibility that we might have news or journalism or civic information or societal information that is systematic instead of selective. In other words, the scarcity around collecting and processing or communicating this information, those scarcity issues go away, and we have a level of systematic transparency that is vastly greater than it is today.

I think that's a very real possibility.

Scaling the amount of civic information in the ecosystem.

I think making it available to many people, regardless of style preference or language or whatever, but we can adapt this to each individual that's a significant new thing. Those things together, the scale and the accessibility means that we really do have, I think, this possibility, if we were to build towards having relevant, accessible societal beneficial information available to everybody at a much, much deeper degree of personal relevance than we've had before.

Also, finally, I think one of the challenges of information now is it feels very overwhelming. We have a news problem at scale. We all have a personal sense of being overwhelmed by information.

I think AI helps us address that.

I think the thing we're primarily being overwhelmed by is units of content and not information.

And we have this new possibility here with AI to not just have dramatically more information but feel more in control and more master of that information.

So just to wrap up here, I would like to suggest what we might aim for as an opportunity in this AI‑mediated information ecosystem that's forming.

And I think it's worth looking at this in terms of continuum.

Medieval ignorance to God‑like ‑‑ maybe like a Star Trek level of awareness. We went from people not knowing much about their world at all only through their experience but through these cultural adaptions to those inventions to what we know about the world is just staggering compared to what would be available to an average citizen in 1485.

There's no reason to think we've stopped at the optimal place on the continuum. We're at the place where the democratic dialogue is as good as it could ever be or was recently as good as it could ever be.

I think we have a way to go. The AI now diffused with the right governance and right orientation and so on, that could move us a considerable way up the continuum.

If we get something like AGI, maybe in like five or 10 years, I think that can move us even further.

And then at a future point, hyperintelligence moves us even further again.

I think there's challenges, government challenges and safety and security challenges. But, I think, as an ideal, trying to move to the right of the continuum is a good place to be.

I will leave that there.

>> IRENA GUIDIKOVA: Great. Thank you, David. That was a lot of food for thought. Personally, I was struck by how the AI can create a quasi-reality.

How do you fact‑check your way out of a completely alternative reality? Obviously, you can't.

I like your idea that we're spiralling and going up and up much faster than we can conceptualise it from information to content to hopefully information again.

But let's see if we have more practical tools we can do that.

Our next speaker is Chine Labbé, Editor‑in‑Chief and Vice President of Partnerships at NewsGuard. NewsGuard tackles things online, and she will explain how they do it and what are the results.

>> CHINE LABBÉ: Hi. Thank you very much for having me.

So I will start right away by explaining how AI is and has supercharged these information campaigns to this day.

The first thing we're seeing is malign actors are using text, images, audio and video generators to create contentious deepfakes, images, et cetera.

Just to give you one piece of data to illustrate that. Take the Russia‑Ukraine conflict. During the first year of the war, out of about 100 false claims that we debunked at NewsGuard was of Volodymyr Zelenskyy.

There are false claims we debunked that year. Of course, the conflict that was more recent, Israeli run, there's images that are fake that are circulating online.

This is a video that was shared as part of a Russian‑influence campaign. It shows a person whose identity was weaponised, but it was made to say that this person was sexually assaulted. The video is believable. If you Google him, you can see he was a student of Macron's wife.

Now, the second thing we've seen, in terms of how AI is supercharging this information is ‑‑

(Audio is poor)

>> CHINE LABBÉ: ‑‑ so create entire websites that look just like a local website that share maybe some reliable information and then some false claims, and that's entirely generated with AI, maintained with AI with no supervision.

This is an American ‑‑ a former sheriff that is now exiled in Moscow. He's behind 273 websites that he's created using AI with an AI server to imitate first local news station in the U.S. And in Germany, there were important elections.

These content forms have grown exponentially in the past years. We started monitoring them in 2023. May 2023, we had found 49 of them. Fast forward to today, there's 1,271, but that's probably the tip of the iceberg, what we're seeing and what we can actually confirm as being AI‑generated.

Why? It's cheap to create an AI‑generated new site. We did a test, a colleague of mine. He wrote about it in the "Wall Street Journal" and paid a developer in Pakistan. In two days, he had himself running a propaganda machine. It's password‑protected. It just two days and $105.

These are not always publishing false claims or running misinformation, but they're risks ‑‑ by definition, if you're run with AI with no human supervision, you have hallucinations, factual errors, and misinformation.

A recent one, the BBC did an experiment and asked questions to four chat bots base their information.

They found there were significant responses. In 90% of the cases, the chat bot had factual errors.

There were quotes that were never in the original articles or the that were modified by the chat bots.

Just a recent example of such hallucinations, a pretty reliable newspaper in the U.S. published a list of book recommendations for the summer. Very existing authors but very non‑existing books next to their names, along with some existing ones.

So all the more troubling.

Now, just imagine small errors like that slowly but surely polluting the news that we consume.

I would say this is a big threat to democratic dialogue.

Now, the third way, AI chat bots are often repeating misinformation narratives as fact. We're seeing a vicious circle where chat bots fail to recognise the fake sites that AI has contributed to creating, and they will cite things that are false. So you have information created by AI‑generated and then repeated through those websites and validated by the AI chat bots, a really vicious circle of disinformation.

Now, in early 2023, the idea that AI chat bots could be misinformation superspreaders was a hypothetical. We looked at it because it seemed possible. But, today, it's a reality.

Chat bots repeatedly fail to recognise false claims, and, yet, users are turning more and more to them for fact‑checking, to ask them questions about the news.

So we saw it recently during the L.A. protests against deportation.

This is a striking example. There was a photo of a pile of bricks that was circulating online as proof, evidence, that the protests were fake. The photo was from New Jersey. So it was not the California. But users online turned to Grok and asked Grok to verify the claim. Even here you have an example. Even when a user was pushing back saying, No, Grok, you're wrong, it would repeat and say it was true.

We saw a false claim saying that China had sent military cargo planes to Iran that was on fake‑flight‑tracking data.

We're doing that every day. We're monitoring the chat bots to see how they resist to false claims. What we're seeing month to month is that 20% of the time they repeat misinformation.

It also cites known disinformation cite as the sources.

It's not only an English‑language problem. It was done before the AI Summit. We did a test in seven languages and it's especially prevent in languages where there's less diversity of the press, where the language is dominated by state‑sponsored narratives.

Now, if I just told you we had put a drug in the open to sell in the commercial world where 26 of the pills out of 100 were poisoned, would you find it okay?

That's the question we have to ask ourselves when talking about misinformation and AI.

The last thing I want to raise here is this vulnerability that we're still failing at putting guardrails against is identified by malign actors.

They know by pushing Russian narratives, for example, they can change AI and influence the result to chat bots.

This is the Pravada Network. It's in 40 languages that's a (?) Machine for propaganda. There's no audience. The websites have very little followers and little traffic, but the goal is to saturate the web results so chat bots will use their content.

We did an audit and found that 30% of the time, the chat bot would repeat as fact.

We did the test two months later. It had gone down 20%. We don't know what mitigating measures were put in place.

I will end here because I'm running out of time.

To conclude, with Generative AI, these can do less with more impact.

As David said, it's just the scale that's changing dramatically, the automation. And they can influence information given through AI chat bots and through the process of grooming.

On a positive note, yes, AI can help us fight disinformation. So we're using AI for monitoring, for deploying fact checks. We can use Generative AI to create new formats of presenting content, as long as the human is in the loop.

But it's hard to foresee a world in which we'll be able to label all false, synthetic disinformation. So that's why a very important factor today is also to label and certify credible information and allow users to identify credible information.

Trust my Content is an example.

I think that's a very positive way forward.

>> IRENA GUIDIKOVA: Thank you, Chine. I think we've entered the era of mistrust. We cannot trust anyone, not even news sites that may be fake.

I think this is dangerous, not just for the dialogue. Democracy is ultimately built on trust.

Maybe we need chat bots that are trained only on reliable data.

Unfortunately, efforts to defend legitimate media, editorial media, by stealing the content from training the chat bots, that left the chat bots only scraping the dark corners of the Internet, so their content became even more ‑‑ less trustable.

Anywhere, for the moment, it seems like doom and gloom.

I hope you have digested your lunch. A lot of this is hard to digest.

The next speaker is Maria Nordström, Ph.D., Head of Section, Digital Government Division, Ministry of Finance, Sweden.

She worked with AI policy, in particular. She was ‑‑

Please, Maria, the floor is yours.

>> MARIA NORDSTRÖM: Thank you very having me. I've had the privilege of being on the Council of Europe. The main task of the committee was to look at the framework on AI that was mentioned. It was adopted and open for signatures last year.

It's the first global treaty. It's open for signatures, not just the Council of Europe but it can be signed by other states as well. Since it's open for signature, it's been signed by Japan, Ukraine, and others. It's a global treaty, in essence. There's fundamental principles and rules that safeguard democracy and the rule of law throughout the AI lifecycle.

While, at the same time, conducive to progress and technological innovation.

So, as we've heard and, as I think we know, it's improve integrity of information but, at the same time, valid concerns have been raised.

The integrity of democracy and its processes rests on two fundamental assumptions, that individuals possess both agency and capacity to form an opinion and act on it as well as influence. So capacity to affect decisions made on their behalf.

AI has the potential to strengthen as well as to undermine these capacities.

So David mentioned at scale. That's an example of how these capacities can be very efficiently undermine.

So it's not very surprising that one of the core obligation under the Council of Europe's AI Convention is for parties to uphold measures to protect individuals' ability to freely form opinions.

So these measures can include measures to protect against foreign influence as well as countering the spread of disinformation.

As we heard, AI can both serve as a tool for spreading efficiently this information and, thereby, affecting the public sphere as well as to combat the information.

This has to be done with some thought put into it.

It's important to put safeguards and make sure AI does not impact democratic processes.

So right now, the committee is assessing an impact methodology that can be used by developers and other stakeholders to guide responsible AI development. It's a multistakeholder process with the civil society, the technical society being involved, and there's still time to contribute to this process, if you wish.

It's a great example of how we can go from a convention to developing a tool that will have value and can be used by practitioners and policymakers to assist with the democratic process and in dialogue.

Another safeguard that we in Sweden believe is important is AI literacy. So both understanding what AI is and also understanding how it can affect this information is crucial in addressing the challenges posed by the rapid advancement of AI technologies.

So the Swedish government has charged the committee with creating materials to understand AI, particularly with misinformation and disinformation.

So they will develop an educational tool that will be released later this year.

However, one of the things that we are thinking about, from a policymaker's perspective and the key challenge is to find the right balance between providing sufficient information without further eroding public trust in the information ecosystem.

So the important question here is: To what extent is it beneficial for the society when all information is questioned? What does it do with democracy? What does it do with our agency when we can no longer trust the information that we see, that we read, that we hear?

So finding that right balance and informing about the risks while not eroding the public's trust is a key challenge and something I would like to talk more about.

Thank you.

>> IRENA GUIDIKOVA: Thank you very much, Maria.

Indeed, the treaty is really important. It does need to be ratified, though. So I encourage everyone here who has any say or any way of doing advocacy for the signature modification of this treaty, do not hesitate to come to my colleagues and myself to find out more about it.

Our final speaker is Olha Petriv, Artificial intelligence lawyer, Center for Democracy and the Rule of Law (CEDEM), Ukraine. She's an expert on artificial intelligence and played an active role in the discussion of amending the negotiations of the framework on artificial intelligence with the Council of Europe.

You're observing the challenges that artificial intelligence posts for society. It's the war of aggression, and you also have ideas about tools that can actually help curb that phenomenon.

Can you tell us a little bit about it?

>> OLHA PETRIV: Yes. I want to start that in Ukraine, this information, it's not just a problem. It's something that we face and solve every day.

We have steps to fight this, and in Ukraine, we have already created an AI strategy and a roadmap on AI. It has a bottom‑up approach to help us right now to do our first step with companies and with our civil society so fight against this information during the war.

What it means is bottom‑up approach in Ukraine. It means we don't wait for the law to be passed, and we have two parts.

The first part consists of recommendation for society, for business, for developers, and, also, the other part is methodology of ‑‑ self‑regulation and other steps that help us not to wait for the law.

I want to share more information about self‑regulation, like the first step before this law. It's like a process. When companies come together to create their own Code of Conduct and to solve a problem. When companies don't use AI like in a ‑‑ way. That's why six months ago, our companies ‑‑ it was 14 companies, Ukrainian, they were created in Ukraine and worldwide, they created a Code of Conduct as it consists of eight main principles. It's implemented in business.

Also, after this is created, memorandum of commitment, there's a circulation and members of this circulation body and businesses that sign the Code of Conduct, they will report once a year about implementing these guidelines, our Code of Conduct.

As a result, Ukrainian society and other countries that can see how Ukrainian business is working with ethical AI. We can just check this and understand how it's implemented.

Our first step ‑‑ of course, we don't want to wait when this law will be in Ukraine right now or later. We know if we don't implement this ethical AI in our systems right now, for example, principles that are connected with transparency are a risk‑oriented approach and other things we have in Code of Conduct, and other companies want to be part of this protest.

As a result, we'll show insight to our country that we are using AI etiquette.

Also, after this and during this process to fight against this information, what we are doing all the time because of a lot of complaints that we have because of work.

Other important site follows what I want to tell today is part of children and AI and disinformation because children are using AI a lot too.

This information is part. They can spread and they can be, also, victims of this information that got created by using their faces. And we already have a situation in Ukraine. The name is ‑‑ and the face of Ukrainian girl was used to spread information that she don't like different schools in the U.S. As a result, Ukrainian refugees in other countries, they have distress.

So it's one type of how this information is being used.

There's children that are part of this process but are not ready for this. It was suggested created deepfakes by Russia. Also, right now, this information is spreading more and more in schools, not only in politics.

And what we can do is this. We're working on a lot of ways using AI in education process with children and, for example, UNESCO has created a lot of problems to connect this process in Ukraine where we spread to children how to mind their critical thinking better. Especially when you're leaving a country, when every day you're faced ‑‑ there's a lot coming out in the news. You have to mind your critical thinking better.

So, also, we are now working consistently. Not to ban AI from children but we have to give them knowledge. We have to use AI better and now how AI works.

Right now, strategies against AI information is the main strategic response to information. Also, we have to remind children how to resist different fakes that we have.

And when we're talking about this, for example, we're working to help children to develop critical thinking through ‑‑ not through lectures or different moral lessons but through AI companion.

It's easy to understand for children. As a result, it's better for education.

We will not teach children how to understand news and understand AI ‑‑ someone else will teach them how to think.

It's generational.

>> IRENA GUIDIKOVA: Thank you, Olha.

We have about 15 minutes for discussion with you, the audience, both onsite and online.

If you would like to ask a question and contribute with your thoughts, please use the two microphones on the sides of the room. Introduce yourself and go for it, and my colleague, Giulia, will be checking the Zoom site for any speakers online.

Yes, please.

>> FLOOR: Thank you. My name is mic ‑‑ I'm representing ‑‑ digital information service in Finland. We've been, say, 12 years in the crazy world of information disorders. We try to focus where it matters. I subscribe to the education field. Okay. Perhaps we may be spoiled in Finland but still being able to trust and invest in teachers for a long time.

But what you said, Ukraine, you know what you are talking about because you're in the face of it.

But something I would like to specify is at what stage and how did you teach about the AI because of the big picture ‑‑ and this is what we learned when lecturing in the U.S. People tend to take the AI as granted.

I mean, what is the right age? You have to, first, think yourself, in order to use the AI for supportive tools, like you describe. I think that's a culturally bound thing.

Then a small note to Mr. Caswell, you have a good presentation, but we're working with the fact‑checkers as well as. Just a little bit of ringing. If you committed Mark Zuckerberg to ‑‑ left‑leaning fact‑checking ‑‑ what happened is Zuckerberg completely revised the opinion.

I think the fact‑checkers explained about the case. That needs to be clear for the files.

As a small NGOx we push the government to do the guidelines for the teachers. Without that, the teachers are lost. It's such a big thing. When they have the guidelines ‑‑ we made a guide on AI for teachers. It's been translated within The One‑year Project. I think that's the lowest‑hanging fruit to multiply for the generations. If you become AI native, so to say, without being literate on that one that's very scary.

Thank you.

>> IRENA GUIDIKOVA: Thank you for your contribution.

Let's take the other two questions and revert to the panel.

>> Francis: Hi. I'm Francis from the Youth DIG.

My intuition is there needs to be an original source when it comes to journalism. When we say it can be equally or just about as good by AI. Is that true? If you think about war generalism or people who go into conflicts or humanitarian crisis. A lot of us like to consume news about personal stories, so can AI really replace that?

And the second thing is people like to read the same content as other people because it unifies us in a way. That's why you have echo chambers. We read different publications, but why would that disappear. The whole screen for one or the media for one is not necessarily that attractive because it means that, then, when we go to dinner parties or see our friends or talk to people in whatever area of work we work in, we cannot the relate to people because we're consuming different news.

The last characteristic I thing is true but correct me if I'm wrong. If AI is generating all of our news, that's super fast. Maybe it's too fast for us to keep up with. So you have a comparative advantage if you have the original source and only a few true articles.

So I would like to know if you agree or disagree with those characterisations.

>> FLOOR: One of the lessons learned through doing a lot of education work regarding digital privacy and a lot of AI literacy skills, it seems like even if you do a lot of attempts at education, it is a very hard battle to be won because of the scale of the problem.

And I was wondering if there might be other ways of incentivising a lot of AI service providers to be more grounded on truth and reality, and what are some ways to incentivise them to do so.

Thank you.

>> IRENA GUIDIKOVA: Thank you so much. Actually, I had a very similar question for Olha. How do you actually motivate AI companies to go for the self‑regulation comply.

I will use my role as moderator to ask David a question.

You were saying that we need news that's not selective. I don't know what you mean.

So let's address all of these questions now. Who wants to start?

>> OLHA PETRIV: I want to say these questions are complex.

When children are using AI, we want to give them a safe environment. And what can we do as people who have committed to the process and can work with ministry, for example? Should we ask more and more companies that share services that children can use to do this in ethical way.

The other side of this, of course, is we have to work with parents and teachers.

We have this gap already when AI is here. It means that we should use even the result of ‑‑ and is a road in 2023 that AI skills are so important for children, beginning from the algorithm understanding and the other important steps that children have to understand when they're at school too.

So what about age? It's also important that people already use this in super‑young age. It's important to give children understand that AI can be a tool that helps them to find answers, to answer questions, and to help them to ask more and more questions.

Even if you will say about this AI leap, Estonia ‑‑ we understand that we have lessons, parts that are responsible for criminal thinking. It's the main part of this. Yes, it's my answer.

>> IRENA GUIDIKOVA: David?

>> DAVID CASWELL: Before I answer the questions, I would just like to make a comment. I really think this idea of an AI chat bot to teach children AI literacy is an absolute brilliant idea. I will have to noodle on that one. I think it's really, really smart.

I will go through these questions. And, please, correct me if I'm wrong. On the left coding of disinformation/misinformation, yeah, the Mark Zuckerberg situation, him specifically, is one thing, but I think the reason he's up there doing that, the reason he's making these decisions is because essentially half of the electorate feels the same way. I think it's a strategy that we have a partisan slant on this idea of untruthful information environment.

I think if there's ever a thing we should all agree on, regardless of which side of the political spectrum we're on, it should be based on fact and accuracy. So that was the point I was trying to make with that one.

I think on the fact‑checking specifically, there's a massive scale issue where no matter how much you spend on fact‑checking, you just can't keep up with the facts that need to be checked.

AI can be applied to only some kinds of journalism. It has to be accessible to AI.

There's a chunk of journalism. A lot of it is less in the field. People always bring up the war stuff. AI is not going to do it in my lifetime.

The ability of these systems to do other thing. For example, AI is interviewing people, these kind of things, email exchanges, all that kind of stuff. That's a very real thing as well. It's already starting.

On the use of publications as part of identity, you know, this idea of a shared ‑‑ you know, the "Economist" and some read different publications but what these take in terms of focusing more on identity, it's going to be more successful.

The problem is it will be successful for the economists and ‑‑ of the world, these are identity‑faced publications, but if you take news publications and move it into a luxury category, you're leaving out a lot of publication. So I think that's a challenge there.

The speed thing, I don't know. I think there are different ways to adapt to the new cycle or what is even a new cycle that's on.

I think AI can basically create the experience. You can create whatever experience you want. That's one of the examples there.

The other question about ‑‑ actually, in your question about systematic versus selective, the opportunity there is if you look at a domain of knowledge ‑‑ say the auto industry in Bavaria, right now, there's a lot of journalists that cover that, but they don't cover evening that is happening. They can't. There's not enough of them. There's not enough time.

So newsworthiness, they find things to report about. Every single PDF can be analysed.

Every post can be analysed. The automatic systems can cover all of it, only the stuff that's digitally accessible. Journalists have to pick and choose because they're in a world of scarce resources.

>> To address the question of AI versus on‑the‑ground journalism, before being in the news my life and having a hybrid role, we often didn't have money to go ‑‑ it could just be going around the street. But this allows on‑the‑ground journalism, and it's an opportunity.

The one thing I wanted to address is how to incentivise AI providers to base their systems more on the truth.

I think the first step here is to raise awareness. A lot of people don't realise. I think once users realise the scale of the issue ‑‑ so the more you have tests like the BBC one ‑‑ the more you have audits like hours that show repeatedly that chat bots' authority to believe and the platforms are asked to do better. If the users ask for more truth, then they will have to put in it guardrails.

The problem today is AI chat bots are not meant to provide you accurate information. That's not what they're meant to do.

As use is increased toward that end, we have to have consumers ask for more reliability.

The problem we're seeing now in the audits, the chat bots do worse in the month that they release new features.

What does that mean? The industry is focusing on new, sexy features but not on safety.

Usually that takes the back seat. I think it has to come from the users asking for it.

>> DAVID CASWELL: If I could build on that one. On hallucinations specifically, there's a website you can go to, the AI leaderboard that measures the rate. You can see a march over time going from resolution rates around the 15% range. The top models now are 0.7%. I think it's a lot ‑‑ than it looks. Omission is a big problem. We're in a transition phase with these tools, and they will get better.

(Overlapping speakers)

(No discernible speaker)

>> IRENA GUIDIKOVA: I will let the speakers conclude.

>> MARIA NORDSTRÖM: These are consumer products. We give power to consumer, but, at the same time, we're at hard law ‑‑ when it comes to hard law, yeah, we have the AI Act in the EU, for example, but it's hard because regulation requires the truth. I think that's quite difficult to achieve through that particular measure. So when it comes to incentives, I think it's very true, that we can empower consumers and lower the bar for consumers to understand and compare these products. Ultimately, there are various AI systems out there that you can use. We can help consumers make a conscious choice about which systems they are using.

>> IRENA GUIDIKOVA: Exactly.

>> CHINE LABBÉ: There's people that use ‑‑ for the news. It's going to grow spectacularly. They're betting that we're not going to put guardrails in.

>> IRENA GUIDIKOVA: Less than one minute. If you have three seconds for a conclusion?

>> DAVID CASWELL: I would like to emphasise that it's probably worth paying attention to the difficulties of the last 10 years of misinformation/disinformation response and not applying those or continuing those necessarily into the AI era. I think that means systems have a more strategic level focus, and the necessity of that is it's ideal. It's what we want the information system to look like, and we have to have that conversation first because then we know what we're steering towards.

>> OLHA PETRIV: It's important to remember that the target audience of this information and propaganda, sometimes and all the time are not only people that are ‑‑ people will want in the next year. And it's important to remember when we're thinking about this information.

>> IRENA GUIDIKOVA: Couple of highlights or food for thought because we need to conclude on an action note. First of all, very important for me, we need to source primary‑source journalism. We need to create a solid base for AI ‑‑ it will turn into a virtual world. We need AI literacy, including using chat bots to teach children AI literacy.

Perhaps we need certification for IA bots. NewsGuard monitors and alerts, but how many people are aware. Maybe we need a star system, like the user's ranking, so we know how trustable a particular bot is or maybe a particular public service bots.

AI is in the infancy, and it's even more in the beginning. Let's hope we can turn AI from a weapon to a force for good.

Thank you very much to the panellists, technicians, participants, and everyone else.

(Applause)