IGF 2023 – Day 2 – Town Hall #25 Let's Design the Next Global Dialogue on AI & Metaverses – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> ANTOINE VERGNE: Can you hear?

>> ROBERTO ZAMBRANA: Yes, I can hear you. We are just waiting for Raashi to come. She was in another session, I think. We still have some minutes.

>> ANTOINE VERGNE: Yes, yes.

Town Hall #25, Let's Design the Next Global Dialogue on AI and Metaverses

>> ANTOINE VERGNE: Antoine, will you do the intro or will it be Raashi?

>> ANTOINE VERGNE: We can share it.

>> RAASHI SAXENA: Antoine will do it, and I'm the one facilitating the discussion.

>> ANTOINE VERGNE: Hello, everyone, my name is Antoine Vergne, I work at Missions Publiques. We work on citizen participation. It is my pleasure today to be with Raashi and Roberto to talk about the citizens' dialogue on artificial intelligence and its future. Raashi, do you want to say a word about yourself, and Roberto too? Then we will give you some input, we will have a discussion, and we will try to understand what the next steps of such an initiative could be.

>> RAASHI SAXENA: Thank you, Antoine. Hi, everyone, my name is Raashi Saxena, I'm from India. I was deeply involved in organizing the global dialogue back in 2020 on behalf of my country, India. I'm also a member of the scientific committee for the We, the Internet project, which we will be discussing further, and I'm really happy to be here. I'll pass it on to my colleague, Roberto.

>> ROBERTO ZAMBRANA: Thank you very much, Raashi. I want to welcome all the attendees to the session as well. It will be very insightful. I'm Roberto Zambrana, I come from Bolivia. I was involved in the dialogue in Bolivia a couple of years ago, and I am happy to be part of the scientific committee of the We, the Internet initiative.

And with that, I think we can go.

>> RAASHI SAXENA: Yes, moving on to you, Antoine.

>> ANTOINE VERGNE: Yes. Roberto, you can share your screen so we can have a short presentation looking back, and then looking ahead at what we can do together.

So it's about the dialogue on AI and metaverses. That is the program for today. And we can start with a short icebreaker, that's always a nice way to get together. Maybe we can think about the next question, and that question is: if you were able to ask an artificial general intelligence, something very advanced, one question, what would that question be? Maybe we can take 10 or 15 seconds, and some of you in the room or online can share with us what that question would be.

Think about it: you're in front of a general artificial intelligence and you can ask one question. Like in the Hitchhiker's Guide to the Galaxy, the question to the deep mind, you can ask one question. What would that question be?

>> ROBERTO ZAMBRANA: Anyone like to volunteer?

>> ANTOINE VERGNE: In front?

>> AUDIENCE MEMBER: Okay. Good morning, everybody. I'm Jane, a member of parliament from South Africa, and part of the ICT group in parliament, covering the Department of Higher Education and Training and science, innovation and technology.

One of my worries with regard to anything that has to do with ICT and AI is the issue of perpetual inequalities between those who are digitally literate and those who are not. I want to ask AI: what can we do to make sure we bring everybody along in terms of the advancement and transformation brought by AI? Cyber security, cybercrime and everything that has to do with human rights: what can we do to ensure we bring everybody along, so that as we grow our countries and the world, we grow everybody? Thank you.

>> ROBERTO ZAMBRANA: Thank you very much.  Very interesting question.

>> RAASHI SAXENA: Anyone else?

>> ANTOINE VERGNE: We have a question from Philipa: can I trust you? That would be the question Philipa would ask. Can I trust you. Other questions in the room?

>> AUDIENCE MEMBER: We have a mic there as well.  Or we can pass the mic.

>> AUDIENCE MEMBER: Good afternoon. I'm Amy Tadaka from Japan. I am working at a company called OSEN Tech; we collect a lot of governmental press releases in different languages and bring them into one language using AI, and we are trying to offer customers more official and reliable information in English. And my question is this: when you say let's design the next global dialogue on AI and the metaverse, designing is very, very tricky, you know. We try to avoid a lot of problems, but if we hesitate to move forward, we cannot move forward. So I would like to know how you think about the balance between preventing problems and moving forward to have a better world. That's my question.

>> ROBERTO ZAMBRANA: The balance between the threats and how we need to advance?

>> AUDIENCE MEMBER: Exactly.

>> ROBERTO ZAMBRANA: Thank you very much.

>> RAASHI SAXENA: That's a very valid question, given all of the tools we have and the concerns around the misinformation aspects of them, whether it is information being outdated or faulty information that could lead to reputational harm or gender-based violence, with the advent of what I would say are revolutionized and democratized video tools. That's a very relevant question to include in our dialogue.

>> ROBERTO ZAMBRANA: Yes.  Any other question maybe?  Someone else.  Yes, please.

>> AUDIENCE MEMBER: Thank you. Hello, everybody. I am a high school teacher at a high school called (?) in Tokyo. I've been trying to develop a metaverse with the help of AI, and I want international students to talk and build social skills. We are now trying to develop such a curriculum, but for now I'm only using a standard meeting system, and I wonder how we can ask for support from AI developers or metaverse developers. As teachers, we need to use those latest technologies so that international students can collaborate together, but we don't know how to ask for help from people in those research areas.

So if you have a position, I would like to collaborate with you, thank you.

>> ROBERTO ZAMBRANA: Thanks very much.

>> ANTOINE VERGNE: I'm not sure we have a solution, but we could ask the general AI to give us one; that would be one question to ask. Raashi, Roberto, what would you ask?

>> ROBERTO ZAMBRANA: Well, I think I would ask the AI what the AI thinks of itself.

>> ANTOINE VERGNE: Yes.  I would ask the same.

>> RAASHI SAXENA: From a curriculum perspective, the Indian government, in its AI strategy, has initiated something with, I believe, Intel, called AI For All, building a curriculum for school students in the central school board to give them education around that. And the Indian government has also partnered with incubators to start conversations around concepts of AI. So I'm happy to connect offline if that's of interest to you. But yes, Antoine, we could also share the AI curriculum we have with the dialogue; happy to share that as well with you to keep the conversation going.

>> AUDIENCE MEMBER: Thank you very much.

>> ANTOINE VERGNE: So then, thanks to Raashi for the transition, maybe it's time to look back at what we have done together in 2020. Roberto, you can show us the next slide.

What we would like to do is have a look back at a project we did together with many other partners. On the left side you see the strategic partners, and on the right side the partners in the countries. We formed a coalition, we did the design, so one of the questions is how you do the design, and we can talk about that in detail. We did the design and implementation of what we called the Global Citizens' Dialogue. What it is, maybe, on the next slide.

This is pretty easy: we take as many countries as possible, all over the world, and in each of those countries we select a group of citizens, ordinary citizens, citizens that are not engaged, non-experts, who are selected through random selection or through a system of stratification, to have a group representative of the diversity of the country.

It's very important: non-expert, non-engaged. These are what we call day-to-day citizens, everyday citizens, ordinary citizens, lay citizens, whatever you want to call them. These are people that live in the country and have an experience of the internet or not, because in some of the countries we, of course, also have people without an internet connection. Very important too.

And we gather them for one day of dialogue, and that day is the same day all over the world, and it goes through different topics. For each of the topics they get information, so we are talking about a curriculum, a very short curriculum in that case, but it covers the main information on a topic and the main controversies on a topic, and then they discuss that topic through one or two questions. Those questions guide the discussion, and at the end they give a collective answer to that question. So here you see the gender balance of our participants in 2020: we had almost 6,000 participants all over the world, and as you can see, we had a very good distribution of ages and a good distribution of gender. On the next slide you see the distribution in terms of occupation, which more or less reflects the global population. That was for us a check on what we had in those rooms: almost 80 rooms, almost half of them virtual, because it was 2020, at the height of the pandemic. But half of those dialogues were on site.

So these are the demographics.
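
A minimal sketch of the kind of stratified random selection described above, assuming hypothetical quota targets and a hypothetical candidate pool; this is illustrative only, not the coalition's actual recruitment tooling:

import random
from collections import defaultdict

# Hypothetical candidate pool: (id, gender, age_band). In a real dialogue,
# recruitment partners would supply far more candidates than seats.
candidates = [
    ("p1", "female", "18-35"), ("p2", "male", "18-35"),
    ("p3", "female", "36-60"), ("p4", "male", "36-60"),
    ("p5", "female", "60+"),   ("p6", "male", "60+"),
]

# Hypothetical quotas per (gender, age_band) stratum for a 100-person room,
# chosen to mirror a country's census distribution.
quotas = {
    ("female", "18-35"): 17, ("male", "18-35"): 17,
    ("female", "36-60"): 20, ("male", "36-60"): 20,
    ("female", "60+"): 13, ("male", "60+"): 13,
}

def stratified_selection(pool, quotas, seed=42):
    """Randomly draw participants within each demographic stratum."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in pool:
        by_stratum[(person[1], person[2])].append(person)
    panel = []
    for stratum, target in quotas.items():
        available = by_stratum.get(stratum, [])
        # Take at most the quota; a real process would over-recruit to cover dropouts.
        panel.extend(rng.sample(available, min(target, len(available))))
    return panel

print(stratified_selection(candidates, quotas))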

One of the sessions we had, one of the topics was governing artificial intelligence.  And we asked the citizens to take a couple of positions and to discuss and collectively to give their opinion on the governance of artificial intelligence.

What we look at afterwards is what we call the collective judgment: it's not only the individual opinions, but also some of the reasons discussed by the groups, and that's very important for us because we don't want an opinion poll, we want to understand what people think when they think. It's a kind of advanced way of asking people about complex topics. One of the questions we asked, and I'm sorry for the numbers below, it should be 0, 10, 20, 30, but there was a glitch in the numbers, so it's percentages: one of the things we asked the groups was to reflect on whether artificial intelligence was more of a threat or an opportunity. At the end of the day, when it had been discussed, the people said it was equally an opportunity and a threat, and on top of that, around 30 percent of the groups thought it was more an opportunity than a threat. The first finding we had was that, generally, people didn't see AI as something very, very bad in itself; it was rather a neutral or positive view of artificial intelligence. Next slide.

Then we asked them, and that's the big advantage of such dialogues, you can do qualitative work. We asked them to go over the priorities: which should be the priorities in developing AI systems and AI governance. As you can see, the highest one was that it should be aligned with human rights. Then we asked them some closed questions, more individual questions in that sense, and here you can also see, about the question of ethics and AI, they had the feeling that, of course, there should be ethics involved in the work of all of the different organizations. The next one.

And that is hard to read, but just to give you the general impression of it: on that question, we asked the people to tell us, as a group, if they saw more opportunity or threat in different fields of AI. The last one is where people had the impression it would bring the most opportunity, and that is research and development, science and research. They saw it as one of the fields that would bring a lot of opportunities and not a lot of threat.

Also, to expand on all those sentences on the left, which I'm sorry are not really readable, I can give you the last one, on development, because that's where we work. Since we are going to discuss what a new cycle or a new dialogue on that topic could be: it's very important in those dialogues to phrase controversies, because we all know that when it comes to creating policy and taking collective decisions together, very often what we have to do is solve tradeoffs, and that's where those deliberative processes, citizens' dialogues, work very well. Last one.

The one before.  Nope.

Nope.

AI brings advances in science and research, we should invest the money elsewhere.  That's on the left.

On the right, AI brings a lot of breakthroughs in science and research which benefit humanity. For each of those lines, you had a controversy, and they had to choose.

The first one is where they thought the most harm would come from: data is directed by those who would get profit and exercise power, that's the left part, versus data is used and organized for the common good and serves humanity. You see how people answered all over the world.

Next slide.

Then, of course, we also asked the citizens to move to governance: how should governance be done? Here at the Internet Governance Forum, it's interesting to see that for AI there was a big part of the citizens that wanted to have a global-level discussion and global-level governance for AI, more so than on other topics. You can show the next one.

And okay, maybe we can stop here. You can go one back. Raashi, Roberto, and I see that we also have Desiree online: do you want to add something on that experience in 2020?

>> RAASHI SAXENA: Maybe we can give a chance to our colleagues online. Anyone online? Juliana?

>> ANTOINE VERGNE: Juliana, yes.

>> JULIANA HARSIANTI: Hello, everybody, my name is Juliana, happy to meet all the participants. I think Antoine's presentation is quite clear about what has happened with the global dialogue on the internet since 2020, and it is nice to hear. Sharing the knowledge and sharing the experience about the internet, digital technology, and especially AI in this conversation, from different stakeholders and from different countries, because I know technology helps people in different ways, and every stakeholder, every country, every economic situation has a different experience of digital technology and especially AI. I think those different experiences bring us towards inclusion policies and best practices for a better application of AI in our lives.

I think that's enough from me. Antoine?

>> ANTOINE VERGNE: Thanks, Juliana.

Roberto, Raashi, if you want to add something.  After that, I can show a different example on a related topic and we can open the discussion.

>> RAASHI SAXENA: I want to check if ‑‑ Noha is online.

>> ROBERTO ZAMBRANA: Or anyone else that would like to share with us?

>> ANTOINE VERGNE: So maybe you can share again the presentation, I can do the second part of the input.

>> ROBERTO ZAMBRANA: Sure.

>> ANTOINE VERGNE: Here I will share another experience, done more by Missions Publiques and less by the coalition. It's a direct child of the dialogue, and of developments in Europe around citizen participation, and this is the European Citizens' Panels. Maybe you can show the next slide. So the context: in 2021 and 2022 we had in Europe a huge process called the Conference on the Future of Europe. The conference was launched by the European Parliament, the European Commission and the Council, and it was about asking the citizens of Europe about their views, wishes and recommendations for the future of Europe. This process was both at national and European level, it was both online and on site, and one of the key pieces of that process were the so-called European Citizens' Panels.

Those panels worked on the same principle as the dialogue: randomly selected Europeans representing the diversity of Europe, each speaking their own language. The process was different because it was not one day but three weekends, so a much deeper process of discussion, but with a smaller group of people, because there were 200 in each citizens' panel. In 2022-23 we had a new cycle of those panels with three topics, for which policies were being prepared by the European Commission. The first topic was food waste, because the Commission was preparing a directive on food waste. The third topic was learning mobility, the fact that you go abroad to learn and go back to your country, because the European Commission was preparing a text on that programme. The second one, as you can see, was about virtual worlds, because the Commission was preparing not a legislative text but an initiative on virtual worlds. Maybe you can show the next slide.

So yes, basic facts: 150 randomly selected citizens, with stratification, from all of the countries in Europe. Three weekends, and we had those citizens discuss with one another, so that's the VRS (?) on the left side. You can show the next slide, Roberto.

>> RAASHI SAXENA: Desiree wanted to make a few comments, if Desiree is still online.

>> DESIREE MILOSHEVIC: Yeah.

>> DESIREE MILOSHEVIC: This is Desiree Miloshevic. I wanted to make a few comments on the findings from 2020 that you presented earlier. I believe it's very important, first of all, the work that Missions Publiques is doing, and that's why we like to get engaged: to get out to the people who are otherwise not really close to the process of the international IGF or the global IGF. This is not one of their first priorities to think about. So from that point of view, I have always really liked how Missions Publiques tries to reach out to vulnerable sections of the population, but also to unions and to workers who are going to be really affected somehow by all the policies that we are discussing here.

When you pointed out that some of them wanted regulation at a global level versus, like, a regional level, I think it would also have been good to tease out why the deliberations went that way, and if you could possibly also quantify it somehow, to understand a little bit of the thinking behind it, I would personally find that useful. Of course, the moderators who were there at the time, speaking to the groups, really know, but I wonder how in the future we could present it in a little more fine-grained way. That was my only comment with regard to the first set of slides, but let's continue with this.

>> ANTOINE VERGNE: Thank you, Desiree, thanks. I thought you were only online, but you are also on site.

So you're both; you manage ubiquity. Congratulations for that. Let me finish this and then I'll come back to the question you had. I think it's a very important one for the future because, indeed, there is a lot in the data that we had.

So at the European level, the question the Commission asked the citizens was: what vision, principles and actions should guide the development of fair virtual worlds? We had them work over a couple of weekends and give their recommendations to the Commission. At the end, the output of that process was a communication from the Commission about what they called Web 4.0. So, Desiree, you wanted to talk about Web3; with the Commission, we are already at Web 4.0 in virtual worlds. This is very long, so you don't need to read it.

But it's part of the official communication of the Commission, and what is interesting is that they specifically mentioned the citizens' panel as being the inspiration for their legislation. Here, in terms of the impact of citizens' dialogue and its continuation, we can see what it can become. If you look at the last paragraph, the European Commission says the citizens' panel specified a set of guiding principles for desirable virtual worlds, and then they just list the values that the citizens had developed during the citizens' panel.

What I want to say with that is: in 2020 we had a more bottom-up approach, trying to have an impact on policy-making. The example we just showed here, from 2022-2023, was a more top-down approach, from a policy-making body asking citizens, ordinary citizens, really. Imagine: the people that came to Brussels to take part in that panel were not experts, they had no clue what the metaverse is, they didn't know the word, they didn't know about virtual worlds, but they took the time, they were guided, and they were able to give recommendations and the guiding principles they saw as important for the development of such worlds. With that, I would like to close the presentation. Now maybe the next one.

Now that you have heard that, we can have an open discussion in the time we have. Our motivation in having this session with all the partners is to imagine what the future of such a dialogue could be, what a new version of it could be, because we are still convinced, and I look at Raashi and the other partners, I think we are still convinced, that we need the input from ordinary citizens for the global discussion on those topics. So what could be the topic? Also, I would like to connect back to what you were recommending, Desiree. Indeed, in 2020 we asked them what the level of governance should be. That was a framing to understand whether they saw more global governance or local governance, and now, if we want to be more granular, as you said, we should also be able to understand: should the topic be the same or adapted to the context? I think that's exactly what you were starting to explore, Desiree. If we were to talk about a global dialogue on AI and the metaverse, should it be a common or a local topic? That's the discussion we wanted to have with you. I give the floor to Roberto and Raashi to introduce the discussion.

>> ROBERTO ZAMBRANA: We will now take participation from the participants, if you agree. We are asking whether you have an answer for this question: first, about the topics you think will be relevant to discuss regarding AI, artificial intelligence, and then whether they need to be adapted to the context of the country in which we are developing the dialogue.

>> RAASHI SAXENA: Although a lot of people weren't subject-matter experts, I liked how we were looking at a very diverse age group. We had participation from housewives, and we picked up participants from places where there is a lot of turmoil and anxiety. I come from a country that has the most internet shutdowns; the internet has been taken down for political reasons or otherwise. What came out of those conversations is that people do have a lot to say.

If you provide them with information that is concrete, that has the right data, and give them agency to be able to make their own decisions; in short, it was a very good educational exercise, and no matter what age, people are always going to be keen to participate and say what they have to say.

From that point of view, I thoroughly enjoyed the discussion, which was also very much a literacy exercise, which is something we all need.

>> ROBERTO ZAMBRANA: We do have some participation from the audience. Please.

>> AUDIENCE MEMBER: Thank you. Good afternoon, everyone, I'm Katna. I represent the Polish national research institute. In terms of a global dialogue on AI and what the main topics should be, I would definitely recommend child protection, child online safety, because it's a global phenomenon and problem: many harms to children are caused in the online environment, and digital artifacts of what has happened to children are also stored online. AI can serve as a tool to help, for example in finding materials among large volumes of photos or videos, but it can also create harm, if you imagine digitally created material such as child sexual abuse material, including the visual appearance of an existing child.

So we have different sides of the problem, but I believe this is a topic that should be discussed both at the global and at the local level.

>> ROBERTO ZAMBRANA: Thank you very much.

>> RAASHI SAXENA: That makes a lot of sense when it comes to AI being used for content related to harms, given the way content moderators have been treated when it comes to living wages, or when it comes to having to look at that content. Contracting that out to AI, which could be more precise, could also help in sifting through large volumes of data. AI would be a good application there.

>> ROBERTO ZAMBRANA: Yes, please.

>> AUDIENCE MEMBER: Just to follow on that note. I agree with that, but it also comes back to some of the results earlier. It's a classical theme: it depends on the type of AI we are talking about. Is it AI like ChatGPT that students are using, or that we are using for research, or that is used by students to, you know, cheat the teacher and skip learning experiences? Is it plagiarism, fake news? So it comes back to this classical skill of not just having access but also having the critical skills and thinking about what you consume online, particularly when we are looking at deepfakes and the like. On ChatGPT, there was an interesting study done, I think it was Oxford or MIT, on law students, and they actually found that poorer-performing law students, at bachelor level, using tools like ChatGPT actually increased their performance, whereas top-performing students dropped in performance because they were leaving things too late. They started not thinking as creatively about things as before.

These are university graduates. So there are differentiations I think we need to make: walking into AI, but also the metaverse and virtual realities, with open eyes and with critical minds, because there are pros and cons to these technologies. ChatGPT scrapes the internet and makes a proposal based on what the loudest voices out there are. If those loud voices are fake news or false information, well, that's the output. If you don't double-check as a consumer of these things, then we are in a dangerous situation. Then again, on the back end there are use cases to identify fake news, discriminatory use, et cetera, even copyright infringement; there are a lot of capabilities there. There are still all the classical debates about who is in charge of the algorithms; there has been a lot around image recognition and so forth, and how is that going to come into this debate? I think that starting with education, and ensuring we all critically assess what we consume and check alternative sources, is part of the solution, not just regulation or fear of using the technology.

>> ROBERTO ZAMBRANA: Please, if anyone else would like to share, participants online, you can raise your hands so we can give you the mic. We have several participants online; you're invited too. But we have another comment here, please.

>> AUDIENCE MEMBER: Okay, thank you. What I need to say here, after the question that I asked about what I would want AI to respond to, is to appreciate the point about demographics and public participation, because at times, when we speak about AI, people don't think you need to bring everybody along; people speak about specialists. Now it is clear that when you speak about anything that involves transformation, you need to have public participation. I'm also happy that from some of our fellow participants it came out that there is a need for continuous civic education on AI, and I think we should not shy away from the fact that AI governance is important, so that you deal with the issue of command and control. To deal with misinformation and fake news, it is important that AI is governed properly, as well as the issue of uniformity. Where you are, nationally or locally, is one thing, but having a uniform approach to digitalization and AI is important. We all have different languages, but the content must not change.

One of the things that holds us back in terms of development issues or studies is that when you come from America and I come from South Africa, I understand we are not the same and the content changes accordingly. But if we can agree that when we speak about artificial intelligence we keep to the same content, I think we can deal with many issues that might affect progress in the AI revolution. Thank you.

>> RAASHI SAXENA: Thanks.

>> ROBERTO ZAMBRANA: So, a bridging approach, meaning we need to agree on general terms independently, but somehow adapt the particular topics inside. Okay, great.

Anyone else in the room or online?

>> ANTOINE VERGNE: No question or feedback online.  So no.

>> ROBERTO ZAMBRANA: Another comment, please.

>> AUDIENCE MEMBER: Okay. So I wanted to follow up on what the previous participant commented on; the number of solutions perhaps led to that as well.

>> JAMES HAIRSTON: One of the questions one should be asking is what kind of AI, and what implementation, should it be. As has been mentioned earlier on, at the moment there is a plethora of AI models being developed, not just ChatGPT, and the training datasets are being developed too. There are different kinds of open-source AI models; as we heard in the main session, the Father of the Internet supports open-sourcing some of these models being developed. But in that light, I also wanted to say that it's important, when we present these choices to people who are not experts in the field, to always give them some understanding of the tradeoffs. For example, there could be a downside if only a few companies end up being the owners of the best datasets and having the more powerful AI algorithms. On the other hand, open source could make many more models possible, but it comes down to many other things, like the size of the dataset and whether it's biased. If you search for the CEOs of hospitals in the world and it's always a man, is that true? No. It's the wrong dataset, and it's not easy to fix that code to say there are CEOs of hospitals in the world who are not only men. Just a plastic example.

But on the question of whether it should be proprietary or open-source AI models, or training datasets, which are now more and more available, I think it's also important to think about the guardrails that are built into some of these big proprietary models. That means hate speech is not allowed, or they have these kinds of constraints built in, which are good, so it is maybe easier to control and regulate fewer of these instances of AI, whether ChatGPT or something else. On the other hand, with open source, we would not be bound to just using a couple of these models.

So there are certain tradeoffs, in that open source is not open source unless you can modify it.

For example, if you can modify it so that you allow hate speech, then we ask ourselves: is this really artificial intelligence simulating human intelligence? If it is intelligent, it should not be suggesting the propagation of hate speech, and so on.

Then there is a set of copyright issues as well. So there are all these questions that we could work on, because there is a rapidly growing set of developers that are making sustainable AI models and different kinds of ChatGPTs.

>> ROBERTO ZAMBRANA: Thank you.  Thank you very much.

One last round, if anyone, please. Yes, there is one more comment.

>> AUDIENCE MEMBER: Thank you for the very inspiring presentation and comments. I'm Amy from Japan. I'm working for a private company right now, but I'm involved in the educational arena as well, providing reliable information from various governments. I really understood that the participatory process is crucial in this topic, and now I have one new thought: that developers should be involved in such a participatory process. I feel like educating citizens as well as educating developers, although maybe 'educating' is not really the word I mean; but understanding the concerns and also understanding the front line is beneficial for both sides. So I feel that understanding, educating and learning from both fields is very important.

So I don't know of such cases, and I would like to know more about that. Thank you very much.

>> ROBERTO ZAMBRANA: A very important part of the dialogue will be to include not only the developers but all of the technical community related to AI; that's very important. Thank you for that comment as well.

>> RAASHI SAXENA: I wanted to.

>> ANTOINE VERGNE: Sure, it's from.

>> ROBERTO ZAMBRANA: From Philipa Smith, right. I'm just wondering whether a worthwhile question is on digital divides: how do they impact the understanding and use of AI from a global perspective? How can a global dialogue assist, to support each other to overcome these areas, so there might be a balance in the understanding and use of AI and metaverses? That's the question.

>> AUDIENCE MEMBER: Is that the question, it is a tool.

>> ROBERTO ZAMBRANA: How this dialogue can support overcoming these areas, so there might be a balance in the understanding and use of AI and metaverses. So, how the dialogue contributes to this.

Do you want to take that one, Antoine?

>> ANTOINE VERGNE: Yes, thanks. Before that, I wanted to make a comment on Amy's question about developers. I think it's really important indeed, and it's something that could extend the scope of that kind of dialogue, because they are part of the solution and part of the challenge. One example of something we managed to do: in 2015 we had such a dialogue at the global level on the climate agreement in Paris, same principle, with groups of citizens and questions about the Paris agreement; they got information, discussed, and gave their opinion. In parallel to that, we did one process with employees from NG, and you may know they are one of the big, big energy companies at the global level.

We had a process for the employees, so we had thousands and thousands of employees from NG taking part in the exact same dialogue as the citizens.

The interesting part was that they answered as citizens, as employees, and as stakeholders of the company. It was interesting to see that they had a double hat: I am an employee, and I am a citizen. It was an interesting thing. Thank you for reminding us that, indeed, having developers, having the people who also build the technology, it is very important to address them not only in their job but also as citizens in such a process. Thank you for that comment.

A clarification from Philipa, I see. I don't know if you see it, Roberto; she asks how countries that are more capable can assist others that might have issues, through a dialogue. So how can the dialogue assist the learning, the mutual learning, of different countries with different capacities?

>> RAASHI SAXENA: We did that in India in some places: we did two dialogues in places where, I wouldn't say internet penetration is low, but digital literacy and all these topics are usually not approached, and those dialogues happened during the peak pandemic period. So we did have two in-person dialogues in small settings, where we trained a lot of journalists to be able to run them and get outputs, and we realized that the format we had might not have been the best. We might need a better way of contextualizing that information. We did localize it in terms of translating it into the local languages, and yes, that was a lot of languages, but we feel like maybe a storytelling format, with a few UX experts testing out different ways to evoke responses, would work better, because that's not something people are used to. People are not used to talking that long about those topics. Maybe it needs more time. Yes, that is something we tried, and I'm sure there are other examples across the world that would have also worked. But also, coming back to the question of developers: yes, developers need to be central to conversations like this, to bring, I would say, more of a conscience and a moral bent to this. At the end of the day, they're humans too. And it is also for all of us to reflect and see that all of these issues we talk about, hate speech and misinformation, are not new phenomena that exist because of the advent of technology; they've always been there, they've always had different modes. Sometimes you needed more expensive infrastructure to enable these phenomena; now you use technology, which has made it cheaper and easier to spread propaganda. Developers should be at the center of the conversation. Thank you for writing that.

>> ROBERTO ZAMBRANA: Maybe a follow-up on the question from Philipa, after she clarified it. We need to remember that this process is initially, or mainly, local; it's between the citizens of each of our countries. I would say, and I don't know what you think, Antoine, that maybe after we gather all the results, all the conclusions of each of the dialogues, we could have a sort of round between the coordinators of each of the countries, to comment and maybe to identify which common topics are priorities that could be presented in the different instances where we need to share the reports. In that way I think we could accomplish what was suggested: to have this support coming from the countries that maybe have more experience in particular topics, assisting some others that don't. Maybe that could be a good idea. What do you think, Antoine?

>> ANTOINE VERGNE: Yeah, that would be fantastic.

If I dream a bit, the next piece, of course, would be to invite people from each country, participating citizens, to gather at the global level to also reflect on their own reasons. That needs a stronger infrastructure for the dialogue, but I think it would be fantastic to have that step and to be able to aggregate the different levels, those reasons, because I think that's one of the keys. The advantage of that material is that you can dig a lot into it and understand why people say what they say; but at the same time it's a challenge, because you have to analyze it, and maybe that's where artificial intelligence can help make sense of it better than a human mind alone. So maybe there is a full circle here: to have AI help us understand what people say about AI. That could be a nice way to close the circle and make the connection between AI and citizens' dialogues.

I think we can conclude, if I did it right.
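
A minimal sketch of the kind of AI-assisted analysis Antoine alludes to: clustering free-text answers into recurring themes with off-the-shelf tools. The answers, cluster count and labels below are hypothetical, not outputs from the actual dialogues:

# Illustrative only: group hypothetical citizens' answers into themes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

answers = [
    "AI should respect human rights and be accountable",
    "we worry about fake news and manipulation by AI",
    "AI can accelerate science, medicine and research",
    "governance must protect rights and human dignity",
    "deepfakes and misinformation threaten our elections",
    "research breakthroughs from AI will benefit humanity",
]

# Turn each answer into a TF-IDF vector, then cluster similar answers together.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(answers)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Print the most characteristic words of each theme and how many answers it holds.
terms = vectorizer.get_feature_names_out()
for cluster_id, centroid in enumerate(km.cluster_centers_):
    top_terms = [terms[i] for i in centroid.argsort()[::-1][:4]]
    count = int((km.labels_ == cluster_id).sum())
    print(f"Theme {cluster_id}: {', '.join(top_terms)} ({count} answers)")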

>> RAASHI SAXENA: One question.

>> ANTOINE VERGNE: I don't know the time; I'll let you in the room do the last round of questions.

>> ROBERTO ZAMBRANA: We have lunch after this, so we have license to extend a little bit. Mark?

>> MARK: Thank you, Roberto. Mark Carvell, internet governance consultant; I was previously with the U.K. government. It's not a question, just a point of information. It may have cropped up earlier, because I arrived late to this session from the main plenary session on the Global Digital Compact. I know from my association with Project Liberty that they are participating in a focus group on metaverses at the ITU; there are a number of working groups on metaverses at the ITU. I think that is potentially a channel for contributing the citizens' aspects of this, the evolution of the convergence of immersive technologies with internet technologies that will be so transformative, to those discussions. From what I understand, they are valuable and quite wide-ranging, and Project Liberty is of particular interest for decentralizing these technology platforms and ensuring they are properly respectful of ethics and rights and so on. I offer that as a piece of information in conclusion here; I hope that's helpful. Thank you.

>> ROBERTO ZAMBRANA: It really helps actually.

We were talking about AI during the whole session, but of course it wasn't just that; there are also other emerging technologies like the metaverse. Thank you very much for that, Mark. I think we are getting to the final moment of the session.

If we don't have any other comment, maybe we can wrap up.

>> RAASHI SAXENA: Yes, we can. I do believe we have one person, maybe; I'll just take a look around the room and see if there's anyone that has any last comments. Anyone at all?

>> Give her the mic.

>> AUDIENCE MEMBER: Thank you for giving me the opportunity to provide you with a final comment.

A lot has been said on the inclusion of developers and society and so on, but I do believe this is a strongly interdisciplinary issue, so there must be a place for every single specialist who has something to say. By this I'm relating to the metaverse and its instances, because, you know, I originate from the child protection environment. So we have to benefit from what we know from the past and from the research, and we have to check what is going on now. We need to make a bridge between the past and the present, and we need to listen to experts, such as, again, developers, practitioners, policy makers. It's a common responsibility, I would say, and by not including an expert from a particular field, we may simply overlook an important contribution.

So I think this room is a good example: there are many people from different environments, different angles, and we learn from each other.

This is the only way to proceed.  Thank you.

>> ROBERTO ZAMBRANA: Correct, correct, yes. Of course we need the learners, the teachers, the experts in the field, but also the users of the technology.

Okay.

>> RAASHI SAXENA: One last comment: they wanted to mention that when we talk about children, there are other vulnerable groups, such as people with disabilities, that we should be taking into account, and also, of course, different languages; we could go on and on. We should come to a halt, as I don't want to take away anyone's lunchtime. Thank you for joining us. Yes, we are going to be around at the IGF, happy to take more questions and have more discussions. With this, we come to a close.

>> ROBERTO ZAMBRANA: Maybe Antoine would like to say something.

>> ANTOINE VERGNE: Just once more, thank you for being there. One thing: our intention is not to stop involving citizens in those discussions. If you're interested in joining us in that effort of thinking about it and making it happen, we are open and would love to discuss with you how to do that together.

>> ROBERTO ZAMBRANA: Excellent.

>> RAASHI SAXENA: Thank you.

>> ROBERTO ZAMBRANA: Thank you very much.

[ Applause ]