IGF 2022 Day 2 WS #240 Pathways to equitable and safe development of AGI

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> JOANNA MAZUR: Good afternoon, everybody.  I have a question whether our panel is actually starting any time soon?  What is the situation on the ground?

>> Can you hear me?  We'll start in the next minute.  We'll start shortly, yeah, in the next one minute.

>> JOANNA MAZUR: Okay.  Great.  Thank you.

>> THEOROSE ELIKPLIM DZINEKU: Thank you to those joining. Good morning, good afternoon, good evening, depending on where you're connecting from. Welcome to our session on Pathways to Equitable and Safe Development of AGI.

I'm excited to moderate this session and to delve into the discussion on AI. Of course, we have heard several discussions centred around that, but I'm really excited about the path we're taking into AI governance. We're going to look at policy, and we're definitely going to discuss some of the challenges we are facing. Before I introduce my speakers: why this session, why this topic? During COVID, it became necessary for us to really appreciate how relevant the Internet is to us. In those areas there have also been a lot of developments in artificial intelligence. Some have been good, some haven't been good. The major issue is how it is governed. It is quite a new area and a lot of people are still navigating it. We want to know what policies have been put in place to govern it. Our discussion is centred around that.

I have three wonderful speakers here with me, and I have Joanna Mazur online. My name is Theorose Elikplim Dzineku, from Ghana, and I'll be the moderator throughout the session.

On my right is Oarabile Mudongo and on my left is Umut Pajaro Velasquez. We'll quickly allow Oarabile Mudongo to introduce himself, followed by Joanna Mazur online, and then I'll come to Umut Pajaro Velasquez.

>> OARABILE MUDONGO: I'm Oarabile Mudongo, a technologist and policy researcher focused on public interest technology at the intersection of digital governance, policy and regulation. Currently I'm a member of the advisory committee to the African Internet Governance Forum, and I also sit on the expert group on emerging technologies of the African Union high-level panel.

Thank you for having me.

>> THEOROSE ELIKPLIM DZINEKU: Thank you.

>> JOANNA MAZUR: I'm Joanna Mazur from the University of Warsaw, an assistant professor here. I'm working in the area of European Union law and international law governing new technology uses, with a special focus on data protection and competition law.

Thank you again for having me.

>> THEOROSE ELIKPLIM DZINEKU: Thank you.

>> UMUT PAJARO VELASQUEZ: Hello, everybody, I'm Umut Pajaro Velasquez. I'm a consultant in artificial intelligence and development, and right now I'm Chair of a working group for ISO. My main research right now is to find a framework that includes the rights of people in the different policies and regulations that have been developed for artificial intelligence around the world.

I'm really happy to be here and to start this conversation on artificial intelligence.

>> THEOROSE ELIKPLIM DZINEKU: Thank you.

I would ask Bruce Tsai, our online moderator, to introduce himself as well if he's there.

>> BRUCE TSAI: I'm Bruce. I'm the online moderator. If you have questions at any time, put them in the chat and I'll speak up on your behalf. This is an area I'm interested in, and I'm looking forward to where the discussion goes.

Thank you very much.

>> THEOROSE ELIKPLIM DZINEKU: Thank you.

With that out of the way, I'll quickly jump into our first set of questions. I would also want all of us in here to feel free to ask questions and make contributions and submissions as well. We really appreciate your input in all of that.

As I said at the earlier part, during the first set of submissions, it is really important to note that we cannot dispute that AI is already in use in most developing countries; it is really fully in motion. Moving to Oarabile Mudongo, as somebody who is in the policy system: how do we handle the synergy between all of the stakeholder groups? I'm talking about government, civil society, the private sector, the technical community, and even academia, which some would place under civil society. How do we bring all of those together to really contribute to a more ethical AI?

>> OARABILE MUDONGO: Yeah. Thank you so much, Theorose Elikplim Dzineku, for that question.

I think for me, the discussion really starts when we frame AI through an African lens. My input in the discussion will really feed into the African narrative, looking at the developments that are emerging, given the work that I have been doing on the continent.

Framing AI through this African lens, I think the continent is playing a central role in the global AI supply chain, yet we have almost become consumers and a testing ground for these kinds of technologies.

At the same time, the foreign technology companies driving AI technologies on the continent are dominating the region through public-private partnerships, and we have also noticed many African states deploying AI-driven technologies while not practicing open procurement standards, with a lack of transparency as to how these technologies, products or solutions function at a community or societal level.

Also, I just wanted to point out one key challenge, one area that I feel needs to be discussed in this setting. It is the monopolistic strength of technology companies that are driven by monetary value through their supply chain models, and perhaps by surveillance capability, which I think establishes a new form of profit-making predominance. At the same time, by controlling and engineering this digital ecosystem, these big tech companies dominate the technology architecture, which gives them direct control over the political, economic and societal levels and layers of life; it is, I believe, a form of new sovereign control. The main issue here is centred around policy inadequacy for Africa. While we're seeing these many developments and at the same time adopting these technologies, do we actually have sufficient policy frameworks or standards to regulate how these technologies operate? Perhaps that's a question I will explore further in the discussion.

>> THEOROSE ELIKPLIM DZINEKU: Really, I'm glad you spoke about investment; I was actually coming to that. Going to Joanna Mazur online: one of the major challenges that we face across all sectors is that the more the private sector invests financially in policymaking and in the development of devices, the more control and input they seem to have. Because they're investing in it, they seem to determine the direction or the stream. So to you, Joanna: what are some of the challenges that we face in the area of ethical artificial intelligence when it comes to private investment, and how do we make sure that all the stakeholder groups are equally represented?

>> JOANNA MAZUR: You know, I wish I knew the answer.  It would be great to have, like, perfect solutions for these challenges.

I think one thing that's been missing for a while, and is only starting to be somehow present in the debate for the last couple of years, is the fact that we used to treat new technologies as something better left unregulated, left to develop at its own pace, in its own way. I think this has been changing over the last couple of years, and it is good that it is changing. We cannot let such an important sphere of the economy, and also of social and political life, just develop on its own without any regulatory measures addressing the challenges and the risks connected to it.

This would be the first thing that I would like to stress: the presence and the development of regulatory measures concerning new technologies is quite important. This sense of being empowered in this regard, the fact that states actually do have the right to regulate new technologies, is something that we should stress and acknowledge, and we should follow this path.

The other thing, which I think is important, is that we also sometimes tend to forget that there are regulations in place which somehow demand that AI tools be aligned with certain human rights or with certain values that we deem protected by law.

We have treated new technologies as if the old, traditional law, rights and values were not necessarily relevant to them. This is something we should also recognize: all provisions on non-discrimination should be considered important when talking about AI technologies, and if a company introduces products and services on the market, all the requirements that are foreseen in already binding legal regulations should also be followed.

This is the second point.

The third point I would like to make, it is that maybe it is also time for the states to actually become more active actors in terms of developing their own solutions.

Instead of relying on the private sector for the provision of services used in the public sector, for the good of citizens and so on, we could think about how to channel public investment into the development of sovereign technological solutions. So instead of treating the state as the client or consumer of the products and services provided by the private sector, we should think about how we can ensure that we have the funds and resources to build our own public solutions, which would be, for example, more transparent, more accessible, and based on more diversified types of data.

These are the points that I think are important in this context.

>> THEOROSE ELIKPLIM DZINEKU: Just moving on to Umut Pajaro Velasquez. One of the challenges that we are really facing with this new innovation is the ability to really serve globally. We have realized that some of the software is not able to recognize accents, not able to recognize faces, and is missing some skin colours; not because the innovation is not possible, but the whole idea is that it should be able to serve everybody globally. Right now, what can we do, in terms of the innovations, to make sure that in terms of representation it represents every stakeholder group, and in terms of jurisdiction it serves globally?

>> UMUT PAJARO VELASQUEZ: This is a question related more to artificial intelligence governance. It kind of came about as a way to solve all of the problems related to artificial intelligence with a set of principles or values that actually serve the different data rights that people have and the different ethical frameworks or standards.

Right now, we can find some efforts. For example, UNESCO has developed a set of principles; they want all of the nations to start developing different policy frameworks related to those principles, in order to protect the digital life of the person regardless of gender, race or economic background, protecting human rights in general.

There are other sets of principles that we should look at as examples, also efforts at creating artificial intelligence governance: the OECD principles and the G20 principles. Those are the efforts that are including voices from outside the Global North in the development of the different regulatory frameworks.

The idea behind the different principles is that every country can adapt them to its culture and way of living, because technology, as you said before, should serve the people who are actually going to be the beneficiaries or the users.

>> THEOROSE ELIKPLIM DZINEKU: Thank you.

>> OARABILE MUDONGO: Yeah. I just wanted to add to that question as well, based on the point I highlighted earlier on the issue of policy inadequacy. It gives me pleasure to share one of the interesting insights from the UNESCO AI policy report we just finalized in the AI Africa Observatory project. What's interesting is that in the AI regulatory and policymaking mapping we were doing, particularly looking at nine African countries, the mapping included national AI policies as well as provisions related to AI, data processing and automated decision making within ICT, competition and consumer protection law. What's really interesting about the preliminary insights from the report is that we have noticed an inadequate environment, particularly in many African countries, where the policies, legislative environments, legislative instruments and institutions are lacking the context to define what exactly the regulatory frameworks should look like for regulating AI technologies in Africa.

We think that with this kind of approach in Africa there is still a lack of understanding, especially on the government side, which I think needs to be addressed.

>> THEOROSE ELIKPLIM DZINEKU: Yeah, thank you for that.

I want us to move into another equally important area, which is more about the control and pursuit of AI and all of that.

We spoke earlier, we had earlier submissions on the role of investment, private sector investment, especially in how the innovations are governed, taken up or handled. So through the same lens of governance, I want you to focus on the socioeconomic factors that actually affect the innovation. Again, are there enough resources? Before going into that, let's have a quick look at IoT, which is in the same category: the Internet of Things, having smart homes. That's a good option, that's something we should look at.

We don't even want to talk about the data that is there and how protected it is.

It has to do with money, it has to do with a lot of investment.

Do you think, really, that developers, first of all, have enough resources to do that? If they don't, do you think that investment could have a positive feedback effect on how the devices are controlled or how those things are handled?

>> OARABILE MUDONGO: Thank you for that question.

When you look at the emerging debates in the global digital discourse around AI and emerging technologies, they are kind of centred around digital sovereignty. The contestation over control is around that, not only in Global North countries; we are actually beginning to see similar developments in Global South countries that are also trying to assert their position in relation to the Global North countries.

The contested governance of emerging technologies, I think, is part of the emerging global digital discourse seen through the lens of multilateralism, and is the result of a growing understanding of the benefits of the technology versus its implications for our society. Looking at it from the context of digital infrastructure development, however, I think we should understand this argument within the broader context of data, surveillance and digitalization, which is what many governments globally use to justify why they're driving these kinds of technologies. Grounding the governance of digital transformation on this already loosely used concept has the potential of blurring a useful distinction, from a technology policy as well as a governance and legal perspective, between what exactly enables citizens' benefits and the states' obligations, and what is open to abuse and exploitation.

In addition to that, I think the shift in the socioeconomic and geopolitical dynamics of this whole conversation has disrupted the traditional digital development agenda, that is, respect for human rights, which my colleague has already alluded to, as well as respect for the sovereignty of other countries.

The shift has been towards capital-driven business models, in disregard of citizens' relation with these kinds of technologies. Lastly, on that point, we see powerful autocratic regimes trying to set their own developmental agenda, demanding greater flexibility and exceptionalism in the global arena as to how the technologies should be regulated globally.

Many of these states contest democratic values and press to control the Internet, and even the very same technologies we're discussing now, in favour of economic development and, in some instances, protecting the wealth of the elites and political patronage.

>> THEOROSE ELIKPLIM DZINEKU: I will come back to you, Umut Pajaro Velasquez.

We spoke earlier about the fact that these intelligent devices do not recognize people in some parts of the world; they're still under development. What is your take on the global dynamics of that, in terms of the development and of those who develop the devices? What are the dynamics? Is it more European, more African? Is the understanding simply not there because the people who face that challenge are not the ones making the input when it comes to development?

>> UMUT PAJARO VELASQUEZ: I actually love this question. I have had this conversation regularly about how the development of this technology affects us. We implement these technologies as if they were actually made for us, but they weren't made for us.

Actually, I would like to say that we're facing right now a new form of colonialism through the implementation of these technologies in everyday life.

So we need to be aware of that before implementing those technologies in our countries. There are several examples of implementations of artificial intelligence, and of how these technologies can be biased against different genders and different races, so there is work to be done that we need to address here. As Joanna Mazur said before, we need to have more diverse datasets in order to include more visions of the world inside these technologies.

Since we're the majority of the world, I prefer to use the term Global South, because we're literally the majority of the world.

We're the majority of the world. If we bring all visions of the world into these technologies, we will actually use them in a way that benefits all of us, and we can look at the balance of power that is actually playing out in the development of the different technologies. We have the resources and we have the people who can actually develop these technologies inside our countries; perhaps what we lack is the political will to do that. We have the people who are developing these technologies, and they can do that in the majority of the world. Yeah.

>> THEOROSE ELIKPLIM DZINEKU: Is there a historical factor when it comes to AI development? Is there any precedent, something that has happened that speaks in favour of it or against it?

>> JOANNA MAZUR: In terms of using data, for example, for the development of these solutions, we have the problem of biases: when we use historical datasets to develop AI, it will probably replicate the biases that were there historically. Therefore, there are two things I would like to mention in this context. One is the fact that we need to support the solutions that would actually take this into consideration, and not only support but demand that the solutions implemented take this into consideration.

Then the question is whether the development of these technologies can actually have a positive impact on equality. Can we, thanks to technology, make solutions that may be fairer than the solutions we have based on human decision making, for example? I would also like to briefly come back to the topic with which you started this whole round of questions, the smart home and the IoT technologies you gave as an example. I think it goes to the very important issue of which solutions are developed and actually needed, which solutions are needed but are not there yet, and which solutions are available but are not necessary, or even harmful for us. The problem we have is that we use the resources, and the people with the competencies to develop AI, to work on solutions that are absolutely not necessary and don't make our lives better, like all the chatbots that discuss with us the possibility of buying something new, and all the automated voice calls that recommend that we buy something. Does it really improve the quality of life of anybody? Does it help us provide better health services, better housing? We're missing, I think, the question of whether the solutions we develop will actually bring something good. This is something I would very much like to change: to take into account and focus on the areas and solutions that will provide something beneficial for society and not only for the market. Therefore I strongly support what was already said in this regard.

>> THEOROSE ELIKPLIM DZINEKU: Thank you so much.

Just to wrap up before we take questions, I would want to move to Oarabile Mudongo again, this time to focus more on redesigning AI governance. We started with AI in general, we streamlined it to the socioeconomic factors, then to the different dynamics and whether there is a historical thread or change, and we spoke extensively about the fact that the devices are not truly global, although we are a global community. There is a perspective that the development itself should take a multistakeholder approach including all stakeholder groups, all age groups, youth, people in minority groups, so that everybody, irrespective of where you are or where you belong, has an input into it.

Now I would start with you again: with your experience in AI governance and all of that, do you think that minority groups or youth are fully represented in this area when it comes to policy? Do the policies adequately cover those groups of people?

>> OARABILE MUDONGO: That's a very good question. Thank you for that.

First of all, I think there is still a lack of representation, especially of minority groups, in this space. More specifically, I think it speaks to the transparency and fairness of how policymakers are developing policies, and of the policymaking process, which should mediate between the interests of all stakeholders, especially those in the Global South. There is still a gap there.

Speaking more on the principles of transparency and fairness, I think these principles are important not just for AI but also for the representation of minorities, especially when argued from a private sector perspective.

These values are part of a larger umbrella of digital ethics in policymaking processes. Therefore, digital ethics is, or becomes, a crucial element in creating a better digital world that caters for all stakeholder groups and that is fair and inclusive to all.

Here is a metaphor I want to throw out to our participants in this session.

Right.

Think of yourself buying a product in a store with a label showing all of the nutritional values on its packaging, or a guideline booklet of some sort that comes with an IoT product you just ordered on Amazon. Wouldn't it be nice to have the same approach for every IoT or AI-driven technology, so that it comes packaged with a booklet indicating how that device processes data? Those are the minor issues that we might not really think about, but they psychologically impact how societies see and appreciate these technologies. It makes a difference because it speaks specifically to issues of transparency.

At the same time, I tend to also sympathize with the private sector in terms of how they're looking to approach this issue.

There are, of course, challenges in implementing digital ethics, particularly for the private sector, where it might come across as though we're asking for more. Navigating the issues around transparency and fairness can be easier said than done.

You know, because taking fairness as an example, how can it then be defined from a private sector perspective? How can businesses choose between what is fair to the customer and what is fair to the business? For example, in the eyes of a customer applying to a bank for a loan, the fair decision would be to approve the loan for the requested amount, but for the business it would be fair to decline a customer who poses a risk to its investment prospects, right? Those are some of the issues I would really like us to think about as we look at both sides.
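To make concrete how contested "fairness" can be in a setting like the loan example above, here is a minimal, hypothetical Python sketch; the applicants, scores, groups and threshold are all invented for illustration and are not drawn from the session. It computes two commonly used statistical fairness measures, the approval rate per group (compared under demographic parity) and the true positive rate per group (compared under equal opportunity), to show that a single decision rule can look quite different depending on which definition of fairness a business or regulator chooses.

# Hypothetical illustration: the same loan-approval rule scored under
# two different fairness notions. All data below is invented.

from collections import defaultdict

# Each applicant: (group, actually_repays, model_score)
applicants = [
    ("A", True, 0.9), ("A", True, 0.8), ("A", False, 0.7), ("A", False, 0.2),
    ("B", True, 0.85), ("B", False, 0.75), ("B", True, 0.4), ("B", False, 0.3),
]
THRESHOLD = 0.6  # loan approved if score >= threshold

decisions = defaultdict(list)                 # approval decisions per group
tp, pos = defaultdict(int), defaultdict(int)  # counts for true positive rate

for group, repays, score in applicants:
    approved = score >= THRESHOLD
    decisions[group].append(approved)
    if repays:
        pos[group] += 1
        tp[group] += int(approved)

for g in sorted(decisions):
    approval_rate = sum(decisions[g]) / len(decisions[g])  # demographic parity compares these
    true_positive_rate = tp[g] / pos[g]                    # equal opportunity compares these
    print(f"group {g}: approval rate {approval_rate:.2f}, TPR {true_positive_rate:.2f}")

With these invented numbers the two groups end up with different approval rates and different true positive rates, so whether the rule counts as "fair" depends entirely on which metric, and whose interest, one decides to measure; that is exactly the tension described above between the customer's and the business's view of fairness.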

>> THEOROSE ELIKPLIM DZINEKU: Great.

Going straight to Umut Pajaro Velasquez. I think there are some challenges, listed by Oarabile Mudongo, that we're facing in the ecosystem currently. What are some of the practical solutions that we can implement to make sure all groups are represented, especially the youth and those within minority groups?

>> UMUT PAJARO VELASQUEZ: That is exactly what I do in my work.

Right now I'm trying to develop a framework to include minority groups, especially gender-diverse groups, in the whole process of designing, developing and deploying these different systems, and to work out how to make that possible.

One of the solutions that comes to my mind is participatory design. This means not only that people are going to be there to be asked what they need from current technologies, but also that they are involved from the beginning to the end, saying where the present technology is failing to meet their needs, so that the technology we have at the end is actually a response to the digital rights or digital needs of the people who are historically not included in these processes, who are usually minorities: people of different races, economic backgrounds or genders.

Another solution that comes to mind: this kind of technology has two parts. On the one hand, we need to protect human rights; on the other, we need to guarantee to the private sector that they will have some benefit from the data technology they are developing.

So right now I'm seeing some countries doing something interesting when it comes to regulating artificial intelligence in their own national laws, as part of their regulations or policies. In creating regulations for artificial intelligence, they are trying some guided experimentation on what could work on a small scale, in order to learn whatever lessons are possible and to identify the issues that can arise before a product goes to a specific population, and then decide from there, so that we can have regulations that make it clearer for the private sector how to release that product into the market.

>> THEOROSE ELIKPLIM DZINEKU: Great.

Thank you.

I just want to check whether we have any questions or comments, in the house or with Bruce online. I see one hand up.

Bruce, any comment or question online?

>> BRUCE TSAI: No.  Nothing here yet.  I'll let you know if anything comes up.

>> THEOROSE ELIKPLIM DZINEKU: Okay.

I will take the question from the floor and then I'll come back to you, Joanna, for a submission. Yes, sir?

>> AUDIENCE: Can you hear me?  Can you hear me?  Thank you very much for giving me these opportunities.  I have one question and one supplement or opinion.

Regarding the big data management system, I'm sure everyone notes that we're talking about a big, big problem and challenge: how we manage big data governance, as the UNIFC has been discussing for the whole Internet. The Internet is generating billions of messages and data points every single minute. How are we managing this? That is the question to you, and then I'll add some thoughts on strategies and on how we engage both the government and the private sector. What I see now is you talking about the policies, and I will also mention a bit later IoT and IoE; one is what we were talking about, the other is generating the data. How is big data governance implemented at the global stage? That's the question to you.

The clarification is that in every country, the government and the private sector have a working system framework, which contains environments, strategies and infrastructure at the top level.

Inside each, we have customers, products and services, processes and activities, participants, and then information and technology.

That is the framework we need to develop and practice in each country for managing big data.

Back to IoT and IoE: what we see here is technology driving the force and generating a huge amount of money because of AI. We have IoT, which integrates everyday things on the Internet, and the applications, services and software that integrate the data streams from various IoT devices. Artificial intelligence is the data technology that helps make decisions for the private and the public sector. To implement this we have IoE: machine to machine, people to machine, and people to people; that is what these devices are working for. How do we manage all of this big data? What are the strategies, especially, my brother, in Africa, when talking about the African context ‑‑

>> THEOROSE ELIKPLIM DZINEKU: I think I will give that over to Oarabile Mudongo to answer for us, and then we'll have Joanna add to that.

>> OARABILE MUDONGO: Thank you, sir, for that question.

For me, from a general perspective, as you framed the question, there is a need to provide guidance to the private sector and to governments, especially the big players, on the acquisition and operation of AI-enabled services that are data intensive in both the private and public sector. That includes assessing whether AI is necessary, and developing evaluative criteria and specification models that provide guidance on data processing within these technologies. At the same time, I think we need to evaluate the technical and social impact of these technologies on society.

In addition to that, we need to ensure sustainable operation over the implementation life cycle, which I think speaks to principles or standards for how data is stored in the system and at what point we are supposed to destroy it and remove it from the data management system. This should be accompanied by establishing appropriate oversight mechanisms to monitor the data gathering process and the data development aspect of these technologies, their adoption, and the withdrawal of AI-enabled services when they violate data practices, if and when such legal frameworks exist in a particular country.
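As a small, hypothetical illustration of the lifecycle rule just described, how long data is stored and at what point it should be destroyed, here is a minimal Python sketch. The data categories, retention periods and record structure are invented for the example and are not taken from the session or from any specific legal framework.

# Hypothetical sketch of a data-retention check for an AI-enabled service.
# Categories and retention periods are invented for illustration only.

from datetime import datetime, timedelta, timezone

RETENTION = {                      # maximum age before a record should be deleted
    "voice_recording": timedelta(days=30),
    "usage_log": timedelta(days=365),
    "model_training_sample": timedelta(days=730),
}

def records_to_destroy(records, now=None):
    """Return the ids of records whose retention period has expired."""
    now = now or datetime.now(timezone.utc)
    expired = []
    for rec in records:  # rec: {"id": str, "category": str, "collected_at": datetime}
        limit = RETENTION.get(rec["category"])
        if limit is not None and now - rec["collected_at"] > limit:
            expired.append(rec["id"])
    return expired

# Example: a 40-day-old voice recording is flagged for deletion, a 10-day-old one is kept.
sample = [
    {"id": "r1", "category": "voice_recording",
     "collected_at": datetime.now(timezone.utc) - timedelta(days=40)},
    {"id": "r2", "category": "voice_recording",
     "collected_at": datetime.now(timezone.utc) - timedelta(days=10)},
]
print(records_to_destroy(sample))  # prints ['r1']

An oversight mechanism of the kind mentioned above would, in practice, audit whether such rules exist, whether the retention periods match the applicable legal framework, and whether the deletions actually happen.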

>> THEOROSE ELIKPLIM DZINEKU: Great.

Joanna, would you want to add anything?

>> JOANNA MAZUR: I wanted to go back to the question before, just to add one thing. Of course, I do agree that we need some specific solutions concerning the way in which we can include minorities, or majorities, or groups that are underrepresented in the AI-based solutions being developed, like new innovative ways to actually ensure their presence in this regard.

However, I think it is also important to remember that we're talking about technologies that should be subject to very traditional regulation and very traditional conditions; the procedures and the laws are still enforced in a very traditional way, and they're also made in a very traditional way.

I think it is important that we also ensure, for example with regard to the enforcement of rights, in the process of preparing legislative proposals, and of course in the development of the solutions from a technological point of view, that we have proper representation, and that non-governmental organizations, grassroots movements and bodies of this kind have the right to participate in these processes. Of course, the technological aspects are very important. Finally, what we end up with is the need to enforce the rights and the regulations that are in place with regard to these new technologies, and then it all comes down to the question of whether, for example, it is possible for a non-governmental organization to represent somebody and defend the rights of a certain minority, or not. Therefore, I think we should also include this very traditional, actually somewhat boring, dimension in the framework concerning the governance of the inclusiveness of these new technological solutions.

This would be one point that I wanted to make regarding the topic we discussed before the comment and question.

In terms of the state system and so on, I think we're living in very difficult times. The fact that we're dealing with technically global solutions, which are also somehow particular in the local context, and that they're unequally available, accessible, transparent and developed in various parts of the world, makes it very difficult to think about the interplay between the various layers of governance and regulation that we're dealing with.

I would be very much for the development of global standards rather than standards at the lower levels.

I think that this is the way in which we can actually face the challenges posed by global companies, by addressing the economic level, which is the dimension in which we meet. Therefore I think that, of course, it also depends on the states, but the level at which we should be working is the global one.

Thank you.

>> THEOROSE ELIKPLIM DZINEKU: Thank you so much.

Is there any more question, either online or here?

>> BRUCE TSAI: Nothing online.

>> THEOROSE ELIKPLIM DZINEKU: Great.

I guess I will throw it back to our three speakers to wrap up the session for us with their last remarks.

>> OARABILE MUDONGO: I think in closing I would say that stakeholders have a joint responsibility in ensuring that digital transformation processes are diverse, inclusive, democratic and sustainable in the long term. Commitment and strong leadership from public institutions need to be complemented with accountability and responsibility.

On the part of government and private sector actors, I think there is also a necessity to strengthen the multistakeholder approach in order to be truly inclusive and to develop effective policies that respond to the needs of citizens, build trust and meet the demands of the rapidly changing digital environment.

Thank you.

>> UMUT PAJARO VELASQUEZ: Well, right now around the world there are different efforts in governance and law to develop and implement ethical principles and regulations that can actually work not only locally but also globally, in order to have a more multistakeholder approach to this topic and also a more transparent, accountable and ethical approach to artificial intelligence.

>> THEOROSE ELIKPLIM DZINEKU: Thank you.

>> JOANNA MAZUR: I don't know if I have anything to add to this actually.

>> THEOROSE ELIKPLIM DZINEKU: Great.  Thank you so much.

Just before we wrap up: we have been having this discussion basically on the pathway to equitable AGI, centred around three policy questions. In the area of AI governance we had a discussion on diversity, a discussion centred around representation, and submissions on the roles of various stakeholder organizations in terms of policy development and implementation, and on how various multistakeholder groups and people globally, especially youth and those within minority groups, could contribute to that development.

Then we kind of spoke on data protection a little when we were looking at the submission on artificial intelligence and IoT.

I mean, the issue of data protection is a long-standing one. Then we had the discussion on investment, on whether socioeconomic factors play a critical role in the development of AI devices and whether that has a massive impact on the developers. We then centred the discussion around whether the development of AI devices is fully representative globally, and we concluded with inputs and submissions.

Thank you to those online and those on site as well. We're thankful for your time and your presence here. We hope the discussions won't end here; we welcome feedback. If you go back and want to make further inputs, you can always do that. Thank you so much. Have a nice evening, a nice day, a nice morning.

Be great!  My name is Theorose Elikplim Dzineku!  Bye!