Session
Organizer 1: Barbara Wanner, U.S. Council for International Business
Organizer 2: Nicole Primmer, Business at OECD (BIAC)
Organizer 3: Maylis Berviller, Business at OECD
Organizer 4: Karine Perset, OECD
Organizer 5: Clara Neppel, IEEE
Organizer 6: Jose Antonio Guridi Bustos, Future and Social Adoption of Technology (FAST) unit, Ministry of Economy, Government of Chile
Organizer 7: Pam Dixon, World Privacy Forum
Organizer 8: Abere Shiferaw, Ministry of Innovation and Technology
Speaker 1: Clara Neppel, Technical Community, Western European and Others Group (WEOG)
Speaker 2: Karine Perset, Intergovernmental Organization, Western European and Others Group (WEOG)
Speaker 3: Pam Dixon, Civil Society, Western European and Others Group (WEOG)
Speaker 4: Norberto Andrade, Private Sector, Western European and Others Group (WEOG)
Speaker 5: Jose Antonio Guridi Bustos, Government, Latin American and Caribbean Group (GRULAC)
Correction of Speaker title:
Jose Guridi, Graduate Research and Teaching Assistant, Cornell University, New York
Barbara Wanner, Private Sector, Western European and Others Group (WEOG)
Nicole Primmer, Private Sector, Western European and Others Group (WEOG)
Maylis Berviller, Private Sector, Western European and Others Group (WEOG)
Round Table - U-shape - 90 Min
1. What important issues have arisen as business and government have endeavored to implement the OECD AI Principles? How are you tackling those challenges?
2. Many governments are in the process of developing AI regulations. What would business, civil society, and the technical community recommend to regulators to strike the balance needed to promote AI innovation while addressing risks?
3. We frequently hear of the need for a cross-disciplinary, multistakeholder approach to harness the benefits of AI. Do you agree, and what is needed to make such collaboration happen at both the national and global levels?
Connection with previous Messages:
Agenda is attached.
Targets: SDG 8.2 -- “Achieve higher levels of economic productivity through diversification, technological upgrading and innovation, including through a focus on high value-added and labor-intensive sectors.” By implementing a human-centric approach to the trustworthy use of AI, both business and government will help drive economic growth and productivity by enabling greater efficiencies. Specifically, AI innovations have already proven to enhance healthcare provision and diagnosis, improve municipal services and Smart City management, create new professions, and enable educational and training opportunities that facilitate adaptation throughout labor-intensive industries.
Description:
Artificial Intelligence (AI) carries great promise for driving economic growth and improving our lives. This innovative technology has the potential to revolutionize how we live, work, learn, discover, and communicate. At the same time, there are growing concerns about the potential use, and misuse, of AI, which risks undermining personal privacy and online security protections, reinforcing decision-making biases that exacerbate social inequality, and causing disruptions in the labor market, among other possible pitfalls. The OECD’s ground-breaking AI Principles, adopted in 2019, aim to shape a stable policy environment at the international level to foster trust in and adoption of AI in society. The five values-based principles and five recommendations for government policymakers promote a human-centric approach to trustworthy AI to which all stakeholders are encouraged to aspire.
This workshop will use the OECD AI Principles as a foundation for considering the technical and operational realities all stakeholders face in implementing tools and processes that ensure the trustworthy use of AI to meet economic development and social welfare needs. We will evaluate concrete examples of how business has endeavored to implement the Principles, consider the technical community’s views on appropriate AI standards, examine civil society’s perspective on using AI to realize the SDGs, and review government efforts to pursue the human-centric development of trustworthy AI. All speakers will reflect on the critical need for collaboration across the entire stakeholder community to realize trustworthy AI.
o A report of this IGF workshop will be posted on the OECD’s AI Observatory.
o This workshop will inform a follow-on report by Business at OECD and/or the U.S. Council for International Business that builds upon Business at OECD’s “Implementing the OECD AI Principles: Challenges and Best Practices,” published in May 2022.
o The workshop discussion will inform business, technical community, and civil society interventions in the work of the OECD’s AI Governance Working Party (AIGO).
o The workshop discussion will inform various stakeholder inputs to the UN Secretary-General’s proposed Global Digital Compact, which promotes regulation of AI.
Hybrid Format:
o The on-site moderator will begin the session with an interactive poll, such as the Mentimeter app, to gauge the general level of “trust” among participants as to whether AI is currently being used in a responsible, human-centric manner. The on-site moderator will then wrap up the workshop with a final interactive poll to determine whether in-person and online participants’ trust in business and government efforts to use AI in a trustworthy manner, to promote economic prosperity and address societal needs, has improved or remained the same.
o Using the polling app, the on-site moderator will also ask in-person and online participants whether they prefer (1) top-down regulation of AI; (2) flexible regulation of AI, informed by stakeholder comments; or (3) no regulation of AI, to maximize innovative potential. In the wrap-up, the on-site moderator will ask participants whether their views about the extent of AI regulation have changed, or remained the same, based on what they learned from workshop speakers.
o Both the on-site moderator and the remote moderator will undergo training to ensure they understand how to use the Zoom (or other) software to engage remote participants and communicate with each other.
o The on-site moderator will pause following each question or exchange among speakers to invite questions and comments from both in-person and remote participants.
o The remote moderator will manage interventions by the remote speakers, alerting the on-site moderator of the need to recognize a remote speaker who has asked to be recognized via the “raised hand” function.
o The remote moderator will watch carefully for questions posed via the “raised hand”, chat, or Q&A functions and alert the on-site moderator, or the relevant speaker if a question is addressed to a specific speaker.
Usage of IGF Official Tool.
Report
- Key takeaway 1: As more and more countries plan to introduce some form of AI regulation, all relevant stakeholders should seize this window of opportunity to collaborate on defining concepts, identifying commonalities, and gathering evidence, so as to improve the effective implementation and enforcement of future regulations before they are launched.
- Key takeaway 2: Ensuring that all actors, from both technical and non-technical communities, exchange views and work together transparently is critical to developing general principles flexible enough to be applied in specific contexts and to fostering trust in the AI systems of today and tomorrow.
Stakeholder collaboration remains critical as the global community continues to grapple with how to tap the benefits of AI deployment while addressing the challenges posed by the rapid evolution of machine learning. Ongoing human control remains critical as AI advances are deployed, to ensure that the algorithms “do not get out of our control.” Critical to this is breaking down silos between engineers and policy experts.
Speakers:
- Norberto Andrade, Global Policy Lead for Digital and AI Ethics, Meta
- Jose Guridi, Head of the Future and Social Adoption of Technology (FAST) unit, Ministry of Economy, Government of Chile
- Clara Neppel, Senior Director of European Business Operations, IEEE
- Karine Perset, Senior Economist/Policy Analyst, Artificial Intelligence, OECD
- Mark Datysgeld, Internet Governance and Policies Consultant, São Paulo State University, Brazil
- Stakeholder cooperation is at the core of the development of frameworks for trustworthy AI
As a general-purpose technology, Artificial Intelligence (AI) presents tremendous opportunities as well as risks, and it has the potential to transform every aspect of our lives. Alongside the development of new AI systems and uses, stakeholders from the public and private sectors, civil society, and the technical community have been working together towards value-based and human-centred principles for AI.
The Internet Governance Forum is an ideal venue for discussing the different existing initiatives to create policy frameworks for trustworthy and responsible AI, including the work conducted by UNESCO with the “Recommendation on the Ethics of Artificial Intelligence” and by the Council of Europe with its “Possible elements of a legal framework on artificial intelligence based on the Council of Europe’s standards on human rights, democracy and the rule of law”.
The OECD AI Principles, adopted in 2019 as the first internationally agreed principles on AI, set in motion a necessary process involving different stakeholders to develop a policy ecosystem benefiting people and the planet at large. Similarly, global standards developed by the Institute of Electrical and Electronics Engineers (IEEE) aim to provide a high-level framework for trustworthy AI while giving different stakeholders the possibility to operationalize them according to their needs.
- Standards versus practice: applying frameworks to real use cases
Both public and private sector organizations face distinct challenges in relation to AI, depending on their specific situations and requirements. It is therefore critical for policy makers to strive to bridge the gap between policy and technical requirements, as Artificial Intelligence systems undergo constant improvement, change, and development. The case of generative AI is especially telling: in less than a year it superseded the discussion on deep fakes, which shows how fast the technology is evolving and why engineering and coding communities need to be involved from the very start of policy discussions.
When moving from principles to practice, organisations must make difficult decisions arising from value-based trade-offs, as well as technical and operational challenges that depend on the context. These challenges are often dealt with in silos within the technical community and go undocumented. Breaking down these divisions is essential for companies to implement the principles in a holistic manner and to better understand the tensions between some of the principles.
For example, ensuring fair and human-centred AI systems may conflict with the requirement of privacy. In some cases, AI developers need access to sensitive data to detect whether specific biases occur and whether models are disadvantaging people with particular attributes, but this raises questions about the privacy of people’s data (see the sketch below). A similar tension exists between the requirements of transparency and responsible disclosure around AI systems and the explainability of the predictions, recommendations, or decisions those systems produce, as specialized technical knowledge may be required to fully understand the process.
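To illustrate the first tension concretely, the following minimal sketch (not from the workshop itself; the data and function names are hypothetical) computes a simple demographic-parity gap for a set of model decisions. The point is that the audit is only possible because the sensitive group attribute is available to the auditor, which is precisely the data that privacy principles seek to minimize or protect.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Gap between the highest and lowest positive-outcome rates across
    groups defined by a sensitive attribute (hypothetical helper)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 1 (favorable) or 0 (not)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval decisions with self-reported group labels;
# without the "groups" column, this bias check could not be run at all.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups = ["A", "B", "A", "A", "B", "B", "A", "B", "A", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(f"Positive rates per group: {rates}; parity gap: {gap:.2f}")
```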
- Towards implementation: operationalizing and monitoring policy frameworks in practice
To ensure the implementation of frameworks for trustworthy AI, international organizations and associations are developing frameworks to effectively manage AI risks by defining, assessing, treating, and monitoring AI systems, but also by working on a common language and understanding, designing agile instruments suited to the different stages of the AI life cycle, and fostering training and skill-development opportunities.
As different use cases and applications of AI carry different risks, defining the most important values and principles for the specifics of a situation is critical to ensuring that AI systems are applied in a trustworthy and human-centric manner. Further, assessing risks in a global, interoperable, and multistakeholder way would allow commonalities to be identified, improving the effectiveness of implementation and facilitating the enforcement of value-based principles. Alongside this risk-assessment approach, the OECD proposes to collect good practices and tools to share knowledge between different actors and support wider implementation of its principles. Finally, monitoring existing and emerging AI-related risks through different means (for example, legislation, standards, and experimentation) would make it possible to create a governance system informed by past and present AI incidents while providing foresight on future AI developments; a purely illustrative sketch of such a risk-management cycle follows.
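To make the define-assess-treat-monitor cycle more tangible, here is a minimal sketch of a risk-register entry for an AI system. It is only an illustration: the class, fields, and scoring rule are hypothetical and are not drawn from any OECD, IEEE, or national framework.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One entry in a hypothetical AI risk register."""
    system: str                 # define: the AI system concerned
    risk: str                   # define: the risk being tracked
    likelihood: int = 0         # assess: 1 (rare) to 5 (frequent)
    impact: int = 0             # assess: 1 (minor) to 5 (severe)
    mitigations: list = field(default_factory=list)  # treat
    incidents: list = field(default_factory=list)    # monitor

    def severity(self) -> int:
        # A simple likelihood-times-impact score; real frameworks use
        # richer, context-dependent assessments.
        return self.likelihood * self.impact

entry = AIRiskEntry(system="credit-scoring model", risk="disparate impact")
entry.likelihood, entry.impact = 3, 4                  # assess
entry.mitigations.append("periodic fairness audit")    # treat
entry.incidents.append("audit flagged approval gap")   # monitor
print(entry.system, "severity:", entry.severity())
```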
Regulatory experimentation is of the utmost importance to ensure multistakeholder participation and to resolve a number of technical challenges, including the opaque and dynamic nature of AI systems, the difficulty of measuring impacts, and the uncertainty around the effects of regulation on technology. In the case of Chile specifically, the successful use of regulatory sandboxes benefited from a participatory process that transparently involved the expert community, including engineering and coding communities, and policy makers, and that proved to break down prejudices and barriers both groups held before working together. Other initiatives connect policy makers, academics, and technology companies, such as Open Loop, a project that examines how guidance resonates in the real world before it is implemented.
Working towards standards for trustworthy and human-centred AI is timely and ahead of the curve, as regulators are starting to design and implement regulations. Strong evidence-based analysis remains essential to feeding the policy dialogue, which must be conducted at a truly global level by involving developing countries. While the different stakeholder communities bring unique insights, objectives, and sometimes conflicting views, their common objective remains to develop a holistic framework ensuring trustworthy and human-centred AI.