
IGF 2018 WS #98 Who is in charge? Accountability for algorithms on platforms

    Room
    Salle XI

    Organizer 1: Kristina Olausson, ETNO - European Telecommunications Network Operators' Association
    Organizer 2: Lorena Jaume-Palasí, German Internet Governance Forum
    Organizer 3: Andrés Sastre Portela, ASIET

     

    Speaker 1: Oscar Martín González, Public Sector, Latin American and Caribbean Group (GRULAC)
    Speaker 2: Lorena Jaume-Palasí, Civil Society, Western European and Others Group (WEOG)
    Speaker 3: Pascal Bekono, Government, African Group

    Additional Speakers

    Phillip Malloch is Vice President at Telia Company, which he joined in 2007. Phillip currently heads Telia Company’s Group Public Affairs. He joined ETNO’s Executive Board in 2012 and was a member of the GSMA’s Chief Regulatory Officers’ Group from 2013 to 2017. Since 2018, Malloch has been the Chairman of ETNO.

    Fanny Hedvigi is Access Now’s European Policy Manager, based in Brussels. Previously, Fanny was International Privacy Fellow at the Electronic Privacy Information Center in Washington, D.C., where she focused on E.U.-U.S. data transfers. For three years Fanny led the Freedom of Information and Data Protection Program of the Hungarian Civil Liberties Union, where she engaged in strategic litigation with journalists and other NGOs, participated in the fight against the national data retention law in Hungary, and promoted privacy-enhancing technologies. There, she gained experience of operating as a human rights advocate in a restrictive environment. Fanny also worked as a consumer protection lawyer in both the public and the private sector. She has a law degree from Eötvös Loránd University Budapest and spent one academic year at the University of Florence on an Erasmus scholarship. Fanny was once on Swiss national TV as Roger Federer’s biggest fan.

    Karen Reilly is the Managing Director of Tungsten Labs, building communication technology with privacy by design. Previously, she managed bare metal and cloud infrastructure in the private sector, and worked on information security and censorship circumvention for NGOs.

    Moderator

    Gonzalo Lopez-Barajas

    Online Moderator

    Kristina Olausson, ETNO

    Rapporteur

    Kristina Olausson, ETNO

    Format

    Break-out Group Discussions - 90 Min

    Interventions

    The break-out session will be used to identify dilemmas and solutions. Thereafter, the speakers will present what their group considered to be the right mechanism/stakeholder to turn to for such dilemmas. Finally, we will open up the discussion to identify a common view on a set of dilemmas, possible mechanisms and which stakeholder group should be responsible for what action.

    Wafa Ben-Hassine, Access Now Tunisia. Access Now developed a strong position on accountability of algorithms at the RightsCon Conference in 2018 and is therefore a suitable representative of civil society with a clear engagement on the issue of accountability of algorithms. Ms. Ben-Hassine is also participating in a UN consultation on algorithms with the Special Rapporteur on Free Expression, gaining important insights at the international multistakeholder level. She can also provide an African perspective in her role as a member of the Advisory Board of the Arab World Internet Institute.

    Lorena Jaume-Palasí, Executive Director, AlgorithmWatch. AlgorithmWatch is a non-profit organisation that evaluates and sheds light on algorithmic and automation processes of social relevance. Ms. Jaume-Palasí’s work focuses on philosophy of law and the ethics of automation and digitization. She has been appointed by the Spanish government as a member of the Council of the Wise on Artificial Intelligence and Data Politics. In 2018 she was elected by the Cotec Foundation as one of its 100 experts for social change and innovation for her work on automation and ethics. Ms. Jaume-Palasí has long experience of multistakeholder processes as a founder of the Dynamic Coalition on Publicness of the United Nations Internet Governance Forum, as well as from serving as head of the secretariat of the German Internet Governance Forum (IGF) and on the expert advisory board of the Code Red initiative against mass surveillance. She will bring the perspective of civil society engaged at an international level.

    Pascal Bekono, Computer and Telecom Engineer, Ministry of Justice Cameroon. He is an ICANN, IGF and ISOC Fellow. As a consultant, he has worked with German Technical Cooperation in Cameroon and with international NGOs focused on poverty reduction strategy papers, the Millennium Development Goals, and ICT for development and fair trade. Mr. Bekono was a 2005 World Summit on the Information Society Youth Country Coordinator in partnership with TakingITGlobal and the Canadian Government. He won a scholarship to attend the 3rd International ICT for Development conference (Doha, 2009). Mr. Bekono is a frequent attendee, participant, organizer, and speaker at IGF meetings. He will be able to bring the African perspective on platforms and algorithms, also benefiting from his technical background.

    Dr. Oscar González, Undersecretary of Regulation, Ministry of Modernization Argentina (TBC). The Ministry of Modernization is responsible for the federal Internet Plan, the Digital Inclusion Plan, e-government and policies relating to digitization. Mr. González is a trained lawyer with an LLM from Maastricht University and a law degree from the University of Cordoba. He will be able to provide the government perspective on accountability of algorithms, which is important as governments have so far not been the central actors in the policy discussions.

    Diversity

    The proposed speakers come from different sectors (technical community, government, industry, and civil society) and all are engaged with the topic, but from different perspectives. The speakers also reflect broad geographic diversity, which is important as the issue is global in nature. It is also a crucial issue to address in a policy-forming multi-stakeholder forum, as legislation and policy formation are still at an early stage. We therefore would like to bring in as many perspectives as possible.

    The session aims at discussing who should be held accountable for the impact of algorithms. In addition, participants will discuss what meaningful mechanisms there are (technical, legal, and policy-oriented), as well as which actors governments, companies, citizens and other stakeholders can turn to for solutions. Panellists will serve as moderators of the four break-out sessions and convey the core messages in the panel together with their own reflections. The proposed agenda is:

    - 5 minutes introduction
    - 20-30 minutes break-out session to identify dilemmas seen by the audience
    - 20 minutes summary by the speakers of the break-out session discussions, coupled with their own reflection on what they consider to be the right mechanism/stakeholder to turn to for such dilemmas
    - 20-30 minutes discussion with the audience
    - 5 minutes round-up to agree on session conclusions/steps forward

    Each of the break-out sessions will have a set of pre-prepared questions to spur the discussion and facilitate input. In this interactive format, the panellists will lead the discussion in each group. As experts on their topics, they will lead the debate, collect input from each group and steer the discussion towards concrete proposals. Afterwards they will use the results of each group's discussion for a short reflection in a panel exercise. Finally, a Q&A session will be dedicated to gathering further input from the floor and from online participation.

    We plan to tackle the following policy questions: what challenges/opportunities does the use of algorithms on platforms pose? What are the mechanisms and actors to turn to in order to address these issues?

    In the development of the data economy, algorithms have become the backbone of many business models deployed worldwide. However, they are no longer solely a topic in the private sector: within the public sector, particularly in Europe and the US, algorithmic decision-making has emerged alongside broader policy trends of the last decade such as open government and evidence-based decision-making, as well as in new areas like criminal justice. In low- and middle-income countries, algorithms can also be “honest” brokers in societies where there are long-standing failures in these sectors, governments and companies.

    A core area of usage is online platforms, where algorithms help rank massive amounts of content according to user preferences but also as part of business deals. This has created concerns about transparency, and many business users of platforms (e.g. app stores) experience discrimination due to a lack of transparency, including around the use of data and around the organisation of search and ranking results. From a user perspective, on the other hand, ranking in app stores is presented in such a way that there is no distinction between paid and non-paid results, or between organic and individualised ranking. That in turn is encouraging governments to consider different options for addressing the role of these actors in a digital society.

    An underlying factor of these problems is that more and more tasks and decisions are delegated to algorithms, and they are given more liberty in the way they execute such tasks. A growing concern is that algorithms are controlling the inclusion, and exclusion, of people and information in an increasing number of settings. This grants algorithms the power to perpetuate, reinforce or even create new forms of injustice. Yet the outcomes of algorithmic processes are often not designed to be accessible, verified or evaluated by humans, limiting our ability to identify if, when, where, and why an algorithm produced harm, and, worse still, to redress this harm. Civil society sounded the alarm at the recent RightsCon conference in Toronto (May 2018), where a coalition of human rights and technology groups released a new declaration on machine learning standards, calling on both governments and tech companies to ensure that algorithms respect basic principles of equality and non-discrimination.

    Against this background, the policy question we will address is: what challenges/opportunities does the use of algorithms on platforms pose, and who are the relevant actors to address these issues? This workshop will build on IGF 2017 WS #264 Automated Guardians of the Good? Algorithms impact in the exercise of rights, while focusing more on platforms and the use of algorithms.

    Online Participation

    The online and onsite moderators will coordinate during the Q&A part to involve audience questions both online and onsite. Online attendees will have a separate queue and an equal opportunity to take the floor, as we will rotate the microphone. As this is a break-out-session-based workshop, participants will have the possibility to discuss with the panellists in the room. The online moderator will collect questions from online participants and bring them to the break-out groups so that they feed into the debate. The organisers will also post questions on social media channels such as Twitter in the days before the workshop to collect input from interested parties online.

    Agenda

    The agenda proposed is:

    - 10 minutes introduction: the background of the session will be presented as well as the instructions for the break-out sessions.

    - 30 minutes break-out session: The break-out sessions will be moderated by the speakers. Each speaker will have a topic with the common goal of identifying the dilemmas with algorithms on platforms from the perspective of the participants and together propose solutions. The audience can choose which break-out session they want to join. The speaker will serve as a facilitator/moderator of the discussions. 

    - 20 minutes pitches/interventions by speakers representing different stakeholder groups: The speakers will present their views of the discussions together with the solutions/conclusions from the break-out session discussions regarding what the group considers to be the right mechanism/stakeholder to turn to in order to solve potential dilemmas.

    - 20 minutes discussion with audience: The moderator will lead discussion and exchanges with the whole audience, based on the identified solutions from the break-out sessions. The differences/similarities in perspectives can be further discussed.

    - 10 minutes round-up to agree on session conclusions/steps forward

    Session Report

    - Session Type (Workshop, Open Forum, etc.): Workshop

    - Title: Who is in charge? Accountability for algorithms on platforms

    - Date & Time: 12 November, 12.10-13.40

    - Organizer(s): Kristina Olausson, ETNO; Lorena Jaume-Palasí, the Ethical Tech Society; Pablo Bello, ASIET; Andrés Sastre, ASIET

    - Chair/Moderator: Gonzalo Lopez Barajas, Telefonica

    - Rapporteur/Notetaker: Kristina Olausson, ETNO

    - List of speakers and their institutional affiliations (Indicate male/female/ transgender male/ transgender female/gender variant/prefer not to answer):

    Speaker 1: Oscar Martín González, male, Public Sector, Latin American and Caribbean Group (GRULAC)

    Speaker 2: Lorena Jaume-Palasí, female, Civil Society, Western European and Others Group (WEOG)

    Speaker 3: Fanny Hedvigi, female, Civil Society, Western European and Others Group (WEOG)

    Speaker 4: Pascal Bekono, male, Government, African Group

    Speaker 5: Phillip Malloch, male, Private Sector, Western European and Others Group (WEOG)

     

    - Theme (as listed here): Development, Innovation & Economic Issues

    - Subtheme (as listed here): INTERNET MARKETS - TELCOS, INTERNET SERVICE PROVIDERS, COMPETITION

     

    - Please state no more than three (3) key messages of the discussion. [150 words or less]

    1. The use of algorithms has become increasingly common not only in the private but also in the public sector. There is a clear benefit in terms of assessing large amounts of data. However, challenges such as access to data, differences in legal frameworks and the impact of the mathematical models behind algorithms on the freedom/autonomy of the individual also need to be addressed.

    2. With rapid technological development, we need to ask how well current legislative frameworks are adapted to address human rights where automated decision-assisting systems are used.

    3. As the discussions are still at an early stage, the multi-stakeholder model can be used to map and identify risks/challenges and to increase the exchange between different regions.

    - Please elaborate on the discussion held, specifically on areas of agreement and divergence. [150 words]

    There was overall agreement that “transparency” and “explainability” are two different issues. While transparency was seen as key for ensuring accountability of algorithms, not all actors in the session saw this as enough and demanded more active participation by the person whose data is used in the process.

     

    The session noted that governments and the private sector play an important role in ensuring human rights and ethical principles. There was also broad agreement among the session participants that it is too early to regulate algorithms on platforms and that the current framework for human rights is sufficient. However, some participants noted a lack of knowledge about how individuals are impacted.

     

    The private sector also offered a more positive outlook, looking not only at the challenges but also at the opportunities of algorithms. Algorithms can be an important tool for addressing the SDGs by providing efficiency gains, enabling better analysis of data and creating value for individuals. Companies compete on trust from users. Convergence and globalization have brought a lot of competition, and users care not only about price and quality but also about whether a brand is trustworthy. These values will help ensure that companies continue to uphold human rights.

     

    - Please describe any policy recommendations or suggestions regarding the way forward/potential next steps.

    Start by setting standards, and design technology according to those standards. It should be made clearer what is meant by “responsibility” and “accountability”, as these legal concepts have different meanings in different regions. The human rights framework is sufficient to address the current issues with algorithms, but we should extrapolate it to this new context. This will be key to ensuring trust from users of platforms. Therefore, users should be engaged and consulted when issues addressed by algorithms impact them. Finally, we should not rush into regulation that could hamper innovation.

     

    - What ideas surfaced in the discussion with respect to how the IGF ecosystem might make progress on this issue? [75 words]

    IGF should continue being a forum for exchange of information and best practices. While the participants concluded that a broad set of stakeholders should remain engaged, their specific roles need to be further discussed. Some of the concrete topics that we need to continue addressing are:

    • How can we make the utilisation of algorithms really understandable for all those people involved?
    • How can we reconcile transparency with people's intellectual property rights in the private and commercial space?
    • What is the role of government actors, the private sector and others?

     

    - Please estimate the total number of participants.

    Approximately 70 people.

     

    - Please estimate the total number of women and gender-variant individuals present.

    About 40-45 women.

     

    - To what extent did the session discuss gender issues, and if to any extent, what was the discussion? [100 words]

    The session discussed how algorithms can both reinforce and uncover bias in society; gender biases are one example. It was noted that algorithms should therefore be transparent and be based on a legal system that respects human rights.