
IGF 2020 WS #344 Trustworthy Web - Differential Privacy and AI to prevent Online Harms

    Organizer 1: Hartmut Richard Glaser, Brazilian Internet Steering Committee - CGI.br
    Organizer 2: Vagner Diniz, NIC.br - CEWEB.br
    Organizer 3: Talitha Nicoletti, Pontifical Catholic University of São Paulo - PUC-SP
    Organizer 4: Caroline Burle, Ceweb.br/NIC.br
    Organizer 5: Beatriz Rossi Corrales, Brazilian Network Information Center - NIC.br

    Speaker 1: Diogo Cortiz da Silva, Technical Community, Latin American and Caribbean Group (GRULAC)
    Speaker 2: Lívia Ruback, Technical Community, Latin American and Caribbean Group (GRULAC)
    Speaker 3: Arkaitz Zubiaga, Technical Community, Western European and Others Group (WEOG)

    Moderator

    Vagner Diniz, Technical Community, Latin American and Caribbean Group (GRULAC)

    Online Moderator

    Caroline Burle, Civil Society, Latin American and Caribbean Group (GRULAC)

    Rapporteur

    Beatriz Rossi Corrales, Technical Community, Latin American and Caribbean Group (GRULAC)

    Format

    Round Table - U-shape - 90 Min

    Policy Question(s)

    What are the common forms of online harms today? What are their impacts on individuals? To what extent, and how, can online harms threaten complex systems in a society, such as democracy, the economy and healthcare? How can technical approaches address those challenges? How can we ensure AI systems do not violate people's basic rights, such as freedom of speech, when dealing with online harms? To what extent can the use of data from social media violate privacy? To what extent, and how, could Differential Privacy techniques help us use data to train AI models that fight online harms while preserving privacy?

    In this workshop we intend to discuss how AI models could be applied to prevent online harms in order to create a trustworthy Web. There is an emerging area in academia, government and the private sector of developing AI applications to detect hate speech, cyberbullying and disinformation, and different companies are creating research projects to deal with those challenges. Facebook, for example, is funding research projects on polarization and disinformation, and the UK government has published an Online Harms White Paper to introduce and discuss possible strategies to overcome the threat. However, most of those techniques rely on data, so there is a potential risk to privacy when trying to prevent online harms. It may seem contradictory, but there are promising privacy techniques that address those challenges and preserve privacy while keeping data useful for AI models. Another problem we face is the gap between the two disciplines: they are usually pursued by different people with distinct technical backgrounds. This is an opportunity to bring together experts who are leading projects in AI to prevent online harms with people who are leading privacy projects. Crossing this gap will benefit society, because we will find better strategies to fight online attacks while preserving privacy.
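
    To make the first part concrete: projects of this kind typically train a supervised classifier on posts that human annotators have labelled as harmful or benign. Below is a minimal, illustrative Python sketch of such a pipeline; the four-post dataset is invented for illustration, and the TF-IDF plus logistic-regression choice is just a common baseline, not the method of any project named above.

```python
# Minimal sketch of a supervised harmful-content classifier.
# The inline dataset is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated posts: 1 = harmful, 0 = benign.
posts = [
    "you people are worthless and should disappear",
    "great meetup yesterday, thanks everyone",
    "nobody wants your kind here, get out",
    "the new park downtown looks lovely",
]
labels = [1, 0, 1, 0]

# TF-IDF features + logistic regression: a simple, common baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Classify a new, unseen post.
print(model.predict(["get out, nobody wants you"]))  # expected: [1]
```

    The privacy concern raised above enters at the first step: the training examples are real users' posts, collected and stored by whoever builds the model.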

    SDGs

    GOAL 3: Good Health and Well-Being
    GOAL 5: Gender Equality
    GOAL 8: Decent Work and Economic Growth
    GOAL 9: Industry, Innovation and Infrastructure
    GOAL 12: Responsible Consumption and Production

    Description:

    Artificial intelligence and privacy are two major concerns in the Web ecosystem today. In this workshop we aim to discuss how AI techniques can help us prevent different types of online harms, such as hate speech, cyberbullying and disinformation, while preserving privacy. At first glance, this may seem somewhat contradictory and paradoxical, because the most common AI techniques rely on data for their training, and when we talk about online harms we are referring mainly to data collected on the Web as training examples for AI models. It is very common, for example, to collect posts from the main social networks, have them annotated by researchers, and then use them to train AI models. There are also organizations that provide open data to be used in training. In both cases, is privacy being considered an important factor? While trying to combat online harms, might we not be violating users' privacy? Today, some approaches and techniques are being developed to assist in this process. Merely anonymizing data does not guarantee privacy, as several studies and famous cases have shown that users can be re-identified by cross-referencing different databases. One promising technique is Differential Privacy, which "adds noise" to the dataset. This strategy helps to preserve privacy, but may impact the performance of AI models. It is therefore opportune to promote greater integration between these two areas, which seem distant from each other. In the session, we will bring together experts on online harms and privacy to discuss how those two disciplines could be integrated to create a trustworthy Web, preventing attacks while preserving privacy.
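
    To illustrate the "add noise" idea mentioned above: the textbook Laplace mechanism perturbs a query answer with noise scaled to the query's sensitivity divided by the privacy budget epsilon. The Python sketch below is a minimal illustration under assumed values (the count of 1,342 posts and the epsilon values are invented, not drawn from any real deployment).

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
import numpy as np

rng = np.random.default_rng(seed=42)

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Return a differentially private version of a counting query.

    A counting query changes by at most 1 when one user's record is
    added or removed (sensitivity = 1), so Laplace noise with scale
    sensitivity / epsilon yields epsilon-differential privacy.
    """
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical query: how many posts in a dataset were labelled hate speech.
true_count = 1_342
for epsilon in (0.1, 1.0, 10.0):
    print(f"epsilon={epsilon:>4}: noisy count = {dp_count(true_count, epsilon):.1f}")
# Smaller epsilon -> more noise -> stronger privacy but less accurate
# statistics: the privacy/utility trade-off the description refers to.
```

    The same trade-off appears when training AI models on noised data: stronger privacy guarantees generally cost some model performance, which is precisely the tension the session will explore.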

    Expected Outcomes

    During the session, guided by the Policy Questions, the experts will briefly explore the state of the art of online harms and how technical arrangements (especially AI) can address those challenges. They will discuss to what extent AI models can be used in this scenario while preventing attacks on freedom of speech and privacy. Use cases will be discussed among the participants, who will also consider the challenges of online harms, the role of AI and Differential Privacy in this process over the coming years, and how it will bring a significant change to the Web as we know it. Hence, the workshop may produce a roadmap, agreed among workshop participants, for opening a global debate on the core challenges of using AI to prevent online harms while protecting people's rights and privacy. The purpose of the workshop is to reach out to different stakeholders in order to disseminate this roadmap.

    Workshop agenda:
    Opening remarks on policies and practices regarding Differential Privacy and artificial intelligence by the moderator of the workshop (10 minutes).
    Five interventions with use cases, based on the Policy Questions, to generate debate among the speakers and the audience about Differential Privacy and AI to prevent Online Harms (50 minutes).
    Debate between the experts and the audience, focusing on to what extent AI models can be used in this scenario while preventing attacks on freedom of speech and privacy (30 minutes).

    Relevance to Internet Governance: Although the Web began as a platform for sharing documents, since the early 2000s we have been in the era of data on the Web, and the development of Internet and Web technologies has therefore facilitated the so-called data revolution. In recent years the development of artificial intelligence has drawn attention to issues such as privacy and the protection of personal data. Artificial intelligence and privacy are thus two major concerns in the Web and Internet Governance ecosystem today.

    Relevance to Theme: Trust is key to promoting an open and healthy online space. However, in recent years we have seen movements emerge that threaten the original principles of the Web as an open, collaborative and trustworthy platform. These risks include, but are not limited to, groups that commit cyberbullying and spread hate speech and misinformation, often in a coordinated way. A toxic space is being created, a platform on which some groups of users can feel attacked and violated while others can be manipulated. This situation is jeopardizing the original principles of the Web, and many efforts are being made to combat this threat. The use of AI models seems to be a promising strategy to deal with this problem, but side effects can arise: attacks on freedom of expression and privacy. In this workshop we will seek a nuanced view of the topic and discuss the state of the art in techniques, such as Differential Privacy, for dealing with these issues.

    Online Participation

    Usage of IGF Official Tool. Additional Tools proposed: We intend to use Zoom to interact with online participants.