IGF 2025 – Day 3 – Studio N – WEF Business Engagement Session Safety in Innovation - Building Digital Trust and Resilience

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> AUGUSTINA CALLEGARI: Good morning, everyone, welcome to the session on safety and innovation organized by the World Economic Forum. My name is Augustina Callegari. I'm digital safety lead at the World Economic Forum. I'm really pleased to have you today and I'm going to be moderating this session.

Firstly, I have to say that as part of the Global Coalition for Digital Safety, we have been reflecting on the importance of safety by design, because building safety into the design from the very start is good practice and essential for maintaining the usefulness and responsiveness of innovation. That's why we are here today to hear from a diverse group of speakers -- we don't have one, we don't have two speakers. We have five speakers. So this is going to be a very interesting session.

We have ‑‑ let me start by introducing our panelists.

We have Jeff Collins, head of global trust and safety at Amazon Web Services. I have Renato Leite, VP of Legal, AI, Privacy and Innovation at E&.

We have Peter Stern, stakeholder engagement director at Meta, Valiant Richey, global head of outreach and partnership at TikTok, and Rafaela Nicolazzi from OpenAI.

I know we only have an hour and many speakers. I want to start by asking each of our speakers a question, because something we focus on when we work around safety is building common ground, a common language. Sometimes we talk about certain terms or challenges, but there is no agreement on what we mean by the thing that we are trying to solve. So I want to ask the panelists: what does safety mean to each of you, and what is the biggest challenge that you face when you want to turn this principle into practice? I think I can start with TikTok and then go to you.

>> VALIANT RICHEY: Thanks to the government for hosting the IGF this year, and thanks for this panel and the invitation to participate. The topic of this session is safety and innovation, and this concept of safety from the start is something we're very committed to at TikTok. It means ensuring that safety and security are built into our products from the beginning, by design. In other words, what that means is that the product teams building things and the safety teams need to work together from development to launch. For those who might be less familiar with how things happen: in the world of TikTok, the product teams are building things like the search function in TikTok, the safety teams are writing the rules and enforcing those rules across the platform, and sometimes the product teams will do things without trust and safety involved, and that can cause problems. What we try to do is bring them hand in hand to work together. The goal is to build responsible products that help people express themselves and build communities, and we need to focus on minimizing risks when we do that. When we're talking about building safely, about safety and innovation, what does that look like in practice? It means that trust and safety teams are working with our product and engineering teams to safeguard new product features and innovations as they are developed. Let me give you an example. We recently started testing a feature called AI Alive, which lets you turn your TikTok Stories into videos. As part of the development of AI Alive, our trust and safety team built a multilayer moderation strategy to make sure that safety checks happen at each step of the way, and we added AI-generated content labels and metadata into the feature so that people are actually aware that the content was created with AI.
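For readers who want to picture what a "multilayer moderation" flow of this kind can look like, here is a minimal, hypothetical sketch in Python. It is not TikTok's implementation; all function names, labels, and the blocklist are illustrative assumptions that only show the general pattern of checking the prompt, checking the generated output, and attaching an AI-generated label.

```python
# Illustrative sketch only: a generic multi-stage moderation pipeline for an
# AI-generated-content feature. All names and checks are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GenerationRequest:
    user_prompt: str
    source_media_id: str
    labels: list = field(default_factory=list)

def check_prompt(req: GenerationRequest) -> bool:
    """Layer 1: screen the user's prompt before any generation happens."""
    banned_terms = {"example_banned_term"}  # hypothetical blocklist
    return not any(term in req.user_prompt.lower() for term in banned_terms)

def check_generated_media(media_bytes: bytes) -> bool:
    """Layer 2: run the generated output through automated classifiers."""
    # In practice this would call image/video moderation models.
    return len(media_bytes) > 0  # placeholder check

def apply_provenance(req: GenerationRequest) -> GenerationRequest:
    """Layer 3: label the output as AI-generated and attach metadata."""
    req.labels.append("ai-generated")
    return req

def moderate(req: GenerationRequest, media_bytes: bytes) -> Optional[GenerationRequest]:
    if not check_prompt(req):
        return None  # block before generation
    if not check_generated_media(media_bytes):
        return None  # block after generation
    return apply_provenance(req)  # allow, with AI label and metadata
```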

Another example of how we do this at TikTok is by really focusing on building tools and resources into the product that our community can use. That might include things like content management or screen time management tools, reporting options, account controls like being able to set your account to private, and so on.

Along that journey, it's hard, right? There are a lot of challenges. Unfortunately, you're facing motivated bad actors who are really trying to misuse the product at every step, and anticipating all those misuses of the product is not easy. But for a product like TikTok, which features user-generated content, one of the biggest challenges we face is balancing safety and fundamental rights like freedom of expression. We're firmly committed to finding that right balance, but it's not easy. Something like online hate, for example, can be highly contextual and nuanced. It's really difficult to know where to draw the line. Sometimes efforts have to be a bit more reactive. That's when we look to our community principles and guidelines and to outside experts to really help us build the system that works best for our community. So we want to do safety by design, but we also have to recognize that it's going to be difficult as we go.

>> AUGUSTINA CALLEGARI: Thank you.

>> PETER STERN: This is actually my first IGF even though I have been at Meta for 11 years now. It's great to see everybody. I'm on the content policy team at Meta. I run a stakeholder engagement function. My role is to develop relationships with academics, civil society groups, other thought leaders who can help us inform our policies, including our product policies to build better policies and get better results for our worldwide user base. I want to talk about how we think about safety both in terms of policies and products but first I mean I would want to sort of question any suggestion that safety and innovation don't go together or are incompatible. Safety is really at the heart of everything that we do and it's always been that way. I mean, safety is one of the core principles that underlie our community standards along with voice, dignity, and privacy. And we're always thinking about safety as we build our policies.

This goes back a long time. I was involved in the rollout of Facebook Live, gosh, seven, eight years ago now, and at that point we faced a lot of novel questions about how we were going to moderate content and look in on live streams that reached a certain level of virality or viewership. Safety was obviously incredibly important then and continues to be now.

So one way that we embed safety in our processes is by doing robust stakeholder engagement whenever we revise our community standards. My team maps out a strategy for who we should be talking to around the world, who has written an authoritative opinion about a particular policy area, and then we'll seek to engage them, get their input, and feed that into the policy development process. By talking to the right people and fleshing out these views, I think we help to identify the impacts that we will have on our users, which I think is one core element of what goes into making an approach to safety and something very important to us. The policy development process then concludes through a kind of company-wide convening that we call a policy forum and is then rolled out in the form of a public change to our community standards. That transparency, I think, is also an important element of safety that we'll probably be talking more about today.

Shifting gears here just because I know that time is limited for everybody. In the world of, for example, the development of our policies around AI systems, safety is also fundamental. You know, Meta deals with issues of safety in AI at a number of different levels. As I think people know, we develop our own foundational models that we open source and make available to others to use. We also create our own services for our users that are fine-tuned using AI, and then we host AI-generated content on our services. At each of these levels we have to take into account issues of safety. Are we using diverse and representative data to train our models? Are we providing information to developers who are using our models to create their own products, so that they can build in safety and have a sense of the guardrails that we think are important? How are we testing and red teaming and seeking to build in safeguards when we create our own products? And how do we identify AI-generated content that other people might want to put on our services, in a way that people feel gives them sufficient information but is not unduly intrusive? Lots more to be said, but that's an overview of how we think about safety in this setting.

>> AUGUSTINA CALLEGARI: Thank you.

Jeff, do you want to address the question?

>> JEFF COLLINS: First, thanks to the World Economic Forum for putting all of this together, and thanks to all of you for showing up. As someone who has been to a few of these -- Peter, I'm surprised you haven't been to an IGF before -- this is a pretty good turnout for late in the week. So thanks for coming.

I have been working in the trust and safety field for a decade. This topic is near and dear to me because, for trust and safety practitioners, the core of our work is really to embed a safety mindset early in the design process. I've worked at a teen app called After School, at TikTok, where people like Val are carrying on the work we started there, and now at AWS, a cloud infrastructure provider. For me, an important component, and something I want to introduce to you here today, is that we need to think about safety across the entire tech stack. A lot of times people think of user-generated content companies like Facebook or Twitter, but it's really important that we understand, across the entire stack, how abuse can occur and how we need to think about safety often and early. Being at AWS has widened my aperture in how I think about safety. As I said, we're a cloud infrastructure company. What that means is we have a large, diverse customer base, and we provide the underlying technology for our customers to power their services and applications. So you might think of a cloud infrastructure provider as providing storage, but we provide compute, data analysis, AI, and many other capabilities, and really we help power companies' businesses. Now, our AWS customers are ultimately responsible for what they do in their AWS environment. They're responsible for complying with laws and with our acceptable use policy, and what we try to do is help empower them to build safe, secure, and trustworthy services and applications. So how do we do that? Our trust and safety team at AWS works to minimize and mitigate harms and abuse across the AWS network, and this could be anything from security threats like DDoS attacks to someone putting out illegal content, like terrorist content, for example. If we face a true bad actor who compromises a customer's account and launches a DDoS attack, we'll take action to mitigate it. If we have a customer who has a problem, we help them try to address the problem, diagnose its root cause, and figure out how to prevent it in the future. As an example, we might dig into a case where there's ongoing Internet abuse and find out that a customer didn't patch a security vulnerability and, as a result, their account was compromised and used to illegally mine cryptocurrency. In a case like that, we would work with them to understand what happened, and then there are a number of ways we try to help them prevent the problem in the future, really designing safety into their product. We can do that by guiding them to use AWS services. We have over 200 of our own services, and many of them are really useful in the trust and safety space. You may have heard of Amazon Rekognition, which does image and video detection. We have other tools like Amazon Transcribe that can help detect and block harmful language. We will guide customers to use those solutions. We also have AWS Marketplace, which is a way customers can use independent software vendors' products in their business. So you might see products like Thorn's Safer tool, which is used to detect and prevent child sexual abuse material, and offerings from companies like Cohere, WebPurify, and Checkstep; those are all in AWS Marketplace. Then we also have a partner network, with consulting partners like Accenture, Deloitte, and Cognizant, and they help our customers integrate Amazon services like Rekognition into their services.
Finally, we have something called the Trust and Safety Center, which provides information on how to report abuse to our AWS trust and safety team, how to troubleshoot certain types of abuse, and how to integrate best practices into your work. I think I'll save AI for later, but that's an overview of how we think about safety at our layer of the tech stack, where we are not directly putting out content but we have a whole range of customers across government, the private sector, and civil society that are using AWS, and we try to work with them to help them integrate safety into their products.
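As a concrete illustration of the kind of tooling described above, here is a minimal sketch of how an AWS customer might call Amazon Rekognition's image moderation API from Python with boto3. The bucket name, object key, and confidence threshold are illustrative assumptions, not AWS guidance; this is only one possible integration pattern.

```python
# Hedged sketch: detecting potentially unsafe imagery with Amazon Rekognition.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

def flag_unsafe_image(bucket: str, key: str, min_confidence: float = 80.0) -> list:
    """Return moderation labels Rekognition detects above a confidence threshold."""
    client = boto3.client("rekognition")
    response = client.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    return [label["Name"] for label in response.get("ModerationLabels", [])]

if __name__ == "__main__":
    # Hypothetical bucket and object key for illustration only.
    labels = flag_unsafe_image("example-uploads-bucket", "uploads/image.jpg")
    if labels:
        print("Flagged for human review:", labels)
```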

>> AUGUSTINA CALLEGARI: Thank you, Jeff. We are going to be talking more about AI later but now I want to give the floor to Renato Leite.

>> RENATO LEITE: First and foremost, thanks to the World Economic Forum for organizing this, and thanks to my fellow panelists. It is an honor to be here at the IGF. Just before I start, since we have such a crowded room, can everybody hear okay? Everybody in the back over there? I know it's a little bit packed, I know, so it's okay.

Jokes apart, then.

So just a little bit of introduction. I work for E&, and I know it's not a company as famous in the Western world as some of the companies over here; I am based in the Middle East. E& is a holding company operating across the Middle East, North Africa, Asia, and Europe, and expanding to other parts. Basically, it used to be a telco that turned into a tech company, and it does everything: infrastructure, Internet services, satellites, cables, robotics, health, finance, payments, entertainment, even hotels. Every time I talk to a team I find something new we do. Like many companies, it also decided to turn into an AI-first company, and that comes with the challenges that we will come to here.

So going very specifically to the question: how do we see safety from the start, and what are some of the challenges we see in turning those principles into practice? Something that E& saw from the beginning is that guaranteeing safety, what we call here responsible AI, does not mean that you cannot drive innovation. Even if you look at my title -- I don't like titles, as those who know me know -- it's about AI, privacy, and innovation. At the same time that you promote AI and being AI-first, you can guarantee privacy (privacy is my main background) and you can also foster and drive innovation. That comes from having very well structured governance processes and policies inside the company. We can talk a little bit more about that, but here is one of the main challenges: a lot of companies say they have good governance in AI, they have a bunch of committees inside the company, everybody can chime in, everybody has a say, but in practice it is very, very hard. So the first main challenge that we see is the lack of expertise. I'm not talking about engineers, the computer engineers who know how to build large language models, multimodal reasoning, and the other marvelous projects that engineers at companies like AWS build and that all of us use; I'm talking about knowing what responsible AI is, what safety from the beginning is, and what you have to know in order to be able to build these safe products. So one of the first challenges, and also the hardest to teach, is how to upskill people to know what safety means from the beginning, so that when they are doing their day-to-day activities, when they are developing products, services, and solutions, regardless of the sector, Internet or not, they know what they're doing. They're not saying, "I'm just doing this because there is a policy that needs to be enforced." So this is something that we have been working on from the beginning.

That also means alignment and communication among all of the teams. We need really strong buy-in from the top down in order for the teams to say, okay, we're going to take the time to learn, to develop this, to follow these principles and the charter that was discussed, and that is also really hard. This is why, for some of the solutions, we rely a lot on automation. Something that I'm really keen on is how we rely on automation and even AI for policy enforcement. You can discuss and provide awareness, but you also have to make sure that there is not some type of bypass, which happens most of the time because there is no knowledge of all of the processes that need to be followed, and AI can even help with that. Just to finalize, this is what we call not only safety by design but governance by design: every single stage, from ideation, to risk assessment, to deployment, to monitoring, has governance that can be automated and can also be embedded into the expertise of all of our employees. We can talk about what we have been doing internally, but those are the challenges and how we put this into practice.

Thank you.

>> AUGUSTINA CALLEGARI: Rafaela Nicolazzi.

>> RAFAELA NICOLAZZI: I think my fellow panelists have addressed most of the things companies do to address safety by design, but perhaps I want to start with a little bit of my background. I am based in Brussels, so one of the things I do a lot is engage with regulators all over Europe and beyond. I wanted to answer this question through those lenses, and to me building safety from the start means not waiting for regulation or a crisis to come up. It has to be a proactive commitment. It's basically responsibility from inception in every product, as my fellow colleagues here already said, and not treating safety as an afterthought or a compliance checkbox, which sometimes happens, as we see with smaller organizations. I could speak about safety for many, many hours, but we are only about 20 minutes away -- or 35 -- so I want to highlight four things that we're doing at OpenAI to bring safety by design into practice.

No. 1, it starts with what safety by design means. One of the key documents that we publish is called the Model Spec; I wonder how many of you may have seen it already. It's a public commitment that sets expectations for how we want the model to behave, grounded in societal values: how we want our AI models to really engage with society.

No. 2 is risk mitigation. For that we have something called system cards. System cards describe how we've identified, tested, and mitigated potential harms, misuse, bias, and so on. Basically, it's what we do whenever we release a big model or a new feature.

No. 3 is what I call threat monitoring. That relates to our preparedness framework document, and it outlines how we track, evaluate, and stress test what we call catastrophic risks, like misuse in biosecurity.

And last but not least, to me one of the most important ones, one that really speaks to my heart, is being transparent about all of this. Everything we do -- all of these policies, the safety measures, how we define risk, which risks we think are most important and should be assessed by our safety teams -- is shared openly. And the reason is to really invite critique, because we believe that building safety is a collective effort, right? One example of this is the Model Spec. We opened it up for public consultation and feedback from the general public for two weeks, and then we published the first version of it. And that goes to the last point of your question, which is: what is the challenge? To me the challenge is balancing the speed at which this technology is evolving with the depth of the safeguards that we have to provide. We know that sometimes regulation and enforcement cannot go as fast as the paradigm shifts that are happening in the AI era. So it's definitely about how to balance those two together.

>> AUGUSTINA CALLEGARI: Thank you.

Well, we have covered a lot already, and we have heard different perspectives on a term that could seem very straightforward: when we think about safety by design, we think it needs to be embedded from the beginning, but what we have here are different perspectives. To pick up on a few things mentioned here: firstly, we need to recognize and address safety as a space that is continually changing, an evolving practice and an evolving discipline as well. Other things that came up from the conversation related to building trust and safety not only for users but within the company: there is a need to work together with all the teams inside the company, but also with other stakeholders, to build interventions that embed safety from the beginning. We also heard about the importance of not separating safety and innovation, because they come together, and, picking up on some of the points that were made, about the importance of transparency and of balancing all of this. So that's where we are right now, but now I want to go deeper into some of the things that you have mentioned. So I will ask a follow-up question to each of you. We still have half an hour, but we also want to allow the audience to ask questions. So please be prepared for that as well.

So I will start with you, Val, because I will follow the same order. And we know, of course, that TikTok reaches a massive youth audience. So what unique safety by design approaches do you employ for children or other vulnerable groups?

>> VALIANT RICHEY: TikTok has this reputation as a place that teens love, but I think a lot of people would be surprised to learn just how broad the audience is on TikTok. In fact, in the U.S. our average age is around 30, and my own feed is full of parenting advice and cooking recipes and content posted by grandparents. So I think it's really important to think about safety holistically, for all of our users. But yes, teens around the world love TikTok, we know that. And so for years we have approached safety by design for teen accounts differently. We build the strongest safeguards into teen accounts so they can have a positive experience in a safe environment.

For example, accounts for teens under 16 are automatically set to private, along with their content. Their content can't be downloaded, it's not recommended in the For You feed, and so on.

Direct messaging is not available to anyone under 16. I'll tell you that's really annoying to my 15‑year‑old who wants to direct message. Every teen wants to direct message but the fact is we made a decision that direct messaging is one of the riskier areas in social media and so for under 16 we don't allow it at all.

Every teen under 18 has a screen time limit automatically set to 60 minutes, and only people over 18 are allowed to live stream. So these are some of the things that we have built into the back end to really ensure those safeguards are robust for our youngest users. Those are, like I said, back-end product features, but we also want to empower our community, and particularly parents and guardians. We take a holistic view and give agency to our community. Safety by design doesn't mean only those back-end features but also the tools that people can use themselves. We recognize that every teen is different and every family is different. So we've continued to enhance our Family Pairing feature, which allows me to link my account to my teen's account, set time limits, and set times when he can't use TikTok if I want to. That, I think, allows parents some control over what their teens are doing on TikTok, blocking their teens from using the app at certain times of the day, for example, or setting screen time limits, really helping families customize their experience beyond those robust guardrails that I described. Finally, we really want families to talk. We spend a tremendous amount of time, and we should, thinking about what we can do in this space. We also talk with governments a lot about what they need to be doing to support this environment, but we also see families, parents, and guardians as critical stakeholders in this conversation. So we want to make sure that they're supported in having conversations about digital safety. We also know it's really hard to have those conversations. I have three boys. It's super hard to talk to them. They don't want to talk, or they want to talk at midnight when I want to go to sleep. You know, it's difficult to start that. So we developed this Digital Safety Partnership for Families, which is a conversation starter, and we built it through consultations with NGOs, with teens, and with families, and got advice about what helps. Things like: come with an open mind. Be available when they want to talk. When things go bad, don't freak out, right? Those are the types of advice that we put together, and you can download it on our website. It's not TikTok-specific; it's for social media generally, because we want to support families in those conversations. So that's a little bit about how we approach it. Thanks.

>> AUGUSTINA CALLEGARI: Thank you. Yeah, we have been discussing within the coalition the importance of designing targeted interventions, because there is no one-size-fits-all intervention, and we need to look specifically at targeted groups: children and other stakeholders.

Also, Peter, on the point you made about stakeholder engagement, I would like to ask you to explain a bit more about how to engage with governments and other stakeholders on evolving safety practices without undermining innovation, for example.

>> PETER STERN: Sure. Okay. Let me give you an example here that I think is illustrative of the practices that we're talking about. The issue of transparency, how we reveal to people that something may have been generated by AI, is an extremely important one. At the moment we take a scaled approach to this issue, and I want to briefly tell you what our approach is and then back up and tell you how we got there, which I think sheds light on how we operate. Currently we put invisible watermarks on content that is generated using our own AI tools. We also attach certain types of labels to content that people post that has been generated using AI in part or in whole, and we also have in our policies the option to apply a more prominent label if we determine that content that has been generated by AI could be highly misleading and a matter of public interest, potentially putting people at risk. So that's the bumper-sticker version of the policies; there's a lot of nuance. But tracing back how we got there: we had a policy on what we call manipulated misleading video going back to 2020. A case went up to our Oversight Board, which is an independent oversight mechanism that we created a number of years ago, and the Oversight Board told us some important things that we took to heart in this space. One was that we needed to update the policy because it didn't cover enough different types of content; we thought that made sense, as the previous policy was focused purely on video. They also said, interestingly, from the perspective of freedom of expression, that they wanted us to focus on labeling and transparency rather than taking a heavy-handed approach that would have involved removing a lot of video or other types of content that was created by AI. We thought both of these points were very important. That kicked off a policy development process for us. We went out and talked to 100 stakeholders in 34 countries to get their input on how we should be achieving these balances and drawing these lines. We also did extensive opinion survey research with 23,000 respondents in 13 countries. We learned some interesting things. There was support for labeling. There was particular support for labeling in high-risk settings. And there was also a lot of support for self-disclosure, people putting labels on their own content, which is something that we also thought was important. We took all of this into our policy development process and came up with the approach that I've described. So already we're talking about multiple, multiple touch points with experts and civil society, but there is also an important industry component, because in order to be able to identify content that has been generated using AI off of our services, we need to be able to interpret certain types of metadata so we can then apply labels. We worked closely with a consortium of companies under the aegis of the Partnership on AI so that we could develop a common framework in that area, and we also used the Partnership on AI's best practices in approaching our own solution on watermarking. So it's a work in progress. These are not perfect solutions. There are ways around many of these tools, and we need to continue to be vigilant about that, but the approach that we've come up with is broad-based, and I think it's helpful in understanding how we're going to tackle these types of problems in the future.

>> AUGUSTINA CALLEGARI: Thank you, Peter. And following up on the AI aspects that you and Jeff mentioned before: Jeff, how do you operationalize safety by design given the scale and speed of AI?

>> JEFF COLLINS: I want to start by picking up on something you said. You said trust and safety is continuously evolving. That's very true; we need to stay attuned to what's happening. But it's also very important to recognize that we have learned a lot of lessons over the last decade as trust and safety practitioners, and we need to use those lessons in our work. With the really rapid ascent of generative AI, what we have seen across the tech industry is companies trying to figure out where they place trust and safety and how it fits in with responsible AI. Responsible AI comes from a different background, academia, and there's a lot of overlap between the two but not a ton of clarity. One thing I've tried to do is make sure we take the lesson of the last decade that Val touched on in the beginning, that we need to break down silos, and integrate the lessons learned from trust and safety into our AI work. I'll touch on one thing that we do on the AI front. Amazon Bedrock is our service that allows customers to leverage foundation models from many different companies in their own work, and we try to make it simple for customers to integrate safety into all stages of the AI lifecycle, from design and development to deployment and operations. So in Bedrock we built in the ability for our customers to use a number of safety controls. These are things like input filtering, to make sure that if you don't want to allow a user to be able to input something, certain language, say, around child sexual abuse material, you can prevent that. Output filtering. PII detection and redaction. There's a host of different safety steps that our customers can take, and in doing that they can tailor their products, services, and apps to align with their own values.
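To make the input/output filtering and PII redaction idea concrete, here is a hedged sketch using Amazon Bedrock Guardrails via boto3. The guardrail name, filter types, strengths, and messages are illustrative choices; the field names follow the Bedrock CreateGuardrail and ApplyGuardrail APIs as the author understands them, so they should be checked against current AWS documentation before use.

```python
# Hedged sketch: defining and applying a Bedrock guardrail for input screening.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

bedrock = boto3.client("bedrock")
runtime = boto3.client("bedrock-runtime")

# 1) Define a guardrail with content filters and PII anonymization.
guardrail = bedrock.create_guardrail(
    name="example-safety-guardrail",  # hypothetical name
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
    },
)

# 2) Check a user prompt against the guardrail before sending it to a model.
result = runtime.apply_guardrail(
    guardrailIdentifier=guardrail["guardrailId"],
    guardrailVersion="DRAFT",
    source="INPUT",
    content=[{"text": {"text": "User prompt to be screened goes here."}}],
)
if result["action"] == "GUARDRAIL_INTERVENED":
    print("Blocked or modified by guardrail:", result.get("outputs", []))
```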

Something else we did that I'm really proud of is that we built CSAM detection into Bedrock. We use hash-matching technology to detect potential CSAM in any inputs, and when we do detect it, and we do it with 99.9 percent accuracy, we block the content, we automatically report to the National Center for Missing and Exploited Children, which is the clearinghouse for CSAM in the United States, and then we advise the customer. The reason I mention all of those tools is because we developed them by having our trust and safety team work closely with our generative AI teams, and I think this is similar elsewhere. I don't work at other companies, but just having talked to people in the industry, I think they face similar situations where AI teams were working fast and furiously, engineers who have been working on machine learning and AI for a long time, to get products out. We really wanted to make sure that we learned this lesson from the past: we need to be careful not to rush things out too fast, but rather build safety into the products.
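For readers unfamiliar with the technique, here is an illustrative sketch of the general shape of hash matching against a list of known-bad hashes. It is not AWS's detection system: real CSAM detection relies on vetted hash lists and perceptual hashing under strict legal controls, and the hash set and handling steps below are hypothetical placeholders.

```python
# Illustrative only: exact hash matching against a known-bad hash list.
import hashlib

KNOWN_BAD_HASHES = set()  # populated from a vetted hash list in a real system

def matches_known_hash(image_bytes: bytes) -> bool:
    """Compare the image's SHA-256 digest against the known-bad hash set."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

def handle_upload(image_bytes: bytes) -> str:
    if matches_known_hash(image_bytes):
        # In a real pipeline: block the content, file the legally required
        # report, and notify the customer, as described above.
        return "blocked_and_reported"
    return "allowed"
```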

With that said, I will say that this is still very new territory. As OpenAI well knows, the pace at which new models are coming out is just a blistering pace, and we're still figuring out across the industry what exactly safety means.

For me, going back to your initial question, I would like us to get to a point where safety is considered a utility, where users and customers of AWS and others really expect safety to be built into products and are essentially demanding it. So I'll stop there.

>> AUGUSTINA CALLEGARI: Thank you, Jeff. I think we can expand on that if we have time, but now I want to go to Renato and talk more about regulation. So, if you can expand a bit.

>> RENATO LEITE: In the region, even though we operate in almost 40 countries and our main market is the Middle East, most of the regulation is still being discussed but not yet enforced. There is no stick to push people to do things. And that brings back something I said in the beginning: the level of maturity in the discussions, the expertise in how to do things, is very low. I'm originally from Brazil, and I know some Brazilian friends are over here. You may remember, when I was in Brazil around 2015 and we were discussing the general data protection law, the conversation was very much about basic concepts, basic ideas, and it's kind of hard when you are telling the teams that we have to follow X, Y, and Z and guarantee certain things when there is not that level of maturity. My team has the opportunity to manage and oversee these areas for the entire group: AI, privacy, and innovation and intellectual property. Within this field of safety, not long ago we developed a responsible AI framework, as a lot of companies do. We published it, and I must say we failed. And we failed because it was very beautiful: very colorful, shiny, something everybody could point to, marketing. But the principles that were there did not really reflect the maturity of the region, and they did not really reflect the inner workings of the organization. So we said we need to go back. And we went back. We went to talk with all of the areas of the company and with some of our main jurisdictions, and we tried to translate not only global standards but regulatory concepts into this framework, such as explainability, which is not the same as transparency. That might be something we take for granted in Europe, but explaining the difference between transparency and explainability, and how to build that into the AI solutions we have internally and the services we provide, is kind of hard. When you get to that understanding of the key differences, it's not just because it is in the regulation, because, as Augustina said, regulation is nascent in the region, and not only because it is in the policy, but because it can drive innovation and help achieve the goals of the organization. In the region we are very much driven by reputation, by brand reputation and brand value, and that has a big impact and also helps to get buy-in from the different areas and to have an interdisciplinary review and assessment of all of the solutions we have. So we also created an AI governance committee, which I'm a part of, and we even changed the mandate of the AI governance committee so that it discusses things from the ideation phase: are these ideas worth pursuing from a business perspective, and what are the safety measures that we not only want but will embed from the design, before we take them off paper and start developing? That's what I have been doing over the last years there.

>> AUGUSTINA CALLEGARI: Thank you, Renato. You also provided examples from Brazil.

Rafaela, over to you. You already mentioned some of the principles that you are focusing on at OpenAI. Now I would like to ask you for maybe a concrete example of how OpenAI brings safety into real practice.

>> RAFAELA NICOLAZZI: Absolutely. Perhaps I can start by bringing a little bit of context and take one step back on how we see this fast-paced evolution of AI technologies. Nowadays, when we think about AI, I think most of us have in mind the chatbots, right? That is the AI technology that allows you to engage in natural conversations and gives you fast responses to the prompts you give it. But to us this is just the first level of AI. The second one, which we have already been talking about for a few years, is what we call reasoners. Reasoners are basically AI that can think for longer. It's AI that has a chain of thought and allows you to challenge how it reached a response. And because it thinks for longer, it got much better at addressing questions in fields like science or mathematics. When we saw that coming, we saw that it would be a big leap in capability, right? It would be a huge paradigm shift. We first introduced this to society at the end of last year, around September, and we called this model o1; we know it might be tricky to follow all the names. And because we knew it would be a very capable model, we also wanted to ensure it would be the safest model we had in the market. So we did five things to make sure that we were addressing those safety concerns, for society, for regulators, and for policy stakeholders, and going in the right direction.

No. 1 was very rigorous testing. We conducted public and internal evaluations covering disallowed content, demographic fairness, and dangerous capabilities.

The second thing we did was external expert red teaming. I heard it mentioned here already, but one thing we are very clear about at OpenAI, especially because we are rooted in a research lab that started with a bunch of Ph.D.s trying to understand what the future of AI would look like, is that inside your own lab you can't understand and assess all the risks. You need people from the external community to do that with you. And for o1 we partnered with more than ten external organizations all over the world to really challenge the model and identify risks that perhaps we wouldn't be able to see by ourselves. So that was No. 2.

No. 3 was to bring in more advanced safety features. We implemented new safety mechanisms like blocklists and safety levers to really guide the AI's behavior and prevent any type of inappropriate content generation.

No. 4, which I already mentioned in the principles but here is how we did it in practice, is showing our work. We're very committed to being open about the capabilities, what the risks are, and how we are mitigating those risks, and then of course there is the publication of the safety cards, which I invite you all to check out if you haven't done so yet.

And last but not least, what we've done as well is partnerships with governments. At the time we worked with the U.S. AI Safety Institute; we gave them early access to our models so they could give us feedback, and that allowed us to set a precedent before we released this to the public.

>> AUGUSTINA CALLEGARI: Thank you.

Now we still have ten minutes. I'm going to open the floor for questions or comments. There is a mic this side. So if there are questions, I will ask you to stand up.

We also have remote participation, so my colleagues from the World Economic Forum are monitoring the chat in case there are any questions or comments from online participants.

>> Hello. Thank you for this panel. Really informative. Nice to see all the industry folks from the conference on one panel. My name is Andrea Vega; I work in tech policy, news, and integrity. My question: being here at a multistakeholder conference, some of you talked about the kinds of engagement and outreach you do with civil society and academia, but a key tenet of digital trust, as named in the panel, is ensuring the information you host is factual, true, and not harmful. Promoting that quality of information comes from working with journalists. Can you each talk about your approach to, one, hosting news online and, two, the kinds of partnerships you have with news organizations and media, if any, that inform your work?

Thank you.

>> AUGUSTINA CALLEGARI: I think we can take a couple questions and then address everything at the end. We cannot hear very well from here, right? So I hope we can address all your questions but we are checking on the screen. That's why you will see us looking at the screen to make sure we are understanding your question. So please, go ahead.

>> Hi, my name is Venicio. I'm a software engineer at Google, and I lead the Internet freedom team at Jigsaw.

And this is my first IGF. One thing I noticed is that I don't actually see a lot of engineers here. I also don't see product managers. But those are the builders, and we can't have multistakeholder governance without having the builders in the room. What we mostly see is legal and policy people. So I'd like to ask our companies: can we please bring the builders to this conversation?

And another thing: big tech engineers are well paid and quite often live in a bubble, and they're not even thinking about these issues. So our companies need to proactively put incentives in place to bring those people into the conversation, and at the same time remove the hurdles. I know at least one company here that banned its engineers and product people from going to a lot of these events in the past. We used to have very good and productive conversations, and I would love to see more of that. I think this would help us have more productive conversations with everyone and also build trust in your companies and our companies. I think it would help everyone.

I think Jeff can address that.

>> JEFF COLLINS: I'm just going to give you a quick response. It brought a smile to my face; I come from a policy background, but now I'm surrounded by engineers at AWS. Two IGFs ago, really a year and a half ago in Kyoto, I made sure to bring a senior engineer, because she kept asking what our policy team was doing, and I brought her there. For the first day or so she was so confused, because she went to these panels and she wanted A, B, C to lead to solution D, E, F, and she didn't understand, you know, how this worked. But by the end she had met a bunch of people and she could understand better that this is part of the policymaking process. I couldn't agree with you more, and I brought her because I wanted to help close that gap, and I think that's important at conferences like this but also in companies. So when I was at TikTok, we worked very hard to not just hire people from Meta. We needed to hire people -- we needed to hire people who had really --

>> You might be the only one.

>> JEFF COLLINS: No, we had a lot of people there and they were great, because they knew the history of trust and safety operations; they were high caliber, and everyone wanted them. But we also hired people from academia, from media, and of course we needed engineers. But you're absolutely right, there needs to be more of that.

Now, if you interact with the ICANN world here, you do start to come across a lot more technical people, but I couldn't agree with you more. So let's just think about ways that we can do that at this conference and elsewhere.

>> That's a good point, like removing the barrier inside the companies is also important because as you all know engineers and policy people are very separated.

>> JEFF COLLINS: Very true.

>> AUGUSTINA CALLEGARI: Being mindful of time, we only have three minutes and there are more questions waiting, so I ask you to be brief, and we will try to address the first question and also your questions.

Thank you very much.

>> Thank you. Can you hear me now? My name is Beatrice. I am a lawyer by training, but I'm also an assistant professor in law, so that's the perspective I'm bringing to the conversation. One of the things that Peter mentioned at the beginning, and that I think Rafaela was also referring to, is this question of a perceived conflict between safety and innovation. But a lot of what we hear is really a claimed tension between innovation and regulation, that regulation is going to hamper and prevent innovation. As a lawyer, I'm a firm believer that regulation can foster innovation. We are seeing a new wave of regulatory efforts from places like the AU but also the UK, where I'm based, trying to foster online safety through regulation. So I'm interested in hearing from you whether, to an extent, these new regulatory frameworks, these laws, have been pushing you towards different forms of innovation. Is regulation an enabler or a constrainer? Is it a floor? A ceiling? How do you see regulation in this space relative to your internal policies? Thank you.

>> Hi, good afternoon. I'm Dr. Sadim, secretary general of the Creators Union of Europe, which has consultative status with the UN. First of all, I would like to express my sincere thanks to the distinguished speakers for their insightful presentations.

Allow me to raise a question stemming from a news article I recently came across regarding an AI application. While I do not specialize in the technical dimensions of AI, I will do my best to convey my concern.

The article discussed a machine learning based system that, in its practical application, appears capable of developing behavior inadvertently, without human input. In some instances the system reportedly acted beyond human control, no longer limited to the data it was trained on, even surpassing intended human directives. My questions are: Does this present a significant risk that all professionals and stakeholders in AI must really consider? And, in your expert opinion, what measures can be taken to address such concerns and ensure responsible AI development? Thank you.

>> Hello, good morning. My name is Usari and I come from Costa Rica. I am a professor right now; however, I am a former minister of science, technology, and telecommunications of Costa Rica.

My concern is simple: when we have digital trust, we can have safe innovation. However, my concern is how we can involve public policymakers and governments more. Because if you don't involve all of them, their response to making things safer is to iterate regulation, and overregulation can end up stopping innovation. So I would like to have your point of view.

Thank you very much.

>> AUGUSTINA CALLEGARI: I know we are running out of time, and we have questions about partnerships and involving policymakers more, questions about regulation, and also about responsible AI development. So I ask my panelists, if you want to pick up on any of these points, to do so briefly, because we need to wrap up this session as soon as possible.

>> PETER STERN: Why don't I take the first question, because I took that to be primarily directed to misinformation on the platform, which is something that is a concern to journalists but also to all of our users, who have told us they want to see more context around certain types of content.

We have a policy directed to misinformation and harm, and under this policy we will remove content that is false when we determine that it could present real-world safety risks to people. This is an area that started off as a relatively small subset of content on the platform; during Covid it became much larger. That continues to be a policy on the books, and one that we inform regularly with input from local people on the ground.

We also have policies directed to providing the type of context that I mentioned. Traditionally we have worked with third-party fact checkers, and we continue to work with them outside the United States. In the United States we announced earlier this year that we were going to be moving to a different, crowdsourced model, which has a lot of virtues to it in terms of the way that observations about truth or falsity are reported. We're hoping the system will be more legitimate and scalable. There's a lot more to be said about that, and a very robust conversation about it, but I'll leave it at that: Community Notes in the United States, and third-party fact checking remains in place around the rest of the world.

>> AUGUSTINA CALLEGARI: We can have one or two more minutes.

>> VALIANT RICHEY: Like Meta, TikTok has a robust policy framework. We do a lot of media literacy work to make sure our community is receiving information with a discerning eye, and we collaborate with fact checkers to create that content. So it's a pretty robust system. We also obviously have a lot of entities posting news, from individual journalists to established news organizations, and we work closely with them to make sure that content is in line with our guidelines.

Regarding the point about bringing engineers and product people to these meetings: we actually have a network of ten safety advisory councils around the world, and a youth council, and we always bring our product people to those meetings, because they are amazing opportunities for them to hear from outside experts and integrate that knowledge directly into their everyday work. So full support for that idea; it's something we practice regularly.

On regulation, the thing we have found to be helpful in how we think about innovation is the risk assessments. They're challenging, and they can be annoying in some respects, but they are also good opportunities to rethink how you approach that safety work, and I think they've pushed us. And finally, on AI and responsible development, we have published our responsible AI principles, which guide our work on AI development. You can find them on our website, and I think they guide our work there every day. So it's something that we take seriously and will continue to do.

>> RENATO LEITE: I have always wanted to work with engineers. Even in my previous organization, the privacy team that I managed was very siloed: privacy legal, privacy engineering, and so on. We put them all together. So at a certain point, I'm a lawyer and I had ten privacy engineers reporting to me, and working together we had to find a translator. In my new organization we started to do the same. At first it didn't work; we were not speaking the same language at one point. But when we started to really work together, bridging that gap, we started to understand each other's language and to build better products because of that.

>> Very quickly, and I'm not sure this is the best closing, but AI is already smarter than you, right? And it's just getting smarter and smarter. We already know this, and the whole AI community is already trying to address the problems that come with this breakthrough. Three things I wanted to say about what we are already using internally to deal with this: No. 1, governance. No. 2, what we call model alignment. And No. 3, which is the most important one to address the question from the article, is always having a human in the loop. I'm happy to go deeper into this, but we don't have more time. Just to say that it would be very hard for us, in the near future, to leave a room and think that we are smarter than AI.

>> AUGUSTINA CALLEGARI: Thank you, everyone. We need to continue this conversation. I think we have seen how important it is to keep talking about safety, and I'm hearing very loudly that we need to break silos, not only within companies and between teams, but also between the different communities, from AI to digital safety, and build more partnerships with civil society organizations and other organizations, and also include the user perspective as much as possible. With that, I will close the session. I really want to thank all the panelists. I also want to thank the IGF and its staff for offering us this space. Thank you very much for being here as well.

[APPLAUSE]