IGF 2017 - Day 3 - Room XXIV - OF31 Data Protection and Humanitarian Action


The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 



>> JACOBO QUINTANILLA: OK.  Good afternoon, everybody.  We thank you for coming at 3:00 p.m.  I know it's a really tough time.  Lunch, coffee and meetings.  So we are delighted to have a small, but very committed room.  Thanks for coming.  My name is Jacobo Quintanilla.  I work as Community Engagement Adviser at the International Committee of the Red Cross, up on the hill, 300 meters away.

It's an honor to be here in the UN discussing with a fascinating panel of colleagues.  A fascinating issue around data protection.

Without more delay, I'm just going to quickly introduce you to our panel.  Before we do that, we're going to watch a one-minute animation on the topic.  So one minute, we'll be back with that.

>> JACOBO QUINTANILLA: Excellent.  Thank you.  To help us unpack this fascinating topic, we have a fantastic panel I have the honor to moderate.  Next to me is Alessandra Pierucci, chair of the committee of Convention 108 at the Council of Europe.  Massimo Marelli is the head of the Data Protection Office at the ICRC.  Bryan Ford is professor of decentralized and distributed systems -- Bryan, I really have to read this -- at the EPFL in Switzerland.  At the very end of the table is Alexandrine Pirlot De Corbion.

Massimo is going to give us a little bit of an introduction to set the scene on what we will be discussing today and what is data protection.  We would like to have live conversation.  This is a very intimate scenario today.  So we are delighted.  So please feel free to jump with questions.

The only reminder is that a question actually has a question mark at the end.  So ask your questions, and we'll try to do our best to answer to the best of our capacity.

>> MASSIMO MARELLI: Thank you for turning up in this intimate environment to have this discussion about a topic that for us is incredibly important.  And it's a topic that has been keeping us busy for a long time: four years at the ICRC, and two years as a project we led with the Brussels Privacy Hub.  It brought together a number of experts from academia, from data protection authorities and privacy commissioners, from humanitarian organizations, from corporates involved in providing some of the services and technologies we have addressed, Civil Society and so on and so forth.

And it's a process that led to the production of a handbook that will hopefully provide guidance in relation to the adoption and use of certain new technologies in humanitarian work.

It's been a long journey that has been marked by certain key steps and milestones.  The first one that is worth mentioning, and that Alexandrine will mention, is the report published in 2013 by Privacy International.

It was a bit of a slap in the face of the humanitarian sector, saying: guys, you're moving around a very volatile environment.  You're collecting lots of sensitive data.  Without meaning to do anything bad, the result of your action is that in many cases you are facilitating surveillance and (?) repression by using certain technologies.

Subsequently, the International Conference of Privacy and Data Protection Commissioners in 2015 adopted a resolution on privacy and international humanitarian action, where it was highlighted that there were some issues around the use of new technologies in particular in humanitarian work.  It committed the international conference, and some of the data protection authorities and privacy commissioners, to work with the humanitarian sector to come up with guidelines and indications as to how certain new technologies can be used in humanitarian work.

2015 was about the same time as we were sitting at the same table with the Brussels Privacy Hub and thinking: there are people who are suggesting that we should start to fly drones, that we should use big data more, that we should use messaging apps.  That we should do many things.

And obviously they're suggesting that because they're good ideas.  They're ideas that actually could make our work a lot more efficient and effective.

Everybody has the sense that there's a lot that we don't know about the use of new technologies and the implications of these technologies.  So we need to be able to really look into those technologies a little more and try to understand how we can leverage them in a way that is acceptable from a data protection point of view and that causes no harm.  We have an incredible panel with a lot of expertise,

authority and knowledge.  And so I don't want to take too much time from the discussion, which I'm sure is going to be very good.  And I know that when people give an introduction, they go on for half an hour.  I'm counting on Jacobo to slap me.

I just want to say a couple of words about why the work we did was so successful and worked out so well.  It worked out so well because it brought together the community of humanitarian organizations, data protection authorities and Civil Society to look at the issues around privacy.

Something that brings together people that have a commonality of drivers and interests.  To understand that, it's important to understand who we are as humanitarian organizations.

Humanitarian organizations -- and I'm speaking for the ICRC in this case -- our work is to protect the life and dignity of victims of armed conflict and other situations of violence, and to provide them with assistance.

Dignity in a hostile environment.  The same concept is at the heart of data protection: the preamble of modern conventions tells us that everything defined in a data protection instrument is there to reinforce and strengthen the dignity of individuals when it comes to the processing of their personal data.  This commonality of drivers is something that brought us together.

Protecting individuals is something the ICRC has been doing for over 150 years.  We have reason to be happy about the way in which we apply the principle of do no harm.

Now, new technologies give us the possibility to do much better things, or to do the same things in a more efficient and faster way.  And the humanitarian sector is indeed facing really major new challenges: conflicts that last longer.  The average conflict these days has a duration of about 30 years.

And so having new technologies that enable us to have programs around cash, for example, is something that is very useful.

Conflicts become more volatile and difficult to read.  Using big data is something that can actually give us value in understanding where we work and what needs to be done.

We face increasing challenges around access.  When I'm talking about that, I'm really thinking about physical access, and I'm thinking about our colleagues: just over the last year at the ICRC, eight of our colleagues lost their lives.  Over the last five years, we're talking about over 50.

So as access in conflicts becomes increasingly difficult, digital proximity, when we cannot have physical proximity, becomes relevant.  Messaging apps, and thinking about using drones, are increasingly becoming relevant.

In a way, data protection for us has been the tool that has enabled us to consider how to translate the principle of do no harm into a digital environment.  I think this is the scene from which we start, and I think I've already taken far too much time.  So I'll hand over to the panel.

>> JACOBO QUINTANILLA: Thank you very much.  Before we crack on: how many of you work in the humanitarian or development sector?  Just to get a sense.  How many of you are working in research and academia?


Any other sectors we should be aware of?


OK.  Thank you very much.  As mentioned, we have a really good panel.  And as Massimo mentioned, we're going to try to unpack some of the big concepts and discussions that have been put together around this handbook on data protection in humanitarian action.  In the interest of time, we'll focus on four different topics.  The first is around consent and the lawfulness of processing of data.  The second is about the lawfulness of processing of data in the use of biometrics.  The third topic is the handling of sensitive data and ethics, which is a fascinating topic as well.

And last but not least, we're going to be talking about the involvement of and collaboration with the private sector.  And we're going to look more concretely at cash transfers.

Before we start, we would like to ask you a question.  Who can tell us what this is?

Excellent.  How many of you have one of these devices at home?  Nobody.  How many of you are thinking of buying one for Christmas or giving it to someone?  Nobody here.

How many of you think that a smart speaker like this one is actually a threat to your own personal privacy?  Only one, two, three?  OK.  Good.

The rest, no opinion?  Why do you think it is a threat to your privacy?

(Audience response).

>> JACOBO QUINTANILLA: Now you think you know.


This is a very interesting topic.  We've been talking about data protection, but hey, actually, Amazon can help you get out of jail, you know?  If you have done something that Amazon might have recorded.  It's interesting to look into the connection between situations that feel far away, like the pictures that Massimo was showing earlier,

and the reflection they have in our day-to-day life.  This is an incredible forum to have these discussions, because that's exactly one of the issues we're interested in.

Moving on.  The first topic, as mentioned: we're going to look at consent and the lawfulness of processing.

I understand, looking at the guide, that the approach proposed in this handbook seems to be rather skeptical about the suitability of consent in humanitarian action.  Massimo, let me start with you, and then I'll come back to the others.  The concept of consent is interesting.  Could you help us unpack what consent means?

>> MASSIMO MARELLI: Consent, and we'll also hear from Alessandra after, is a concept in data protection that has been developed a lot lately to really mark the point that consent is a fully free expression of the will of an individual, in full consciousness of the risks and benefits of a (?) operation to have their data processed by the controller.

And why is the handbook skeptical about this in particular?  First of all, when we're talking about new technologies, even understanding what the risks and benefits of processing might be is very difficult.  It requires an excellent understanding of who's involved in the processing, who the various stakeholders in the loop are, and where the data is going to go.

Who might be seeing the information, and so on.  We're talking about environments, as you saw in the picture, where people are highly vulnerable.  How can an individual be considered fully free at the moment they consent to the processing of their data, if that is the precondition to getting their food that day?

If we're talking about biometrics, for example, or the use of cash programs, these are operations that involve a number of risks.  And yet people don't really have a lot of choice, unless you provide them with the choice to accept aid in kind, which is not manageable.

Alessandra, from your perspective as chair of the committee of Convention 108 at the Council of Europe, how does this analysis of consent fit within the broader context of the lawfulness of processing of data?

>> ALESSANDRA PIERUCCI: Thank you.  Let me say that traditionally, consent, and the information on which consent is based, have been the pillars of data protection.  It is not by chance, for example, that consent is even mentioned in the European Charter of Fundamental Rights, in the article on data protection.

At the same time, what Massimo was saying is absolutely right.  We have developed the awareness that consent cannot be a mere bureaucratic element and must really reflect the (?) of individuals.  There may be situations where consent cannot be collected for practical reasons.

Or there are situations where the imbalance between, let's say, the power of the data controllers and the data subjects is such that consent cannot be freely given.  In all these cases, consent cannot constitute a valid legal basis for processing.  And this is pretty much the perspective of Convention 108.  Let me just recall one thing regarding Convention 108.  That's the Council of Europe instrument for data protection.  It is particularly relevant for at least two reasons.

Because it is the only legally binding instrument at the international level, and it is open to potentially every country in the world, because it can be ratified by countries that are not members of the Council of Europe.

It has 51 parties.  Convention 108 is undergoing a process of modernization.  In the text of the modernized Convention mentioned by Massimo, we have a specific reference to the legal bases on which processing must be based.  Consent is of course mentioned, but also other legal bases.  This responds to the need not to abuse consent when it's not advisable.

You should not use consent when it's not the appropriate legal basis.

>> JACOBO QUINTANILLA: What are the alternatives?  Going back to the picture of the ICRC, what are the alternatives when consent is not suitable?  Massimo, you talked about informed consent as a basis for consent to be effective.  In some of the places we work, there is not even an awareness of what consent means.  What alternatives do we have, if any, in the sector?

>> ALESSANDRA PIERUCCI: From the Council of Europe perspective, the modernized Convention speaks about other legal bases in a very general way.  That's not by chance; it's because the Convention has to speak to potentially every country and therefore has to use general principles.

If you have a look at the explanatory memorandum of the modernized Convention, which is an important interpretive instrument accompanying Convention 108, you see there are other legal bases which could be used in humanitarian action.  For example, when the processing is necessary for protecting the vital interests of the individual, of the data subject.

>> JACOBO QUINTANILLA: Can you give an example?

>> ALESSANDRA PIERUCCI: Well, whenever the processing is really necessary to protect, for example, the security or the integrity of the person, and of course consent is not available.

The other legal basis which could be used is when the processing is based on grounds of public interest.  Which is typical of humanitarian organizations, which have, let's say, a specific public interest they pursue.

It's interesting to see that in the (?) we're referring to, humanitarian action is contemplated as an example of the use of these alternative bases.  It says that sometimes the processing can be based on a combination of even two legal bases, like vital interests and the public interest.

And humanitarian action.  If you want some examples: when the processing is needed for the monitoring of a life-threatening epidemic.  Or for a humanitarian emergency.  Or for natural disasters, where you need to process data of missing persons.  That could be another example.

>> JACOBO QUINTANILLA: Let me ask you both.  It is clear that in certain instances there's an exception because consent is not suitable for the particular situation.  With that exception, I guess, come additional responsibilities.  So what are those responsibilities in cases where consent is not suitable?

>> ALESSANDRA PIERUCCI: Once the legal basis has been defined by the data controller, I would say that the work is not finished.

The definition of the correct legal basis for the data processing does not exempt data controllers from complying with all the other data protection principles, for example proportionality, just to mention one of the most important ones.

(away from microphone).

I would say that in humanitarian action, one of the elements data controllers should care about is the so-called data protection impact assessment, which means that before actually commencing a processing operation, controllers need to evaluate the implications of such processing on fundamental rights and on individuals.

So I think the data protection impact assessment is a crucial element which has been introduced in the modernized Convention, and also at the European level, of course, in the GDPR, which you're probably familiar with.

It is a change of perspective.  It means it is up to data controllers to be responsible for the evaluation of the effects of their actions on personal data.  And of course, they also have to adopt the appropriate measures to minimize the risks.

>> JACOBO QUINTANILLA: Thank you.  Are there any comments or questions at the table?

>> MARIE-CHARLOTTE ROQUES-BONNET: Thank you.  I would like to congratulate you on such a wonderful handbook.  I believe almost everything has been said, but I would like to add one thing.  I work for the Special (?) to the Secretary-General on big data.  One of the issues we also experience, particularly in the big data context, when we encounter a situation where consent should be sought, is that data literacy is one of the components that actually hinders the concept of informed consent.

And we notice that the capacity of individuals who are giving consent is crucial.  That's one of the reasons, for example, why (?) consent is an efficient mechanism of data protection.

Of course, not in every situation.  One example is when digital literacy plays a crucial role.  In our actions and activities, we have always taken into consideration the knowledge of the individual about privacy and privacy risks.  One example is a guidance note recently published by the World Food Programme.  It also mentions that in order to provide or obtain informed consent, the individual, the data subject, needs to be aware of the risks that come with the data use.

That's what we're referring to, right?  That's where the privacy impact assessment comes into play.  Even if sometimes we see consent, I think it's important, and we actually insist on, going back and making sure that the consent that has been sought is informed, and that's done through the risk assessment.

>> JACOBO QUINTANILLA: We're going to talk about informed consent more in a later section, on ethics.  Thanks very much for the reflections on consent and the lawfulness of data processing.  In the interest of time, I would like to move to Alexandrine at the end of the table.  You published an important report called Aiding Surveillance,

around the use of biometrics in humanitarian action, and how it can cause harm to people affected by crises.

Can you give us a bit of a rundown on that particular report?  And I would love for you to draw connections on how relevant it remains today.

>> ALEXANDRINE PIRLOT DE CORBION: Thank you very much for the opportunity to be part of this panel.  As mentioned earlier, and as you can see on the slide, this is a report that Privacy International published in 2013.  At the time we published it, there was still little attention paid to what we said in terms of the issues.

Since then, there's been a real change, both from the privacy community and from the humanitarian and development community.  I want to make the point that this is not just about humanitarian aid but about the development sector as well.

And this report provided us the opportunity to link different aspects of our work.  We were looking at communications surveillance, and surveillance in general by state actors and the private sector.  At the same time, we were recipients of development funding for (?) activities.  We felt it was our responsibility to be looking at this sector and how they're using new technologies and advancements to generate and process more data as well, to enable and facilitate their activities.

When it came to biometrics, our problem analysis also dug into the pressures these communities were facing: being more accountable and transparent, having less funding, and having to be more efficient as well.

So what we saw is that technology, and biometrics in particular, was being seen as sort of the solution to solve these problems: securing the data trail from when you get the funding to the beneficiary who would receive it.  There are also questions around identification of beneficiaries, as well as supporting the humanitarian and development actors who were enabling and providing this assistance.

And as Massimo put it, even though I think the community saw it as damning, it was a report that started a dialogue on this issue that we weren't having before.  I think that's been a huge change.

When it came to biometrics, a further problem that came through from some of the case studies we were looking at was the lack of understanding as to what biometric information really provided in terms of personal information about individuals.  Ultimately, that could be used for surveillance purposes.

There were also ethical dimensions, as mentioned several times on this panel already.  What we're talking about are individuals in the most vulnerable positions being asked to give their fingerprints or have their irises scanned.  There were not only privacy implications, but also ethical ones.

When it came to the technologies themselves, there was a diversity of technologies, from fingerprints to photographs to iris scans.  And there was very little questioning as to the safeguards that needed to be put in place.  What would happen if any of that data was accessed by individuals who shouldn't have access to it, or who had malicious intentions, or who were being reckless and not knowing what the risk would be?  And the necessary safeguards were not being provided, either at the point of collection of that data or when it was being stored.

And the harm is real.  One of the things we mentioned in the report is the Egyptian system funded by the Danish aid program, where the authorities said: yes, we use biometric information for surveillance of different groups that are recipients of aid, for example.

The other aspect for us which was very problematic was the lack of control over some of the applications using biometric information.  A lot of these systems are dependent on the private sector to provide the equipment, to collect the data, to store it, to maintain it.  As soon as you have third parties involved, you lose that control.  While different development and humanitarian agencies were adopting mechanisms for the protection of their beneficiaries, it was really hard to make sure that safeguards were also integrated at every point of the data life cycle: storage, use and further processing.  So what role would the private sector be playing, particularly when companies that provide biometric technologies are also companies that provide surveillance equipment?  So what was the agenda of these different actors as well?

Then finally, maybe a last point on the problems we were seeing: the dependence.  When somebody had created a biometric system, people would say: that's fine, we'll also use it.  So there's this dependence on data that already exists, building on it, making it bigger than it was supposed to be, and using it for different purposes as well.

>> JACOBO QUINTANILLA: We're going to unpack a little bit more a couple of the issues that you touched on in your remarks.  We're going to be talking about data environments, and then unpack some of these issues in a whole section on the private sector.  That's very timely and relevant, looking at some of the recent events happening in the sector.

A question maybe for you, Alessandra, and feel free, others, to jump in.  In humanitarian action, do we actually have -- is there a legal basis for the processing of biometric data?  Do we just take biometric data and have to figure it out, or how does it work?

And congratulations for not mentioning the word blockchain.  That was heartwarming.  Thank you.


>> ALESSANDRA PIERUCCI: Biometric data should be considered part of the category of sensitive data, generally speaking, and so processing should be based on very strong consent.  But again, we go back to what we were saying before: there may be situations where consent is not available.  So once again, we may try to work with the other legal bases, which could assist.

I would repeat what I was saying before: again, the vital interest and the public interest may have some relevance here.

Of course, the level of caution must be very high.

As we heard before, the handbook underlines the possibility that once biometric data are collected, they can be used by public authorities for other purposes, like law enforcement, which has to do with fundamental rights and freedoms.

Again, here the data protection impact assessment which I referred to before must be particularly rigorous.  The risks are very, very high, and the measures which counterbalance the risks must be very, very strong as well.  Security is crucial.

>> JACOBO QUINTANILLA: I want to warn all of us that we have 25 minutes to go.

>> I think it is good that we're exploring different legal grounds for processing.  Having said that, and this is not just about the work of the humanitarian sector but the development sector as well, one thing we're also trying to challenge is the problem analysis being done by the different actors: do you really need to be collecting biometric information?  Some of the justifications given are around double dipping into aid funding, and corruption.  But there's no baseline, and we have asked different organizations to tell us what the amount of fraud is that they're trying to address.  If that amount is less than the cost of deploying a massive biometric system, then you should rethink whether that is solving your problem, when we know that actually a lot of the corruption is happening not even at the beneficiary level but at the program management level.  So we're challenging even the justification for resorting to such technologies.

>> JACOBO QUINTANILLA: It's 3:36, and I think we are clear that in data protection laws, sensitive data is clearly defined in terms of specific legal obligations, correct?  However, definitions by themselves might not take us as far as we want and may not do that much.

What constitutes sensitive data can vary across contexts, as can how much people understand, specifically, what consent actually means.

So, Bryan, you've been working with Massimo and the ICRC for quite some time, looking at research on how sensitive humanitarian sector data can be protected.

So, making it accessible to everybody in the room: what are the main cybersecurity and privacy challenges that you think humanitarian organizations face today?

>> BRYAN FORD: Good.  So from a technical perspective, I really see three major areas of challenges.  Three really grand challenges in this space, which in the humanitarian sector are vastly amplified versions of the security and privacy challenges people face throughout the world.  But they're worse in the humanitarian context.

The first is the tools -- the technical tools -- that humanitarian sector organizations can use to interact internally and with their clients: messaging apps, biometrics, these kinds of tools.  In general, the commercially available tools tend to be developed with convenience, attractiveness and functionality in mind, with security as an afterthought.  This is a huge problem for everybody, but especially in the humanitarian context.

And this is especially a problem because in the humanitarian context, in many parts of the world, the tools that humanitarian workers have to use to communicate and interact with clients are tightly constrained by what's actually available -- what the local population understands, accepts and commonly uses.  Maybe only SMS is really available.

And a lot of these technologies are extremely insecure and vulnerable to surveillance in all sorts of respects.

Some messaging apps, for example, are better than others in these respects.  But even the best currently available are still extremely vulnerable in other ways.

Even if they protect the content of communication end to end, they still leak metadata.  Like crazy.  Who's talking to whom.

At what times.  When you're online versus offline, even if you're not talking.  Things like that can be extremely sensitive and vulnerable to surveillance.  And the technologies that can potentially provide further protection are not really mature or widely usable enough yet.

So that's kind of the grand challenge when it comes to the tools that humanitarian organizations use to interact, especially with client populations.

The second grand challenge is the collection and management of data itself -- the processes.  It's here where cloud computing comes in: the push towards more centralized models of data storage and management.  Cloud computing is extremely attractive both economically and in terms of convenience and functionality.  In the short term it's even attractive from a security perspective, because it relies on less I.T. and security expertise out at the edges of the network, where it's difficult and expensive to find sophisticated people in remote offices.  It centralizes the expertise where it's more economically practical.

Unfortunately, this short-term security advantage comes with an enormous long-term systemic risk of creating giant caches of data vulnerable to potentially adversarial entities around the world, from ordinary hackers to unscrupulous government agencies.  Most of this cloud technology is designed in such a way that a single compromised human or computer at some critical point can enable the entire cache of data to be stolen and exfiltrated in bulk.  It's an enormous systemic risk that I'm not sure people are sufficiently aware of.

Since somebody already brought up the blockchain buzzword, I will feel free to follow suit.

There is a lot of hype around blockchain, and there is promise in the concept.  It's really just a new version of a very old concept of splitting trust, of decentralizing trust.

Unfortunately, practically all of the so-called blockchain solutions pushed by major vendors and sold as products are repackaged cloud technologies that don't really have any security benefits, and are still vulnerable to single points of compromise and failure.

So even in that space it's worth being incredibly cautious.  That's the second area: managing and storing data.  The third area I see, the third grand challenge, is actually processing data.  And this is where we get the attractiveness of big data and machine learning.  There are a lot of very attractive tools coming out.

Once you have centralized data, there are a lot of very attractive things you can do with it.  But of course, there are also huge risks.  Now, there are standard practices such as anonymization, and the handbook covers those.  They definitely help reduce the risk of data being misused.  But no anonymization technique can really eliminate these risks entirely, because there are well-known ways that even anonymized data can be put together with other data sources to reduce the privacy of the anonymized data.

There is, of course, technology gradually coming out that allows processing, even processing by sophisticated machine learning techniques, in more privacy-sensitive ways -- for example, processing data while it remains encrypted and inaccessible to anyone.

This whole class of technology is much less mature, not really available yet.  Even when it is, it's orders of magnitude slower and more expensive in terms of processing costs.
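One concrete instance of "processing data while it remains encrypted" is additively homomorphic encryption.  Below is a sketch of the textbook Paillier scheme with toy parameters (an illustration only, not the panel's own example; the small primes are assumptions for the demo, and real use needs ~2048-bit keys and a vetted library): a party holding only ciphertexts can compute an encrypted sum without ever seeing the plaintexts.

```python
import math
import secrets

# Textbook Paillier keypair with toy primes (illustration only, not secure)
p, q = 999983, 1000003          # small known primes; real keys are far larger
n = p * q
n2 = n * n
g = n + 1                       # standard choice of generator
lam = math.lcm(p - 1, q - 1)    # private key (Carmichael function of n)
mu = pow(lam, -1, n)            # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """c = g^m * r^n mod n^2, with fresh randomness r per encryption."""
    r = secrets.randbelow(n - 2) + 2
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts,
# so the processor never learns 12 or 30, yet produces an encryption of 42.
c_sum = (encrypt(12) * encrypt(30)) % n2
assert decrypt(c_sum) == 42
```

The performance gap Bryan mentions is visible even here: each operation costs large modular exponentiations, orders of magnitude more than adding two plaintext integers.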

>> JACOBO QUINTANILLA: So it's 3:45 in Geneva.  We have a very unscientific poll for all of you.  On a scale from 10, being very prepared, to 1, not at all: how prepared do you think humanitarian organizations are when it comes to facing these cyber security threats?

I'm going to start counting from 10 to 1.  10.  9.  8.  7.  Huh?


Let's get to see.  5.  4.  3.  2.  1.  0.  Minus 1.  OK.  Thanks.


Perfect.  I don't think I need to put these results up for much debate.  But as mentioned, we wanted to talk about data preparedness, and it's obvious that there is a lot that needs to happen.

Let's move to the next particular challenge that we have.  Thanks very much, Bryan.  We extracted this one from the handbook, in relation to data analytics.  Big data is your bread and butter, among other things.

The handbook talks about data analytics and big data, and how they pose a challenge when it comes to (?) capacities, becoming highly sensitive and potentially hazardous for affected people.  Can you give us a couple of examples to illustrate?

>> Thank you so much for your question.  Before I present a few examples, I want to mention that right now we are actually in discussions at the UN on this particular topic, and not only in the context of big data, but privacy and data in general.  I think that talking about big data separately from personal data is now a gray area.  I'll speak from both perspectives.

Right now at the United Nations there is a working group updating and modernizing the privacy policies across the system.  One of the questions that came up is the inclusion and consideration of sensitive data versus nonsensitive data.  One of the questions that has been discussed, not only within the UN but also by the UN Special Rapporteur, is group privacy.  That's where we can make a link to sensitive data, especially when it comes to big data.

Let me explain why.  What is group privacy?  Right now there is no settled notion of what group privacy actually means.  There is some research ongoing, but more work needs to be done on understanding group risks, group privacy, as well as group harm.

And nonsensitive data becomes sensitive when we are, for example, aggregating data we receive from private sector parties.  It comes in aggregated form, and we don't use the data to target a specific individual.  But imagine, in a humanitarian context, we're talking about a group of people who are voting, right?  Or voting against a particular government.

That government can go after those groups of people and persecute them if that particular group is identified.  We're not speaking about one individual or two.  We're speaking about groups of people in a particular area.  This is not a question of individual privacy anymore.  It's actually a question of the group harms that could be caused to an identified group of people.

I think that is one particular example of when data that falls outside our notion of sensitive personal data suddenly becomes nonpersonal, but yet sensitive.

The recently published United Nations Development Group guidance note, there is a printout here and it is also available online, asks us to think about group privacy and group harm, especially where nonsensitive data can lead to group harm.

I would also like to suggest that mitigation of the risk is crucial when it comes to group privacy.  When we talk about privacy impact assessments, we're not talking about personal or individual privacy anymore.

We just initiated a privacy protection tool, which is called a risk assessment tool.  It's no longer a privacy risk assessment; it's a risks, harms and benefits tool that takes into consideration not only individual privacy, but group harms in the context of Human Rights.  I would like to suggest that when we speak about sensitive data, there are the kinds of personal sensitive data we're all used to, but it's important to also distinguish nonpersonal data that is used in sensitive contexts.

>> JACOBO QUINTANILLA: I have a quick question.  You mentioned mitigation measures to try to prevent and counter some of the potential risks.  You mentioned data protection impact assessments, and an impact assessment tool as well.

So who's responsible for implementing these measures?

Is it my organization, whoever I work for?  Is there an independent third body as well?  Doing this by ourselves might not be very helpful.  How can we ensure quality in those assessments?

>> You asked for examples, and the example is from our own work.  I mentioned earlier the first phase of the risks, harms and benefits assessment tool.  Currently the second part of it is being developed, which is more comprehensive.

It considers, first of all, that it is not the privacy expert who performs the assessment.  The key here is the person who's handling the data: the manager.  It could be a person in a humanitarian organization, or maybe it's outsourced to a third party.

However, there is a crucial link of due diligence, and an understanding that it's not only that one person who performs this privacy assessment.  We're encouraging the involvement of various skill sets in the process.

Besides having legal and privacy people, we also have data experts and data engineers who are part of the consultation process.  It doesn't end there.  Another important element, and I would encourage everyone to think about it, is that when international organizations are implementing their mandates in different countries and different contexts, we're talking about, for example, saving people's lives in a humanitarian context.  It affects human life and vulnerable populations.  We need to understand the social and cultural contexts.  We need experts, representatives of that country or of the groups concerned, involved in the assessment process.

>> JACOBO QUINTANILLA: I would like to bring up something Bryan mentioned earlier: short-term advantages versus long-term risks.

And this brings up one of our various topics, which is ethics when it comes to all these issues around data protection in humanitarian crises.  Particularly for the recipients, (?) how do we bridge the gap?  How do we strike a balance between the need for experimentation and innovation and, as Massimo mentioned, the do no harm principle?

>> ALEXANDRINE PIRLOT DE CORBION: I want to respond quickly on the risk assessments, based on what Bryan was saying.

>> JACOBO QUINTANILLA: Very quickly.

>> ALEXANDRINE PIRLOT DE CORBION: It's great that we're doing risk and privacy assessments.  But given how little we know about how these services and technologies operate, we can't make the best risk assessments, because we don't have access to that information.  Unfortunately, private companies are not sharing how they are designing and implementing them.  That's a call-out to the private sector on transparency.

>> JACOBO QUINTANILLA: We'll try to come to that if we can before 4:00.  Privacy International is working with Alex on research into humanitarian metadata.  This is a report we commissioned last year.

We partnered with the (?).  You can find it online.  If we have time, we would love to talk about what we can do when it comes to the big tech giants.

In a tweet: how do we strike a balance between experimentation, innovation and do no harm?  Massimo.

>> MASSIMO MARELLI: When we talk about innovation and piloting and doing things, the mantra seems to be that innovation is about testing and (?).  It works in some environments.  It works in some commercial applications, in particular applications to get you from one place to another.  It works or it doesn't; you can test it.

But when you're working with the lives of people, you cannot test and fail.  You certainly cannot fail.  What it requires is actually having a proper risk analysis done well before.

Do not test on live data.  So it's really about having a very clear idea beforehand.  We've mentioned it a few times, and it sounds like the solution to everyone's problems is an impact assessment.  It is, because it is the tool that brings data protection to life.  You start clearly understanding who the stakeholders are and where the data goes.  Sometimes it's surprising: you talk to people who are suggesting a solution and, from the moment we start digging, they don't know who is involved in the processing of the information.  So you start with that, and you start to question what the advantage of what they're proposing really is.

Sometimes, when we start to challenge the advantages of what's been proposed, we realize a tool has been proposed really because it looks cool, without a sense of what the real advantage is.

So going through the risks is what enables us to understand what the implications might be, and then to take an accountable, risk-based decision.

>> JACOBO QUINTANILLA: We literally have just a few minutes.  We wanted to talk about relationships with the corporate sector and partnerships and collaborations.  But in the interest of time, we're going to try to touch on something a little bit greater, which is a bit of a reality check, in the last three minutes of this discussion.  Do we fully understand, or grasp, the impact of some of these companies?  You here in the room might be more aware of the power of some of these corporations that basically run our lives by algorithms and digital intelligence.

Looking at the panel, what do you think is the real ability that governments and humanitarian organizations have to actually influence the power of these big tech corporations that we have here on the screen, when it comes to operating for the benefit of the public?  I'm looking at you here, Alessandra, since you work for the Council of Europe.  How much leverage do you have?

>> ALESSANDRA PIERUCCI: I think with increasing big data, artificial intelligence and algorithms, there's an increasing need for two elements.  Transparency is the most obvious one.  Yesterday I actually took part in another session on big data, and what clearly emerged there, also from the OGM, was that one of the main concerns in respect of big data is the sort of intrinsic (?) which arises from the decisions which are taken by big data, or by the use of big data.

In this sense, the work of the Council of Europe committee is to ensure as much as possible that transparency is guaranteed.

This doesn't mean going into the secrecy of the algorithm.  It's not like that.  It's just to render intelligible to individuals the mechanisms, perhaps obscure mechanisms, of processing which may lead to decisions whose consequences have repercussions on the private sphere of individuals.

That's the first element.  And the other thing I would add is I think that in this context there's a need to emphasize the role of human intervention.

And this is something which was very much considered in the modernized Convention, with the introduction of the rights of data subjects regarding (?) decisions.  Individuals can still have a say.  They cannot simply be subjected to decisions whose underlying mechanisms are totally obscure.  In a few words, I think these are the two main elements we work on.

>> JACOBO QUINTANILLA: Massimo, what would you say are the main legal implications that an institution like the ICRC may have to deal with when working with commercial enterprises?

>> MASSIMO MARELLI: It's a big question.


And it's a messy question.  Legal implications: first of all, you start to lose control of the information that you collect, and for which you are responsible, the moment you start working with third parties.

You have a number of responsibilities under the law to ensure that all the necessary measures are put in place to avoid that loss of control, and that the guarantees you provide to individuals under your rules and principles are not lost the moment you start using an external processor.  The responsibilities are huge.

The first and most basic is to have a clear understanding of what is happening within an external processing operation.  On that basis, we can understand how to ensure that you could (?) the role.

>> JACOBO QUINTANILLA: What is the real influence we have with some of these companies?  What's your take?

>> MARIE-CHARLOTTE ROQUES-BONNET: I think what we need, and the United Nations in particular, is to bring all the governments together along with private stakeholders and the humanitarian sector.  There's a huge gap at this collaboration point, where we don't have a proper framework for addressing the needs of the humanitarian sector working with the private sector.  I think that's one key point we need to address.  And what these frameworks are missing, first of all, is public awareness not only of the risks to society, but also of the risk of not using the data when it comes to the people who are at the center of these humanitarian emergencies.

And also in the context of the broader spectrum of the Sustainable Development Goals.  The third point I would like to make is the Human Rights aspect.  As has been mentioned in the report of the UN Special Rapporteur on the right to privacy, we need to think of Human Rights in conjunction with one another and not just one right at a time.  I think that's the framework that currently needs to exist in order to tackle the question.

>> JACOBO QUINTANILLA: Excellent.  Time is really short, sorry.  I would like to finish with Bryan and Alex.  Twitter recently went from 140 to 280 characters.  So what's your 140 characters to wrap up the session?  I will go to Alexandrine after.  That's 37 characters already.


>> BRYAN FORD: Don't underestimate the power of humanitarian organizations working together, and with privacy-conscious governments, to influence the types of products the private sector will build, and to open up increased transparency, when there's enough critical mass and demand for it.

>> JACOBO QUINTANILLA: Alexandrine, what message do you want to leave for the people who have to leave for the next session?

(away from microphone).

>> JACOBO QUINTANILLA: So it's been great.  Please download the handbook.  Read it.  Unfortunately, we have to wrap up.  Great job to everyone on the handbook and the topic.  Of course, to be continued.  If you have questions, come and talk to us and our colleagues here.  Thank you very much for coming, and see you at the next edition on this fascinating topic.  Thank you very much.


(Session concluded at 4:04 p.m.)