WS 106 CYBERSECURITY: THROWING OUT PRECONCEPTIONS

FINISHED COPY

 

EIGHTH INTERNET GOVERNANCE FORUM

BALI

BUILDING BRIDGES-ENHANCING MULTI-STAKEHOLDER COOPERATION

FOR GROWTH AND SUSTAINABLE DEVELOPMENT

OCTOBER 23, 2013

8:30 A.M.

WORKSHOP 106

CYBERSECURITY: THROWING OUT PRECONCEPTIONS

 

 

                             ***

This is being provided in a rough draft format. Communication Access Realtime Translation (CART) or captioning are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.

                             ***

 

>> NICOLAS SEIDLER: Okay. So welcome, everybody. I'm Nicolas Seidler, policy advisor at the Internet Society. I am going to moderate this panel discussion organized by ISOC and the OECD. So today we will discuss cybersecurity. Like many other cyber issues, it tends to mean different things to different people, and that doesn't really help when we try to work together and collaborate on common responses. In the case of cybersecurity, there are many preconceptions that shape this landscape and influence the technical and policy responses to this issue.

So the goal of this workshop will be to address these preconceptions, these perceptions and hopefully have a clearer picture of what we should actually be talking about. So to that end we have invited a set of really distinguished panelists. On my right, Alan Marcus is senior director, head of information technology and telecommunications industries at the World Economic Forum, USA.

Next to him, Laurent Bernat is policy analyst at the OECD in the areas of information security and privacy protection.

Next to Laurent, Liesyl Franz is a senior policy advisor in the U.S. Department of State, Office of the Coordinator for Cyber Issues. She is not under the preconception that she is a technology whiz, but she has been working on cybersecurity for over 12 years. She is happy to join the conversation today. Welcome, Liesyl.

Next to Liesyl, John Selby. John is an academic at Macquarie University in Australia. John is trying to work out why some things change and others stay the same. However, studying the Internet doesn't give him much opportunity to look at things which stay the same for very long. Welcome, John.

Next to John, Malcolm Hutty. Malcolm is the president of the European ISP association. He is also the head of public affairs at LINX, which is the London Internet Exchange.

We also have two expert discussants as part of the audience. Merike Kaeo, in the front seat, is a security evangelist, and she has spent the last 15 years traveling the globe helping technical, operational and policy constituents understand what is meant by the term "security."

We also have, I think, T.H. Nguyen here. Welcome, T.H. She is Legal Counsel and Director of Policy at Artemis Internet. She enjoys designing human systems to solve human problems. As we spend more of our lives online, that requires her to read a lot and to learn from people who attend Internet conferences like the IGF.

So one last housekeeping note about the discussion format. We will have about one hour of discussion with the panel, with a set of questions. That will allow time for responses of between 30 and 60 seconds. So it is very short. I will actually not hesitate to use my disproportionate moderator's power and wave this red flag if my dear speakers go over time. I hope we won't get to that point.

Following that we will have 30 minutes of interaction with the audience and the key discussants.

So, first question: the term cybersecurity has sometimes been criticized for being too vague to be actually useful. Is this something that you agree or disagree with? And if you agree, is there any alternative terminology that you would suggest we use in the Internet governance dialogue? Would anyone like to start?

>> LAURENT BERNAT: Everybody is shy. Welcome, everybody. I'm Laurent Bernat from OECD, co-organizer of this workshop. So everything I say is a personal view. That is my disclaimer.

As Nicolas said, everybody has a different understanding of the term cybersecurity. So that is where the problem is: the term. But from our perspective at the international level, in international policy discussions, from where we stand we don't have a problem with the term. Everyone will use it; each country will use it in the way they think best. But if we think about what it means more conceptually, I have a problem with "cyber" and I have a problem with "security." I think the word security has become misleading because what we want to address here, again from an OECD perspective, which is an economic and social perspective, is the risk, not the security. Security is not the goal, because that goal would be unachievable. You cannot have security.

What you can have, what you can do, is manage the risk: reduce it to a level that is acceptable. When you use the term cybersecurity as being the objective, that conveys the idea that you can have a secure environment. And whether in cyber space or anywhere else, there is no such thing as a secure environment.

>> JOHN SELBY: Hi, John Selby, Macquarie University.

From an academic perspective, cybersecurity is a contested term. There is disagreement about what is or is not security rather than what is or is not cyber.

Different stakeholders promote different agendas when they use the same term. That can lead to a lot of technical and other confusion. If we downplay or exclude stakeholders by defining cybersecurity in one particular way, there is a risk those stakeholders will shift their agendas into other forums. Taking the multi-stakeholder process seriously, we accept there will be quite a bit of fuzziness about this definition; that is both a cost of that process and a feature of that process.

I don't think alternative terminology is going to resolve it for the stakeholders. That's my position, as such.

>> MALCOLM HUTTY: Thank you. Malcolm Hutty again. I'll add that everything I say today is going to be my personal views and opinions. I do think the term can be unhelpful because of the uncertainty and lack of consensus about its meaning. It can be used to fuzz a conversation and avoid reaching clarity. If you think about the term, consider "electrical security." What would that mean to you? You can use electrical systems, you can abuse electrical systems, you can cut corners, and there are all kinds of harms, just as there are when we talk about cybersecurity harms. But we don't use that term most of the time, because we think: oh, electricity has been mainstreamed. Would we say cybersecurity is something that would go out of fashion as it becomes part of the mainstream? To an extent that can be yes, but the cyber side does involve specialist knowledge and specialist skills, and indeed experts in cybersecurity argue there is no such thing as an expert in cybersecurity: no reasonably qualified expert in the field can be confident of staying current with everything they are dealing with. In that sense the term can be unhelpful.

The term security itself, I agree with Laurent, is very loaded. The discussions last year at WCIT around security, and introducing security there, showed that different stakeholders had different agendas as to what they wanted to achieve by raising this topic and how they wished to use it. When you talk about downplaying stakeholders, it's not so much that you're downplaying the stakeholders but the agendas of the stakeholders in certain contexts. Otherwise, I absolutely agree with that point.

 

>> LIESYL FRANZ: I am going to take a slightly different view, because I think that cybersecurity has developed a general sense of what the concept is, and I don't think that that is necessarily a problem. I think that definitions are useful when we are trying to precisely organize around something that is a little bit more narrow. If you are looking at narrow concepts or narrow issues, then definitions are useful for discussing how to work interoperably with them. Cybersecurity is meant to be a broad brush for any number of issues that could fall under its umbrella. In that sense I don't have a problem with the term. Everyone in here today is here because the title of the workshop was cybersecurity and not something else. As for the notion of trying to come up with another term -- I mean, I am not under the preconception that I'm a technology whiz, and certainly not under the preconception that I'm an academic. So I'm speaking in layman's terms in that regard.

If there's a general concept around which there is some common usage and some common adoption, and the flexibility to have any number of issues fall under it that might require a little more crystallization, that is useful. So there are probably sub-elements of cybersecurity that need that kind of tweaking and the level of specificity that people might be looking for.

With regard to risk, you know, we are absolutely supportive of the risk management concept, but I think we have to recognize, in using the term security, that it is not absolute security that we can ever achieve. That's where the risk management concept has to be built in. Thank you.

>> English is not a precise language. We just have to deal with that. So security comes with all kinds of connotations. That's life.

I agree with Liesyl at least to the point that it is a generally recognised concept, but it is generally recognised mainly by people who are in the cybersecurity world.

From the communities that I spend a lot of time with, the first thing they brought to my attention -- now we'll go back a couple of years -- was that there's a big taxonomy problem. If you are a CIO, a cyber expert, a technical audience, we can have a good conversation about cybersecurity. If you are the CEO of a non-tech company, a Prime Minister, an agricultural minister, these concepts are complicated for you. I think there is opportunity there.

I think security is too narrow a definition, and in particular Liesyl and Laurent both talked about risk. I think it includes risk. It includes recognizing that you don't eliminate risk but you mitigate it. You build resiliency. It is now about cyber resiliency: making sure that the catastrophic challenges to your digital assets are not, in fact, catastrophic, right? This is how people think today. It is much more about risk and resilience. But on the other hand, let's understand that English is imprecise. No matter what term we use, it will mean many things to many people, and "loaded" is always a great way to think about it.

>> NICOLAS SEIDLER: Thanks a lot. On to the next question now. What do you think are the top two or three preconceptions that you or your stakeholder group has regarding the following two aspects of security: the security of the Internet itself as a network, and the security of Internet users' communications and data?

So two or three preconceptions of yours or your particular group.

>> LIESYL FRANZ: Microphone protocol. I guess from a policymaker's perspective, there are a couple of preconceptions that I think permeate. I don't know that they are mine; I'm not sure it's a stakeholder preconception so much as a general community preconception sometimes. I have two. One is that some people do view that there is one silver bullet solution that can solve the problems. Some people look at it in that vein and try to find the one thing that can fix the problem. That's one thing.

The other preconception I think people have, and this will obviously depend on where they sit, is that it is either a technical solution or a policy solution. And one of the largest issues I think we have in cybersecurity is that lost-in-translation gap between the technical community and the policy community. It has gotten better over the years, but it still resides in our discussions, between those that are deeply steeped in the technical issues and know exactly how this all works, and those that are trying to figure out how to address it from a policy standpoint.

Thank you.

>> JOHN SELBY: From an academic perspective, we come from many different disciplines and have different core assumptions, so we don't all have the same preconceptions in academia. I can suggest some which may be more common among academics, but there will always be those that disagree with me. The academic community realizes that many of the design choices that drove the successful growth of the Internet as a research tool are also contributing factors to the security risks faced by the Internet, as the heterogeneity of its stakeholders has increased over the last decades. Many mathematical and technical models have been proposed to ensure security, but there are public records of the weaknesses within those mathematical and technical models, especially in the face of agencies that have invested in many Ph.D. mathematics graduates. Quite a few people within the academic community were aware of the weaknesses exposed by, among others, the Snowden leaks. What we have not done is communicate those security flaws to the public, and why the public remained ignorant for so long is an interesting research question.

>> MALCOLM HUTTY: Thank you. In terms of preconceptions within my own community, data network operators, you are probably going to be looking at factors that result from other splits, other divisions within the community itself. So you will get one group that will say: When it comes to user security, that's the user's issue and nothing to do with us; our job is to make sure the network continues to run and continues to push bits. And you have another element of the community that thinks quite the opposite: that it is actually their responsibility to ensure an end-to-end experience that is safe and that excludes the possibility of certain bad behavior, to the extent that that is achievable, and would attempt to take some measures to achieve that outcome.

That, I think, is just a reflection of a broad split within our community that permeates into areas other than security.

The technical/policy split is an interesting point. If I may use up my time for the second question as well, because I think everybody is taking them both together: I also see that as being one of the key splits, between technical and policy. But it is not just that all the technical people think it's technical and the policy people think it's policy. On the contrary, I find that technical people often complain that policy people think certain security issues are susceptible simply to a technical fix, and that the only policy measure required is to instruct the technical people to bring about that technical fix: we don't know how it works, because you're the technical people, so just get on with it.

Often the technical people are looking to policy people to develop policy means to address security issues, whether in a broader policy sense or indeed within a narrow context such as a firm or company, through policies and procedures or personnel measures rather than technical measures, and so forth.

>> So first, just very quickly, I'll qualify the kind of stakeholder that I am representing here. At the World Economic Forum we deal with some of the top decision makers in the world. I'm talking about CEOs or chairmen of the world's largest companies, heads of state or heads of government, heads of civil society organisations, university presidents. This sort of group.

And on this question, a couple of things. One, for sure, there is this bifurcated concern between the policy and the technical. I would say most of these people think it is a technical solution. I kind of agree with the comment that the policy decision is that the technical people deliver the solution, but I think there's this notion that they don't have a role or responsibility in this themselves. I think that is the biggest and most concerning preconception. If I look at who attends the IGF, and I think about the communities of the stakeholders I represent, very, very few are here. There is a whole economy out there that is heavily dependent on the Internet, on digitization, and has no voice and no recognition of its role or responsibility in this space. Certainly the notion of cybersecurity is something they have to take far more seriously, and there's a giant gap. I think that's a big one.

>> LAURENT BERNAT: From my perspective, my stakeholders are policymakers, whether in government or across the various stakeholder groups, and I think what we heard reflects the fact that the levels of maturity are different within each stakeholder group. So at a high level, consistency is not too hard to get. But getting everybody in society, across each of these stakeholder groups, to understand the concepts and carry them is more difficult.

So on the notion that this is a static environment: people would agree it is not a static environment; this is a very dynamic environment. But when it comes to security, they tend to think, as Liesyl said, that you can have a silver bullet, that you can fix the problem. And it is even worse: someone else will fix the problem. When we come to the CEOs and the management boards and the people on the decision-making side of the economy, some of them, not all, may tend to think that somebody else, the techies, will solve the problem: it's a technical problem, I pay people to deal with it. But it's an economic and social problem for the business. It is a question of my business succeeding or failing. So it is an economic problem. And to solve an economic problem I as a CEO have to make decisions. Then, down the road, it is implemented at a technical level through security measures.

So again, I'm going to repeat this many times, I'm afraid: it is a risk management issue for which security measures are one part of the solution. Security is not the goal; security measures are the means. The problem is an economic and social problem requiring attention from the decision maker on the economic and social side.

So this understanding is far from widespread across the economic and social community at large, whether the private sector or other actors.

>> NICOLAS SEIDLER: Thank you very much. That was a useful sort of assessment of your own communities' perceptions of cybersecurity. The follow-up question was: What do you think are preconceptions that other stakeholder communities have? Malcolm touched on it that policymakers think that techies have the solution and techies think that policymakers have the solution.

Would any one of you like to address preconceptions that you think other stakeholder groups have? Or do you think it has been covered? John?

>> JOHN SELBY: In terms of public preconceptions, one perhaps is that cybersecurity agencies have been focused exclusively on cyber terrorism and cyber warfare. The extent of their other capabilities is much less recognised. An article came through the press this morning about the South Korean cyber warfare command engaging in a campaign against the opposition parties during the election. That's just come out in the press.

Second, the magic bullet solution that has been discussed a few times: I think a lot of the marketing in the tech community has contributed to the misperception that these silver bullets exist; whether they can deliver them is another matter. In the Australian context, strategies which might be effective in an offline context, such as border controls, are translated to the online environment, where the relationship between attackers and defenders may be very different from the offline environment.

>> LIESYL FRANZ: Thanks for reminding us of the follow-up question. I'll pick up on the comment that each stakeholder group, or perhaps subgroup within the stakeholder communities, might think it's somebody else's problem to solve, and so miss the point of what I think we talk a lot about: shared responsibility across the community, whether you are a user or a provider or a government or a business person. Depending on what part of the cyber space is yours, whether you are a vendor or a user or an individual who is blogging, there is a concept of shared responsibility that is missed when you think it's somebody else's problem to solve completely.

>> ALAN MARCUS: I'll give maybe two anecdotes; I think they express a bit of an answer. One: a CEO stood up in a session that I was involved with and said: Okay, so I know that when missiles from another country fly over our territory and blow up my data center, my job is not to fire missiles back. I know my job is not to create an umbrella over it. I rely on my government to respond to such an attack.

But when my data center is destroyed through some sort of digital attack, I actually have no idea who to call. I don't even know what I am supposed to be doing, or who I should be working with to prevent this from happening. I thought it striking that, at a senior executive level, they made this very open statement.

The other statement came from a board meeting at a U.S.-based company that makes medical devices, which are connected digitally outside of the body -- not initially to the Internet, but to some other machine which of course is connected somewhere to the Internet.

And they were concerned about people exploiting this connection to maybe kill people or steal intellectual property or anything of the like. And the chairman of the board basically said: Well, it's okay. We are working with the U.S. government. We will be protected. They'll ensure that our property and our concerns are taken care of. This is a company that sells more outside of the U.S. than inside the U.S., and they never once understood or thought through what happens when the challenges they are worried about happen outside the territory of the U.S. Who actually is accountable to help them? And can they even really depend on the U.S. government to do this?

When I hear those sorts of stories and look people in the eye, you can imagine how much deeper these gaps in understanding are.

>> LAURENT BERNAT: Thank you. Well, I think there is another widespread misconception. I'm not sure across which community it is the most widespread. Actually, I do have an idea. It is that security is always good: whatever the kind of security, more security is always good. When you see it, again, from an economic and social perspective, of course you need security, because without security you don't have trust, and without trust there is no economic and social interaction. You need security.

But if you have excessive security you fall into another problem: suddenly it conflicts with privacy. People who think they are being surveilled take fewer risks, because they feel that their confidentiality is breached. Fewer risks means less innovation, less creativity. Security is good insofar as it is balanced. And of course, there is a community which thinks security is always good, and that's the national security community. And, I mean, we are happy that they have that role, because we also need national security.

The problem is how the balancing takes place, to ensure that the national security view of security, which has to be there, is balanced and doesn't lead to so much security that it inhibits economic development, because some measures will prevent people from being more creative, or reduce their incentives to be, et cetera. That is really an issue. One way to balance too much security versus not enough security is to see it from a risk perspective, which means understanding that security is not the objective. There is a level of risk that you always have to accept. You never have full security. You have to accept that level of risk.

>> LIESYL FRANZ: Just to add another element to the too-much-security aspect: there is certainly the interplay between security and privacy, but also the interplay between security and functionality. That's part of the risk management decision that any enterprise, whether government, company or organisation, needs to make as it determines what its output is, how it is going to protect that, and how it is also going to enable use and functionality. That's where the risk management piece comes in as well.

>> LAURENT BERNAT: I am going to get the red flag soon. I just want to add on that: privacy will conflict if there are too many security controls, but you have many other dimensions. As you mention: functionality, convenience, usability, but also freedom of speech. Pretty much all the other dimensions will suffer if there is too much security. So it is really a matter of balancing security against all the other dimensions.

>> NICOLAS SEIDLER: Thanks. Actually we got back to the notion of balance and trade-offs, and the notion of shared responsibility that you mentioned before. These are all elements we are going to come back to.

One element that Laurent also mentioned is the notion of trust. So what elements need to be put in place to ensure that all Internet users, including citizens but also companies and governments, have confidence in the Internet? As has been said, confidence and trust in the network is a key requirement for a vibrant Internet for economic and social growth.

So what elements need to be put in place to make sure that there is this confidence?

>> So the first thing is, we need more cooperation between all the stakeholders. I think that starts with the ability to share information, which in many cases right now is actually prohibited, particularly at the corporate business level, where a lot of concern lies.

So some people might call it safe harbor or something like that where companies can actually share information with each other; can share with governments, can share with others to really understand more of the threat and become a more resilient type of organisation. I think that's certainly one.

>> MALCOLM HUTTY: I have a few incomplete suggestions. First is accurate stakeholder expectations about the extent to which the Internet, or any aspect of life, will be free from risk: getting those expectations right. The confidence that people have in the Internet is rapidly shattered when unrealistic expectations crash against actual events. Secondly, broad and effective education about risks and how to minimize them; that is an ongoing task.

Third, the ability to make informed and varied choices about a variety of Internet parameters, including privacy: empowering stakeholders rather than having one model fit all Internet stakeholders. Fourth, effective oversight of public sector national security agencies and private sector entities, to help with the balance that has been discussed.

>> Wow, I don't have a bullet checklist of everything I want to put in place, but at a more practical level of dealing with the risk, I would like to come back to risk management. The element that I think people of all stakeholder groups want to see in place is, essentially, the elimination of infinite variables. Basically, risk should be something that they can mitigate. If it's just pure uncertainty, and they don't know anything about what they are exposed to, or what they should do if something goes wrong, or how to respond, then that's not a secure environment by any understanding of the term, and it is not an environment in which you are going to feel at all comfortable.

So you are looking at how you ensure that risk can be managed, how you ensure that it can be mitigated. Coming back to Alan's example about missiles flying overhead, one of the key threats in this space is the threat to confidence: that you could be subject to attack and not know where it came from or where to turn. All the protection that is realistic for most people and most stakeholders, nation states being the possible exception, is purely defensive: protecting security, making sure it doesn't happen in the first place. That is limited. In terms of being able to turn around to your country and say, can't you do something about this for me? The confidence isn't there at the moment, because you don't have the confidence that government can even attribute accountability for attacks that have taken place in a way that it can then pursue, or be willing to pursue.

That is a major problem.

>> LIESYL FRANZ: I guess I would add two elements to the discussion. One is communication. John mentioned it. Communication of your aims, your goals, your terms of use, say. The more information that users have, the more empowered they are to make their own choice about whether or not to pursue their search or their interaction online. So that is certainly one.

The next one I would say is that this is a medium in which there is a constant development and evolution of best practices and behavior that users come to expect as they have their interactions online. If you don't go along with what those best practices are, you will not be utilized, in a way. I think that is a social, developmental, acceptable-use kind of norm, I guess I would call it, that is continually developing.

>> LAURENT BERNAT: I think cooperation is a very essential keyword here. It is not just, say, public/private cooperation. That is absolutely essential, but it is not just public/private. It is cooperation within the different silos in government. It is cooperation between the national level and the international level. It is cooperation across private sector players, and between the private sector, civil society, business, the Internet technical community, et cetera.

It is also cooperation between the technical level and the policy decision-making level, in both directions. So it is really a very important keyword. And it deals with sharing information, that is absolutely true, but before that it is perhaps about sharing views and understanding, in order to build a sufficient level of trust for, at some point, sharing information.

There is the Three Musketeers slogan: one for all, all for one. And I think we are not going to achieve anything in this space if there is not a shared culture, a shared understanding, that this is all about everyone sharing some responsibility according to their role. I'm quoting here the OECD 2002 security guidelines. That's a very important concept.

That leads naturally to the multistakeholder model, but I think there is another keyword that is important to have here which is transparency. Perhaps transparency is a cross-cutting theme. If you put it in the discussion and people start to understand they have to play under that rule, it helps.

>> NICOLAS SEIDLER: Thanks. And actually the fact that you mentioned cooperation is a very nice transition to the next question. So where does the responsibility for addressing Internet security issues lie? And how can we most effectively combine efforts from different sectors?

>> Pass the mic? Internet security needs to be addressed by all stakeholders, not just the supply side, the demand side or the regulatory side; none can solve security on their own. All stakeholders need to understand the incentives and goals of the other stakeholders so they can more effectively communicate with each other. Governments and civil society could combine to fund interdisciplinary research -- self-plug here -- to collect empirical evidence which would better inform policy debates and accelerate the development of new models for security. That might be one solution.

>> ALAN MARCUS: I think we are all going to say "everyone." When we say everyone, everyone, every citizen, every employee, every business, every organisation, every government, every silo within government, everyone has got a responsibility here. I think until we all recognize that -- I mean, it's in the same way if you think in terms of physical security where one lives. People who live in neighborhoods that don't feel responsibility tend to have higher crime rates. They tend to not know their neighbors well. They tend to not be a community, versus communities where people do know each other, take pride in understanding and working together, where they are cooperative with law enforcement. There they tend to have lower crime. There is not much difference there. We all need to recognize we have a responsibility.

>> LIESYL FRANZ: I guess I would come back to the concept of shared responsibility that I mentioned earlier. And combined with communication about where, sort of one area, one person's part of shared responsibility bleeds into another person's. That level of communication is really key. We had an exercise in the U.S. to discuss combating botnets. And one of the biggest parts of that discussion is that it is not the responsibility of just one part of the community to address this issue because it permeates across the spectrum of providers, users. Not just the ISPs, not just the application providers, not just the users. But all in this spectrum of the world that is -- that are exploited by botnets.

So that kind of conversation I think has to occur more often.

>> LAURENT BERNAT: I have not much to add. All participants are responsible according to their role. The problem is to get that message across -- down to my grandmother, who is an Internet user, and to the CEO, and the minister, and all stakeholders, really all of them. At some point we need to capture that simple concept and disseminate it across society.

And it is a challenge with this concept. The first principle of the 2002 security guidelines, so some time ago, was awareness raising. If you don't have awareness that there is a problem, you are never going to feel responsible.

The second principle is responsibility. So really, that is at the core.

>> NICOLAS SEIDLER: Okay. So now it has been mentioned before, the notion of balance can be quite central. So how could we actually strike a reasonable balance between on the one hand the nation's interest in protecting the security of the citizens, and on the other hand the citizens' rights to privacy, freedom of expression, access to information, freedom of association?

Actually, should we talk about a balance? Or can we maximize all these elements? Tricky question.

>> ALAN MARCUS: Okay, maybe I'll start. There's a lot of fear up here, I think.

So first I think there are generational and cultural differences, as we see around the world, between what things people want to keep private and what people want to leave open. I think security is necessary no matter what you want private and what you don't care about being private. There is still always some security necessary. So I think balance is achievable. People always say "or." Privacy or security. Why can't we say privacy and security? The minute we start thinking that, we start thinking about how to solve that. I think it's quite possible.

But there are, in anything, trade-offs. That came up before. When one thinks about risk, it is about trade-off. It is about function versus risk. And sometimes you're willing to take the risk because the opportunity is just so great. And sometimes you're not. I think these are cultural and societal challenges and I don't think there's one answer to all.

An anecdote that always reminds me in this discussion, if you have been to the Serengeti or anything like that, in the rainy season the grasses are quite tall. If you watch animals they are quite fascinating. If the grasses move, gazelles run. They just take off flying as fast as they can. It doesn't matter what causes the grass to move.

In the dry season when the grasses are low, gazelles walk right past lions sitting there with no fear at all. It's because they understand more of what is going on when they can see things than when they can't. Things like transparency matter. I think that's a big key to the balance.

>> I think this is an ideologically loaded question. I think about this from the point of view of someone who prefers certain values over others. I do believe that the listed values here -- citizens' rights to privacy, freedom of expression, freedom of association -- can be maximized. I don't think it needs to be a balance. I believe that you can have a view of security that says that what we are seeking to do is to maximize these values. I see this list as coming from someone who prefers these values and who takes basically an individualist's view of fundamental rights, and who would see the job as trying to maximize the individual's autonomy. A secure environment is one that allows them to do that while respecting someone else's autonomy. That is an objective that is shared by many people.

But there are also other values as well. There are those that think that no, it's all very well, but I would rather prioritize shared cultural norms, protecting my society from being influenced by other cultural values, by protecting my government from being exposed to threats or the lack of confidence that is created when unhelpful or unfortunate information comes to light. Those sorts of things, which are not individual values. They are collective values. If you share those, then you would have a very different interpretation of this, and probably a very different approach to the security measures that you undertake.

So I would answer this question by saying yes, you can certainly do that if that's what you want to do. But there isn't necessarily a worldwide consensus that that's the aim.

>> LIESYL FRANZ: I just would like to refer to the U.S. international strategy for cyberspace, prosperity, security and openness in a networked world.

One of the core tenets of that is Internet freedom, supporting individual freedoms and privacy. We commit to supporting individual freedoms as well as privacy by achieving security, safe platforms, and freedom of expression and association. Our policies in this area and many others have not changed.

>> LAURENT BERNAT: Well, it is a loaded question. I think there is a shared understanding at the highest level and at the social level that we want to maximize all of these dimensions, of course. But there is a technical -- I don't want to say technical. There is a fundamental reality that when you secure something, you create a boundary around it. You create a perimeter around it. That is going to reduce the openness of that something. If you have an environment which is a completely open field with something on it, and you want to secure that thing, you are going to put a perimeter around it. Suddenly there are fewer flows between that thing and the rest of the field. So security, in order to protect, inherently reduces the flows and reduces the dynamic nature of what it is protecting, because any change in what you want to protect will introduce uncertainty. Uncertainty is a source of risk. If you really want to secure it, you have to reduce uncertainty.

So yeah, well, we have to maximize, that's the objective. But the reality is any security measure will reduce the openness and the capacity to change and the information flow. So at the end of the day, if we are talking about information, that is the problem we have to solve. So there is a discussion on balance. It is not whether we should maximize or balance. We have to balance in the real world. The question is how you balance.

There, what you said is really interesting. I know that the question does not refer only to the Internet or to information; the question is formulated in very general terms. What you will find in the digital world is just a reflection of the culture you have in the offline world, that's all. It is not different, and there is no reason why it should be different online or offline. And we will face the same problems when we discuss these points online as we face when we discuss them offline.

>> NICOLAS SEIDLER: Okay. John, would you like to say something?

>> JOHN SELBY: One issue that has not been mentioned here is the question of power. And security can provide power to some groups in society if they benefit more from that security than other groups. So particularly in democratic states the issue of the oversight of security, the effectiveness of that oversight, the effective communication of that oversight to citizens and their voice in that process, I think, is vital to ensuring that the balance, if you want to have a balance of these perspectives doesn't provide or lock in power for certain groups in society at the expense of others, particularly in the long-term.

So I think the danger is, particularly if you design particular forms of security into infrastructure, they can be locked in through network effects and be very hard to remove if the society later on believes that there needs to be a rebalancing.

>> NICOLAS SEIDLER: Thanks a lot. So we've reached the end of the first section of the discussion. I think we have had some quite fascinating points from all the different speakers. Before I open the discussion to the floor, I wanted to get back to our two key experts in the audience, Merike and T.H. Are there specific aspects of the discussion that resonated with you? Or any issues you wanted to highlight based on this discussion? Do you have a mic? Thank you.

>> MERIKE KAEO: Yes, there's actually quite a few things that I was thinking about, what would be the most poignant to this group.

One is that for some reason people think that security in the physical world is easier than in the virtual world. And really, they are both the same. For some reason people think that in the virtual world you have to have as much absolute security as possible, but that does not even exist in the physical world.

And when people think about, okay, the overarching cybersecurity, what does it mean? It does mean absolutely everything. And when you talk about it, I think that's the assumption. But when you ask somebody: Well, how do you provide your cybersecurity? Then it starts getting really complex. It deals with physical security of the devices, network security, application security, in so many different ways. When you get into the intricate details, that's where things become complex. I think that's where we absolutely have to have the intersection of the policymakers, the technology folks, and the operational people. And I do not mean the network operations people, but basically: how do you run your business? What are your business operations?

And the third factor is law enforcement. I will have to say in the last five years I have seen a lot of collaborations between all four communities which I think is great. So that was one of the statements I was going to make.

Another one is that Mr. Marcus brought up data sharing. The criminals are really good at sharing data. We are not. Right? And we keep talking about what is a privacy issue. And I ask people: well, what is the privacy issue? "Well, it's a privacy issue." I want to get into more detail. In today's world, what do we mean by privacy? If a criminal can get access to data fairly easily, why are we trying so hard to protect it in the name of privacy? Maybe we have to reconsider what is meant by privacy in some places. Those are my comments.

>> NICOLAS SEIDLER: Thanks a lot, Merike. Very insightful. T.H., same question on the specific aspects you felt resonated with you?

>> T.H. NGUYEN: Yes, hi. I want to actually go back to the first question which was the definition of security because I think some of the responses I was, I noticed that maybe your definition of security was a little bit different. So it is very interesting, the question about I believe it was Laurent, Laurent mentioned that you can't have perfect security. That is not ideal. I think the basis of that may be that you see security as more traditional. There's this boundary, right, you're keeping assets away from other people. So if you have a world of no risk, perfect security, then that means no sharing at all versus I think Mr. Hutty, his definition may be less traditional. And it is more the risk management framework that security is really the optimal boundary. I think maybe you two are actually aligned. It is just that your definitions of security are different.

The second point I want to mention is with respect to Liesyl, the shared responsibility. There was a lot of talk about shared responsibility which is really interesting.

I come from a company that is trying to make security kind of a no-brainer for consumers. Our thesis is that the market is sort of broken because consumers, normal people -- I work at a security group, I'm a policy person who reads RFCs. Still hard for me to decide, what do I do when a security certificate is displayed to me? I work with guys who are experts on this and ask them all the time. I still don't feel capable of making this decision. I have been taught don't click.

So it's really hard for consumers. Yet we place on them breach notifications, right? We want to be transparent. But is that, should that be, should -- does that really reflect the reality? The analogy I like to think of, in the U.S. at least we don't place a burden on tenants to know how to fix their own apartments and this frees me up as a single renter to be able to move around at will and just follow my career passions. I don't have to think about oh, my gosh, I have to upkeep this home, go to Home Depot on the weekend and I can't even take care of a plant, right?

It's interesting that in the security space we put that burden on consumers without asking: Is that the right way to do it? And my last point is, I really like Alan Marcus' comment about community, and that if you give people skin in the game, they'll help to make it flourish. They'll have a neighborhood watch. This is a plug for what we are doing. We are thinking about this a lot. We are applying for a top-level domain. We are actually going to build a security software regulatory program that is tied to a top-level domain. Crazy, never been done. And the idea is to use this top-level domain, called Dot Secure, to claim a neighborhood on the Internet that is secure. For consumers, the idea is for it to be a no-brainer. For businesses that register on Dot Secure, we say: hey, before we actually put a space in this neighborhood for you and allow you to set up shop here, you are going to abide by these policies, a collection of RFCs that the industry we galvanize says is going to make your website among the safest on the Internet, and still you can make money. And then on an ongoing basis, our technology platform would use software and human cops to patrol the neighborhood and ensure that there is continual compliance with a set of baseline policies, which will be upgraded to keep up with the latest attacks.

Thank you.

>> NICOLAS SEIDLER: Thank you very much, it has been very interesting. Unless some of you would like to react to those two points? Okay, please.

And please, you have been very patient. Prepare your questions. It is coming.

>> LAURENT BERNAT: Thank you very much for giving me the opportunity to clarify my point. I think we agree. I was just commenting on cybersecurity and to me the problem with security is that it is generally understood, and there is a technical reality that you have to put the perimeter.

But taking that into account what you really want is to manage security risk. So we actually do agree. But in order to manage security risk, the security measures you put in place are going to, you are going to have perimeters, you are going to have limitations of flows to some extent.

The question is, where and how much? And what defines where and how much and which measures is the risk management process. It is not a security decision to decide -- technical security or organizational security decision to decide where and how much you put perimeters in security measures. It is a risk management decision because it will impact the way business operates. The way the activities are going on. It will reduce them to some extent.

So the problem is when we use the term security as a noun, like "we need security," "we want a secure Internet." When we use it like this, all of these nuances go away and we suddenly want something as if it is a problem that can be solved. It is never solved. There is always some level of risk that has to be accepted. Which is why I would support always saying -- though nobody will do it -- not "security" but "security risk." The point is risk. And it is risk in the area of security.

>> LIESYL FRANZ: Thank you. I just wanted to mention two things. Thank you for raising the level of, I think increased cooperation and collaboration that is happening across all the stakeholder groups and bringing in law enforcement because that is a key element of the equation as well.

And secondly, I just wanted to mention that we talk about shared responsibility purposefully. It doesn't mean equal responsibility for each element of the spectrum. And so it's important to figure out what those responsibilities may be in any given situation. That's just an important distinction I would make. Then yes, I do think we have to talk about what level of expectation is put on the user, whether they are an individual user or an enterprise user or whatever it is. It is going to vary, right? But I don't think we will always want to think of it as a burden; rather, we should introduce the notion of choice for the users, so that it's not an imposed level of security or protection measures that are put on an individual. They still have the element of choice.

Just concepts to think about as you're thinking about what that sort of picture looks like.

>> NICOLAS SEIDLER: Thanks a lot. So now we have about 20, 25 minutes of interaction with the audience. I see many interested people. Maybe we can take two or three questions at a time. I think at the back, please introduce yourself. Go ahead, thanks.

>> AUDIENCE: Hi. My name is Alex. I'm quite new to cybersecurity. I was confused by the session. I came here to learn about cybersecurity.

So my question is: What is it actually that we are talking about securing? Thank you.

>> NICOLAS SEIDLER: Thank you. Back to the basics. Maybe just one or two more questions? Can you pass the mic?

>> AUDIENCE: Hi, thank you. My name is Keechang Kim from Open Net Korea. I would like to have the panelists' opinion about how useful it could be to draw an analogy between offline security and online security. I think in offline security, the power and resources for ensuring security tend to be monopolized. We have a military and we have a police force. Not many countries tolerate militias. This sort of monopolized power to ensure security in the offline world -- would it work in securing the online world? Do we want some sort of centralized unit which will take the vast burden of ensuring security? Some countries talk about setting up a cyber warfare unit in the hands of the military or a centrally led Internet service. As John mentioned, I'm from South Korea, and this kind of cyber warfare command could often turn out to do more damage to its own people than to protect them.

So I would like to have the panelists' view about how useful it can be and can this monopolized and centralized security model work in cybersecurity?

>> NICOLAS SEIDLER: Thanks. I think we will take those two questions now. It is two very interesting questions. Up to you.

Malcolm?

>> MALCOLM HUTTY: Monopoly. Taking that first. In the offline, well, we don't have a monopoly for the state on security measures. I lock my door. Most of what we do when we are talking about security, certainly in the private sector, is actually about protective security. So by extension it is a version of locking your door. Putting the burglar alarm in, all that sort of stuff.

When you raise the concept of a centralized body that has force, you are actually talking not about defensive security, you are talking about offensive security. You are talking about getting somebody to go after the bad guys. That is more in the realm of cyber war type stuff, but on a more mundane level there's regular investigation of any crime that happens, whether it's online or offline, and that continues. The jurisdictional challenges in the online space, because things happen across borders so readily, are well understood to be a problem, and there is cooperation being worked on there.

But to be honest, I don't think that there is anything really that radically different in this space, with the exception of the point that I raised earlier, which is attribution. The investigation is so much more difficult.

Going back to the first question, what are we talking about securing? There I think it is going back to the original issues that we were having about, you know, what does security mean? What is it that we are talking about here? And there, I mean Alan started the discussion by saying actually he thought the term cybersecurity was reasonably well understood and I conditionally agreed with that, but only depending upon context.

If you get a bunch of network security officers sitting around a table and say hey, guys, what has been your big issue this week? You know that they are going to be having a conversation pretty much on the same terms. They are going to be understanding the bounds of that conversation. They are going to understand the kind of things they are going to want to talk about. It will be a productive conversation.

You get a group like the IGF together, with the breadth of stakeholders there, and say let's talk about a secure Internet, and it is not in my opinion a helpful conversation, because the definition of what they want to achieve from that is too varied. There are too many differences as a starting point.

>> JOHN SELBY: A couple of points. The issue of cybersecurity people talk about, are we securing things connected to the Internet? Are we securing the Internet itself? Are we securing a wide range of things? And we talk about the Internet of things in the future potentially. That will grow broader and broader.

So the sort of separation between cybersecurity and security may collapse a little bit over time.

The second one: how useful would it be to sort of have a centralized system for security? One big difference between, say, the traditional telecommunications space and the Internet space is between top-down regulation and bottom-up regulation. I think that creates one big challenge for any sort of centralized security body, because it would not be able to respond quickly enough to the development, by individuals somewhere on the planet, of a new protocol or new software application or new piece of hardware.

If you had to get approval from that centralized body before you could release that out to the world, it would destroy innovation on the Internet. So centralized security I would argue is not going to be particularly effective.

One other thing: I would draw back to a historical model, the state monopoly you were talking about. One perspective that I have talked about with a number of governments at times is the idea of investing in one big strategy. The French Maginot Line, built after the First World War, was a solution to that era's problem, but changes in technology meant that France could not withstand an attack in 1939-1940. A zero day attack took down that defensive line: the German paratroopers went in by gliders, landed on the fortress, and took it out from the inside. That was a zero day attack in the physical world.

So the concern is if you build a centralized system, you can't keep it up to date with the changes in the technology. It will be too slow to respond. I argue that's not an effective solution.

>> LIESYL FRANZ: Just an element to add to that is that we talk about Internet as being a distributed decentralized network of networks. So I can't imagine that any centralized form of security would be effective.

>> LAURENT BERNAT: Thank you. What are we securing? From our perspective we are not securing. We are trying to ensure that the infrastructures that provide us with water, food distribution, health, transport -- that these can continue to be effective and even be further developed and be more effective. This is our problem. This is what we are trying to achieve. I'm focused on infrastructure, but it is not just infrastructure. It is a question of people's dealings in their daily lives, of businesses operating; pretty much all the facets of society today should continue to work, continue to function, and continue to develop and improve. This is what we are trying to achieve from an OECD perspective, and managing the risks that these face in the digital environment is, for us, cybersecurity risk management.

On the centralized monopoly security question, my answer here follows what Liesyl said. This is a completely distributed environment which needs flexibility. It is totally dynamic. If you stop it being this, then you lose all the economic benefits that go with it.

Now, there is a collective dimension to addressing this issue. There is a more granular individual dimension to solving these issues. The collective dimension is going to be addressed according to the culture of the country. And some aspects will be more centralized, and some will be more distributive. We still have the individual. Everybody has a role, as we said, and shares some responsibility.

On the collective aspects, whatever is in place needs to keep the flexible nature of the environment, and the openness, and the capacity for information flows to go in all directions. So that is the problem with the monopoly approach and a centralized point. It tends to be static, and it becomes part of the problem rather than the whole solution.

That is a very conceptual response. It's a conceptual question and hard to generalize on the topic.

>> ALAN MARCUS: So societies create laws and part of those laws are to protect things we think are of value, digital or physical, we want to protect them. To me that's the analogy, online versus offline in what we are securing.

One point, security has a factor in it called time. We lock our door. Do we put on a burglar alarm? It depends. What are we protecting and for how long do I need to protect it? We separate enforcement of the law that society created from ensuring that we are not being infracted upon. That's just time. What are the right tools, centralized, decentralized? What is the time factor that we need to protect against, and pick the right tool.

>> NICOLAS SEIDLER: Thanks a lot. We have 15 minutes left. It is very short. Do we have remote moderator's questions? Okay? Merike, you wanted to say something very quickly? Then we get to the second round of very short questions and very short answers.

>> MERIKE KAEO: Yeah. I mean, I have helped so many people secure their infrastructures that I wanted to address the first question: what are we talking about? The first thing is, you have to figure out in your environment -- be it a government, be it a business -- what is it that you are using on the Internet, right? And what is it that is important for you to protect? Because you're protecting the data. It is all about the data and the information on there, whether or not it's in transit or whether or not it is something in the ... (garbled speech.)

What you are going through is: if somebody has access to this data, am I going to go bankrupt? Am I, as a government, going to have big political issues? Starting from data classification is really where you then start looking at, well, how many ways can people get access to it in a virtual environment, right? And then it becomes hard because you have to look at all the details. So it starts with what you are actually trying to protect, which one of the panelists also brought out.

>> NICOLAS SEIDLER: Thanks a lot. So ten minutes left, about I take three, maximum three questions. Someone here? Thanks for your patience.

>> AUDIENCE: Thank you. Venicia Black ...

(Lost audio.)

>> -- personal risk management.

>> NICOLAS SEIDLER: Thanks a lot. Any other question? The gentleman over there I think raised his hand.

>> AUDIENCE: Thanks, Mike Nelson, Microsoft. Actually one comment and two short questions. The comment is, we are not protecting only data. We don't agree with that principle: when the, you know, processes of companies, et cetera, are attacked, it is not data that is attacked. It can be much bigger.

But the two questions I have are very short. One is, in protecting critical infrastructure -- when we talk about critical infrastructure for countries -- in many cases we are leaving the decision of data protection to the CEOs of those companies. Don't you think that it is actually a policy decision, not a CEO decision, to protect whether it is utilities, transportation, et cetera? Because in many cases they are looking at it from their own company's perspective, I would say, not from the country level. I think there's an opportunity to look at it from a policy perspective, at what it means for the country. That is my first question.

The other comment I want to make: we talked a lot about security from a protective sort of angle, but I look at it from another angle, which is a lifecycle issue. We are protecting ourselves, but at some point in time we know there will be an attack. The question becomes: how do we detect it? How do we solve it? And how do we recover?

So it is not just about how to protect; it is a sort of lifecycle. I would argue that attacks will always be there. How we detect and how we recover is really another element of the discussion.

>> NICOLAS SEIDLER: Thanks a lot. I know many people will be very frustrated but these are already two quite broad questions. So I think we close the open questions and if the panel could answer and if possible, again quite concise. Thanks a lot.

>> ALAN MARCUS: All right, I'll start.

(Lost audio.)

(Please stand by while we restore the audio feed.)

>> ALAN MARCUS: It is nice as a parent to try to protect your children from everything they will do wrong, and it would be great if government and other institutions could do that for us. The reality is we are going to have to test a lot of this ourselves. It is going to be painful at times. Some people will get hurt more than others, but that is the nature of learning as a society. So I think we will see a lot more of that. It is interesting -- and I'm quoting an ITU number -- that something like 92 percent of critical infrastructure in the Internet or digital telecom space is owned by the private sector. The private sector has a responsibility. I think that's a challenge. How do we make national policy for critical infrastructure if most of it is privately owned? I don't have a tech solution to that, but we do need to recognize that it is easy to say "let's have national policy" when execution is in the hands of the people who run this. Unless we are going to nationalize infrastructure, which is a whole other debate. We need to understand that there is a challenge.

>> LAURENT BERNAT: Well, all this infrastructure that I mentioned that enables society to work -- I was making the point that this is not about securing the Internet. It's about making sure that society works and continues to prosper. And again, taking your question, it is not very different from the offline world. Many, almost all, of the fundamental infrastructures that enable society to work are managed by the private sector. And sometimes there is some degree of regulation because there is a collective dimension to that. So you say: hold on, it is not just your interest, it's the nation's interest, all the people's interest as a whole. You should do this and that to make sure it all goes well.

That is not different because there are bits and it is digital. This is going to be the same debate. The difference is that it is going very fast. It is very new and there is a lot of immaturity in approaching it. It requires generations to acquire the culture to approach all of this from a policy and technical perspective in a consistent and coherent manner. This is going very fast and we see the threats increasing and the risk increasing, putting everybody under pressure. Sometimes wrong decisions are made because there is an emergency and people want to do something. We are facing all of this. It is the process of getting more mature in that space. It is a multigenerational issue.

On the question of citizens, I'm not sure individual responsibility is different online than offline. Again the difference is that nobody is mature enough. It is very new. We are in the infancy of this. It took how many years to get better at road safety, to make sure that people fastened their seat belts, that kind of thing? At some point legislation had to step in, and that experience is different across countries. It is the same here. I think again it is a multigenerational issue and it starts with awareness raising and approaching it from the right perspective. Again, when I see the marketing of some companies, or some politicians saying we need a safe, secure Internet and that product is going to make you secure online -- no, that's not true. We will never have a safe and secure Internet. And that product is not going to make you safe and secure. That is a biased approach and it is not helping the multigenerational challenge of improving the culture. It takes time.

>> LIESYL FRANZ: I think I'll also take the first question to start, because I think it really is grounded in awareness raising, which is an inexact science. It is a difficult art and requires collaboration, investment and a will on the part of all the parties that are involved.

In the U.S. we have a campaign called Stop.Think.Connect. It is a public/private partnership. It brings together many of the stakeholders that provide Internet and Internet-type services, and government, in the goal of raising awareness for our citizenry. And we also work with other countries to coordinate and collaborate on building awareness on a global basis as well.

All of it can be done -- there can be more done in that area. I think if that is the goal, then there needs to be that investment. I think that's something that we should aspire to.

The second thing is with regard to critical infrastructure, and I'll also be parochial for a minute and talk about the U.S. government's approach. Earlier this year President Obama issued an executive order on cybersecurity and a companion Presidential policy directive for cybersecurity in the critical infrastructure.

It represented sort of an evolutionary process in how we deal with our critical infrastructure in what we call a public-private partnership. I think it's a little too stark to say that all companies that run critical infrastructure only care about the company. In the end they have customers. They have a reputation, and they also have a mission, a goal to provide that critical service or that critical good. So it is not entirely true to say that all they care about is the bottom line. If that is gone, they don't have a bottom line.

So that partnership has been very, very important, and the executive order lays out a direction for how government agencies work on cybersecurity and how they work together with industry. So it involves cybersecurity information sharing between the government and the private sector. It also calls for the development of a cybersecurity framework to identify and implement better security practices among the critical infrastructure sectors. So there is a recognition that a top-down mandated approach -- just to go full circle, back to the silver bullet -- is not going to achieve the solution in one fell swoop.

So that collaborative effort of building best practices, coming to consensus about what kinds of standards might work is sort of the goal of those two efforts.

>> NICOLAS SEIDLER: John, we will consider these as final statements because we have reached the hour. So John and Malcolm, would you like to say any last words? Do you have any preconceptions that you are going to throw out after this discussion?

>> Well, I was going to say one thing in relation to the question about how we enhance the dialogue that might serve as a concluding remark.

We need a bit of modesty in this space. I see that question as being related to the point raised in the key discussion about: I see this certificate warning, what do I do about it? How do I know? And we don't have good advice. We don't. But actually, looking at that example a bit further, we don't even know whether a certificate authority is a good way of helping people. There are real reasons to suggest it is not; that relying on certificate authorities is a poor security practice. Yet it is a key mechanism that is relied upon and needed.

What I think this little example shows is that we just don't know. The idea that it is simply about education is true only in a trivial sense. We all work together, we cooperate, we work to help deliver good messages to consumers and individuals and so forth. That's true in a limited sense.

But in the broader sense, looking at how we are going to deal with this in the future, we have to realize there is a lot we don't understand about this field. We are not at the stage of road safety where we can say, well, we've got cars now and people need to know to look both ways before they cross the road, and we have to have a campaign to make them do that. Many of the things we may be saying may turn out to be bad choices, even with the best knowledge from the state of the art as it currently exists.

That, I think, goes to the question of excessive security and excessive controls. To be honest, in my opinion there is no such thing as too much security, as long as I get to decide what security is. But we can all agree you can have such a thing as too many security controls. In this space you can have inappropriate controls, and that includes messages to others about how they should behave and what they can do. This is my pitch for a risk management approach. We need to look at what we are trying to achieve and find ways of getting by in the world and managing the problems or threats that we face in a way that is good enough. Not by attempting to achieve a secure outcome, because not only don't we know what that is, we have no idea really about how to get there. What we can do is try to protect ourselves, mitigate the risks that we are exposed to and be open to continually learning and changing.

>> NICOLAS SEIDLER: Thanks a lot. That is an excellent way to conclude this session. I hope it has been interesting to you. I would like to thank the panelists and thank the audience for your participation. Let me thank ISOC for taking care of most of this. Thank you to Nikolai and Christine who couldn't come.

(Applause.)

(The session concluded at 9:37 p.m. CDT.)

(CART provider signing off.)

 
