The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> Okay. Let's start. Good afternoon. My name is Nathalia Patricio and I am standing in for my colleague, Professor Graham Vila, who co‑organized this workshop, and I am not so familiar with the subject, but I will do my best. Well, the title of this workshop is Measurement and Spec Standards to Support Net Neutrality Enforcement. I would like to thank all the people here today. Thank you very much. It is a pleasure for us to have such qualified professionals in this workshop. Our table is multi‑stakeholder: Mr. Sunil Bajpai, Chris Marsden, Christopher Lewis from Public Knowledge, and Klaus Nieminen from Ficora. I apologize if I didn't pronounce your names correctly. I would like each of you to make a brief presentation of yourselves before you start your speech. Concerning the content of this session, it is focused on measurement standards, tools, techniques and so on, all the means that could support Net Neutrality regulation and oversight worldwide. We posed three policy questions to the speakers to guide their interventions, and they are there. A, what are the main challenges for the consolidation of Network Neutrality in network operation all over the Internet ecosystem? B, how has law enforcement been dealing with Network Neutrality? C, what are the issues to address in common standards and measurement tools that could support efforts of harmonizing practices for Network Neutrality conformity? Those are the principal axes for the session. They are merely key topics to provide us some common ground for the session organization.
The speakers will be free to approach the questions in their presentations as they wish. And, of course, they already know the questions.
Regarding the dynamics, they have up to 10 minutes each to present and we have 30 minutes for questions and comments, so we should start the conversation.
>> Alissa, you go first.
>> Thank you. So my name is Alissa Cooper. I am the Chair of the IETF and I do work at Cisco, but I will be talking about IETF‑related developments, and I did my Ph.D. thesis work on Net Neutrality, so it's a personal interest of mine as well. For those who may not be familiar with the IETF, it is a global standards development organization that develops the core protocols for networking on the Internet. Some of the ones you may have heard of are IP, HTTP, DNS, BGP and so on. In the IETF we like to follow the approach of creating building blocks. We haven't created anything like a top‑to‑bottom measurement system, but we like to create pieces such that others can go off and build those complete systems and architectures. What I am going to talk about today are some of the key building blocks that we have created for measurement on the Internet. And I'm also going to touch a little bit on some important developments that are happening in the standardization area that are going to affect the future capabilities of many different initiatives to be able to measure. The Internet is composed of many, many different entities; even within a single transaction you might have many entities involved, and packet loss and all kinds of things happening on the network that are hard to detect. So we have done a bit of work to try and make it easier and more harmonized, but gathering measurements in a systematic way is still very much a challenge on the Internet. The main piece of work that I am going to talk about today is called large‑scale measurement of broadband performance, which we call LMAP. This was started several years ago and the scope of it was to define standards to allow for measurement from broadband devices: personal laptops, mobile devices, home and enterprise routers and so on. And the goal was to be able to have measurements that could be made using the same metrics and mechanisms for a very large number of points on the Internet. 
And to have the results collected and stored in a standardized form. The assumption that we had in building this system was that the whole measurement system would be under the control of a single entity, either a regulator or industry, and we had involvement from several of those regulators. And we were trying to create some standards to help facilitate those projects. In the LMAP architecture you have three basic entities. You have a measurement agent, or a probe as it is called in the diagram. This can sit on some sort of specialized hardware like a probe or a white box, or it can be sitting on your very own PC or mobile device. The measurement agent performs measurement tasks. It can observe existing traffic, which, for people who are a little more familiar with measurement, means it can do passive measurements, or it can actually generate traffic of its own on demand in order to be able to measure the network; those are called active measurements. So that's the measurement agent. Then we have the controller, or the measurement server. The controller basically manages the measurement agent. It provides the measurement agent with a schedule of measurements to perform and a schedule for when it should report those results to the collector. And then the collector's job is to collect those results and store them in a repository, so that later on some software or some human being can do some analysis on the collected data. That's the basic architecture, quite simple actually.
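The agent/controller/collector flow just described can be sketched in a few lines of code. This is purely illustrative: the class and field names below are invented for this sketch and are not taken from the LMAP specifications.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three LMAP entities: a controller hands a
# measurement agent its schedule, the agent runs tasks, and a collector
# stores the reported results in a repository.

@dataclass
class Schedule:
    metric: str          # e.g. "round-trip-delay"
    interval_s: int      # how often to run the task
    report_to: str       # which collector the results go to

@dataclass
class Controller:
    schedules: list = field(default_factory=list)

    def instruct(self, agent):
        # The controller manages the agent by giving it its schedules.
        agent.schedules = list(self.schedules)

@dataclass
class Agent:
    schedules: list = field(default_factory=list)

    def run(self):
        # An active measurement would generate traffic here; a passive
        # one would observe existing traffic. Value is a placeholder.
        return [{"metric": s.metric, "value_ms": 42.0, "report_to": s.report_to}
                for s in self.schedules]

@dataclass
class Collector:
    repository: list = field(default_factory=list)

    def collect(self, results):
        self.repository.extend(results)

controller = Controller([Schedule("round-trip-delay", 3600, "collector-1")])
agent, collector = Agent(), Collector()
controller.instruct(agent)       # controller gives the agent its schedule
collector.collect(agent.run())   # agent reports results to the collector
```

The point of the separation is that each role can be standardized and deployed independently, which is what the LMAP documents do.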
So what did we actually standardize? First we standardized a framework that basically describes this architecture, what all the entities are and what they are supposed to do. We standardized an information model. This is an abstract model that essentially describes what the controller can tell to the measurement agent, how you formulate the tasks and schedules to perform measurements, and how you formulate the reports that go back to the collector.
So that was a kind of abstract information model. We also standardized a specific instantiation of that information model in something called a YANG data model. YANG is very popular in the industry now; in the IETF we have standardized close to 300 YANG models. That's a lot. We took a very generic language which is used for all different kinds of things and we built a data model specifically to be able to do broadband measurement. The other benefit of using a standardized language like YANG is that the existing tools that network operators already have for doing configuration and management on their networks can also be used to instantiate this YANG model. There are protocols that are always used with YANG; NETCONF is one of them. Operators who are already familiar with the NETCONF protocol can use it for this measurement purpose as well.
In other standards organizations and other forums the information model has been translated into other kinds of data models using different data modeling languages. The Broadband Forum, which focuses on fixed wireline standards, instantiated their own model so it can be used on fixed wireline networks. We also standardized a reference path, and this is pretty key. Often when you talk about measurements of what's going on in the network, you hear descriptions like "this is an end‑to‑end measurement" or "this one went from user to user" or "we are measuring the access network," and these things are pretty ambiguous. Where does the end actually begin? It is not always obvious when you are trying to compare across different networks, and when people are using those terms casually you can get confused pretty quickly about whether you could actually compare two different measurements. By standardizing a reference path we wanted to provide an unambiguous way of describing a measurement path. We wanted to give you a way to describe it so you could check against somebody else's description of their path to see if they are actually measuring the same thing or not. And what you can see in the diagram there is part of the YANG data model, just for reference. So those are all done, for people who are interested. But we also have a couple of further items that you really need to complete this picture, and those are still in progress.
So we are creating a metrics registry to ensure that two parties who decide to use this system to run measurements on their network can be sure that they are measuring the same thing. So it is not just the path ‑‑ okay, we are both defining the same path ‑‑ but, say, we both want to measure round‑trip latency for UDP‑based traffic, the kind of traffic that supports voice and video. We are defining a registry so you can register what that metric is, and then if two people want to compare results they know they are testing the same metric. That's what the registry is for, and we are putting into the registry some basic metrics: things like round‑trip delay, round‑trip loss, DNS response latency and loss, jitter and delay variation metrics. These are all standard metrics that you see in lots of different measurement projects. We are putting these all into the registry so they can be used with this system. Those specifications are nearing finalization but they are not finalized yet.
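To make the registry metrics concrete, here is a small sketch of the arithmetic behind three of them: mean round‑trip delay, loss ratio, and a jitter figure in the spirit of RFC 3550's interarrival jitter. The sample values and the function name are made up for illustration; the precise metric definitions are what the registry itself pins down.

```python
import statistics

def summarize(rtt_samples_ms):
    """Summarize probe results; rtt_samples_ms lists round-trip times
    in milliseconds, with None marking a lost probe."""
    received = [s for s in rtt_samples_ms if s is not None]
    loss = 1 - len(received) / len(rtt_samples_ms)
    mean_rtt = statistics.mean(received)
    # Jitter as the mean absolute difference between consecutive
    # received samples (a simple delay-variation measure).
    jitter = statistics.mean(
        abs(a - b) for a, b in zip(received, received[1:]))
    return {"mean_rtt_ms": mean_rtt, "loss": loss, "jitter_ms": jitter}

print(summarize([20.0, 22.0, None, 21.0, 25.0]))
```

Two parties comparing results would only agree if they computed these quantities the same way, which is exactly why the registry entry, not the tool, has to define the metric.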
So who is using LMAP might be a question in your mind. The biggest deployment of LMAP is by a company called SamKnows, which runs a global Internet measurement system. They implemented LMAP on the measurement scheduling interface piece, and they have it in their customer premises equipment. That means that across many, many countries on three different continents you have millions of devices that are running part of LMAP as part of the SamKnows environment. The data model has been developed but hasn't quite seen as much deployment yet.
Thanks. So that's what I wanted to tell you about LMAP. That's what the IETF's contribution to this space has been: trying to provide ways to standardize across different providers. If you want to understand a baseline of, you know, what the delay is for this kind of traffic, what the latency is, what the loss is, we have a standardized way of doing that. The other thing I wanted to talk about a little bit is future challenges to measurement. If we look at the transport layer of the Internet, the part that is historically responsible for managing congestion on the network, the protocols that we have for doing that, the two core ones being TCP and UDP, were standardized in the 1980s. As a result, some of the features that we know about them are in need of a bit of an update. 30 years is a long time on the Internet. There have been many transport protocols developed since then, but they have struggled to get deployed. They have required modifications to operating systems to get deployed and have had difficulty getting through firewalls; often firewalls will only let TCP and UDP through. So there has been a push for a new kind of transport protocol, and the one that we are working on in the IETF is called QUIC, originally Quick UDP Internet Connections, an acronym that doesn't represent the letters in the name. We are really good at that. The design of QUIC is meant to overcome these obstacles. In the past, as you can see on the left side of the diagram, let's say you are using an encrypted HTTP connection: it would be encrypted using TLS, and then all the content that runs on top of HTTP, whatever data you are sending from YouTube or Facebook, would be encrypted. That's the baseline for encrypted web traffic. QUIC takes it one step further. QUIC encrypts all the transport headers that TCP normally makes available. 
These are things like the data about the congestion window size and session identifiers ‑‑ not the payload data, but data that is very commonly used for the kinds of measurements that I was talking about. So operators and others are very accustomed to using all this rich information in the TCP header to measure things like delay and loss. With QUIC, all of that is obfuscated by encryption so that the traffic can get through firewalls unmodified. People have realized this tension, this dichotomy: by making the protocol more secure we are also going to make it harder to measure. We have spent more than a year having a discussion about adding a single bit to the QUIC header that would allow for the measurement of round‑trip latency. It has recently been concluded that this bit will be represented in the header but will not be mandatory. So it is likely that a lot of QUIC traffic will get deployed that isn't setting this bit, and as a result it will not be possible to passively measure the round‑trip latency of that traffic. And again, there is a tradeoff here between security and privacy on one side, which is why people are concerned about adding even one bit to the header, and the ability to measure on the other.
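The one bit under discussion is the "spin bit": it toggles roughly once per round trip, so an on‑path observer can estimate RTT from the time between toggles without decrypting anything. The sketch below illustrates that idea; the timestamps and bit values are invented, and real observers must handle reordering and heuristics that are omitted here.

```python
def rtt_from_spin_bit(packets):
    """Estimate round-trip times from a passively observed QUIC flow.

    packets: list of (timestamp_s, spin_bit) pairs as seen at one
    observation point, in arrival order. Each flip of the spin bit
    marks roughly one round trip, so the gap between consecutive
    flips approximates the RTT.
    """
    estimates, last_edge = [], None
    prev_bit = packets[0][1]
    for ts, bit in packets[1:]:
        if bit != prev_bit:              # the spin bit flipped
            if last_edge is not None:
                estimates.append(ts - last_edge)
            last_edge, prev_bit = ts, bit
    return estimates

# Made-up trace with bit flips at t = 0.05, 0.10 and 0.15 seconds,
# i.e. an RTT of about 50 ms.
packets = [(0.00, 0), (0.01, 0), (0.05, 1), (0.09, 1), (0.10, 0), (0.15, 1)]
print(rtt_from_spin_bit(packets))
```

If the sender simply never sets the bit, the observer sees no flips and this technique yields nothing, which is exactly the measurement gap being discussed.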
So that's going to have a big impact in the future. Already more than 30% of Google's traffic is running over QUIC, so this protocol is likely to have an impact on the network. Intermediaries really can't read the payload data; if you are used to doing measurements in the middle of the network, you are not going to be able to do it. QUIC will also have an impact in terms of traffic differentiation. If you are used to differentiating between different kinds of traffic, all of QUIC is going to look like an encrypted blob. So the choice for operators is going to be to treat all QUIC traffic the same: either you block all of it, you throttle all of it, or you let it all through. Because it is encrypted there is no capability to distinguish between flows. Right now the focus is on building an HTTP layer on top of QUIC, as shown in the diagram, but people are interested in running other protocols over it too, and so increasingly all of that traffic is going to look the same to a network operator and we are going to have to figure out how to manage that.
So I'll leave it there.
>> MODERATOR: Thank you. Now Mr. Sunil Bajpai. I'm sorry.
>> SUNIL BAJPAI: Thank you very much. And you pronounced my name correctly. It is pronounced Sunil Bajpai back home. It is close to what you said.
>> MODERATOR: Thank you.
>> SUNIL BAJPAI: I work as principal advisor in the Telecom Regulatory Authority of India. My job includes quality of service, consumer affairs and IT. Those are the three fields I look after in the regulator.
I would like to thank Alissa for the very interesting presentation that you just made, very informative, and it shows the complexity of doing the measurement and what kind of challenges it presents. I want to ask four questions. The first question is: why is measurement a concern? That may sound elementary, because we have regulations, so we need to measure to see whether people are conforming to the regulations. But look at the problem from the other side of the table. One service provider said that it is very difficult to hide prioritization, because if you are paying to get your traffic prioritized, how would you not talk about it? What is the point in prioritizing something and not letting the consumer know? And if that's the case, then why would you hide it? You can hide throttling, though, because it is possible that the network itself offers competing services which are also offered as OTT services by someone else, and it may have an interest in throttling them.
So that problem certainly survives. The second question that I have in my mind is: why is it difficult? Why is it difficult to do this measurement? Some years ago TRAI decided to launch an application to do measurement of network speed, especially on mobile devices. And we had good traction, we had millions of downloads, and everyone was using it to check whether their network was working well or not.
But when you look at it closely, you realize that what you are doing is extremely difficult. Even measuring on demand, with the request and participation of the user, it is still very difficult to know what you have measured. So we published a white paper that runs to some 60, 70 pages, which describes what we are measuring, how we measured it, and all the difficulties in doing so. The radio network changes constantly. You do not know its condition at that moment. You do not know the condition of the handset. You do not know what frequencies the handset supports or doesn't support, what protocols it supports or doesn't support, or what else could be going on on the device, which could be malfunctioning or suspended for a while, and all of that affects the measurement. So we went through all the statistics that we could derive from the measured values, which run into millions. And we found that during the day it changes, by week it changes, and by almost every factor the measured value changes, and we were able to isolate some of those effects. So that leads to my third question: when you do the measurement, and it is a difficult measurement to do, what does it mean? After you have collected all the data, what does it mean? It is not always easy to interpret it. For instance, I have said this before: if you can ask a question which looks well formed, it doesn't mean that it is well formed and that it has an answer. My favorite example is that if you have a circle and you take two points randomly on the circumference, what is the average length of a chord of the circle? It may seem that you can find that average length, but you cannot, because the way in which you generate these chords, even randomly, will determine what the answer will be. The same thing applies to what we have measured, and this problem is almost philosophical. 
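The circle example is Bertrand's paradox: "a random chord" is under‑specified until you fix the sampling procedure, and different procedures give different average lengths. A short simulation on the unit circle makes the point; the two sampling methods below are standard illustrations, not anything from the TRAI paper.

```python
import math
import random

random.seed(1)
N = 200_000

# Method 1: pick two independent uniform points on the circumference.
# The chord between angles t1 and t2 has length 2*|sin((t1 - t2)/2)|.
m1 = sum(
    2 * abs(math.sin((random.uniform(0, 2 * math.pi)
                      - random.uniform(0, 2 * math.pi)) / 2))
    for _ in range(N)) / N

# Method 2: pick the chord's distance from the center uniformly in [0, 1];
# the chord perpendicular to that radius has length 2*sqrt(1 - d^2).
m2 = sum(
    2 * math.sqrt(1 - random.uniform(0, 1) ** 2)
    for _ in range(N)) / N

# Method 1 converges to 4/pi (about 1.27); method 2 to pi/2 (about 1.57).
print(m1, m2)
```

Both procedures sound like "a random chord," yet the averages differ, which is exactly the speaker's point: a well‑sounding measurement question may have no single answer until the procedure is nailed down.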
When you read the classic philosophers and they start asking "who am I" and "what exists," it spins your head. But the point is that it is very difficult to pin down exactly what we are talking about, and that is the very same feeling that you get when you start looking at what you have collected and what you have measured: what exactly does it represent, what does it mean, why is it meaningful, to whom is it meaningful, and for what purpose is it meaningful? That's another problem. The final question: what could make that measurement ineffective? After you have done all that hard work, and you have done so much thinking and developed the tools, what can make it ineffective? So many things can make it ineffective. For instance, 5G, as we all know ‑‑ of course, we have been talking about it ‑‑ has the option to do slicing. Once you allow infinite slicing of the underlying network hardware, you can virtually put together a bundle of these sliced networks and offer a composite pipe to a user. Each of these slices could be driven by an SLA which the end user has configured, and the business deals that are offered can drive what gets done and what doesn't get done, and you could have the same effect: you could get back the issues that we have been talking about related to Net Neutrality through this route. I think that could happen, perhaps. And mobile edge computing is another issue which complicates these measurements. In a fluid network that adapts almost constantly, it is difficult to know what is happening. If you add to the complexity that the network is offering services to the back‑end service provider, and that is added into the mix, then it will befuddle all the measurements all over again, because you don't know what was decomposed at which stage and how it was served. The third issue could be transparency. 
You want to tell how you do the measurement and you want to tell how you define everything, and the moment the rules are out, there is an incentive to game them. You know the rules, so you play by the rules: I don't break the rules, but I get my end result anyway, whatever I was trying to do. So that creates another problem. And yet you have to be transparent, because if you are not transparent, all the massive evidence that you have created may not be able to prove the offence that you accused somebody of. All of this shows, to my mind, that this is an extremely difficult task. And therefore the clarity with which terms are defined, as you pointed out, and tools are created and data is interpreted, all of that could be greatly beneficial. That's what I think.
So thank you very much.
>> MODERATOR: Thank you. Now Chris Marsden you have the floor.
>> CHRIS MARSDEN: So thank you very much. And thank you to all the organizers of the event. I was at the CGI 20th anniversary celebrations a couple of years ago. It is a pleasure to do things with CGI as much as possible. And I'm also on the stakeholder advisory committee of Nominet in the UK. The message I was about to give, if I could have the slides put up again ‑‑ you just put them away. When they reappear ‑‑ technology, huh? ‑‑ is essentially that if you can't measure what's going on, you can't tell if someone is violating what's going on, and I'm going to refer to a conversation that was had in this very city 13 years ago. Can we put up the slides again? They were there. We all saw them. It wasn't Fake News, I promise. I am also working on Artificial Intelligence and Fake News ‑‑ that is a separate conversation that we can have afterwards, for the European Parliament, and Fake News and Net Neutrality we will be discussing for several years to come as we go forward. There was a book called Net Neutrality ‑‑ there are two books called Net Neutrality, because some of the older people in the room remember that I published a book in 2010. This one has got a subtitle: from policy, to law, to regulation. And maybe we should have had a question mark after regulation, because for the most part we have talked a good game on law but we haven't developed the regulation. There is Net Neutrality law in Brazil and Mexico, and people from those countries say yes, the law is on the books, and that's a good thing. (Question after law)
We are back up. So, is there a problem to solve? This is the famous claim that is made by those who have always opposed Net Neutrality ‑‑ who, I have now said, are in favor of Net Neutrality as long as it is not regulated. If you can't properly measure, we won't know what the situation is. You have already got this in front of you. I am a lawyer, so this is going to be a fairly nontechnical talk. I have just realized, Alissa, I can say I was external examiner of the Chair of the IETF. That sounds cool. This is a recent article that I published in the Communications of the ACM, which has some reference to Net Neutrality; some of you might have seen it. This is a zero‑rating map. But, of course, we know that there is no single consensus towards Net Neutrality now, with the building of walls taking place and very popular. But I do discuss measurement in the book a lot, and one of the things that I talked about in the book is the Paris conference 13 years ago, organized down the road, where ten or so tier 1 companies gathered to discuss, and they all decided that the only foolproof method ‑‑ and we have a problem, which is that so many politicians and even regulators keep saying ISP, as if it is the ISP we are concerned about, when it is the access provider specifically; the FCC has this charming acronym, the BIAS provider, the broadband Internet access service provider, who is expected to provide neutrality. And we can only know if one of our friends in the room is discriminating when they admit to the practice, because they are using it as marketing to sell their services: look at this wonderful voice over 3G we provide you with. So that's really interesting. This is tucked away on page 210 of the book: we haven't been doing a lot of funded research into the measurement of neutrality, and that's been a problem. I list it in a way that suggests we are solving the problem now. But if we had solved the problem of measurement, Mr. Schrödinger would have known what happened to his cat. 
We know it is a design that was fairly effective for a while. So what should we measure? I will say this briefly. Quality of experience is one set of measures. I am going to give a shout‑out to Neubot; I am pronouncing it in the German style. We had a poll yesterday. Data across networks: I am going to say something about Cisco data. It is a shame that the regulators of the world haven't gotten together to measure traffic and traffic speed increases. We have to use commercial measures because we don't have a measure that is being taken by regulators. ARCEP has been trying to measure what is happening between tier 1 providers, and where we don't necessarily know, we have to use other proxies: Netflix publishes its data and Akamai publishes its data, and the CDNs ‑‑ what happens to their data there? I know that they will provide some data, but it would be nice to get more. So I suppose the basic message is we should stop guessing how traffic is flowing. BEREC has done some workshops on this in Paris ‑‑ Paris is where we end up discussing these things ‑‑ and OFCOM has done some work. There is a thing called the Connected Nations report that is packed with data on the measurement of traffic, and there was the FCC Open Internet Advisory Committee. It might be in cold storage, but that was David Clark. If it is free, I will too. And this is zero rating on my network, which is actually telling you exactly when we are going to discriminate, right? If you use this content ‑‑ you can see in the bottom left corner ‑‑ that will be free on your package. But it is not for me, because I'm too cheap to qualify for zero rating. I don't have a package that's expensive enough. I have a half‑gig limit and I use WiFi. So let's face it, my handset is a WiFi device with occasional network access. The network is very actively measuring what's happening. I get text reminders when I get halfway to my limit, continual text reminders every single day. I get told you can upgrade. 
I don't get told anything about specific issues of Net Neutrality, but I think it might be interesting for us to think about whether the networks are capable of issuing customer service announcements that tell us about the implications. I roam in Paris as if I am at home. Can I have my slides back? I'm not exceeding my time. I just keep getting confiscated.
>> It is network management.
>> CHRIS MARSDEN: Yeah, quality of service sucks. We can't do this thing. So if we can't get this back quickly then we might just wrap up. The pretty slides are sitting on the Internet somewhere. You can find them on one of Microsoft's ‑‑
>> MODERATOR: We need to get information.
>> CHRIS MARSDEN: Move on to the next speaker.
>> Just a minute. Just to explain. Very sorry for that. We are experiencing some technical issues because it is lacking some ‑‑
>> CHRIS MARSDEN: It is okay. We can move to the next speaker. Just to say I am sure that everyone enjoyed the denial of a service attack on the IGF website. We are not the only one afflicted. Thank you.
>> MODERATOR: Thank you. Now the next Christopher, Christopher Lewis, please.
>> Thank you. You get another Chris. I am not even a lawyer; I am a policy advocate, and we all work with engineers, so my presentation will be even less technical, because I don't have slides. But I did want to pick up kind of where Chris was leaving off. Because, you know, I come from the country that has created this lack of consensus that you mentioned. I apologize. We are working on that. But it also, I think, has highlighted for us as policy advocates some of the real concerns that should motivate average citizens, the average folks that we all want to mobilize, to demand not only strong Net Neutrality protections but accurate measurement and ongoing monitoring. This highlights some clear reasons why that's important and why we need to talk to the public about those sorts of needs.
And we certainly advocate for having a strong agency that has not only the skills and the technical expertise to measure, but also to look at the broader overlap with other layers and other parts of the network, for very specific reasons. So first I want to highlight the importance of acceptance among the public. The longer we go without strong measurement and strong case‑by‑case analysis of different business practices, the more acceptance we will have from the public that what they are being offered, whether it is free data or other wonderful marketing tactics, is the way that it should be. It is easy for an average consumer to accept what they are being offered as the way the Internet is supposed to work. I think it builds trust for the public to know that companies are being watched. Broadband Internet access services are some of the least trusted companies, certainly in the U.S. but also around the world, and we need to provide those sorts of measurements to build trust that they are being straight with the public. I think it is also fair to companies: once you have high‑level basic rules that can be measured against, as they develop new practices they know what the rules of the road are, and they can innovate within that space and work on a case‑by‑case basis as they come up with new management practices. I think it promotes the building and the advancement of networks and network technology. Networks are cheaper and cheaper to deploy. We hear a lot about 5G; 5G and mobile networks require a strong, powerful backbone of fiber. As we develop new ways to deploy networks that can be ramped up, we are going to get into a space where it is cheaper to have unlimited data plans, and prices are going to continue to come down because of the cost of materials and the ease with which you can upgrade a network's capabilities. So I hope we will be in a world one day where we are looking at mobile networks and wired networks as equivalent. We are not there yet. 
Certainly the return to the marketplace that we see of unlimited network offers shows some of the ‑‑ ***.
>> (Off microphone).
>> CHRISTOPHER LEWIS: This is why Klaus is going last. He is going to address that more. I would add that an agency that is looking at the broad scope of the Internet stack and measuring carefully is important because we don't want to confuse the definition of Net Neutrality, and we often hear, in recent years, calls for Network Neutrality for entities that are not networks. We hear it most often with large platforms, and there are legitimate concerns around platform dominance and competition, and agencies that are empowered to measure and define traffic management can help us clarify that for a public that is not as technically adept as the folks doing the measuring. And that's important because, as we look to a future of regulation around dominant platforms, it is important that we are clear that is not Network Neutrality. We may have come to a time where we need to talk about neutrality for platforms, but that looks different for the broad range of services that are defined as platforms, and I think it is important that we separate those two ideas out, so that the services that the public has come to love can continue to function in a way that uses network effects but functions differently than broadband Internet access service. So that's a distinction that's very important. There is a long list of protections and monitoring that an agency can do, and that's probably for another panel, but it starts with things like privacy, it starts with things like due process for content management, and having an agency that is expert and views that whole panoply of potential harms clarifies the challenge of Net Neutrality harms. So those are some reasons why we need to have measurement at an agency, not done in the private sector. We are seeing it right now, and that's the direction that I think we need to see policy making go. Thank you.
>> MODERATOR: Yes. You can go.
>> KLAUS NIEMINEN: Good evening. I am from the Finnish regulatory authority. I have been doing that for 15 years. And well, first of all I have to say that at least in Finland the situation has improved a lot during the years without having any regulations in place. In 2004 we had a lot of throttling of peer-to-peer traffic, or many ISPs were planning to do that, but those traffic management practices were decreasing and decreasing all the time. Now we have the European Open Internet Regulation as a legal background, and that has helped a lot: we have a legal foundation for our decisions and we can do the enforcement. Can I change the slides? Let me see. It doesn't react to anything.
>> MODERATOR: Just a minute.
>> KLAUS NIEMINEN: Now it is working. Good. So I ‑‑ we have been talking a lot about Net Neutrality today. And I'm not going through the European Open Internet Regulation, but I want to say a few words about it, to be able to illustrate to you the factors we need to monitor. So basically the end users have a right to use and provide information, content, services and applications. They have a right to use the equipment of their choice. And also the regulation imposes requirements on ISPs as to what they can agree with the end users ‑‑ so the commercial practices and traffic management practices.
And especially regarding our measurements, it is important to understand the basic principle in the regulation regarding traffic management, that ISPs should treat all traffic equally. There are exceptions, but that also defines what kind of things we actually need to measure or monitor.
Well, how do we do that monitoring in Finland? Well, I have to say that the situation has been improving a lot, and currently we don't see a need for technical measurements. We have done them. But basically we are using a lot of collaboration and cooperation; we try to promote dialogue with the industry, with the stakeholder groups and end users and ISPs and manufacturers, basically having the discussions on what ‑‑ what things are allowed, basically giving the information and also receiving the information, and we monitor the complaints. And we send information requests when needed. For example, if I need to know what ports are blocked, I send the information request to the ISPs and get, let's say, 50 answers. It is much easier than trying to start monitoring every one of them technically. So I'm saying that the technical measurements are definitely not the only way of doing the monitoring. That said, we do believe there is a need for Net Neutrality measurement tools. We actually believe that the work that's now done in BEREC is really excellent and we are going to take up the tool. I have seen a lot of positive feedback regarding the work done in BEREC, and I understand that also many regulatory authorities are planning to take the tool into their own national implementation.
So basically there is definitely, I would say, quite a big need for this kind of harmonized measurement tools. And then we are really talking also about standardization. We are using the architecture as a basis for the BEREC tool. We think it is a good foundation for a measurement architecture. What we are looking for from the standardization side is the measurement metrics, because the approach chosen by the standardization bodies is a bit different. For example, (inaudible) is aiming for crowdsourcing tools ‑‑ different operating systems, meaning the apps, meaning the browsers ‑‑ and basically that also limits what information you can actually gather from the end user environment and what kind of mechanisms are available for the measurement system. And definitely we are not in full control of the measurement environment. At least I have seen that to be one of the design criteria for many of the measurement standards, but here we really need to maximize ‑‑ and I'm actually talking now with the Ficora hat on ‑‑ but basically we really need to maximize the usability and deployability, so we can get the tools to everyone. And that means that, well, we need to have, of course, also the accuracy. But the accuracy needs to be balanced with those; it is not the only design criterion. And we are having a lot of focus on how to evaluate the measurements, how to evaluate if they are correct. As said, we believe there is a need for standardized, harmonized tools. I have already published an Internet draft regarding the Net Neutrality measurements; that's currently expired, but I am probably planning to revise it because there has been quite a lot of interest in this, and we really try to make information available so that people can help us to develop tools for everyone.
Because it is not only for the regulatory authorities. I mean, at least from my perspective it is more important that we can give the tools to the normal users.
Okay. I wanted to say a few words about the BEREC work. So one of the items we have done is the BEREC Net Neutrality regulatory assessment methodology, which describes a methodology for doing those measurements, to implement the measurement tools and help the monitoring of the TSM regulation. It hopefully helps in the implementation of the measurement tool and also in the standardization of those methodologies. We have a few different items that the regulators need to monitor and measure. The first of them is the quality of service. I mean, there are a lot of standards that define how to measure, let's say, speed, delay, jitter. Those measurements all have different metrics and the results are not really comparable. So we have been building a methodology for European countries that is based on the TSM regulation and the BEREC Guidelines, and we hope to have a harmonized way of measuring quality of service. One of the main goals is to make those results comparable, so basically that the definition and also the results of the measurements are the same.
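The speaker names speed, delay and jitter as the basic quality-of-service metrics whose definitions need harmonizing. As a rough illustration of why the definition matters ‑‑ not the BEREC methodology itself ‑‑ delay and jitter could be summarized from round-trip-time samples like this, using the RFC 3550 style of jitter as the mean absolute difference between consecutive samples; the sample values are invented:

```python
from statistics import mean, median

def qos_summary(rtt_ms):
    """Summarize round-trip-time samples into comparable metrics:
    median delay, plus jitter computed as the mean absolute
    difference of consecutive samples (RFC 3550 style)."""
    deltas = [abs(b - a) for a, b in zip(rtt_ms, rtt_ms[1:])]
    return {
        "delay_ms": median(rtt_ms),
        "jitter_ms": mean(deltas) if deltas else 0.0,
    }

# Invented RTT samples, purely for illustration
samples = [20.1, 22.4, 19.8, 21.0, 20.5]
print(qos_summary(samples))
```

Two tools that both report "delay" and "jitter" but pick a different summary statistic (mean instead of median, peak-to-peak instead of mean deviation) will produce numbers that cannot be compared, which is exactly the problem a harmonized methodology addresses.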
Okay. I said that the measuring of quality of service is (inaudible). But then we are going to the actual Net Neutrality measurements. The connectivity measurements are doable: we can measure if some communication ports are blocked, and we can increase the reliability of those measurements by having multiple different entities doing them. Let's say, if we have a crowdsourcing scenario where we have, let's say, thousands of subscribers measuring a certain topic, then the end user environment impact is smaller in those cases.
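A port-blocking check of the kind described can be sketched as a simple TCP connection attempt. This is an illustrative sketch, not the BEREC tool; a real measurement system would probe dedicated measurement servers, whereas the demo here uses a local listener so the example is self-contained:

```python
import socket

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener so the sketch is self-contained;
# a real tool would probe dedicated measurement servers instead.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))       # let the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]
print(check_port("127.0.0.1", open_port))   # True: port reachable
listener.close()
print(check_port("127.0.0.1", open_port))   # False: nothing listening
```

A single failed probe can mean many things (local firewall, Wi-Fi glitch, server down), which is why the speaker stresses aggregating results from thousands of subscribers before concluding a port is blocked by the ISP.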
But then we are going to the traffic management practices that may affect the quality for certain applications, let's say throttling or prioritization. Yeah, we have also ideas on how to do those measurements, but that's really the hardest topic, because it is tricky to understand what's really impacting the quality of service for certain applications. And I would say that the measurements, especially in this area, give an indication of potential problems. We can't always say that there is a problem, let's say, in the ISP network. It is possible to determine that, well, there is something happening to this traffic, but we don't necessarily know if it is in the ISP's network, in the content service provider, or in the end user network. And that's one of the reasons we have been focusing on identifying and eliminating the end user environment impact. And that's something that, for example, BNetzA has done in Germany, and we are hoping to get good results on that. If we want to have reliable measurement results that can be used by the end users, the regulatory authorities and the ISPs, we should be able to rely on those measurement results. And, for example, getting the information regarding the end user environment, and how it would impact the results, is really crucial.
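One common way to obtain the kind of indication the speaker describes is to compare the throughput measured for application-shaped traffic against a generic reference flow on the same connection. The sketch below is a hypothetical illustration, not the BEREC method; the 0.8 threshold and the sample values are invented, and, as the speaker notes, a low ratio only indicates a potential problem, not where in the path it originates:

```python
from statistics import median

def throttling_indicator(app_mbps, reference_mbps, threshold=0.8):
    """Compare median throughput of application-shaped traffic with a
    generic reference flow measured over the same connection.
    Returns (ratio, flagged); flagged only *indicates* potential
    differential treatment, it does not locate the cause."""
    ratio = median(app_mbps) / median(reference_mbps)
    return ratio, ratio < threshold

# Invented samples: video-shaped traffic looks capped near 5 Mbit/s
# while generic bulk traffic reaches roughly the 50 Mbit/s line rate.
app = [4.9, 5.1, 5.0, 5.2]
ref = [48.0, 51.5, 50.2, 49.7]
ratio, flagged = throttling_indicator(app, ref)
print(round(ratio, 2), flagged)
```

Because congestion anywhere along the path, a slow content server, or the end user's own Wi-Fi can all depress the application flow, a flagged result is a trigger for further investigation rather than proof of throttling in the ISP's network.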
There are also some other chapters regarding the assessment methodology ‑‑ for example, if we want to publish the results, how that should be done.
So the aggregation, basically. And also, in the regulation there is a notion of a certified monitoring mechanism, so we give guidance for the regulatory authorities if they want to certify a monitoring mechanism. And basically that's the assessment methodology that's available on the Internet. If you want to find it, just put the name into Google; it is easy to find. And I mentioned that BEREC is building a Net Neutrality measurement tool. The target is to help the regulators do the measurements, but also to help the end users measure the access quality and detect potential problems in the traffic management practices. We had a tender; it was won by (inaudible), and the target date for the tool was mentioned to be next year, but we can say the hope is it will be available at the end of next year so we can start implementing it nationally, for example, in Finland. And if you want to have further information on the tool, there is also the specification that's available on the BEREC website.
But we hope that this tool will actually help the collaboration further between the different regulatory authorities but also that this tool could actually help the collaboration industry wide. So that was my initial presentation.
>> MODERATOR: Thank you Klaus. Now we should start the questions. No questions? No one? Anybody? Do you want to start again the presentation? Information or ‑‑
>> CHRIS MARSDEN: The slide you didn't see from me: Cisco numbers aren't accurate. Thank God we didn't see those slides in the first place. Do you want to add some information?
>> KLAUS NIEMINEN: I was asked about 0 rating. I can maybe say a few words about that. So basically we had a funny situation a few years ago when one of our mobile ISPs was starting to offer data caps, but the main competitor started a television advertisement campaign against that. And basically after that none of the big ISPs were able to start offering data caps in Finland. We had unlimited Mobile Broadband, and that means no 0 rating. It has never actually occurred in Finland. There is no use case for that if you have an unlimited data cap.
>> You said it is going to be crowdsourced, and you were talking about the pros and cons of that. For crowdsourcing you need to have this understanding of the end user environment, and there is a lot more variability: you can't control when people are going to run the tests, what environment they are going to be in, or how many of them are going to join the crowd and so on. But it sounds like that was the right model for you, because you are not really intending to use this as a regulatory tool but more as a means of gathering information that gets put back to consumers. Did I understand that correctly?
>> KLAUS NIEMINEN: The information could be used in defining the subscription. It is not so black and white that you can always say there is a fault or a problem with the ISP. We need access controls so the server is not overloaded, and we hope that we have actually taken everything into account. We may improve that tool later. But yes, it is a tool that is voluntary for the regulatory authorities, so they can decide how to use it. Some of the regulatory authorities may want to certify that tool as a certified monitoring mechanism that ‑‑ it should give enough information for the end user to file a complaint, and basically now it is really up to the national regulators to decide on the implementation, how they want to use it.
>> MODERATOR: Now we have a line for questions.
>> My first question would be to the Indian colleague. I think you touched on a very important point: measurements can be gamed by the ISP. One point to add here is that actually, if you look at the companies that produce that measurement software, even Alidan, that is tasked with the tender to produce the European software, the majority of money is made by selling to ISPs and having those tools built into the customer complaints process. Of course, that would possibly also happen if those tools were open sourced and another company could just adopt them. But there is definitely a circular reference, and I have no solution for it. But it is something to keep in mind. My question on the BEREC side: are you sure that there is no obligation for regulators to really certify any measurement software? Because how else should a user come to remedies when they don't get the contractually agreed speeds? Consumers should not be left out to dry when the provider is selling them something it doesn't deliver. This whole notion of "I am selling you up to three apples and you only get one" ‑‑ I think it is one of the things that is rightfully added to the Net Neutrality framework, in order to then trigger any consumer remedies. And the second point ‑‑ can you give any assessment on how committed the BEREC community is when it comes to open data on the measurement front? I know that, for example, in Serbia they adopted an open source Net Neutrality measurement tool and then intentionally stripped the open data part away, out of ideological reasons and not very good reasons, I would say. But again, I mean, this open data principle is also important for the comparability between countries, if we don't want to solve these problems for every country, because it really cries for a bigger European or global solution.
>> MODERATOR: I'm sorry but did you say your name?
>> Thomas Lohninger, and I work for a digital rights organization.
>> MODERATOR: Thank you. One more question. Let's hear, Fabio please.
>> Thank you for the speech. I'm Fabio from Brazil from CGI and I'm curious about exchange of information and cooperation between BEREC and national authorities in Europe. What could you add about it? Chris, can you say anything about this? Thank you.
>> CHRIS MARSDEN: I don't think that Klaus has anything to hide between the national authorities and BEREC. In response to Thomas, I was having a look at some of the things that Sandvine has been doing ‑‑ notably what they have done in Egypt and Turkey ‑‑ and they have major stuff.
>> KLAUS NIEMINEN: The latter question: so basically BEREC is the body of European regulators, and every regulator can participate in any of the Working Groups, and we have pretty good participation in the Net Neutrality Working Group. It is a forum for collaboration, sharing the information, trying to see that our interpretations of the regulation are in line, and basically now also developing the tool. So at least from my perspective the collaboration has been good and gives a lot of support to our own actions. I would say we have quite close cooperation, if that answers your question. Well, the open data: the purpose of the tool is that it enables open data, but, of course, as the implementation of the tool is up to the regulatory authorities, so is also publishing the data. So I can't really promise anything on any party's behalf on that.
>> MODERATOR: Please go ahead. Say your name, please.
>> Yeah. My name is Frode Sorensen from the Norwegian regulator. Regarding open data, it is an important aspect of the tool that there is national control of the data that is collected by the tool. So therefore, when a national regulator uses the tool, it would have access to the raw data. When it comes to the open data, there is an intention to collect data on a European level from the different national regulators, and then it is important that when data is exported, the national privacy regulation is complied with. That is the reason for not providing the raw data as open data. There is a limit in the law, in the privacy law. Regarding certification, to your question, Thomas, I think it is important also to realize that the tool is, of course, provided to the consumers, to check your own speed. But it is also a tool for supervision for regulators. So the regulator may use the results from the tool to take regulatory decisions. So I think the tool is really important: even if it might not be certified, as we say, according to the European law, it can still be used to a large extent as a supervision tool for the national regulator. So I think the role of the tool is important when it comes to supervision of the regulation and ensuring compliance with the Net Neutrality law. That's what we want to achieve. I think it is also important, finally, as a final comment, to emphasize that the tool is open source. So it means that it is also available for other stakeholders. It is available for regulators on another continent than Europe, and also available for other stakeholders: for example, ISPs that want to check whether the tool is according to their understanding of the service, and also for Civil Society and any research organization, et cetera. The tool is available and we also hope, of course, that when it is published as open source it could also be developed over time.
So there is also a possibility to use the tool for other aspects than this crowdsourced approach that is chosen in this first version of the tool. But, of course, it is up to the future to prove what comes out of this in the long term. Thank you.
>> Alyssa: I have a follow‑up question. If the tool is open source, then can't you ‑‑ can you specify an arbitrary server to act as the data collector? How do you limit who can collect the data from the tool, assuming that somebody takes the source and wants to deploy it in their own country, in their own network?
>> It is specified in more detail in the specification of the tool. We do it in two steps. In the first step the data is ‑‑ that's the English word ‑‑ protected and only available for the national regulator. And then the data will be washed, whereby only open data will be available in the next step. And these data are also collected in the BEREC database to provide European level statistics, which will then only contain the open data. I don't know if I answered the question.
>> Alyssa: I think so. Thank you.
>> KLAUS NIEMINEN: Of course, if some other stakeholders want to implement the tools, they can ‑‑ I would say ‑‑ build up their own measurement servers and infrastructures. We have been planning to have our own measurement server, and we can do collaboration in BEREC, but especially for the other stakeholders: you can just install your own servers and infrastructure.
>> MODERATOR: We have time for more questions. No?
>> CHRIS MARSDEN: If we want to think a little bit more broadly about the measurements that we still need in addition to the tools: I'm concerned ‑‑ and I'm sure there are lots of people in this room concerned ‑‑ that we don't have hard data on such issues as 0 rating. We haven't been able to test 0 rating, but it would be nice to test some of the claims. We don't know the arrangements or how it operates contractually. In Europe it is maybe less important, but it is contentious in some Developing Countries. There are certain countries where it is not clear how the data is hosted or which way the money flows through the data. And so these are the types of measurements which I think are also really important, because every country that's represented in the room except Finland has got an issue with 0 rating. Maybe Norway as well. We don't have the measurement tools at our hands to be able to tell us whether 0 rating is a small or large problem, and you can't ask an unregulated company like Facebook. I can tell you, if I was Facebook and you asked for that data, I would tell you: none of your business; come up with a court order.
>> KLAUS NIEMINEN: We are talking about commercial practices. I am not sure it is a good starting point to build a measurement tool to measure them. It is an information collection exercise and you need to do an evaluation, but basically I wouldn't really call it measuring.
>> MODERATOR: Okay. So on behalf of the Brazilian Internet Steering Committee I would like to thank all who came here today, audience and speakers. Thank you also to the IGF team for providing us this opportunity. Sorry for the technical problems. And thank you for coming here.