2015 11 13 WS 235 Results from the First Deliberative Poll @ IGF Workshop Room 4

The following are the outputs of the real-time captioning taken during the Tenth Annual Meeting of the Internet Governance Forum (IGF) in João Pessoa, Brazil, from 10 to 13 November 2015. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

>> MAX SENGES: Good morning, everyone.  I think we have one hour, so we should get started.  There is a lot of, hopefully, interesting material and good discussion and feedback to go through.  So, welcome.  Good morning, my name is Max and I have been participating in this IGF and WSIS process since the first WSIS, and I think I have worn all the hats: first U.N., then researcher, then Civil Society, and now for the last six years I work with Google, for the last year in California with Google Research.  In that context, I got in contact with the colleagues at Stanford, Professor Jim Fishkin and Professor Diamond, who unfortunately cannot be with us today but is with us in spirit, and we are constantly working with him over the Internet.

Thank you very much for coming.  We will start with a short presentation by Professor Fishkin about both the method and the substance in terms of access for the next billion that we developed both in the online pilots we did and the session on day zero.  Let me just say a couple of words to put things in context.

I think multistakeholder Internet Governance practice and process is an emerging field and hence, we are constantly looking for new methods and instruments that can enhance the deliberations and discourse, the production of insights and information that informs the actual decision-making, especially here at the IGF, which is not a decision-making forum.  So I think this method actually brings in a Democratic element.  There has been a discussion about the lack of Democratic practices in the multistakeholder environment.  So that is why, when Professor Fishkin and Professor Diamond and myself discussed the interesting field of multistakeholder governance and the new aspects in it, we thought that Deliberative Polling would be an interesting addition.  We are here to discuss whether you agree and what we can do with this method.  This was a pilot.  Certainly we are here to learn and see what worked, what didn't work, how the method can be adapted, whether it is a valuable contribution.

I think it is important to note that we are a relatively close-knit community at this point.  We used to compare the IGF process to an Indian arranged marriage in the beginning.  It was very shy hand-holding, and after a couple of years there was a first kiss, and after 10 years I think you can say that it almost became so intimate that it's not easy for outsiders to come in and understand all the practices and relationships that have built up over the years.  So we are bringing in new blood that actually has decades of experience in Democratic practices, and Professor Diamond is an expert in consulting on how new states set up institutions and governance and government systems.

I think that really enriches our debate and I hope in that context, you see the efforts that we have put forward now and we can have an interesting debate.  We will have the presentation by Jim Fishkin first and then just to keep things a little bit structured, we thought we would start with questions and discussion about the substance of the policy theme, access for the next billion, and then we can go into questions about the process and the approach of Deliberative Polling and we are very much looking forward to your constructive feedback and how it can be applied maybe in other forums to make the results more dependable, et cetera.

With that, I hand it over to Jim Fishkin to hopefully have a good pace, not run us through too quickly, but it's a lot of information.  So Jim, I think 15, 20 minutes maximum for that discussion.  Thank you.

>> JIM FISHKIN:  Great.  Thank you.  So, the premise of this application of what we call Deliberative Polling is first that the IGF is a relevant community for these issues.  And second, that the IGF aspires to be a deliberative body.  With those two premises, Max and Eileene Donahue and Larry Diamond and I, and Alice Siu from our centre, who couldn't be here today, all thought that Deliberative Polling might add something to multistakeholder processes.

Some people said, well, if you consult an expert community rather than ordinary people -- as I will show you, most of the deliberative polls have been conducted with random samples of ordinary people -- it will never work.  Because the experts are too well informed, they also won't feel free to react as netizens rather than as representatives of their organizations.  So they won't change their views and they won't learn anything.  So nothing will happen.

Many people confidently predicted that.  However, wait and see what happened.  So, to set it up: Deliberative Polling -- I won't tell you about all of it all over the world, but we have done this in 23 countries, plus multiple times in the European Union with all 27 member countries at the time.  Now of course there are 28, but at the time 27.  Recently, we have been doing Deliberative Polling in Africa, in Ghana.  I'm just back from Ghana where we did a project in the city of Tamale.  We have done Uganda, Tanzania.  Actually, all inhabited continents have had Deliberative Polling, including several in Australia.  Even in China, it's being used for local budgeting issues.  In Japan, it charted the logic that led to the pension reform on a national basis -- pensions and other applications in Japan, China, Macau.  There are places around the world where it's been used, especially for complicated policy issues.

And so, the basic idea is that most of the time, most people are not engaged in the complexities of public policy, and social scientists have a term for this: they call it rational ignorance.  If I have one voice in thousands or millions, why should I pay a lot of attention?  So we take a good random sample of a population and we engage that sample in intensive, balanced discussions.  I say balanced because we always have an Advisory Group that vets policy options and arguments for and against.  And you may have seen our briefing document in this case, which was developed by an extensive advisory process, which is, I think, balanced in its structure and in its substance, and it is available.

We have an initial survey and then an intensive deliberation and then a final survey where we see what the opinions are at the end of the day.  Or at the end of the process.  So in this case, we thought of it as a pilot.  It is a relatively small project.  The risk of a small project is that the changes might not show up as statistically significant even if they are substantively large.

In fact, we had 59 deliberators and 243 non-participants.  First, on representativeness: there were very few significant differences between the participants and the non-participants, either in the demographics or in the policy attitudes.  We can say this because we had the questionnaires from the 243 non-participants.  In their attitudes on all the specific policy questions, and indeed on empirical premises and values and other things, only 5 of 30 items show any difference.  And on the demographics, you'll see in terms of sector, gender, education, geography that the non-participants and participants were very similar.  I'll get to that in some detail if we have time.
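
The participant/non-participant comparison described here amounts to a series of two-sample significance tests, one per demographic or attitude item.  A minimal sketch of such a test, using illustrative numbers rather than the study's actual data:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic: tests whether two groups
    (e.g. participants vs. non-participants) differ in mean."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Illustrative 0-10 attitude ratings (not the study's real data):
participants     = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6]
non_participants = [6, 6, 7, 5, 6, 7, 6, 6, 5, 7]

t = welch_t(participants, non_participants)
# |t| well below ~2.0 means no significant difference on this item,
# i.e. the participant group looks representative here.
print(round(t, 3))  # → 0.526
```

With |t| far under the conventional two-sided cutoff of about 2, an item like this would get no asterisk in the representativeness tables Fishkin refers to.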

Now, so, we actually had, for what we were calling a pilot, but I think I'm beginning to think we ought to drop the word, pilot, because it was more successful than I thought.  Let me just go through the slides.

>> MAX SENGES: If I may, quickly: non-participant means the control group, people who took the survey but were not exposed to the deliberations.

>> JIM FISHKIN:  In this case, the nonparticipants are people who could have participated.  We also have a separate sample, which we haven't analyzed yet, of people who never were invited.  So this is to analyze the question of selection bias.  We had all of these people.  We offered them an opportunity.  59 took up the opportunity and 243 did not.

So we think -- and if you look, when we get to that, at the geography and the sectors -- people said the government officials would never participate.  We actually have a slight over-representation of government officials.  A lot of things were very surprising.  Now, did they change?  The headline is that there were 13 policy options.  There were a lot of additional policy attitude questions that might help us explain the changes, but there were 13 options in the briefing document with arguments for and against.  And some of the other questions are about the arguments for and against, so we can then analyze why people changed.  We haven't, in two days, been able to do that part, but we have the basic results here.  7 of the 13 policy options changed significantly.
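
"Changed significantly" here corresponds to a paired pre/post test on each option's 0-10 ratings from the same respondents.  A hedged sketch of that computation, with invented ratings rather than the poll's data:

```python
import math

def paired_t(before, after):
    """Paired t statistic for pre- vs. post-deliberation ratings of
    one policy option (same respondents, 0-10 scale)."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Invented ratings for one option, not the poll's data:
before = [5, 6, 5, 7, 6, 5, 6, 7, 5, 6]
after  = [6, 7, 6, 8, 7, 6, 7, 8, 6, 6]

t = paired_t(before, after)
print(round(t, 2))  # → 9.0: far above ~2, so a clearly significant shift
```

The same test run on each of the 13 options is what yields the "7 of 13 significant" headline.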

So, we think we got plenty of change.  We also have knowledge questions.  We can demonstrate these people became more informed.  So even though they are experts, they learned from each other.  They learned the arguments that others offered, but there were also specific policy-relevant facts that they learned that they didn't know before, which we measured.

And then there were very strong event evaluations.  As you'll see -- and I have had qualitative, I mean informal, discussions -- they loved it, because as one person told me yesterday, instead of being talked to, she talked with other people.  The deliberation among ordinary citizens, we find, is very engaging, and the deliberation with experts turned out to be engaging as well.  I think part of it is that the experts tend to flock together when they come to these meetings, and they don't really encounter, in this kind of depth, over hours of discussion, the views of other people.

So, we think that the people actually did behave as netizens.  Otherwise they wouldn't have changed their views.  And there are changes, and we think those are the considered judgments at the end on these issues.  And some of them are quite -- the fact that there is no magic bullet on the issue of access shows the difficulty, the fact that they were really deliberating.  The root idea of deliberation is weighing competing arguments.  And these people were really weighing the trade-offs and sometimes struggling with what their preferred answer was.  And by the way, this process does not attempt to achieve any kind of consensus, in the sense that if you announce the goal of a public consensus, people feel social pressure to conform to the consensus.  No.  We get the considered judgments in confidential questionnaires, and if a consensus is there, we see it in the data.  And if it is not there, then we see the disagreements and hopefully the reasons for the disagreements.  So we are trying to get the considered judgments of these people.  We think there is also a merit even in the balanced briefing materials, because they help to clarify the issues.  And these were extensively vetted by some of the people in this room, and we are very grateful to them.

Now, on the opinion changes -- by the way, some of the things that didn't change: I would argue that that is also very significant, in the sense that suppose you have a view beforehand, but you haven't really deliberated, haven't tested it against other arguments.  If you go through this deliberative process and you happen to come out with the same view at the end, even though it registers as the same opinion, in some ways it is different, because you really struggled with the arguments on either side and confirmed your initial view.

Now, the only difficulty of a small sample is that sometimes -- we had the people rate all the policy options on the same 0-10 scale.  People came in on the low side, that is, against zero-rating, before.  And they were, by a pretty substantial margin, even more against it afterwards, by half a point.  If we had a larger sample, that would have shown up as statistically significant, I believe.  So we don't have results on that.  But I should say that one other dimension of this is that we have tape recordings of 25 hours of small group discussions.  We will look at the qualitative arguments, and maybe even code the arguments, and that will help shed light on the changes.  This is qualitative, but unlike a focus group, it is quantitatively large enough that we can combine the quantitative and the qualitative in looking at this.  So we'll understand better.  But clearly there was a struggle about zero-rating.
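
The point that a half-point shift can fail to reach significance in a small sample can be made concrete with a standard sample-size approximation.  The spread of 2 scale points used below is an assumed figure for illustration, not from the study:

```python
import math

def n_for_paired_change(sd_diff, delta, z_alpha=1.96, z_beta=0.84):
    """Approximate respondents needed for a paired test to detect a
    mean change of `delta` with 80% power at the two-sided 5% level
    (normal approximation); sd_diff is the sd of the change scores."""
    return math.ceil(((z_alpha + z_beta) * sd_diff / delta) ** 2)

# Detecting a half-point shift on the 0-10 scale, assuming change
# scores with sd ≈ 2 (an assumed figure, not measured in the poll):
print(n_for_paired_change(2.0, 0.5))  # → 126
```

Under those assumptions, roughly double the 59 deliberators would be needed, which is consistent with Fishkin's remark that a larger sample would likely have made the zero-rating shift significant.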

The advertising-funded free access option went down below the midpoint, from 5.25 on the scale to 4.3.  That's a significant change.  So, clearly they saw difficulties with that too.

Facilitate free public access by non-government institutions such as local businesses or user communities: that went up to one of the top choices.  I'm just going to whiz through these.  Encourage coordinated international action through a digital solidarity fund: that went down significantly.  People obviously had some reservations, which we'll try to explain with both the quantitative analysis -- we'll do regressions on what is causing the changes -- and the interviews.

Establish a multistakeholder clearing house to connect funders with projects for global Internet access: that was one of the highest before, but it went down significantly.  But it is still pretty high -- notice that 5 is the midpoint.  Governments should be encouraged to make best efforts to ensure access to the Internet as a right: that went down significantly, although it is still at 6.83.  It is still high, but it went down significantly.  This is an agenda for our research, to figure out why, and what the concerns were.

Place limits on intellectual property costs for smartphones and other access-enabling technology: that went up significantly.

Now, I'm not going to go into a lot of this slide.  But we have some trade-off questions -- some people think this and other people think that -- and those trade-offs will help us explain some of the changes, as well as some empirical premises, like "free public Wi-Fi access will disrupt the market for commercial Internet providers," or "government-sponsored free public Wi-Fi access will raise surveillance and monitoring concerns."  That is on a different scale.  That's reversed: it's on a scale where 1 is strongly agree.  But those were significant changes too.  This is for later, when we will look at explaining them.

Now, one other thing is, since these 13 proposals were all on the same scale, you can look, at the end of the day, after people have considered all of the pros and cons, at which were the highest ones.  Even if they went down, they may still be higher on the same scale than others.  And so, free public access by local government -- that was the highest.  And the non-government one, that was also high.  And even Internet as a right, at the end, turned out to be one of the top 5.  I'm running out of time.  So okay, knowledge questions.

We had significant gains in knowledge.  Now, you look at all of these attitudes and you see very few asterisks on the attitudinal representativeness.  The same on the demographics: gender, education, age, employment status, marital status, income, regions.  You can see that we had all the regions of the world.

Our big problem was getting the e-mail addresses from publicly-available information, and then some of the invitations going into people's spam folders.  If you assume that the distribution of spam folders and cluttered inboxes is relatively randomly distributed throughout, then we have a really reasonable case that comparing the participants and non-participants is meaningful for representativeness.  So among the non-participants, we had 12% sub-Saharan Africa.  Among the participants, we had 12% sub-Saharan Africa.  We had a few fewer Europeans -- I think some of them may not have been able to come.  We had a few more North Americans.  We had a few more Asians.  So those are the demographics.

The event evaluations.  74% thought the overall process was valuable.  82% thought participating in the small group discussions was valuable.  The large group Plenary Sessions were rated a little bit lower, but still positively.  "My group moderator provided the opportunity for everyone to participate": 86% agreed.  "The members of my group participated relatively equally in the discussions": 74%.  "My group moderator sometimes tried to influence the group with his or her own views": 91% disagreed with that, and that is what we wanted -- we wanted the moderators not to impose their own views.  That the group covered the important aspects of the issues: 68% agreed with that.  "A few members dominated the discussions": only 34% thought that.  "The members of my small group respected each other's views": 91.4% agreed with that.  "I learned insights I would like to share with my professional colleagues": almost 80% said that.  They had read the materials, and the materials were thought to be mostly or completely balanced -- if you add mostly and completely, that is 87%.

So you see here that we got -- this is a method by which we can assess, hopefully, if it's done right, the informed and representative views of the IGF community about what should be done.  These are recommendations.  It is just another voice in the process, but it is a way of getting beyond bland consensus statements.  If our sample, small as it is, is as good as we think, it's a way of getting all the voices in the IGF represented.  Now, there were some things that didn't work.  I mean, two of the experts from Developing Countries were stuck in the other Panel, the main Panel, and they didn't get to the Plenary Session.  And some of the delegates said they didn't like the fact that there were no experts from Developing Countries.  There were supposed to be, but they didn't make it to the session.

There were lots of little glitches, but basically the process adds up to a picture of a representative microcosm.  It changed its views and became more informed.  There was an atmosphere of substantive discussion and mutual respect and engagement.  And the collective results, I think, provide a basis for recommendations.  This is only a first shot.  Obviously, if we do this again in future years, we would work hard and have even broader consultations about exactly what the policy proposals should be and what the pros and cons should be, and maybe we would have a longer deliberation, maybe a larger one.

This is a picture of how this approach could enhance multistakeholder governance.  Thank you.

>> MAX SENGES: Thank you, Professor Fishkin.  I think it is important to note that, A, obviously there have been a number of similar, or at least comparable, applications of the method, and his team's analysis shows this is indeed a successful pilot: it shows relevant changes, and it worked to the degree that the method works.  And we are looking now to improve it and hear the feedback.  Marilyn, I have you on the list.  One point I wanted to quickly make, because one question is definitely about netizens and stakeholder representation and how that plays out: that is why we included the question afterwards, "do you want to share the insights with your colleagues?"  Because that basically means that even if you're from a government and you felt you couldn't really change your opinion, or you're from the Private Sector and you have to hold the party line, this question, and the strong agreement that there are things you will take to your colleagues, is an indication that maybe not immediately in the poll were you able to change your opinion, or you felt you weren't able to, but you will go back to your team.  So one of the colleagues from the Advisory Board actually said it would be interesting to see the developments over time, now that we are at the beginning of, hopefully, another 10-year period of IGFs.  And thank you, Julia, I have you on the list too.  This is what we want to talk about.

Again, I wanted to suggest that we first look into questions about the substance of the discussion on access and then look at the method.  If there is a few questions on the substance, I'm happy to go directly to the method.  But Marilyn, please go first.

>> MARILYN CADE:  Thank you.  My name is Marilyn Cade.  I am a member of the business community and I'm a member of the Multistakeholder Advisory Group appointed by the U.N. Secretary-General to advise the Secretary-General on the development of the IGF.  And I want to be sure that that is on the record.  Some of you here know me, but I think it is really important, because I have an in-depth understanding of the concerns raised within the MAG about many of the initial approaches when the Deliberative Poll was put forward, including the budget, et cetera.

However, within the MAG, although there was significant concern, I did, as a compromise, agree to explore the use of this approach in this manner as a pilot.  I come from the business sector and have extensive background in using tools of this nature, and I'm very familiar with the excellent reputation at Stanford from my days at AT&T and the AT&T Alliance.  However, I have a number of questions about this.  And the reason I took the floor before I asked my questions is that you both made some statements that I would ask you to please park until we conclude this discussion.  And I want to explain what those statements are.

I believe this sample is neither representative nor in any way broad enough.  But we need to ask more questions of you about the sample.  The second thing is that Professor Fishkin, you made a comment that you think -- and perhaps you didn't mean this, but your language in the transcript says that you think the collective results formed the basis to make recommendations.

I really strongly object to that conclusion.  And I object to it as a participant in the process.  I'm here to engage in a discussion and better understand it.  But I really strongly caution against you making that decision without hearing from those of us who are here.  And I do think the sample is very, very small.  I also have some feedback to share with you and everyone else, at the request of colleagues from business, about why they did not participate.  I'll save that for later.  I want to get to the substantive questions, but I must first make sure that we are not having this discussion with a pre-cooked idea that you have put forward as a proposal -- that we are ready to accept that the collective results form the basis to make recommendations.

I think we are still in the process of kicking the tires of understanding how this can be applied and participated in.  Thank you.

>> PANEL MEMBER: It is related, so you might want to respond to both.  I hold a totally opposite view from Marilyn's.  I'm particularly interested in the methodologies you have used, and also in applications beyond this particular one: we have a new Constitution in my country, Kenya, which was passed in 2010, and no policy, law or regulation can be passed without public participation -- but that was not defined in the Constitution.  So I find this an extremely important way of sampling and getting those opinions, because where the top opinions have seemingly dominated, this seems to go deeper.  So I'm asking that maybe you share the methodology with me later, for application in a particular intervention I have in my country.  Thank you.

>> MAX SENGES: Thank you.  Professor Fishkin, do you want to address the question of sampling?  Let me just say that it's not a pre-cooked idea.  It's an established method and we are trying to apply it here and we are here to learn and make the necessary adaptations so it works with the community and with everybody here and produces results that are valuable.  So with that.

>> JIM FISHKIN:  Well, look, we offered this as a pilot.  There were certain challenges: getting e-mail addresses from publicly-available data, and the spam filters.  But if you look at the demographics and attitudes, it came out better than most of us expected it to.  We viewed this as a pilot where our focus would be on whether people changed significantly -- and even given the small size, we had a lot of significant change -- and on whether people would learn, and we can demonstrate they became more informed.  And I think as we do some regressions with various explanatory variables and look at the transcripts, we are very likely to find that they changed in ways that can be explained, and that we can identify the reasons why they changed.

So, is it a recommendation?  It's a recommendation from this sample, and it is their considered judgments.  There are lots of recommendations.  We heard the access Panel the other day with 80 recommendations from different groups.  If particular groups can make a recommendation, I don't see any reason why a scientific sample of the IGF community -- small as it was, but, compared to the non-participants, reasonably representative -- cannot do the same.  There may also be some matching and some other things we can do if we discover any flaws in the sample, so there are some statistical corrections that can be employed.  We think it is defensible.  I'm much more enthusiastic about this after we got the results than I was before.  I was very worried about all of the critics who said the informed people would never change, that they would never behave as netizens.  So it is a pilot.

Now, we have done Deliberative Polling in Tanzania on a national basis with the Center for Global Development, very successfully.  We have done two local Deliberative Polls in Uganda, your neighbor, very successfully.  And I'm just back from Ghana, where we did one in northern Ghana, very successfully.  By the way, in those cases there was almost no non-response: almost everybody who took the initial survey participated, and this was random selection in the households.  So it works extremely well in Africa.

>> MAX SENGES: Jim, we have many contributions.  I tried to take notes.  Please let me remind you, I also want to give people the chance to ask about the substance in terms of insights for access, because I think there are a lot of questions about the normative status and the possibilities, or not, of this pilot.  Please let's also consider the access questions themselves.  I have Julia, pangwa (sp), Eileene, and then two people over there -- Dan, and then the gentleman over here.  Julia?

>> Julia:  I'm a representative of the European Parliament, but in a previous life I also studied political science, and one of my pet peeves with the multistakeholder process in the Internet Governance field has always been that there is a big sense of already knowing everything about how to structure such a process -- that if only you allow everybody to participate and come together, magically there will be substantive discussion and good results.  If we started from the assumption that professionals in these fields never change their minds, then the IGF wouldn't really have a point in the first place.  I mean, why bother talking to each other if everybody is completely settled in their positions?  And I think all the stakeholders have to recognize that if they do support the multistakeholder process, then they also accept the idea that for each group there is some room for maneuver and a possibility to learn.  So I would like to thank you first of all for doing this project, and I would like to see a continuation of it, and perhaps look at the methodological difficulties and see how we can arrive at a more representative sample in the future and encourage participation in these things.  So I would like to hear what the concerns from the business community have been; they may not have participated in such great numbers.  And I would also like to discuss the methodology in a bit more detail.  That's why I'm not going into the substance of the results: I don't know if you corrected for multiple testing, so I don't know if these significance figures can really tell us much.  And I think actually analyzing the content, the qualitative side of the discussions that have taken place, will probably give more insight into why people changed their minds than just looking at the numbers now.
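
Julia's multiple-testing point can be illustrated: when 13 options are tested simultaneously, corrections such as Holm-Bonferroni tighten the per-test threshold, so fewer results survive.  A sketch with made-up p-values, not the study's figures:

```python
def holm_significant(p_values, alpha=0.05):
    """Holm-Bonferroni step-down: which of m simultaneous tests
    (e.g. 13 policy options) stay significant after correction."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    significant = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            significant[i] = True
        else:
            break  # once one fails, every larger p-value fails too
    return significant

# Made-up p-values for 13 options (not the study's figures):
ps = [0.001, 0.2, 0.004, 0.6, 0.03, 0.01, 0.45,
      0.002, 0.08, 0.5, 0.009, 0.7, 0.04]
print(sum(holm_significant(ps)))  # → 3
```

At a raw 0.05 cut, 7 of these 13 invented p-values would count as significant; after the correction, only 3 do, which is exactly the kind of attrition Julia is asking about.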

>> MAX SENGES: Thank you.  We have a number of participants from both the online and the on-site pilots here in the room.  If you want to identify yourself when you make a contribution, I think that would add a little bit more color than just hearing from us, the organizers.  With that, Professor penwa (sp).

>> AUDIENCE MEMBER:  penwa (sp) from Singapore.  I think there are different understandings of "recommendations."  Where we are coming from, it means sort of authoritative outcomes or decisions, whereas you are coming from the point of view of an ordinary person, and here I think we are not using the same ordinary meaning of the words.  So I think if you want to be safe, in a sense, you can use "outcomes."  But having said that, actually the IGF mandate does allow recommendations; it's just that there are some issues with that.  But "outcome" would be a good operative word.

>> MAX SENGES: Thank you for that suggestion.  I think what we said on the slides was reference, a point of reference for a poll of informed positions, which is slightly different from recommendations.  Thank you penwa (sp).  The lady next to Marilyn.

>> DEIRDRE WILLIAMS: Yes, good morning.  My name is Deirdre Williams and I come from the Caribbean, and three times I was involved in your poll.  I was contacted, I think it was about May, for a one-hour Skype interview, which I was unable to do, but I answered the questions and submitted them rather late, because my life was very busy at that time.  And then during the summer, I was contacted to comment on -- I believe it was the document that came before the document that people looked at in the final process.  And then I was very surprised to be contacted again for the final process, because it seemed that probably I was ineligible, and I did point this out each time, and I was told no, no, go ahead.  So I went ahead.

I was a little surprised to find that although I was one of your participants, there aren't any women on here.  I suppose that is a typo.

>>  JIM FISHKIN:  (Off mic)

>> DEIRDRE WILLIAMS:  Females.

>> MAX SENGES: You mean gender distribution?

>> DEIRDRE WILLIAMS: It gives the male statistic and not the female one.  My island isn't on here, which I pointed out.  In fact, not just my island -- a whole lot of islands.  I don't know, maybe we are too little to count?  But I don't think we are too little to count.  And that is why I came here today.  Thank you.

>> MAX SENGES: Thank you for that contribution.  Eileene, do you want to comment next?

>> Eileene:  Maybe I'll take that up directly, because as somebody who has worked in the digital rights, cybersecurity and Internet Governance space, I was immediately attracted to this project as a potential tool to expand the pool of people who might give input.  So this pilot, without a doubt, has had many glitches.  Let's just call them glitches.

However, I believe it has a tremendous amount of potential for the following reasons.  Number 1, deliberation is good, especially with respect to very complex policy options in the Internet Governance space.  Number 2, getting input from new people is good.  The approach has the potential to be replicated in many different communities, on many different issues, in many places around the world, and to actually get to people who are unable to even attend the IGF.

In addition, for people who do attend the IGF, I have heard from activists that one of the issues is the inability to actually engage in conversation.  There are too many panels of experts.  And we want to share our views from around the world when we actually make it here.  This tool had that potential, and I think those who were satisfied with the experience may have been representing that view, that it gave people a chance to engage in dialogue.

It is essentially a capacity-building tool.  And then last but not least, going back to the earlier comment, I would say that it is dangerous to overclaim the implications of the results here in terms of dictating any policy outcomes.  I would say, getting to the language question, no one is suggesting that the outcome should dictate policy outcomes.

All it is, is a snapshot of the views of the people who participated.  And then, you have to evaluate who are those people?  And is it possible for those people to potentially feed their views into policy outcomes?

Last thing is, the IGF gives people an opportunity to talk, but many people over the years have talked about the need to have influence and have voices heard in the policy discussions that are happening by default outside of this event.  This is a tool for that.  The NETmundial event produced an outcome document of principles.  Many people have said, we are faltering on implementation.  This is a tool for that.  That's what it is.  It's a tool.

>> MAX SENGES: Thank you, Eileene.  The gentleman next to you?

>> JIM PRENDERGAST:  Hi, my name is Jim Prendergast.  I have a process question.  But I think it helps me not necessarily react to the policy answers but to think about how they were derived.  And I apologize -- I wasn't at the sessions early in the week because, as everybody knows, there are tons of conflicts and all of that.  The 59 people are the participants.  Are those the people who fully completed all the different cycles?  So doing the initial response, doing the five-hour remote session, doing the plenary, and then filling out the response towards the end of the week?

>> JIM FISHKIN:  We gave people an opportunity to do the four-to-five-hour deliberation either in an online session or face-to-face.  So these are people who completed the before and after questionnaires and did the deliberation in one of the two modes.  Eventually, if we had had more access to addresses -- at some point I'd like to do an experiment where we randomly assign people to the online version and the face-to-face version and compare them.  But most of the participants wanted to do face-to-face.

>> JIM PRENDERGAST: And then on the 243, those are folks who started the process at some point but didn't --

>> JIM FISHKIN:  Those are folks who took the initial questionnaire but did not take up the invitation to deliberate further -- many of them for practical reasons.

>> JIM PRENDERGAST: Yes, budgeting five hours was kind of difficult.

>> JIM FISHKIN:  And we didn't pay any incentives.  We didn't give them the day zero hotel.  This was a low budget operation.

>> JIM PRENDERGAST: Having done dozens and dozens of political surveys, the fact that you're able to get people to commit to a five-hour session is remarkable.  I can't even get people to do a three-minute survey.  I will agree with what the member of the European Parliament said about the verbatims: I think there probably is a lot of interesting feedback in there that could be useful.

One other question just on the numbers.  How many people did you invite to participate in the first place?

>> JIM FISHKIN:  We sent out -- I don't have that exactly, but it was more than one thousand.

>> JIM PRENDERGAST: 305?

>> JIM FISHKIN:  No, no.  The problem is we were doing this in waves, in replicates, and we were trying to stratify it by sector, region, and the rest of it.  And we were discovering, when we were using SurveyMonkey, that a lot of the e-mails were never opened.  And so we kept adding more and more people.  We will have all of that in a final report.  But we know that a lot of the e-mails were never opened.  We only began to have success when Vint Cerf very kindly agreed to have the e-mails sent in his name.  Because we didn't know whether things were going into spam folders or whether people were just not interested.  But once Vint was e-mailing them, we had a much higher rate of people opening the e-mails.  So actually, one of the things I want to do is look at the distribution of opened e-mails and how that compares.

And I also want to look at the demographics of our participants compared to any information we can get about the IGF community.

>> JIM PRENDERGAST: One last question.  The process as a whole, the movement of opinions over the course of a week.  Is there the potential -- I'm not saying there is or isn't, but is there the potential for, I don't want to say gaming of it.  But you give your feedback and evaluation in an insulated environment prior to coming here.  And then you as a stakeholder group get together and say, hey, I was part of this Deliberative Poll and they were asking these questions.  Somebody mentioned earlier, we need to hold the party line.  Is there the potential for that playing out over the course of the --

>> JIM FISHKIN:  Perhaps if it were viewed as consequential we would have that problem.  One of the things I'd like to do, if we expand this, would be to make it a fully controlled experiment and have a pre and post for IGF participants in general, and then a sample that is specifically invited to deliberate, and compare the deliberative experience of those people with the experience of others.

We have never actually had, in any context, people gaming the results, although I have rational choice colleagues who have modeled what that would look like.  But it never happened.  I think anybody who listened to those discussions or viewed the transcripts would think these people were deliberating sincerely -- maybe not every single person, but overwhelmingly they were weighing the merits of the policies.  But in theory that could happen.

>> MAX SENGES:  Thank you, Jim.  I have Dan, the gentleman over here, Edmond, and Julia, and if I may, we are really also looking for feedback.  This is a pilot, and we will explain the methodology and everything in the paper and the publications that are forthcoming.  To your question: is online a better possibility than taking time out of the schedules here at the IGF?  Is it maybe good to offer it as a tool for workshop organizers to see whether their workshop changed people's opinions?  There are a number of ways this might work or might not work.  We would love to hear from the community what you think.  And I'll add you to the list.  Please keep your comments very short.  One, maximum two, minutes.  We are running out of time.  Thank you.

>> DAN WARNER: I'm a T.V. guy so I'll be very, very short.  Dan Warner from Public Television in the United States.  I have been the producer of many of the polls, and I came here to observe the implementation of this one.  I want to go back to the first point that was made, because I think this is not semantics.  I work for MacNeil/Lehrer Productions.  We don't tell people what to think under any circumstances.  My deep involvement with the Deliberative Poll has convinced me that it is a fantastic form of consultation.  Not recommendation.  Recommendation has a sense of, we are telling you what to do, therefore you haven't taken our recommendation, especially in a universe with as many players as this.  I think the value of the Deliberative Poll is that, once all the issues are worked out, it can be a very effective form of consultation, where players who aren't necessarily heard in the IGF process, along with the players who are heard, can share views and there can be a result.  But I would delete the word recommendation and replace it with the word consultation, because I think that is what it is -- and it's heard.  Of all the deliberative polls I have done, that's it.

>> MAX SENGES: Thank you.  It was a long list.

>> AUDIENCE MEMBER: That's all right.  Garland McCoy, technology education.  Just a couple of minor things, but it goes to something that seems to be embedded throughout this.  I participated in at least the poll part.  I unfortunately didn't have the bandwidth to do the longer session.  For example, you're talking about establishing funds within and between countries to increase access to the Internet.  And this is just one, but there are several examples through here that I marked up.  For example, in Kenya, we worked to get the cable in and everything, but one of the things we worked on was cross-border data and also roaming.  You have this now moving throughout Africa, where hopefully the goal is you'll have one Africa in terms of free roaming, no roaming fees.  But that is not so much a funding issue; a collective action issue, yes.  Governments do work together, but not so much by pooling their funds.  It's simply ratcheting down, eliminating a regulation.  So in a lot of instances it doesn't always revolve around funds.  Sometimes you can do a tremendous amount of good looking at what, in terms of collective action, can be done in a region, that would lower existing regulatory barriers so that both Private Sector and NGO assets get deployed, and this sort of thing.  But that is just one of several instances.

>> MAX SENGES: I think that is the deliberation that is happening in the process.  For the briefing, all I can say is we had an Advisory Board.  We had several sessions, over several days, of people who are considered real experts in this field, to look at the materials, and by no means are they the final or complete thing.  I think they were a very good consolidated pro and contra weighing.  But by all means.  Thank you for that.

>> JIM FISHKIN:  If we do another round, we should talk to you and others about clarifying further additional options or refining the options to get at this question.

>> MAX SENGES: Edmond.

>> EDMOND CHUNG: Edmond Chung here.  I did participate in the process.  I filled in the survey, and I wanted to respond to Jim's point.  I guess, yes, I admit that after deliberations I made changes to try to move the needle.  That is one of the things that you're mentioning.  But, reflecting on it, that is a result of the deliberation, I think, much more than gaming the system.  Why?  Because we weren't told what the scores were before we entered into deliberations.  So we had no way, at least the participants had no way -- I thought.  At least in my experience, I didn't know the scores before.  I only see the scores now.  Right?  So I didn't know which ones I needed to try to really move the needle on.  Except that during the deliberations there would be certain things that I felt more strongly about, and maybe I wanted to make the change a little bit more dramatic.  And there were things where I thought more people already shared my view, so I probably made a less dramatic change in scoring.  And I guess that is natural.  But without having the pre-scores, at least on my reflection, the gaming factor is not that large.  However, when it does become a quote/unquote recommendation, then that activity will probably manifest itself a little bit more.  But that's my point.

>> JIM FISHKIN:  I want to ask: at the end of the day, when you filled out the questionnaire, did you tell us what you sincerely believed?

>> EDMOND CHUNG:  Yes.  But as I mentioned, it is changed by the deliberation, and sometimes when there are certain views I feel very strongly about, I might just inch it over from 8 to 9, right?  That is the human nature of things.  But maybe that is just me.

>> MAX SENGES: We did hear that the scale from 1-10 was good, because people didn't remember whether they had said yes or no and needed to shift.  They really felt and answered the questions on a scale where each was a new question, a new consideration of the same question.  Julia.

>> JULIA:  I'm one of the people who read the Vint Cerf e-mail and then didn't participate.  I think there are probably different ways of expanding the sample or getting it to be more representative, that is, to respond to the needs of the different stakeholder groups.  So I have a bit of concern hearing, well, I don't think the sample is large enough so therefore perhaps we shouldn't fund it, because of course for some stakeholder groups, funding to actually participate would be really important -- the stakeholder groups that have a lot of time but not a lot of money.  In my case, it would be a different thing.  You had an overrepresentation of government stakeholders, but probably not a lot from the parliamentarian side, rather from administration and ministries.  In my case, I would have been able to participate if I had read that invitation six months before it took place, because I can make room for a five-hour session, but not with two weeks' notice.  I was actually in the U.S., but I would have had to change flights and so on.

>> MAX SENGES: Thank you, Julia.  In the interest of time.

>> PANEL MEMBER: I'm Shirley from Tunisia.  The way I see deliberation, I like to see it as a system -- not as a rigid system, but asking how one particular deliberation could have a way forward.  What is next after this deliberation?  So, I was thinking that probably we let it flow and see what happens after this deliberation: whether the participants will be willing to spread the word from here, or probably Governments, any Governments, would like to take the results from here, or probably we could have it the other way, carrying this discussion through other IGF sessions or workshops, or having these results presented at another workshop, where okay, this is the Deliberative Polling.  So I was just wondering about the ways forward for this deliberation result?

>> MAX SENGES: Thank you very much.  And we have room for one more contribution.

>> AUDIENCE MEMBER: I'll be very quick.  I am just wondering: after we went into group discussions, we then had a big discussion like this one, and there were some key issues that were discussed there.  Would it be possible to have a way of sensing, of getting an indication of, whether people's opinions shifted after such a big event?  Because I do remember there were certain pertinent key issues and discussions at the roundtable.  It would be good to be able to tell, in the marketplace of ideas, whether there has been an influence, or whether certain issues being discussed reflect the general opinion of the constituents.  Thank you.

>> MAX SENGES: Thank you very much.  And that is another thought.  Is the IGF itself a big deliberation place, and should we test before and after the IGF?  How does the IGF as a whole change people's opinions?  I know Professor Fishkin is critical of this idea because it's not a controlled circumstance.  There is so much going on here that it is a different point.  And I'm sorry, you had your hand up much earlier.  So if it is okay, please try to be as short as possible.

>> AUDIENCE MEMBER: It's just feedback.  Caterina, working at CIGBR (?), and I was one of the participants, and I saw the e-mail of Vint Cerf.  Just feedback about the process.  Our group, after the plenary, didn't really discuss anymore.  I think you were in my group, right?  Anyway, we discussed a lot before, but after the plenary, most of the group thought, okay, we were waiting for a more technical approach rather than the discussion itself.  Just feedback that maybe it can be changed a little bit or talked through further.  And I have a question.  Are you doing anything academically with this poll?  What are the next steps?  Doing a paper?  Or just a combination?

>> MAX SENGES: Thank you very much for that question.  We are indeed planning to do the proper analysis and then publish a paper on the piece.  Equally important, we are thinking about how to move it forward.  We are indeed, as organizers, very much encouraged by the experience here and the feedback that we got.  Thank you very much also for the critical feedback, which in some cases we might respectfully disagree with; on some of the points it is really important to bring this out and to listen and weigh the arguments for and against the Deliberative Poll.  This is always a deliberation about how to make it most valuable, and in the end, the practice will show whether it is indeed a useful tool or not, by acceptance and the offerings that we can make.

It leaves me to say thank you to all of you who came here, and to say thank you to the people who participated in the poll, to the people who helped prepare the briefing materials, to all the organizers, and to the team itself, who worked really tirelessly to make this work under not-easy circumstances.  With pilots, the first time you do something is always the most difficult.  So I think we are looking forward to a long way forward that hopefully leads to lots of value for the multistakeholder community and the Internet Governance process.  Jim, do you want to say a couple of last words?

>> JIM FISHKIN:  Yes.  I don't care about the word recommendations.  I'm used to making recommendations that nobody follows all the time.  Whatever you call it, what I do insist is that these are the considered judgments of the sample.  So then you evaluate the sample, and people can take the considered judgments for what they are.  And I think this world of the IGF is filled with recommendations.  But I don't want to get stuck on a word.  I think that if at some point we did a controlled experiment, where we had a larger sample of people who didn't take the poll, with a before and after, and we had the deliberators and compared, it would be very interesting.  We will certainly analyze the viability of both the online and the face-to-face versions.  And the more I get involved in this space, the more I sense from many participants talking to me a kind of hunger for deliberation, in the sense of serious discussion of policy options in a context where their voice will be listened to.  That doesn't mean their voice will determine policy.  But their voice will be listened to.  And I have to tell you, for a pilot where we had so many difficulties with getting the e-mail addresses and the spam folders and the rest of it, the data turned out to be, I think, credible.  I say that on first analysis.  Highly credible.  But we will analyze in much greater detail, and we will publish, and we will offer the views of the sample.  After all, a lot of people did a lot of hard work deliberating.  It's exhausting.  They did a lot of hard work to think through these issues, and we have a responsibility to disseminate their views.  They are considered judgments in the end.  Call them considered judgments rather than recommendations; some people can take them as recommendations.
Anyway, thank you to the many people in this room who helped either as participants or advisors, and if we didn't get the policy options exactly right, we'll consult further and start earlier, and we'll try to get them even better.  But most people who saw this document said it was pretty good as a first draft.  It was pretty good.  And we can do better.  Now if you want to see more about Deliberative Polling with ordinary people -- and you people are extraordinary, but ordinary people -- go to the Center for Deliberative Democracy site on the Stanford website and see where we have done this in many places.  Who knows, maybe some day we'll do it in Kenya.  Thank you.

>> MAX SENGES: Thank you.  We are here to answer any questions, and I'll involve you in any way, shape, or form you would like to be involved in the follow-up.  Thank you.

>> AUDIENCE MEMBER: Is this electronically available anywhere yet?

>> MAX SENGES: We will make it available.  Obviously we couldn't before because of the materials.

>> AUDIENCE MEMBER: Thank you.