From E-Consultation Guide

Alexandra Samuel on making consultation better

Exit interviews

Has anyone on the list done exit interviews or follow-up interviews on participants in your own e-engagement projects? That would be the kind of analysis that I think could help address this question of how to do consultation "better".

Maybe I've spent too much time with the social capital crowd, but much of my interest in the field of civic engagement has to do with the micro-level impact on participants. Does the experience of participation transform (or even affect) the opinions, values, participatory attitudes and engagement activity of people who participate in online engagement?

We won't know if or how different types of electronic engagement affect our democratic capacity unless and until we start systematically measuring the impact of participation on e-engagement participants, by doing intake, exit, and ideally also follow-up surveys. I would love to know whether the experience of participating in some sort of reasonably intensive online dialogue left participants with a greater inclination to engage in future on- or off-line civic activities -- and to follow-up a year or two later in order to find out whether that inclination translated into action. And of course we'd need to have some intake data, as well, so that we could compare the participant population to other groups of non-participants.

But I've yet to see any data that does this sort of analysis or comparison.

Evaluation resources

Peter's question sent me running to Google (where I discovered a lot of consultations on the subject of evaluation...maybe we could organize an exchange?)

It turns out that the Hewlett Foundation has funded a research project on evaluating dialogue & deliberation, jointly undertaken by the Deliberative Democracy Consortium and the National Conference on Dialogue and Deliberation (the NCDD's web site is a great resource for general consultation and deliberation issues).

The project has gathered 50 assessment tools, reports and papers, some of which will soon be available on the NCDD's resources page. To access the full set of resources immediately, you need to get into the archives of the NCDD's listserv on evaluating dialogue and deliberation (which requires registering with the NCDD site, a very quick & easy process that will be initiated when you try to access the archive). Visit the NCDD's e-mail list page and click on the "Evaluating Dialogue & Deliberation" list.

The NCDD's web site also includes a paper by Angie Boyce of the Boston Museum of Science that offers a very nice review of the evaluation literature. I've pasted some excerpts into this message, below; those who would find it useful to read the literature review in full (3 pages of a 9-page paper on "Evaluating Public Engagement: Deliberative Democracy and the Science Museum") can download the paper in Word form at [1].

The Canadian government has a report on "Evaluation and Citizen Engagement" that seems to be aimed at public servants trying to build evaluation processes into their own engagement projects. The report includes an annotated bibliography on the subject, much of it focused on "subject-centered evaluation" -- i.e. evaluation by participants.

Hope these resources are useful.


-- Alexandra Samuel