In my final post of material from the Handbook of Public Communication of Science and Technology, I’ll cover two chapters on research into the public understanding of science. First, Martin Bauer on public survey research:
Summary: Bauer traces the history of research into PUoS and outlines three eras of research paradigms. From the 1960s to the 1980s, the focus was on “science literacy”: basic literacy and civic competence (facts and methods, appreciation of science, rejection of superstition). Research assumed a public knowledge deficit and measured knowledge and attitudes. Critiques centered on the lack of emphasis on trust issues, the definition of “superstition,” whether literacy is a continuum or a threshold, and the focus on facts and process rather than knowledge in context. From the mid-1980s to the mid-1990s, “PUoS” foregrounded a public attitude deficit and assumed that more knowledge would lead to more positive attitudes toward science. Research rested on one of two assumptions: “rationalist” (people need knowledge and training and will then evaluate scientific issues rationally) or “realist” (people decide emotionally, so research took a market-research approach). Critiques: the relationships among interest, attitude, and knowledge are not clear, and positive attitudes are not correlated with knowledge. From the mid-1990s to the present, the focus has been on “science in society”: the public’s lack of trust in scientific experts. This research treats science as a single sector of society, assumes that the decline in public trust produces a skeptical but informed public, urges greater public involvement in policy, and generally takes an interventionist stance. Critiques of this paradigm center on the time-consuming nature of focus groups and ethnographies, the creation of a new professional class of evaluators, and the need to return to PUoS measures anyway to see whether focus groups actually have any effect on participants.
Comments: Bauer frames these three research paradigms as discourses surrounding research, not as complete shifts in research methods (e.g., old methods still have relevance). Though the field as a whole has shifted, different methods are probably still useful in particular circumstances. An alternative successor to PUoS is “public participation in science,” which aims to include people directly in research rather than focusing on P.R. and trust issues.
Links to: Cornell Lab papers (public participation as an alternative to trust-focused research)
And last, Federico Neresini and Giuseppe Pellegrini on evaluating communication efforts:
Summary: The authors lay out what seems to be a common-sense approach to evaluating the results of communication efforts: clearly state objectives at the outset and evaluate against them, use both quantitative and qualitative methods, and plan for evaluation from the beginning. They acknowledge that when evaluation becomes structured and formal, it takes on a political aspect. They also recommend matching methods to the communication model you’re operating under (e.g., deficit model: evaluate the public; dialogue model: evaluate all actors in the dialogue). They cover the types of evaluation appropriate to each project phase (assessing the ability to meet objectives, formative evaluation, summative evaluation). For communication, the goal is to establish the extent and nature of change in the audience (or in the audience plus communicators plus other actors in the discussion). Changes can occur in knowledge, attitudes, mental models, and behavior, and different methods are appropriate for measuring different types of change. They also discuss experimental design issues: correlation vs. cause-and-effect, pre/post surveys biasing participants, deference to interviewers, and short-term vs. long-term effects.
Comments: Evaluation of communication results is apparently a controversial subject in this field (according to the authors), but their discussion of methods and pitfalls seems reasonable to me. It’s a fair review of this type of material.