Categories
evolution science studies visuals

Did Charles Darwin use a slide rule?

One of the questions I’m interested in is how changes in the technology available to create science visualizations have affected the final products. While there are obvious differences related to the final display media- like paper and ink vs. monitors and pixels- other “hidden” technologies also play a part in creating visualizations.

Let’s look at the case of visualizations that require some sort of math to create. These include graphs and charts, but also less obviously number-based visualizations, like those showing the relationships among species.

Essentially, we base our understanding of the relationships among species upon how similar they are. Today, this means cataloguing many traits of different species, creating a table of the traits each species has, and determining what percentage of traits they have in common.* Then we compare the percent similarity between each pair of species, and use that information to construct phylogenetic trees- visualizations of the relationships among the species.
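To make the arithmetic concrete, here’s a minimal sketch in Python- the species, traits, and numbers are invented for illustration, not drawn from any real dataset or any particular software package. It simply tallies, for each pair of species, the percentage of traits on which they agree:

```python
# Toy trait table (invented for illustration): which traits each species has.
traits = {
    "bat":     {"fur", "wings", "live_birth", "echolocation"},
    "whale":   {"live_birth", "echolocation", "flippers"},
    "sparrow": {"wings", "feathers", "lays_eggs"},
    "lizard":  {"scales", "lays_eggs"},
}

# The full list of traits scored across all species.
all_traits = sorted(set().union(*traits.values()))

def percent_similarity(a, b):
    """Percent of traits on which species a and b agree (both have it, or both lack it)."""
    matches = sum((t in traits[a]) == (t in traits[b]) for t in all_traits)
    return 100.0 * matches / len(all_traits)

species = sorted(traits)
for i, a in enumerate(species):
    for b in species[i + 1:]:
        print(f"{a:8s} vs {b:8s}: {percent_similarity(a, b):5.1f}% of traits match")

# From a table like this, tree-building methods (e.g., UPGMA or neighbor-joining,
# as implemented in phylogenetics software) group the most similar species first,
# then progressively join those groups into a full phylogenetic tree.
```

Even this toy version hints at the problem of scale: the number of pairwise comparisons grows roughly with the square of the number of species, and real studies score far more traits across far more species, which is why the number-crunching now falls to computers.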

Example of a phylogenetic tree (c) Surachit CC-BY-SA 3.0

Clearly, some mathematical calculations need to be done here. And the more traits and species you are trying to deal with, the more complex the calculations become. Today’s biologists use computers- and even supercomputers- to help them crunch all the numbers that need crunching. Once that step is done, other computer programs help them construct the phylogenetic trees.

So, let’s step back in time to the early days of evolutionary biology. In Darwin’s day, this mathematical approach to evolutionary biology didn’t exist. Species were classified in a more qualitative way, but one that was still based upon similarities and differences. This approach relied more on the judgment of the individual scientist in determining which species were most closely related, rather than on a compilation of percentages. The visualizations that resulted relied less on math, and more on individual judgment and traditional conventions of design.

But does this mean that early biologists didn’t use math when creating visualizations? Not at all. Most probably used math at least to some extent to help them figure out relationships between species. And (to get a bit off-topic) they also used complex math for other aspects of biology, such as determining population densities. To help them in this task, they actually had a fair array of tools, such as logarithmic tables, Napier’s bones, slide rules, and possibly even mechanical calculators.**

Modern slide rule (c) ArnoldReinhold CC-BY-SA 3.0

But were these tools really widespread among biologists? And did the introduction of more powerful calculating devices help spur the change in biology that led to the number-heavy world of today? I suspect the answer to the latter question is yes.

So, getting at the title of this post, did Darwin use a slide rule? My research (admittedly limited, so far) hasn’t turned up any examples of this. However, I did find an autobiography of his fellow biologist, Alfred Russel Wallace, that talks about using a slide rule as a boy:

“My brother had one of these rules, which we found very useful in testing the areas of fields, which at that time we obtained by calculating the triangles into which each field was divided. To check these calculations we used the slide-rule, which at once showed if there were any error of importance in the result. This interested me, and I became expert in its use, and it also led me to the comprehension of the nature of logarithms, and of their use in various calculations.” (From Wallace, A. R. 1905. My life: A record of events and opinions. London: Chapman and Hall. Volume 1. via Charles Darwin Online)
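For readers who have never handled one: a slide rule multiplies by adding lengths proportional to logarithms, since log(xy) = log(x) + log(y). Here’s a tiny Python sketch of that principle, applied to the kind of triangle-area check Wallace describes (the measurements are invented for illustration):

```python
import math

# Invented measurements for one triangular piece of a surveyed field.
base_ft, height_ft = 124.0, 87.0

# The long-hand calculation Wallace describes checking: area = 1/2 * base * height.
exact_area = 0.5 * base_ft * height_ft

# A slide rule performs the multiplication by physically adding the logarithms
# of the two numbers (as lengths on its scales) and reading off the antilog.
slide_rule_style = 0.5 * 10 ** (math.log10(base_ft) + math.log10(height_ft))

print(f"long-hand: {exact_area:.1f} sq ft")
print(f"via logs:  {slide_rule_style:.1f} sq ft")  # agrees, though a real slide
# rule's scales only give roughly three significant figures
```

That kind of quick answer, good to a few significant figures, is exactly the check for “any error of importance” that Wallace describes.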

If Wallace used a slide rule, it’s reasonable to think Darwin might have too. And early visualizations of evolution were probably created with tools other than pens, ink, and straight brainpower.

* I’m oversimplifying this a bit, because another important thing to consider is whether shared traits come from shared ancestry rather than from convergent evolution, among other factors. Because of these factors, some of the traits in your table are more important than others, and so are “weighted” more heavily in your calculations.
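If it helps to see that weighting written out, a generic weighted similarity (my own shorthand, not the notation of any particular method) looks something like this:

$$S_{ij} = \frac{\sum_k w_k\,\delta_k(i,j)}{\sum_k w_k}$$

where $\delta_k(i,j)$ is 1 if species $i$ and $j$ match on trait $k$ (and 0 otherwise), and $w_k$ is the weight given to that trait- larger for traits thought to reflect shared ancestry, smaller for ones likely due to convergence. With all weights equal, this reduces to the simple percent similarity described above.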

** Charles Babbage of Difference Engine fame was a contemporary of Darwin’s, and they even corresponded about his calculating devices.

Categories
politics science communication science studies

The Science of Why We Don’t Believe Science

An interesting article posted yesterday at Mother Jones looks at the psychological research behind why we often believe information that agrees with our previously held beliefs, and reject information that challenges those beliefs. The article, by Chris Mooney, builds on the psychological theory of “motivated reasoning:”

The theory of motivated reasoning builds on a key insight of modern neuroscience: Reasoning is actually suffused with emotion (or what researchers often call “affect”). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we’re aware of it. That shouldn’t be surprising: Evolution required us to react very quickly to stimuli in our environment. It’s a “basic human survival skill,” explains political scientist Arthur Lupia of the University of Michigan. We push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself.

We’re not driven only by emotions, of course—we also reason, deliberate. But reasoning comes later, works slower—and even then, it doesn’t take place in an emotional vacuum. Rather, our quick-fire emotions can set us on a course of thinking that’s highly biased, especially on topics we care a great deal about.

What’s interesting (and disturbing) is that, for scientific issues that are deeply tied to our sense of identity, education based on evidence actually often makes us less likely to believe the evidence:

…one insidious aspect of motivated reasoning is that political sophisticates are prone to be more biased than those who know less about the issues. “People who have a dislike of some policy—for example, abortion—if they’re unsophisticated they can just reject it out of hand,” says Lodge. “But if they’re sophisticated, they can go one step further and start coming up with counterarguments.” These individuals are just as emotionally driven and biased as the rest of us, but they’re able to generate more and better reasons to explain why they’re right—and so their minds become harder to change.

While the article focuses on science, its findings have political and ethical implications as well. It’s a good introduction to this active area of research, with timely examples. Check it out!

Categories
evolution meetings pedagogy science communication science studies

Impressions from NARST

Earlier this week, I attended a conference of the National Association for Research in Science Teaching. I wasn’t presenting anything (missed the submission deadline), but it turned out to be fairly worthwhile. I ended up only attending two days of the conference, and focusing primarily on the digital tools/informal science sessions. I did get the chance to chat with a few people about my work and make some connections, which is always nice.

Here are just a few impressions from the conference.

  • The digital media tools used for science education seemed to fall mainly into two categories: simulations for teaching science concepts, and simulations for assessment purposes. (This is probably not a very profound observation.) The former seem to be the more ‘traditional’ tools, e.g., using racing games or pinball-esque scenarios to teach about physics. The latter are newer to me, at least, and are significant in that they represent an attempt to get away from multiple-choice tests for assessing inquiry. There were some neat ideas from both of these general categories.
  • The digital tools for informal learning were more wide-ranging, which you’d expect. There were some cool demos here; two I found interesting were FoldIt (which turns protein-folding problems into crowdsourced puzzle games) and Dancing the Earth (which uses a mixed-reality simulation to teach astronomy concepts).
  • The session that was probably most useful for me immediately was one on problems in teaching evolution. Some of the bigger conceptual issues raised here were: the challenge of linking evolutionary processes at different scales (e.g., population dynamics & speciation), teaching students to differentiate between useful and non-useful types of evidence, and difficulties with reading phylogenetic trees.
  • I also went to a session on philosophy of science, objectivity, and teaching about pseudoscience. Some of the ideas from this session would be useful if I ever did teach science again, since it was geared more toward educators. One presentation in particular stands out, on the subject of teaching science in communities that place a high level of emphasis on traditional ecological knowledge. The presenter tried to lay out a strategy that charts a middle course between immediate rejection and fuzzy acceptance of TEK, by focusing on talking about cultural technologies rather than immediately comparing philosophies. The idea seems to be to focus on areas where there’s common ground (i.e., observation, testing, and building technologies in both traditional cultures and science), rather than immediately alienating students by dismissing their culture or by dismissing science as just one specialized way of understanding the world. This is an interesting idea to think about.
  • Finally, trying to present via Skype is just asking for trouble. I attended one session (a digital media session, naturally) in which two presenters were going to present via Skype. Even though everything was clearly set up and working during the break before the session, when it came time to present, something went wrong with the sound on someone’s end. The two presenters ended up being able to give their talks, after much technical tweaking, but this did not go smoothly.

Categories
metaphor museums science studies

Would Darwin be able to get a job as a biologist today?

As part of my dissertation, I’ve been looking at several metaphors that Charles Darwin used to describe evolution and natural selection in The Origin of Species. It’s been interesting to see which have survived to the present day and which haven’t: Darwin was a great popularizer of his own ideas, but some of his images have stayed in the public eye more prominently than others. The ‘big one’ is probably the “Tree of Life,” a metaphor for evolution in its entirety, but there are several others that he used to illustrate different aspects of his theory. But I’m planning on writing more about this later…

Today, I wanted to point to this discussion, held at the Grant Museum of Zoology of the University College London, on the question of whether Darwin would have been able to get a job as a biologist today. On the surface, it’s a ridiculous question: Darwin’s body of work was the impetus for one of the most profound transformations of the field of biology that has ever occurred. His work has had an incredible impact on the way we view ourselves as a species in relation to the rest of life (and also has been misappropriated in an attempt to justify some strikingly heinous social policies). But if you dig a bit deeper into the kind of scientist he was, and the way that science as a discipline has changed, this actually is an interesting question.

Darwin in his early 30s. No, he didn't always have that famous beard. (Wikipedia)

In a nutshell, Darwin was a ‘gentleman scientist,’ naturalist, and experimentalist, who spent a lot of time observing many different aspects of the natural world. While he’s famous for observing some birds on a long ocean voyage, he also wrote about volcanoes, fossils, coral reefs, plants, chicken breeding, and compost. Today, molecular biology and lab work are where much of the biology action is, and basic natural history research is seriously underfunded in many places. There’s also a much higher degree of specialization among scientists today. So, would Darwin have been able to compete in today’s environment? Take a look at what the museum’s panelists had to say.

My guess is that he would be able to make it in biology today- he seems to have been a pretty adaptable individual, social networker, and fundraiser. But would he miss the hands-on excitement of collecting new species of insects, examining rocks, and poking around on tropical islands? I’ll bet he would.

Categories
exam readings information representation science studies visuals

Exam reading: “Framework for visual science”

I’m jumping from a cognitive science approach to visuals back to a more social & rhetorical approach with this chapter. Like my last two readings, this one provides yet another framework for analyzing scientific visuals, but the approach is pretty different (which is great, because I feel like I really need a break from the framework stuff at the moment.)

Also, I believe this has one of the longest titles of any of my readings…

Luc Pauwels. “A Theoretical Framework for Assessing Visual Representational Practices in Knowledge Building and Science Communications.” In Luc Pauwels (ed.), Visual Cultures of Science: Rethinking Representational Practices in Knowledge Building and Science, pp. 1-25. Hanover: Dartmouth College Press, 2006.

Summary: Pauwels’ aim is to establish a framework for analyzing scientific visualizations that includes: the nature of the referent, type of medium, methodology for creation, and uses of the resulting image. The nature of scientific referents falls on a continuum from material/physical to mental/conceptual: directly observable, visible with tools, non-visual phenomena, explanations of non-visual data trends, postulated phenomena and metaphors. Representations can include multiple types of referents (e.g., photo with arrows for non-visual process), and each representation expresses a reality that shapes the image’s interpretation. Illustrations should be both representative of their subject matter and valid examples of the subject (e.g., a photo of a specific bird vs. a stylized drawing of that species.) Production processes all have intertwined social, technological, and cultural aspects (affordances, conventions, and constraints.) Different referents will have “appropriate” conventions for presentation; conventions also vary with the purpose of the illustration (further analysis, teach concepts, etc.) The upshot is that representations have multiple purposes/motivations and may be interpreted differently (e.g., can be used as boundary objects.)

Comments: Scientific illustrations are less a transparent “window” than a carefully selected and stylized rhetorical presentation (though P. doesn’t use “rhetoric”.) Discusses the need for greater awareness of all aspects of his framework among scientific illustrators (and also the public)- e.g., awareness of the implications of disciplinary conventions for image format. Physical representations are inherently social objects, unlike mental representations. Visual media have one important constraint- they depict a specific example, unlike words, which can specify a range (e.g., a specific drawing of a flower vs. “this flower has 6-8 petals”)- so the viewer has to decide how significant each element of the illustration is (if they even have the awareness to judge this.) Verbal descriptions or use of conventions can help with this problem.

Links to: Kostelnick & Hassett (conventions & rhetorical uses of images); Gilbert (categories of scientific illustrations)

Categories
exam readings public participation in science research methods/philosophy science communication science studies

Exam readings: Public participation in science

Well, here they are: my last three readings for my public understanding of science reading list. After this, I’ll be spending the next week thinking ONLY about my first exam, which is coming up… And I will be presenting a paper at a conference this weekend- but more on that anon.

Anyway, here are the last three readings. These are all gray literature, but give a current overview of at least NSF’s thinking about the field of PUoS:

First: Friedman, Alan J., Sue Allen, Patricia B. Campbell, Lynn D. Dierking, Barbara N. Flagg, Cecilia Garibay, Randi Korn, Gary Silverstein, and David A. Ucko. “Framework for Evaluating Impacts of Informal Science Education Projects.” Washington, D.C.: National Science Foundation, 2008.

Summary: Report from a Natl. Science Foundation workshop on informal science education (ISE) in STEM fields; provides a framework for summative evaluation of projects that will facilitate cross-comparison. The authors identify six broad categories of impact: awareness, knowledge, and understanding; engagement or interest; attitude; behavior; skills; and “other” (project-specific impacts.) For funding purposes, proposals must outline their goals in these categories- while this won’t fully capture learning outcomes, it provides baseline information for evaluating the field of ISE. Also provides advice and suggestions, e.g., what to think about when coming up with goals, what approaches to take, how to evaluate, and how to document unexpected outcomes. It also discusses evaluation designs: NSF’s preference is for randomized experiments, but the general advice is to use the most rigorous methods available (e.g., ethnography, focus groups)- it discusses the pros and cons of various methods. Some specific considerations for ISE evaluation include participants’ different starting knowledge; assessments should be inclusive of those from different backgrounds (drawing pictures, narratives, etc.) Also discusses specific methods, potential problems, and how to assess the impact categories for various types of projects (e.g., exhibits, educational software, community programs.)

Comments: Report is targeted to researchers being funded by NSF, to help them navigate new reporting requirements for projects with a public education component. Not useful for theoretical background for my purposes, but it does give an outline of the NSF’s current thinking in this field.

Links to: Bonney et al. (use this framework for their report); Shamos (discusses different types of evaluation of scientific literacy)

Second: McCallie, Ellen, Larry Bell, Tiffany Lohwater, John H. Falk, Jane L. Lehr, Bruce V. Lewenstein, Cynthia Needham, and Ben Wiehe. “Many Experts, Many Audiences: Public Engagement with Science and Informal Science Education.” Washington, D.C.: Center for Advancement of Informal Science Education. 2009.

Summary: Study group report on public engagement with science (PES) in the context of informal science education- the focus is on describing/defining this approach. PES projects by definition should incorporate mutual discussion/learning among public and experts, facilitate empowerment/new civic skills, increase awareness of science/society interactions, and recognize multiple perspectives or domains of knowledge. This approach is most common in areas of new science or controversy; the authors mention that the idea is not to water down the science, but to bring social context into the discussion. There are two general forms of PES in informal science education (ISE) projects: “mechanisms” (mutual learning is part of the experience- blogs, discussions) and “perspectives” (no direct interaction, but recognition of multiple values- e.g., incorporating multiple perspectives into an exhibit.) They contrast this approach with two views of traditional PUoS (making knowledge more accessible/engaging): the first view (generally held by ISE practitioners) sees PUoS as a public service; the second view (generally an academic STS/science communication perspective) sees PUoS as non-empowering, based on a deficit model, and failing to recognize that the public can be critical consumers or even producers of science. PES arises from this second view: the key is that organizations must think critically about how publics and experts are positioned in interactions, and bring in “mutual learning.”

Comments: While the authors recognize that “engagement” has multiple meanings (action/behavior, learning style, overall learning, participation within a group), the PES approach is not about directly influencing public policy or the direction of research. Presumably that approach is too activist(?)- though they do mention the need to work toward using PES to affect policy/research. This report seems to take as a given that mutual dialogue between public and experts is a good thing; I’m not sure how well it would make that case to organizations that are skeptical of that approach.

Links to: Trench-“Analytical Framework” (assessment of the place of “engagement” model)

Third: Bonney, Rick, Heidi Ballard, Rebecca Jordan, Ellen McCallie, Tina Phillips, Jennifer Shirk, and Candie C. Wilderman. “Public Participation in Scientific Research: Defining the Field and Assessing Its Potential for Informal Science Education.” Washington, D.C.: Center for Advancement of Informal Science Education. 2009.

Summary: Study report on public participation in scientific research (PPSR) as part of informal science education (ISE.) History of ISE: began as public understanding of science (PUoS)- experts determined what the public should know; explanations should lead to greater knowledge, which should lead to greater appreciation. Shortcomings of PUoS are that people have greater engagement when a topic is directly relevant or interactive, and that the focus is on content delivery rather than on understanding scientific processes. PPSR projects (citizen science, volunteer monitoring, etc.) ideally lead to learning both content and process. These projects involve the public in the various stages of the scientific process to some degree. Three types: contributory (scientists design, public just gathers data), collaborative (scientists design, public helps refine, analyze, communicate), and co-created (designed by both, with at least some public participants involved in all steps.) They evaluated 10 existing projects using Friedman et al.’s rubric and found potential for PPSR projects to address all categories of impacts. Future opportunities include developing new projects (new questions, engage new audiences, test new approaches), enhancing current PPSR projects (e.g., going from contributory to collaborative or co-created), adding PPSR elements to other types of ISE projects, and enhancing research/evaluation of PPSR projects. Two final recommendations are that projects should do a better job of articulating learning goals/outcomes at the beginning, and that comprehensive evaluation methods should be developed.

Comments: This committee report offers a current assessment of PPSR projects and synthesizes recommendations for future research. Scientific literacy remains a basic individual measure in this framework, even with the emphasis on participatory interaction (in contrast to a social constructivist approach.) While the assumption is that PPSR projects do affect understanding of science, there are large challenges to assessing this, even at an individual level; part of the problem is that this type of assessment is often added post hoc.

Links to: Roth & Lee (conceptualize sci. literacy in PPSR as a communal property, not individual); Friedman et al. (framework for evaluating PPSR projects)

Categories
exam readings knowledge work politics science communication science studies

Exam readings: Science in the knowledge economy

These are both chapters from Communicating Science in Social Contexts: New Models, New Practices that put science communication into a very wide context of societal changes.

In “Representation and Deliberation: New Perspectives on Communication Among Actors in Science and Technology Innovation,” Giuseppe Pellegrini wants to reform the way democracy operates:

Summary: Pellegrini takes on the relationships between scientific experts, business, political institutions, and the public, and suggests that new governance models are needed for developing technical-scientific fields (e.g., nano, biotech, communications). He contrasts representative democracy (the public delegates decision-making to the political class, which delegates it to scientific & business experts) with deliberative democracy (participation of all interested parties.) In recent years, doubt has been cast both on scientific experts as a community of objective decision makers (e.g., scientists going into business) and on political institutions’ ability to regulate business or even remain functional (e.g., globalization, collapse of the social contract). This has been facilitated by greater communication, the speed of scientific and technological change in recent years, the end of a consequence-free perception of progress, and a new appreciation of the uncertainty inherent in science (facilitated by a conflict-driven media.) Pellegrini suggests a new view of the rights of citizens, which would include access to opportunities to participate in scientific social decision-making, and access to information about government workings (and the ability to communicate directly with decision-makers). This would expand the deliberative aspects of democracy beyond traditional voting, or the delegation of decision-making powers to elites.

Comments: Pellegrini is not clear about who will guarantee or fund these new communication rights of citizens, or how to ensure that vested interests will not attempt to manipulate the system via traditional advertising, etc. (though he acknowledges these are valid criticisms). It’s also unclear how decisions will actually be made (he’s explicitly advocating more open discussion about science-tech-society issues, not decision-making.) He does mention that not all participants’ views should count equally (so there is still a role for experts). The mention of “powerful and authoritative scientists” making society’s decisions is ironic, given the recent state of political discourse in the U.S.

With somewhat related themes, Bernard Schiele’s “On and About the Deficit Model in an Age of Free Flow” redefines scientific literacy in the “knowledge economy.”

Summary: Schiele’s view is that science has become integrated into the “information society” to such an extent that the deficit model of communication is no longer useful. Science began by openly communicating in the vernacular, but increasing specialization and the rise of professional science-communicating media separated science “producers” from “consumers.” The deficit model assumed that both science literacy and political literacy were necessary for citizens to participate in sci-tech decision-making processes. Schiele believes that the boundary between science and non-science is becoming blurred (e.g., psychology), and that the communication process is now about fostering multiple connections between science and society. He connects these changes to the knowledge economy: universities are collaborating with industry (and communicating results to the public), research is becoming more applied (problem-solving and products), and scientists are becoming replaceable knowledge workers. The public now feels able to comment on the directions research takes; it is not anti-science, but feels that “progress” is not the answer.

Comments: I’m not sure to what extent Schiele’s characterization of scientists as replaceable knowledge workers is accurate. He seems to equate expertise with the ability to marshal (publicly available) knowledge at need and adapt to different contexts (so everyone could potentially succeed in any field); I don’t think this knowledge flexibility necessarily maps to understanding how knowledge is created & interpreted within different domains. He also seems to be defining science literacy as a way of thinking about science and scientific culture, and assuming that the public is educated about science/scientific institutions (as cultural actors; not about how the scientific process works.)

Links to: Shamos (very different definition of scientific literacy)

Categories
exam readings pedagogy research methods/philosophy science studies

Exam reading: “The myth of scientific literacy”

Morris Shamos’ “The Myth of Scientific Literacy” starts off with some grim estimates of the state of scientific literacy in the U.S.: maybe 5-7% of Americans are scientifically literate- able not only to understand science terminology and know some basic facts, but also to understand how the scientific process works. While this book was published in 1995, the situation hasn’t changed much.

Summary: Shamos claims that U.S. educational policy (in many iterations) has been trying to increase general science literacy and increase the number of science-career students, and failing at both. Science is difficult because it requires a non-commonsense mode of thought; deductive/syllogistic thinking (commonsense) can lead to correct conclusions from incorrect assumptions, while science rests on a combination of deduction, induction, quantitative reasoning, and experimentation. Throughout the history of science education, there has been debate over what to teach and why; Shamos suggests three levels of sci. literacy: cultural (understand some terminology), functional (know some facts), and “true” literacy (understand the scientific process). “Science” education is generally focused on technology or natural history studies (not sci. process)- which would be OK for “science awareness,” but we would also need to add an understanding of the use of experts to assist in making societal decisions. Broad-based sci. literacy is hampered by several factors: mathematical illiteracy, lack of social incentives, science can be boring & hard to learn, and disparagement by public intellectuals (and others.) He especially cautions against movements to discredit rationalism as the best basis for relating nature to society through science.

Comments: On the use of experts: since creating a truly sci. literate citizenry is not achievable (as Shamos suggests), he proposes a system of public science experts who help make decisions in a transparent way (with citizen watchdog groups.) Overall, a wide-ranging discussion of science education, philosophy of science, and possible future models for science education (also incorporating adult ed, though he focuses on formal ed.)

Links to: Pellegrini (models of citizen-scientist expert interactions); Holton (“anti-science” forces)

Categories
exam readings research methods/philosophy science studies

Exam reading: “Science and anti-science”

In contrast to yesterday’s reading, Gerald Holton’s “Science and Anti-Science” falls on the opposite side of the empiricist-subjectivist spectrum. It’s a collection of essays written during different periods, and a fair amount of knowledge of the philosophy of science is assumed (particularly in the first two essays.)

Summary: A collection of essays on philosophy of science, beginning with the rise of positivism in the early 20th cen. and the work of Ernst Mach and the Vienna Circle. Positivism is based on rejecting metaphysics and hierarchy in favor of relying on empirically-derived data; explanations should be purely descriptive (not religious, metaphysical, mechanistic.) Three main concepts: there are no supernatural protectors, so we need to help ourselves; we have the capability to improve life for individuals and society; and in order to act we need knowledge- the sci. method is the best way to get knowledge, so science is one of the most valuable tools to improve life. While positivism was the basis of modernism, the increasing importance of relativity and probability theory introduced some philosophical elements into science; these were resisted by some researchers. Holton discusses the rhetoric of scientific papers: reliance on demonstration; dual rhetorics of assertion (of one’s own ideas) and appropriation or rejection (of others’ ideas) in communication; he describes sci. papers as a dialogue between multiple actors (e.g., author & previous researchers). He defines three types of scientific praxis: Newtonian (“basic”/seeking omniscience), Baconian (“applied”/seeking omnipotence), and Jeffersonian (a combined mode of basic research addressing a specific social problem/seeking to improve human life through understanding). Discusses differences between cyclical and linear models of human progress and how these apply to science (e.g., “science carries the seeds of its own destruction” vs. asymptotically approaching ultimate knowledge.) The final essay discusses “anti-science”: scientism (e.g., Social Darwinism), pseudoscience, superstition (New Age), misguided science (Lysenkoism). Anti-science is a complete worldview, not just an incomplete understanding of the scientific worldview. Reasons for its acceptance include sci. illiteracy, concerns with technology and global stewardship, and skepticism of authority. Advocates a “new humanism” of rationality, acceptance of uncertainty & pluralism, and the Jeffersonian model of science; discusses ways to counter “traditionalists” and “postmodernists.”

Comments: Holton’s “postmodernism” involves extreme social constructionism; many postmodernist scholars would be moderately happy with his Jeffersonian model of science (though the insistence on science as the best way of knowing about the world would not be popular.) Illustrates use of rhetoric both within science and as a means to foster a scientific worldview and counter “anti-science” in the public sphere.

Categories
exam readings research methods/philosophy science communication science studies

Exam reading: “Crafting science”

Here’s the thing. In philosophy, there is a spectrum of belief about the “reality” of the observable world. This ranges from extreme empiricism (we can only know that which we can measure with our senses, therefore science is the only way to know the world) to extreme postmodern relativism (all perception is subjective, therefore scientific observations are only as accurate as religious or philosophical notions about the world).

Debate among adherents of both philosophies (as well as those who fall somewhere in between) has occasionally been bitter (and has crept into the political realm.) I fall closer to the empirical end of the scale, though I do believe that there is room for discussion of social construction around scientific models and science as an institution.

Joan Fujimura’s “Crafting Science: Standardized Packages, Boundary Objects, and ‘Translation'” comes from a decidedly social constructionist perspective (for example, a footnote at the beginning assures the reader that she does not consider “facts” to “represent reality.”) That said, she does present an interesting way of looking at the ways in which scientific concepts are transferred among different fields (though I suspect my watered-down view of the role of social negotiation in science would seem inadequate to her.) Here is my summary:

Summary: From a social constructionist perspective, scientific knowledge is produced not by consensus or by referring to objective nature, but by negotiation and argument. Fujimura combines Latour’s notion of “translation” with Star & Griesemer’s “boundary objects” and their focus on collective negotiation in constructing scientific “facts.” Fujimura suggests that “standardized packages” of both technologies and a theory (i.e., several related boundary objects) facilitate cooperative work by acting as interfaces between different social worlds. The packages allow cross-communication (via “translation”) and cooperation between disciplines, while still letting disciplines maintain the integrity of their viewpoints. Such packages are more rigid than single boundary objects alone, because the different parts co-define one another. She uses the example of oncogenes as a recent conceptual framework for cancer research to illustrate how this process works. In this case, boundary objects include concepts (e.g., gene, cancer), databases (which create a standard language), and sequences (DNA & protein). The primary theory is “translated”/mapped onto existing problems in different fields, e.g., it links retroviruses (virology) to oncogenes (genetics), then oncogene proteins to proto-oncogenes (developmental & evolutionary biology.) When used together, shared theories and standard tools can ensure “fact stabilization.”

Comments: Provides a framework for how ideas are communicated across disciplines or among interest groups. Can this be used in a less-extreme constructionist setting? If people can argue about multiple perceptions of a thing, then that suggests that there really is a “thing” out there to argue about (in other words, I believe in reality.)

Links to: Hellsten & Nerlich (metaphors in sci. comm); Bucchi (metaphors for communication between disciplines)