Categories
discourse community/community of practice, exam readings, networks

Exam readings: what makes an online community?

“Community” is a much-discussed term in the online realm: are online communities really communities? If so, how do they work (e.g., language-based, activity-based, social network-based)? And what is a community, anyway (village, workplace, civic organization, fandom)?

One of the issues that’s come up in my readings is how varying definitions of what a community is interact with different frameworks for how learning occurs. For example, the “communities of practice” framework says that people learn as an effect of the process of becoming members of a community. How a concept like this intersects with online communities, without the kind of face-to-face interactions that characterize traditional communities, is an interesting question. These three readings touch on this in varying ways.

Steven Brint. “Gemeinschaft Revisited: A Critique and Reconstruction of the Community Concept.” Sociological Theory 19(1): 1-23, 2001.

Summary: The concept of “community” is fuzzy and has fallen out of favor in sociology, replaced by oversimplified ideas of “interaction rituals,” social networks (focus on material benefits to participants), and social capital (focus on motives). Brint proposes a new definition of community: an aggregate of people with common activities & beliefs, bound together principally by values, concerns, affect & loyalty. The motives for interaction are central: rational or financial motives can play a part, but the ones listed are primary, so work or interest groups/clubs bound mainly by rational means are not communities. Groups can be any size, and they can be dispersed. He provides a framework for differentiating subtypes at different levels of interaction: 1) ultimate context (geographic or choice-based), 2) primary motivation (activity- or belief-based), and 3) either frequency of interaction (for geographic communities) or location of other members (for choice communities; dispersed groups here get a fourth level of interaction, depending on whether they ever meet in person). The key is that these organizational features predict organization & “climate” features of the different types of communities (though he states that these are hypotheses), e.g., monitoring, levels of investment, pressure for conformity. However, there are factors of environmental context (e.g., geography, tolerance as a norm) and community-building (e.g., hazing, meeting places, enforced appearance) that will also be important in shaping communities.

Comments: Discusses the implications of this framework for liberal vs. socialist models of community; suggests that “community” persists as an ideal even though our typical experiences of it tend to be non-egalitarian and non-validating. However, he speculates that in virtual or “imaginary” communities, experiences come closer to this ideal of egalitarian & validating community; perhaps these communities will be freer of vice and less judgmental of members. I’m not sure how this last idea really holds up in online communities: there’s certainly a lot of demonizing of the “other” that goes on online…

Links to: Lave & Wenger (community/participatory knowledge)

Molly M. Wasko, Samer Faraj, and Robin Teigland. “Collective Action and Knowledge Contribution in Electronic Networks of Practice.” Journal of the Association for Information Systems 5(11-12): 494-513, 2004.

Summary: The authors describe a model for “electronic networks of practice” (ENPs): informal groups that primarily exchange information online. They call ENPs a special case of communities of practice, in which there are no formal controls and participation isn’t face to face (communities of practice would lie at the other end of a continuum of such groups). One distinction in ENPs is that individuals’ use of collective knowledge is “non-rival” and “non-excludable” (which means individuals can “free-ride” on others’ contributions). In their model, macrostructural properties (e.g., medium of communication, network size and access) determine structural ties (generalized patterns of exchange, generally non-reciprocal between individuals). Structural ties affect the relational strength of ties (e.g., obligation, identification, trust between individuals and the network as a whole); these influence the creation of shared understanding and community norms. Relational strength of ties affects both social controls (reputation, status, flaming, shunning, banning) and knowledge contribution. Knowledge contribution both influences and is influenced by individual motivations and resources; it also feeds back onto structural ties (this is the mechanism for re-creating, strengthening, and expanding the network).
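
To keep the causal chain in this model straight, here’s a rough sketch of the links as a directed graph in Python; the construct names are my paraphrases of the summary above, not the authors’ own formalization or variable names.

# Rough sketch of the causal links in the electronic-network-of-practice model
# as summarized above (my paraphrase of the constructs, not the authors'
# formalization).
model_links = {
    "macrostructural properties": ["structural ties"],
    "structural ties": ["relational strength of ties"],
    "relational strength of ties": ["social controls", "knowledge contribution"],
    "individual motivations & resources": ["knowledge contribution"],
    # knowledge contribution feeds back onto motivations and onto structural
    # ties: the mechanism for re-creating and expanding the network
    "knowledge contribution": ["individual motivations & resources", "structural ties"],
}

def downstream(construct, links, seen=None):
    """Everything a construct eventually influences, following the arrows."""
    seen = set() if seen is None else seen
    for target in links.get(construct, []):
        if target not in seen:
            seen.add(target)
            downstream(target, links, seen)
    return seen

print(downstream("macrostructural properties", model_links))
# reaches every other construct, which is the point of the feedback loop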

Comments: The authors end with a discussion of the model’s limitations (e.g., the need for modifications if there are F2F interactions, formal incentives for participation, or reciprocal relationships that develop over time). They also see a need for looking at individual roles: many times, an active core of participants does most of the work.

Links to: Lave & Wenger (communities of practice); Preece & Shneiderman (discussion of the process of enrollment & individual participation)

Jennifer Preece and Ben Shneiderman. “The Reader-to-Leader Framework: Motivating Technology-Mediated Social Participation.” AIS Transactions on Human-Computer Interaction 1(1): 13-32, 2009.

Summary: Describes how people get involved in social media by gradually increasing the extent of their participation. The authors’ framework tries to incorporate various related areas of research, with the goal of providing a unifying framework for future work. At each of the successive stages of participation (reader, contributor, collaborator, leader), the number of participants decreases; people can also jump stages, move backwards, or terminate participation. Readers can be attracted with ads and word of mouth; good interface design and reading user-generated content keep them coming back. Contributors add to the communal effort without the intention of getting too involved. Reputation systems (with communal ranking or tagging) and the ethos garnered from association with credible figures help drive increasing participation. Collaborators develop common ground with others and work on mutual creations (short- or long-term); satisfying discussions, building social capital, and collectivism are all contributing factors. Leaders promote, mentor, and set policy; they need good editing & synthesis tools, recognition, and opportunities to contribute meaningfully. Well-defined and focused groups are likely to have stronger group identity & participation. The final section focuses on future research needs (e.g., research at each stage, metrics for assessment).
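
As a way of visualizing the stage structure, here’s a minimal sketch (my own illustration, not anything from the paper) of the framework as a loose state machine, capturing the point that participants can jump stages, move backwards, or leave:

# Illustrative sketch (mine, not from Preece & Shneiderman) of the
# Reader-to-Leader stages as a loose state machine: participants usually move
# up one stage at a time, but can jump stages, move backwards, or drop out.
from enum import Enum

class Stage(Enum):
    READER = 1
    CONTRIBUTOR = 2
    COLLABORATOR = 3
    LEADER = 4
    DEPARTED = 0  # terminated participation

def possible_moves(current: Stage) -> set:
    """From any active stage, every other stage (or departure) is reachable."""
    if current is Stage.DEPARTED:
        return set()
    return {s for s in Stage if s is not current}

# e.g. a READER can become a CONTRIBUTOR, COLLABORATOR, or LEADER, or depart
print(sorted(s.name for s in possible_moves(Stage.READER)))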

Comments: Authors suggest using data logging/tracking for research, which has ethical implications. Also suggest that young people care less about privacy; I’m not sure whether this is a generational shift or just young people being dumb.

Links to: Lave & Wenger (LPP); Von Ahn & Dabbish (getting people involved with GWAPs); Howe (crowdsourcing); Brint (discusses online communities)

Categories
exam readings, networks, pedagogy, tech design

Exam readings: networked tech and STEM learning

Two related readings on networked technologies and science: the first an NSF task force report on cyberlearning, and the second on “collaboratories,” i.e., collaborative laboratories.

Christine L. Borgman, Hal Abelson, Lee Dirks, Roberta Johnson, Kenneth R. Koedinger, Marcia C. Linn, Clifford A. Lynch, Diana G. Oblinger, Roy D. Pea, Katie Salen, Marshall S. Smith, and Alex Szalay. “Fostering Learning in the Networked World: The Cyberlearning Opportunity and Challenge.” Washington, DC: National Science Foundation, 2008.

Summary: A task force report designed to give NSF guidance on cyberlearning: “networked computing and communication technology to support learning.” The focus is on using cyberlearning to support STEM education in a lifelong, customized setting, redistributing learning over space & time. The authors believe the potential is high now because of new technologies, increased understanding of learning processes, and demand for solutions to educational problems. Some examples: Web technologies breaking down location barriers, open & multimedia educational resources, new technologies making learning affordable & accessible, cloud computing, customizable content, and an enthusiastic audience (though schools aren’t up to speed on digital technologies). Key potential problems: responsible data use/data overload, scaling technologies for large communities, and how to apply software & other resources. Several issues require action: data management, guaranteeing open/accessible resources, and an NSF strategy of funding projects that produce resources for both education & research. They make five main recommendations, including a “platform perspective” (shared & interoperable designs) and a requirement that the resources developed be open & freely shared.

Comments: Apparently, while the public doesn’t respect education, we do like electronic gadgets, so the idea is to use these to educate people. This reference will mainly be useful for giving me a sense of the state of the field. They do ask one question that’s interesting: should we train people to work in interdisciplinary teams, or increase the versatility of individuals? (The trend seems to be toward working in teams…)

Links to: Finholt (collaboratories)

Thomas Finholt. “Collaboratories.” Annual Review of Information Science and Technology 36(1): 73-107, 2002.

Summary: “Collaboratories” are collaborative laboratories, or “labs without walls”; joint scientific work has historically depended on physical proximity, especially science with large instruments (or specific study sites). While one answer has been residencies, the problems of this structure have remained, primarily barriers to access and to conducting research. Science has been moving toward large, complex, distributed projects, which can be considered a type of distributed intelligence. Collaboratories require two types of IT: increased communication plus better access to instruments and data (data-sharing/data-visualization tools, remote-use instruments). Finholt discusses the history of such projects, from the “memex” concept & ARPAnet to current projects in various disciplines. These still involve a small number of participants; libraries and datasets see more use. Other lessons: people can use them sporadically and still be useful, easily integrated software is more readily accepted (e.g., web-based), some types of activity are naturally better suited to collaboratories (data collection vs. idea generation), and there are new expectations for participants. Challenges: moving from shared space to virtual space introduces new demands: implicit interactions must be made explicit (e.g., pointing, gaze detection), and willingness to collaborate and adopt tools is also an issue.

Comments: Points out that increased communication can lead to Balkanization (social exclusivity) as well as broader communication; benefits are highest for students & non-elite scientists, and a drawback might be these projects becoming pools for marginalized scientists (e.g., e-journals have lower status). F2F interactions are still crucial for establishing contacts, and meetings are still important; these projects will augment, rather than replace, current practice.

Links to: Howe (crowdsourcing)

Categories
exam readings, networks, tech design

Exam reading: “Games with a purpose”

People teaching computers to do stuff. Actually, I’ve played one of these games (though a one-person variety). It was kind of fun.

Luis Von Ahn and Laura Dabbish. “Designing Games With A Purpose.” Communications of the ACM 51(8): 58-67, 2008.

Summary: Games with a purpose (GWAPs) involve people performing tasks that can’t be automated, e.g., image tagging, collecting facts, etc. They’re related to the open-source software movement, non-game crowdsourcing, and game-like interfaces in business apps. The authors first describe three categories for these games: 1) “output-agreement”: players see the same input & must produce the same output (e.g., give the same label to a photo); 2) “inversion-problem”: one player describes something, the other guesses it (similar to 20 Questions); 3) “input-agreement”: players are given inputs and must describe them to determine whether they have the same input or not (e.g., both describe a music clip and then guess whether the other player’s clip matches theirs). For enjoyable play, features need to be added to these templates: time limits, scorekeeping, high-score lists, randomness, leveling up. They describe mechanisms to guard against player collusion, e.g., cross-checking and random matching of players; games can also be modified for n ≠ 2 players. Games are evaluated by “expected contribution” = throughput × enjoyability (measured as average lifetime play); this doesn’t capture popularity or word of mouth. They point out that their examples focus on similarity/matching; a different template is needed for gathering diverse responses.

The goal of GWAPs is to capture large datasets for developing programs with advanced perceptual capabilities.
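
Since the evaluation metric is just a product of two measurable quantities, a quick sketch makes it concrete; the game names and numbers below are invented for illustration, not figures from the paper.

# Minimal sketch of the evaluation metric described in the summary above:
# expected contribution = throughput (problem instances solved per human-hour
# of play) x average lifetime play (total hours a player spends on the game).
# The game names and numbers are hypothetical, not from Von Ahn & Dabbish.

def expected_contribution(throughput_per_hour: float, avg_lifetime_play_hours: float) -> float:
    """Expected number of problem instances a single player contributes."""
    return throughput_per_hour * avg_lifetime_play_hours

hypothetical_games = {
    "photo labeling (output-agreement)": (200.0, 0.5),
    "object location (inversion-problem)": (150.0, 0.4),
    "music matching (input-agreement)": (100.0, 0.8),
}

for name, (throughput, alp) in hypothetical_games.items():
    print(f"{name}: ~{expected_contribution(throughput, alp):.0f} instances per player")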

Comments: The focus here is on machine learning, rather than human learning (“useful computation as a side effect of enjoyable game play,” p. 61), but one could potentially link such a system to an educational tool to create a crowdsourced informational resource.

Links to: Brown & Adler (general online learning); Howe (crowdsourcing)

Categories
exam readings, knowledge work, networks, public participation in science

Exam reading: “Crowdsourcing”

This book was more substantial and less rah-rah than I’d originally suspected it would be. There’s a fair amount of discussion of the different types of crowdsourcing, which includes public participation in science as well as the more profound stuff like t-shirt design 🙂

Jeff Howe. Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business. New York: Crown Business Press, 2009.

Summary: Howe discusses the rise of the “reputation economy”: unpaid work done for recognition within a community, as an outgrowth of cheap production, underemployed creativity, and online communities. He calls crowdsourcing a “perfect meritocracy”; it fosters collaboration (as its own reward) and community formation. He does discuss drawbacks: shifts in business models/professions (photography, journalism), globalization & the flattening of work hierarchies, and the possibility of ushering in cultural mediocrity (though he thinks the last is unlikely). Overall, he suggests it’s a way to utilize human talent better (the idea is that people would still have day jobs, and collaborative projects would provide a creative outlet). Howe outlines several types of crowdsourcing: collective intelligence (group innovation for problem solving; diversity is needed, and interaction can lead to a limiting consensus), crowd creation (making things, rather than applying existing expertise; interaction is needed for this), crowd voting/ranking, and crowd funding (e.g., microloans). For success, you need the right crowd and incentives, some professional employees (crowds are great at gathering data/brainstorming but bad at analysis & organization), an overall frame and guidance for participants, and a breakdown of tasks into doable pieces. Mentions the “90 percent” rule: 89% of everything is crap, 10% is good, and 1% is great.

Comments: I’m still trying to decide whether crowdsourcing is a brilliant way to achieve meaningful personal expression or a clever ploy by the capitalist system to get free labor. I don’t want to be too negative about these efforts, because they do have great potential to add to the human experience. It seems like crowdsourcing operates much the way academia is traditionally supposed to: open exchange of ideas, focus on interesting problems, etc., except that in academia people get paid for their work. (I also wonder whether there are connections here to the diminishing status of experts in a crowdsourcing world, which goes along with reductions in academic pay…) While academia left out a big group of people who now have the potential to use this process, there’s still a majority being left out: people without access to these technologies or without time for this sort of collaboration. Perhaps it’s best to think of these projects as a good place to start, rather than an endpoint.

Links to: Lave & Wenger (participants can be seen as LPPers); Liu (core list; politics of the knowledge economy)

Categories
exam readings, learning theory, networks

Exam reading: “What video games have to teach us…”

This exam reading, “What Video Games Have to Teach Us About Learning and Literacy,” by James Gee, was not what I expected (after an admittedly quick look at the book synopsis). Rather than making the case for incorporating educational video games into the classroom, Gee uses their structural features to highlight techniques for teaching “critical” learning to students:

Summary: In this book, Gee tries to make a case for incorporating the teaching principles inherent in video games into educational settings by drawing connections between video games and current learning theories (primarily situated cognition, “New Literacy Studies,” and “connectionism”). Learning occurs within semiotic domains: sets of practices that utilize different media to communicate meanings. These domains have two aspects: content and a social group (“affinity group”) with a specific set of social practices. According to Gee, current educational practices teach content outside of these social contexts, which makes learning shallow (drill- and test-based) and difficult to apply to real-world contexts or transfer to new domains. “Critical” learning arises from experience in a domain, affiliation with the affinity group (at least at some level), preparation/practice for future problem solving in the domain, and understanding the “meta” structures of the domain (content and the affinity group). Another important aspect of learning is identity: learners have a core (everyday) identity, a “virtual” identity within the learning situation (e.g., student, elf), and a “projected” identity that involves the desires/motivations for developing your virtual identity in a certain way (e.g., not wanting to let your character down). This projected identity is crucial for critical learning, but can be challenging to achieve. Gee also views learning as situated within specific contexts, associational, and embodied (in the sense of embodying the learner’s choices and actions), rather than abstracted from general principles. Embodied learning occurs in a “probe, hypothesize, reprobe, rethink” cycle; what divides novice learners from experts (“critical learners”) is the added ability to critically evaluate the results within the context of the specific domain they are working in, rather than just from “real life.” Learning should also be scaffolded appropriately to pace students’ learning, and it should be recognized that learning in these contexts is social: different members of a group have different skills, and knowledge will be situated in various tools, symbols, and learners.

Comments: Gee’s work incorporates some concepts I’m familiar with from other contexts: communities of practice, social learning theory, and the associational/mental-models theory of memory. His motivation seems to be less about incorporating video games into school settings than about using video games as models of how “critical” learning should operate. Some of these concepts are things I’m looking into in my subject reading lists.

Links to: Spinuzzi (network-based learning); some of my subject reading list authors

Categories
exam readings, networking, networks

Exam reading: “Network”

Back to my exam readings in this post… “Network,” by Clay Spinuzzi, is an account of the operations of a telecom company: its development, problems, how it operates successfully, and how work that seems simple from the outside is really quite complex. A large part of the book is dedicated to exploring two theories describing networks (about which, more below).

While not long, it is definitely a dense book: not excessively repetitive, but it did take longer for me to finish than I had estimated. I’ll read some short texts next so I can feel better about crossing things off my list…

Summary: In this book, Spinuzzi uses two theories to describe the structure and function of a telecommunications company: Activity Theory (AT) and Actor-Network Theory (ANT). He chose a telecom company as an example of the highly decentralized type of knowledge work that is becoming more common in modern organizations. AT is a theory of learning and development through interaction, based largely on Marxist dialectics; ANT is a descriptive theory, based largely on rhetoric, that focuses on how shifting relationships among actors in a network help define those actors. Spinuzzi spends a lot of time exploring both the similarities and the differences between these two theories, and giving examples of how they apply to situations at the company. Four characteristics of highly networked organizations are: members have heterogeneous skills/tasks, members are multiply linked, transformative shifts can change the goals of the network, and certain processes within the network are “black-boxed” (appear to be simple from the outside when they are, in fact, not). Texts help connect the different actors within the network in three ways: they are stable traces of (ephemeral) ideas, the structure of genre helps organize unfamiliar information into familiar patterns, and they act as boundary objects among actors operating within the network in different contexts. According to Spinuzzi, each of these theories can be used to describe different aspects of “net work,” although he concludes that AT (with its developmental focus) is most appropriate for similar future studies, provided dialogue and rhetoric (strengths of ANT) are taken into account.

Comments: Although there are some interesting political implications of knowledge work here (e.g., worker segregation by education, the “homework economy,” the necessity of continual retraining), the most immediately useful aspects of this book for me will probably be the focus on learning in networks and how texts can function in shaping networks. There is a lot of material in this book, and it’s also useful as an introduction to AT and ANT that is grounded in specific examples.

Links to: Tomlinson (theoretical aspects of networks); Haraway (cyborg identity, workers’ need for constant learning); Brown & Duguid (information networks)

Categories
exam readings, networks, tech design

Exam Reading: “Greening through IT”

This book, “Greening through IT” by Bill Tomlinson, is one of the newer ones added to the T&T core reading list, and addresses one area that I think the core list as a whole ignored previously: the broad-scale material basis of electronic technologies. While many of the theorists covered in the program emphasize the connections between mind and materiality (e.g., the physical experience of interacting with a computer is part of what makes reading online different from reading a book), no one thus far has addressed the broader ecological implications of these new technologies.

I would venture that most theorists approaching the T&T field from a critical theory perspective are (understandably) not aware of the ecological sustainability issues surrounding electronic tech: for example, electricity use, e-waste, and the programmed obsolescence of devices. Most authors focus on the social/philosophical implications of new technologies, and there’s a definite overall assumption that we will be able to continue to physically make and use these technologies in the future, without much consideration of natural-resource limitations. Even the authors who address the “materiality” of technology focus on the individual user-machine interaction.

So there’s a need in the program for attention to these issues (which are a main concern of mine, given my background in ecology), and I think Tomlinson’s book does a decent job of addressing them. It’s not the perfect book on this issue for this program: I can see some of the more theory-centered students discounting it because of its low theory quotient (and that apparently annoying “evidence is used to support my theory, not contest it” thing). However, it does provide a needed perspective for the program, and I’m not sure what an alternative text that addresses these issues might be…

Summary: Discusses potential uses of information & communication technologies (ICT) for environmental sustainability. Tomlinson lays out a framework for “extended human-centered computing” (EHCC), which requires consciousness of scale (temporal, physical, complexity) when analyzing problems, guides system development in green directions, and compares technologies across a range of time/space/complexity scales to identify gaps not being addressed. There’s a lot of detail about various environmental problems, social barriers to change, and how we can use IT to address both of these large areas, but it basically boils down to expanding our sense of (and ability to cope with) large time/space/complexity scales. He touches on three orders of effects of technologies, which need to be considered: 1st (direct effects), 2nd (specific impacts on other economic sectors), and 3rd (general social or cross-industry effects). He breaks down his discussion of pathways for using green IT ideas into industrial, educational, personal-motivation, and collective-action categories (a lot of detail is outlined for each category). He presents several case studies in education, personal data tracking, and collective action, and discusses how each worked (or didn’t).

Comments: This book was heavy on examples and perhaps short on theory. It would have been nice to see how the EHCC framework was used specifically to address gaps in scale (is there a heuristic for applying it in specific cases?). In the case studies, it seemed that the more controlled the environment, the better the technology worked for its designed purpose, e.g., the museum display worked better than the online programs that utilized crowdsourcing. I’d like to see more research on whether crowdsourcing/networking actually works for more than just getting Betty White on SNL (granted, this area of research is in its infancy). It’s also possible that more advanced research on museum displays/childhood education in general is responsible for this effect. The question seems to be how to get adults to buy into some of these ideas. Some of the cited examples (the Indian fishermen) seemed to be more effective, though that could be because he was emphasizing the positives. The context/discussion of the theory of punctuated equilibrium was odd; given his emphasis on the importance of metaphors that work on more than just the surface (an idea I feel strongly about), this bugged me.

Links to: Norman (technology design), Feenberg (applied case of design with technical and social ends in mind), Spinuzzi (objects & network formation?), Johnson (design for a purpose, though Tomlinson focuses specifically on the larger system rather than the user, the opposite direction from Johnson?)

Edited 8/28 to add links, correct Johnson’s name.