Group Cognition

 

 

Computer Support for Building Collaborative Knowledge

 

 

Gerry Stahl

 

Acting with Technology Series

MIT Press

2006


 

Contents


Series Foreword

Essays on Technology, Interaction and Cognition


Part I. Design of Computer Support for Collaboration

Introduction to Part I: Studies of Technology Design

Chapter 1. Share Globally, Adapt Locally

Chapter 2. Evolving a Learning Environment

Chapter 3. Armchair Missions to Mars

Chapter 4. Supporting Situated Interpretation

Chapter 5. Collaboration Technology for Communities

Chapter 6. Perspectives on Collaborative Learning

Chapter 7. Groupware Goes to School

Chapter 8. Knowledge Negotiation Online

Part II. Analysis of Collaborative Knowledge Building

Introduction to Part II: Studies of Interaction Analysis

Chapter 9. A Model of Collaborative Knowledge Building

Chapter 10. Rediscovering the Collaboration

Chapter 11. Contributions to a Theory of Collaboration

Chapter 12. In a Moment of Collaboration

Chapter 13. Collaborating with Relational References

Part III. Theory of Group Cognition

Introduction to Part III: Studies of Collaboration Theory

Chapter 14. Communicating with Technology

Chapter 15. Building Collaborative Knowing

Chapter 16. Group Meaning / Individual Interpretation

Chapter 17. Shared Meaning, Common Ground, Group Cognition

Chapter 18. Making Group Cognition Visible

Chapter 19. Can Collaborative Groups Think?

Chapter 20. Opening New Worlds for Collaboration

Chapter 21. Thinking at the Small-Group Unit of Analysis

Notes

References

Name Index

Topic Index

 


 




 

Essays on Technology, Interaction and Cognition

The promise of globally networked computers to usher in a new age of universal learning and of the sharing of human knowledge remains a distant dream; the software and social practices needed have yet to be conceived, designed and adopted. To support online collaboration, our technology and culture have to be re-configured to meet a bewildering set of constraints. Above all, this requires understanding how digital technology can mediate human collaboration. The collection of essays gathered in this volume documents one path of exploration of these challenges. It includes efforts to design software prototypes featuring specific collaboration support functionality, to analyze empirical instances of collaboration and to theorize about the issues, phenomena and concepts involved today in supporting collaborative knowledge building.

The studies in this book grapple with the problem of how to increase opportunities for effective collaborative working, learning and acting through innovative uses of computer technology. From a technological perspective, the possibilities seem endless and effortless. The ubiquitous linking of computers in local and global networks makes possible the sharing of thoughts by people who are separated spatially or temporally. Brainstorming and critiquing of ideas can be conducted in many-to-many interactions, without being confined by a sequential order imposed by the inherent limitations of face-to-face meetings and classrooms. Negotiation of consensual decisions and group knowledge can be conducted in new ways.

Collaboration of the future will be more complex than just chatting—verbally or electronically—with a friend. The computational power of personal computers can lend a hand here; software can support the collaboration process and help to manage its complexity. It can organize the sharing of communication, maintaining both sociability and privacy. It can personalize information access to different user perspectives and can order knowledge proposals for group negotiation.
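
To make these possibilities slightly more concrete, here is a minimal sketch in Python of the kinds of data structures such software might use. It is purely illustrative: the names (Note, Perspective, Proposal) and the unanimous-acceptance rule are hypothetical assumptions for the sake of the example, not a description of any of the systems discussed in part I.

from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class Note:
    author: str
    text: str
    private: bool = False  # private notes preserve privacy alongside sociability

@dataclass
class Perspective:
    owner: str
    notes: List[Note] = field(default_factory=list)  # a personalized view onto shared content

    def visible(self) -> List[Note]:
        # A member sees public notes plus his or her own private ones.
        return [n for n in self.notes if not n.private or n.author == self.owner]

@dataclass
class Proposal:
    note: Note
    votes: Dict[str, bool] = field(default_factory=dict)  # member name -> accept or reject

    def status(self, members: Set[str]) -> str:
        # A proposal counts as negotiated group knowledge only when every member accepts it.
        if members and all(self.votes.get(m) for m in members):
            return "accepted as group knowledge"
        return "still under negotiation"

# Example: one public and one private note, viewed from Ann's perspective.
workspace = [Note("ann", "Proposal: the mission needs a larger crew."),
             Note("bob", "Rough budget draft (not yet shared)", private=True)]
print([n.text for n in Perspective(owner="ann", notes=workspace).visible()])  # only the public note

# Example: a note is proposed to the group and awaits everyone's acceptance.
members = {"ann", "bob", "chris"}
proposal = Proposal(note=workspace[0], votes={"ann": True, "bob": True})
print(proposal.status(members))  # "still under negotiation" until chris also accepts

Even a toy model like this suggests the design choice at stake: perspectives and negotiation become explicit, inspectable structures in the medium, rather than remaining implicit in a generic discussion forum.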

Computer support can help us transcend the limits of individual cognition. It can facilitate the formation of small groups engaged in deep knowledge building. It can empower such groups to construct forms of group cognition that exceed what the group members could achieve as individuals. Software functionality can present, coordinate and preserve group discourse that contributes to, constitutes and represents shared understandings, new meanings and collaborative learning that is not attributable to any one person but that is achieved in group interaction.

Initial attempts to engage in the realities of computer-supported knowledge building have, however, encountered considerable technical and social barriers. The transition to this new mode of interaction is in some ways analogous to the passage from oral to literate culture, requiring difficult changes and innovations on multiple levels and over long stretches of time. But such barriers signal opportunities. By engaging in experimental attempts at computer-supported, small-group collaboration and carefully observing where activity breaks down, one can identify requirements for new software.

The design studies below explore innovative functionality for collaboration software. They concentrate especially on mechanisms to support group formation, multiple interpretive perspectives and the negotiation of group knowledge. The various applications and research prototypes reported in the first part of this book span the divide between cooperative work and collaborative learning, helping us to recognize that contemporary knowledge workers must be lifelong learners, and also that collaborative learning requires flexible divisions of labor.

The attempt to design and adopt collaboration software led to a realization that we need to understand much more clearly the social and cognitive processes involved. In fact, we need a multi-faceted theory for computer-supported collaboration, incorporating empirically-based analyses and concepts from many disciplines. This book, in its central part, pivots around the example of an empirical micro-analysis of small-group collaboration. In particular, it looks at how the group constructs intersubjective knowledge that appears in the group discourse itself, rather than theorizing about what takes place in the minds of the individual participants.

The notion that the group, rather than the individual, should be taken as the unit of analysis ultimately requires developing a new theory of collaboration from the ground up, which is undertaken in the book’s final part. This theory departs from prevalent cognitive science, which is grounded in the mental representations of individuals. It builds instead on related efforts in social-cultural theory, situated cognition and ethnomethodology, as well as on their post-Kantian philosophical roots.

Collaboration as Group Cognition

This book does not aspire to the impossible task of describing all the ways that technology does or could impact upon working and learning. I work and I learn in innumerable ways and modes—and everyone else works and learns in additional ways, many different from mine. Working and learning with other people mixes these ways into yet more complex varieties. Technology multiplies the possibilities even more. So this book chooses to focus on a particular form of working and learning: one that seems especially attractive to many people and may be particularly responsive to technological support, but one that is also rather hard to point out and observe in the current world. It is the holy grail of cooperative knowledge work and collaborative learning: the emergence of shared group cognition through effective collaborative knowledge building.

The goal of collaborative knowledge building is much more specific than that of e-learning or distance education generally, where computer networks are used to communicate and distribute information from a teacher to geographically dispersed students. As collaborative knowledge building, it stresses supporting interactions among the students themselves, with a teacher playing more of a facilitating than an instructing role. Moreover, knowledge building involves the construction or further development of some kind of knowledge artifact. That is, the students are not simply socializing and exchanging their personal reactions or opinions about the subject matter, but might be developing a theory, model, diagnosis, conceptual map, mathematical proof or presentation. These tasks require the exercise of high-level cognitive activities. In effective collaborative knowledge building, the group must engage in thinking together about a problem or task and produce a knowledge artifact, such as a verbal problem clarification, a textual solution proposal or a more developed theoretical inscription, that integrates their different perspectives on the topic and represents a shared group result that they have negotiated.

We all know from personal experience—or think we know based on our tacit acceptance of prevalent folk theories—that individual people can think and learn on their own. It is harder to understand how a small group of people collaborating online can think and learn as a group, and not just as the sum of the people in the group thinking and learning individually.

Ironically, the counter-intuitive notion of group cognition turns out to be easier to study than individual learning. Whereas individual cognition is hidden in private mental processes, group cognition is necessarily publicly visible. This is because any ideas involved in a group interaction must be displayed in order for the members of the group to participate in the collaborative process. In this book, I try to take advantage of such displays to investigate group cognition without reducing it to an epiphenomenon of individual cognition. This does not mean that I deny that individuals have private thoughts, merely that I do not rely on our common-sense intuitions and introspections about such thoughts. In the end, a focus on the group unit may have implications for understanding individual cognition as a socially grounded and mediated product of group cognition.

How does a group build its collective knowing? A non-cognitivist approach avoids speculating on psychological processes hidden in the heads of individuals and instead looks to empirically observable group processes of interaction and discourse. The roles of individuals in the group are not ignored, but are viewed as multiple interpretive perspectives that can conflict, stimulate, intertwine and be negotiated. The spatio-temporal world in which collaborative interactions are situated is not assumed to be composed merely of physical objects as opposed to mental ideas, but is seen as a universe filled with meaningful texts and other kinds of artifacts—human-made objects that embody shared meanings in physical, symbolic, digital, linguistic and cultural forms.

The concern with the processes and possibilities of building group knowing has implications for the choice of themes investigated in this book. The software prototypes reported on in part I, for instance, were attempts to support the formation of teams that had the right mix for building knowledge as a group, to represent the multiple perspectives involved in developing group ideas, and to facilitate the negotiation of group knowledge that arose. Certainly, there are other important processes in online collaboration, but these are of particular concern for small-group knowledge building. Similarly, the empirical analysis in part II zooms in on the way in which the participants in an observed group of students constructed knowledge in their discourse that could not be attributed to any simple conjunction of their individual contributions. Finally, the theoretical reflections of part III try to suggest a conceptual framework that incorporates these notions of “interpretive perspectives” or “knowledge negotiation” within a coherent view of how group cognition takes place in a world of discourse, artifacts and computer media.

Rather than centering on practical design goals for CSCW (computer-supported cooperative work) industrial settings or CSCL (computer-supported collaborative learning) classrooms, the following chapters explore foundational issues of how small groups can construct meaning at the group level. The ability of people to engage in effective group cognition in the past has been severely constrained by physical limits of the human body and brain—we can only really relate to a small number of individual people at a time or follow one primary train of thought at a time, and most business meetings or classroom activities are structured, moderated and delimited accordingly. Moreover, we quickly forget many of the details of what was said at such meetings. Collaboration technology has enormous potential to establish many-to-many interactions, to help us manage them, and to maintain logs of what transpired. Figuring out how to design and deploy collaboration technologies and social practices to achieve this still-distant potential is the driving force that is struggling to speak through these essays.

The structure of the book follows the broad strokes of my historical path of inquiry into computer-supported group cognition. Part I reports on several attempts in which I was involved to design online technologies to support the collaborative building of knowing, i.e., computer-mediated group sense making. Part II shows how I subsequently responded to the need to understand better, through micro-analysis of group interaction, phenomena of collaboration such as group formation, perspective sharing and knowledge negotiation, in order to guide such software design. In turn, part III indicates how this led me to formulate a conceptual framework and a research methodology: a theory of collaboration, grounded in empirical practice and exploration. Although theory is typically presented as a solid foundational starting point for practice, this obfuscates its genesis as a conceptual reflection in response to problems of practice and their circumstances; I have tried to avoid such reification by presenting theory at the end, as it emerged as a result of design efforts and empirical inquiry.

The Problematic of CSCL and the Approach of this Book

This book documents my engagement with the issues of CSCL as a research field. Although I believe that much of the group cognition approach presented is also applicable to CSCW, my own research during the decade represented here was more explicitly oriented to the issues that dominated CSCL at the time. In particular, CSCL is differentiated from related domains in the following ways:

·        Group: the focus is not on individual learning, but learning in and by small groups of students.

·        Cognition: the group activity is not one of working, but of constructing new understanding and meaning within contexts of instruction and learning.

·        Computer support: the learning does not take place in isolation, but with support by computer-based tools, functionality, micro-worlds, media and networks.

·        Building: the concern is not with the transmission of known facts, but with the construction of personally meaningful knowledge.

·        Collaborative: the interaction of participants is not competitive or accidental, but involves systematic efforts to work and learn together.

·        Knowledge: the orientation is not to drill and practice of specific elementary facts or procedural skills, but to discussion, debate, argumentation and deep understanding.

The fact that these points spell out the title of this book is an indication that the book consists of an extended reflection upon the defining problems of CSCL.

The history of CSCL research and theory can be schematically viewed as a gradual progression of ever-increasing critical distance from its starting point, consisting of conceptualizations of learning inherited from dominant traditions in the fields of education and psychology. Much of the early work in CSCL started from this individualistic notion of learning and cognition. For instance, the influence of artificial intelligence (AI) on CSCL—which can be seen particularly clearly in my first three studies—often relied on computational cognitive models of individual learners. For me, at least, dramatic shifts away from this tradition came from the following sources:

·        Mediated Cognition: Vygotsky’s work from the 1920’s and 1930’s became available in English only 50 years later; it proposed a radically different view of cognition and learning as socially and collaboratively mediated.

·        Distributed Cognition: This alternative, developed by a number of writers (e.g., Suchman, Winograd, Pea, Hutchins), also stressed the importance of not viewing the mind as isolated from artifacts and other people.

·        Situated Learning: Lave’s work applied the situated perspective to learning, showing how learning can be viewed as a community process.

·        Knowledge building: Scardamalia and Bereiter developed the notion of community learning with a model of collaborative knowledge building in computer-supported classrooms.

·        Meaning making: Koschmann argued for re-conceptualizing knowledge building as meaning making, drawing upon theories of conversation analysis and ethnomethodology.

·        Group Cognition: This book arrives at a theory of group cognition by pushing this progression a bit further with the help of a series of software implementation studies, empirical analyses of interaction and theoretical reflections on knowledge building.

The notion of group cognition emerged out of the trajectory of the research that is documented in this volume. The software studies in the early chapters attempted to provide support for collaborative knowledge building. They assumed that collaborative knowledge building consisted primarily of forming a group, facilitating interaction among the multiple personal perspectives brought together, and then encouraging the negotiation of shared knowledge. When the classroom use of my software resulted in disappointing levels of knowledge building, I tried to investigate in more detail how knowledge building occurs in actual instances of collaborative learning.

The explorative essays in the middle of the book prepare the way for that analysis and then carry out a micro-analysis of one case. The fundamental discovery made in that analysis was that, in small-group collaboration, meaning is created across the utterances of different people. That is, the meaning that is created is not a cognitive property of individual minds, but a characteristic of the group dialog. This is a striking result of looking closely at small-group discussions; it is not so visible in monologues (although retrospectively these can be seen as internalized discourses of multiple voices), in dialogues (where the utterances each appear to reflect the ideas of one or the other member of the dyad) or in large communities (where the joint meaning becomes fully anonymous). I call this result of collaborative knowledge building group cognition.

For me, this discovery—already implied in certain social science methodologies like conversation analysis—led to a conception of group cognition as central to understanding collaboration, and consequently required a re-thinking of the entire theoretical framework of CSCL: collaboration, knowledge, meaning, theory building, research methodology, design of support. The paradigm shift from individual cognition to group cognition is challenging—even for people who think they already accept the paradigms of mediated, distributed and situated cognition. For this reason, the essays in the last part of the book not only outline what I feel is necessary for an appropriate theory, but provide a number of reflections on the perspective of group cognition itself. While the concept of group cognition that I develop is closely related to findings from situated cognition, dialogic theory, symbolic interactionism, ethnomethodology and social psychology, I think that my focus on small-group collaboration casts it in a distinctive light particularly relevant to CSCL. Most importantly, I try to explore the core phenomenon in more detail than other writers, who tend to leave some of the most intriguing aspects as mysteries.

Accomplishing this exposition on group cognition requires spelling out a number of inter-related points, each complex in itself. A single conference or journal paper can only enunciate one major point. This book is my attempt to bring the whole argument together. I have organized the steps in this argument into three major book parts:

Part I, Design of Computer Support for Collaboration, presents eight studies of technology design. The first three apply various AI approaches (abbreviated as DODE, LSA and CBR: domain-oriented design environments, latent semantic analysis and case-based reasoning) to typical CSCL or CSCW applications, attempting to harness the power of advanced software techniques to support knowledge building. The next two shift the notion of computer support from AI to providing collaboration media. The final three try to combine these notions of computer support by creating computational support for core collaboration functions in the computational medium. Specifically, the chapters discuss how to:

1.      Support teacher collaboration for constructivist curriculum development. (written in 1995)

2.      Support student learning of text production in summarization. (1999)

3.      Support formation of effective groups of people to work together. (1996)

4.      Define the notion of personal interpretive perspectives of group members. (1993)

5.      Define the role of computational media for collaborative interactions. (2000)

6.      Support group and personal perspectives. (2001)

7.      Support group work in collaborative classrooms. (2002)

8.      Support negotiation of shared knowledge by small groups. (2002)

Part II, Analysis of Collaborative Knowledge Building, consists of five essays related to research methodology for studying small-group interaction. First, there is a process model of knowledge building showing how utterances from multiple perspectives may be negotiated to produce shared knowledge. Second, methodological considerations are raised, arguing that the most important aspects of collaboration are systematically obscured by the very approach of many leading CSCL studies. A solution is then proposed, by integrating the conception of knowledge building and the idea of merged perspectives with the focus on artifacts from distributed cognition theory and the close interpretation of utterances from conversation analysis. This solution is applied to an empirical case of collaboration. This case reveals how group cognition creates shared meaning through the thick interdependencies of everyone’s utterances. It also shows how the group builds knowledge about meaning in the world. In particular, these chapters provide:

9.      A process model of collaborative knowledge building, incorporating perspectives and negotiation. (2000)

10.  A critique of CSCL research methodologies that obscure the collaborative phenomena. (2001)

11.  A theoretical framework for empirical analysis of collaboration. (2001)

12.  Analysis of five students building knowledge about a computer simulation. (2001)

13.  Analysis of the shared meaning that they built and its relation to the design of the software artifact. (2004)

Part III, Theory of Group Cognition, includes eight chapters that reflect on the discovery of group meaning in chapter 12, as further analyzed in chapter 13. As preliminary context, previous theories of communication are reviewed to see how they can be useful, particularly in contexts of computer support. Then a broad-reaching attempt is made to sketch an outline of a social theory of collaborative knowledge building based on the discovery of group cognition. A number of specific issues are taken up from this, including the distinction between meaning making at the group level versus interpretation at the individual level and a critique of the popular notion of common ground. Chapter 18 develops the alternative research methodology hinted at in chapter 10. Chapters 19 and 20 address philosophical possibilities for group cognition, and the final chapter complements chapter 12 with an initial analysis of computer-mediated group cognition, as an indication of the kind of further empirical work needed. The individual chapters of this final part offer:

14.  A review of traditional theories of communication. (2003)

15.  A sketch of a theory of building collaborative knowing. (2003)

16.  An analysis of the relationship of group meaning and individual interpretation. (2003)

17.  An investigation of group meaning as common ground versus as group cognition. (2004)

18.  A methodology for making group cognition visible to researchers. (2004)

19.  Consideration of the question, “Can groups think?” in parallel to the AI question, “Can computers think?” (2004)

20.  Exploration of philosophical directions for group cognition theory. (2004)

21.  A wrap-up of the book and an indication of future work. (2004)

The discussions in this book are preliminary studies of a science of computer-supported collaboration that is methodologically centered on the group as the primary unit of analysis. From different angles, the individual chapters explore how meanings are constituted, shared, negotiated, preserved, learned and interpreted socially, by small groups, within communities. The ideas presented in these essays themselves emerged out of specific group collaborations.

Situated Concepts

The studies of this book are revised forms of individual papers, undertaken during the decade between my dissertation at Colorado and my research at Drexel, published on various specific occasions. In bringing them together, I have tried to retain the different voices and perspectives that they expressed in their original situations. They look at issues of online collaboration from different vantage points, and I wanted to retain this diversity as a sort of collaboration of me with myself—a collection of selves that I had internalized under the influences of many people, projects, texts and circumstances. The format of the book thereby reflects the theory it espouses: that knowledge emerges from situated activities involving concrete social interactions and settings, and that such knowledge can be encapsulated in vocabularies and texts that are colored by the circumstances of their origins.

Thus, the main chapters of this book are self-contained studies. They are reproduced here as historical artifacts. The surrounding apparatus—this overview, the part introductions, the chapter lead-ins and the final chapters—has been added to make explicit the gradual emergence of the theme of group cognition. When I started to assemble the original essays, it soon became apparent that the whole collection could be significantly more than the sum of its parts, and I wanted to bring out this interplay of notions and the implications of the overall configuration. The meanings of central concepts, like “group cognition,” are not simply defined; they evolve from chapter to chapter, in the hope that they will continue to grow productively in the future.

Concepts can no longer be treated as fixed, self-contained, eternal, universal and rational, for they reflect a radically historical world. The modern age of the last several centuries may have questioned the existence of God more than the medieval age, but it still maintained an unquestioned faith in a god’s-eye view of reality. For Descartes and his successors, there was an objective physical world, knowable in terms of a series of facts expressible in clear and distinct propositions using terms defined by necessary and sufficient conditions. While individuals often seemed to act in eccentric ways, one could still hope to understand human behavior in general in rational terms.

The twentieth century changed all that. Space and time could henceforth only be measured relative to a particular observer; position and velocity of a particle were in principle indeterminate; observation affected what was observed; relatively simple mathematical systems were logically incompletable; people turned out to be poor judges of their subconscious motivations and unable to articulate their largely tacit knowledge; rationality frequently verged on rationalization; revolutions in scientific paradigms transformed what it meant in the affected science for something to be a fact, a concept or evidence; theories were no longer seen as absolute foundations, but as conceptual frameworks that evolved with the inquiry; and knowledge (at least in most of the interesting cases) ended up being an open-ended social process of interpretation.

Certainly, there are still empirical facts and correct answers to many classes of questions. As long as one is working within the standard system of arithmetic, computations have objective answers—by definition of the operations. Some propositions in natural language are also true, like, “This sentence is declarative.” But others are controversial, such as, “Knowledge is socially mediated,” and some are even paradoxical: “This sentence is false.”

Sciences provide principles and methodologies for judging the validity of propositions within their domain. Statements of personal opinion or individual observation must proceed through processes of peer review, critique, evaluation, argumentation, negotiation, refutation, etc. to be accepted within a scientific community; that is, to evolve into knowledge. These required processes may involve empirical testing, substantiation or evidence as defined in accord with standards of the field and its community. Of course, the standards themselves may be subject to interpretation, negotiation or periodic modification.

Permeating this book is the understanding of knowledge, truth and reality as products of social labor and human interpretation rather than as simply given independently of any history or context. Interpretation is central. The foundational essay of part I (chapter 4) discusses how it is possible to design software for groups (groupware) to support the situated interpretation that is integral to working and learning. Interpretation plays the key analytic role in the book, with the analysis of collaboration that forms the heart of part II (chapter 12) presenting an interpretation of a moment of interaction. And in part III (particularly chapter 16), the concepts of interpretation and meaning are seen as intertwined at the phenomenological core of an analysis of group cognition. Throughout the book, the recurrent themes of multiple interpretive perspectives and of the negotiation of shared meanings reveal the centrality of the interpretive approach.

There is a philosophy of interpretation, known since Aristotle as hermeneutics. Gadamer (1960/1988) formulated a contemporary version of philosophical hermeneutics, based largely on ideas proposed by his teacher, Heidegger (1927/1996). A key principle of this hermeneutics is that one should interpret the meaning of a term based on the history of its effects in the world. Religious, political and philosophical concepts, for instance, have gradually evolved their meanings as they have interacted with world history and been translated from culture to culture. Words like being, truth, knowledge, learning and thought have intricate histories that are encapsulated in their meaning, but that are hard to articulate. Rigorous interpretation of textual sources can begin to uncover the layers of meaning that have crystallized and become sedimented in these largely taken-for-granted words.

If we now view meaning making and the production of knowledge as processes of interpretive social construction within communities, then the question arises of whether such fundamental processes can be facilitated by communication and computational technologies. Can technology help groups to build knowledge? Can computer networks bring people together in global knowledge-building communities and support the interaction of their ideas in ways that help to transform the opinions of individuals into the knowledge of groups?

As an inquiry into such themes, this book eschews an artificially systematic logic of presentation and, rather, gathers together textual artifacts that view concrete investigations from a variety of perspectives and situations. My efforts to build software systems were not applications of theory in either the sense of foundational principles or predictive laws. Rather, the experience gained in the practical efforts of part I motivated more fundamental empirical research on computer-mediated collaboration in part II, which in turn led to the theoretical reflections of part III that attempt to develop ways of interpreting, conceptualizing and discussing the experience. The theory part of this book was written to develop themes that emerged from the juxtaposition of the earlier, empirically-grounded studies.

The original versions of the chapters were socially and historically situated. Concepts they developed while expressing their thoughts were, in turn, situated in the con-texts of those publications. In being collected into the present book, these papers have been only lightly edited to reduce redundancies and to identify cross-references. Consistency of terminology across chapters has not been enforced as much as it might be, in order to allow configurations of alternative terminologies to bring rich complexes of connotations to bear on the phenomena investigated.

These studies strive to be essays in the postmodern sense described by Adorno (1958/1984, p. 160f):

In the essay, concepts do not build a continuum of operations, thought does not advance in a single direction, rather the aspects of the argument interweave as in a carpet. The fruitfulness of the thoughts depends on the density of this texture. Actually, the thinker does not think, but rather transforms himself into an arena of intellectual experience, without simplifying it. … All of its concepts are presentable in such a way that they support one another, that each one articulates itself according to the configuration that it forms with the others.

In Adorno’s book Prisms (1967), essays on specific authors and composers provide separate glimpses of art and artists, but there is no development of a general aesthetic theory that illuminates them all. Adorno’s influential approach to cultural criticism emerged from the book as a whole, implicit in the configuration of concrete studies, but nowhere in the book articulated in propositions or principles. His analytic paradigm—which rejected the fashionable focus on biographical details of individual geniuses or eccentric artists in favor of reflection on social mediations made visible in the workings of the art work or artifacts themselves—was too incommensurable with prevailing habits of thought to persuade an audience without providing a series of experiences that might gradually shift the reader’s perspective. The metaphor of prisms—that white light is an emergent property of the intertwining of its constituent wavelengths—is one of bringing a view into the light by splitting the illumination itself into a spectrum of distinct rays.

The view of collaboration that is expressed in this book itself emerged gradually, in a manner similar to the way that Prisms divulged its theories, as I intuitively pursued an inquiry into groupware design, communication analysis and social philosophy. While I have made some connections explicit, I also hope that the central meanings will emerge for each reader through his or her own interpretive interests. In keeping with hermeneutic principles, I do not believe that my understanding of the connotations and interconnections of this text is an ultimate one; certainly, it is not a complete one, the only valid one, or the one most relevant to a particular reader. To publish is to contribute to a larger discourse, to expose one’s words to unanticipated viewpoints. Words are always open to different interpretations.

The chronology of the studies has been roughly maintained within each of the book’s parts, for they document a path of discovery, with earlier essays anticipating what was later elaborated. The goal in assembling this collection has been to provide readers with an intellectual experience open-ended enough that they can collaborate in making sense of the enterprise as a whole—to open up “an arena of intellectual experience” without distorting or excessively delimiting it, so that it can be shared and interpreted from diverse perspectives.

The essays were very much written from my own particular and evolving perspective. They are linguistic artifacts that were central to the intellectual development of that perspective; they should be read accordingly, as situated within that gradually developing interpretation. It may help the reader to understand this book if some of the small groups that incubated its ideas are named.

Collaborating with Groups

Although most of the original papers were published under just my name, they are without exception collaborative products, artifacts of academic group cognition. Acknowledgements in the Notes section at the end of the book indicate only the most immediate intellectual debts. Already, due to collaboration technologies like the Web and email, our ideas are ineluctably the result of global knowledge building. Considered individually, few of the software features, research methods or theoretical concepts here are completely original. Rather, available ideas have been assembled as so many tools or intellectual resources for making sense of collaboration as a process of constituting group knowing. If anything is original, it is the mix and the twist of perspectives. Rather than claiming that any particular insight or concept in this book is absolutely new, I would like to think that I have pushed rather hard on some of the ideas that are important to CSCL and brought a unique breadth of considerations to bear. In knowledge building, it is the configuration of existing ideas and the intermingling of a spectrum of perspectives on those ideas that count.

In particular, the ideas presented here have been developed through the work of certain knowledge-building groups or communities:

·        The very notion of knowledge-building communities was proposed by Scardamalia and Bereiter and the CSILE research group at Toronto. They pioneered CSCL, working on pedagogical theory, system design and evaluation of computer-supported classroom practices.

·        They cited the work of Lave and Wenger on situated learning, a distillation of ideas brewing in an active intellectual community in the San Francisco Bay area that had a formative impact on CSCW in the 1970’s.

·        The socio-cultural theory elaborated there, in turn, had its roots in Vygotsky and his circle, which rose out of the Russian revolution; the activity theory that grew out of that group’s thinking still exerts important influences in the CSCW and CSCL communities.

·        The personal experience behind this book is perhaps most strongly associated with:

o       McCall, Fischer and the Center for LifeLong Learning & Design in Colorado, where I studied, collaborated and worked on Hermes and CIE in the early 1990’s (see chapters 4 & 5);

o       the Computers & Society research group led by Herrmann at the University of Dortmund (now at Bochum), that collaborated on WebGuide and negotiation support (chapters 6 & 9);

o       Owen Research, Inc., where TCA and the Crew software for NASA were developed (chapters 1 & 3);

o       the Institute for Cognitive Science at Boulder, where State the Essence was created (chapter 2);

o       the ITCOLE Project in the European Union (2001-02), in which I designed BSCL and participated as a visiting scientist in the CSCW group at Fraunhofer-FIT (chapters 7 & 8);

o       the research community surrounding the conferences on computer support for collaborative learning, where I was Program Chair in 2002 (chapter 11); and

o       the Virtual Math Teams Project that colleagues and I launched at Drexel University in 2003 (chapter 21).

But today, knowledge building is a global enterprise and, at any rate, most of the foundational concepts—like knowledge, learning and meaning—have been forged in the millennia-long discourse of Western philosophy, whose history is reviewed periodically in the following chapters.

Technology as Mediation

When I launched into software development with a fresh degree in artificial intelligence, I worked eagerly at building cognitive aids—if not directly machine cognition—into my systems, developing rather complicated algorithms using search mechanisms, semantic representations, case-based reasoning, fuzzy logic and an involved system of hypermedia perspectives. These mechanisms were generally intended to enhance the cognitive abilities of individual system users. When I struggled to get my students to use some of these systems for their work in class, I became increasingly aware of the many barriers to the adoption of such software. In reflecting on this, I began to conceptualize my systems as artifacts that mediated the work of users. It became clear that the hard part of software design was dealing with its social aspects. I switched my emphasis to creating software that would promote group interaction by providing a useful medium for interaction. This led me to study collaboration itself, and to view knowledge building as a group effort.

As I became more interested in software as mediator, I organized a seminar on “computer mediation of collaborative learning” with colleagues and graduate students from different fields. I used the software discussed in chapter 6 and began the analysis of the moment of collaboration that over the years evolved into chapter 12. We tried to deconstruct the term mediation, as used in CSCL, by uncovering the history of the term’s effects that are sedimented in the word’s usage today. We started with its contemporary use in Lave & Wenger’s Situated Learning (1991, p. 50f):

Briefly, a theory of social practice emphasizes the relational interdependency of agent and world, activity, meaning, cognition, learning and knowing. … Knowledge of the socially constituted world is socially mediated and open ended.

This theory of social practice can be traced back to Vygotsky. Vygotsky described what is distinctive to human cognition, psychological processes that are not simply biological abilities, as mediated cognition. He analyzed how both signs (words, gestures) and tools (instruments) act as artifacts that mediate human thought and behavior—and he left the way open for other forms of mediation: “A host of other mediated activities might be named; cognitive activity is not limited to the use of tools or signs” (Vygotsky, 1930/1978, p. 55).

Vygotsky attributes the concept of indirect or mediated activity to Hegel and Marx. Where Hegel loved to analyze how two phenomena constitute each other dialectically—such as the master and slave, each of whose identity arises through their relationship to each other—Marx always showed how the relationships arose in concrete socio-economic history, such as the rise of conflict between the capitalist class and the working class with the establishment of commodity exchange and wage labor. The minds, identities and social relations of individuals are mediated and formed by the primary factors of the contexts in which they are situated.

In this book, mediation plays a central role in group cognition, taken as an emergent phenomenon of small-group collaboration. The computer support of collaboration is analyzed as a mediating technology whose design and use forms and transforms the nature of the interactions and their products.

 “Mediation” is a complex and unfamiliar term. In popular and legal usage, it might refer to the intervention of a third party to resolve a dispute between two people. In philosophy, it is related to “media,” “middle” and “intermediate.” So in CSCL or CSCW, we can say that a software environment provides a medium for collaboration, or that it plays an intermediate role in the midst of the collaborators. The contact between the collaborators is not direct or im-mediate, but is mediated by the software. Recognizing that when human interaction takes place through a technological medium the technical characteristics influence—or mediate—the nature of the interaction, we can inquire into the effects of various media on collaboration. For a given task, for instance, should people use a text-based, asynchronous medium? How does this choice both facilitate and constrain their interaction? If the software intervenes between collaborating people, how should it represent them to each other so as to promote social bonding and understanding of each other’s work?

The classic analyses of mediation will reappear in the theoretical part of the book. The term mediation—perhaps even more than other key terms in this book—takes on a variety of interrelated meanings and roles. These emerge gradually as the book unfolds; they are both refined and enriched—mediated—by relations with other technical terms. The point for now is to start to think of group collaboration software as artifacts that mediate the cognition of their individual users and support the group cognition of their user community.

Mediation by Small Groups

Small groups are the engines of knowledge building. The knowing that groups build up in manifold forms is what becomes internalized by their members as individual learning and externalized in their communities as certifiable knowledge. At least, that is a central premise of this book.

The last several chapters of this book take various approaches to exploring the concept of group cognition, because this concept involves such a difficult, counter-intuitive way of thinking for many people. This is because cognition is often assumed to be associated with psychological processes contained in individual minds.

Figure 0-1. The Thinker. Auguste Rodin. Bronze. 1881.
The usual story, at least in Western culture of the past three hundred years, goes something like this: an individual experiences reality through his senses (sic: the paradigmatic rational thinker in this tradition is often assumed to be male). He thinks about his experience in his mind; “cognition,” stemming from the Latin “cogito” for “I think,” refers to mental activities that take place in the individual thinker’s head (see figure 0-1). He may articulate a mental thought by putting it into language, stating it as a linguistic proposition whose truth value is a function of the proposition’s correspondence with a state of affairs in the world. Language, in this view, is a medium for transferring meanings from one mind to another by representing reality. The recipient of a stated proposition understands its meaning based on his own sense experience as well as his rather unproblematic understanding of the meanings of language.

 


 

The story based on the mediation of group cognition is rather different: here, language is an infinitely generative system of symbolic artifacts that encapsulate and embody the cultural experiences of a community. Language is a social product of the interaction of groups—not primarily of individuals—acting in the world in culturally mediated ways. Individuals who are socialized into the community learn to speak and understand language as part of their learning to participate in that community. In the process, they internalize the use of language as silent self-talk, internal dialog, rehearsed talk, narratives of rational accountability, senses of morality, conflicted dream lives, habits, personal identities and their tacit background knowledge largely preserved in language understanding. In this story, cognition initially takes place primarily in group processes of inter-personal interaction, which include mother-child, best friends, husband-wife, teacher-student, boss-employee, extended family, social network, gang, tribe, neighborhood, community of practice, etc. The products of cognition exist in discourse, symbolic representations, meaningful gestures, patterns of behavior; they persist in texts and other inscriptions, in physical artifacts, in cultural standards and in the memories of individual minds. Individual cognition emerges as a secondary effect, although it later seems to acquire a dominant role in our introspective narratives.

Most people have trouble accepting the group-based story at first, and viewing collaborative phenomena in these terms. Therefore, the group emphasis will emerge gradually in this book, rather than being assumed from the start. Indeed, that is what happened during my decade-long inquiry that is documented in these studies.

Although one can see many examples of the decisive role of small groups in the CSCW and CSCL literature, their pivotal function is rarely explicitly acknowledged and reflected upon. For instance, the two prevailing paradigms of learning in CSCL—which are referred to in chapter 17 as the acquisition metaphor and the participation metaphor—focus on the individual and the community, respectively, not on the intermediate small group. In the former paradigm, learning consists in the acquisition of knowledge by an individual; for instance, a student acquires facts from a teacher’s lesson. In the latter, learning consists in knowledgeable participation in a community of practice; for instance, an apprentice becomes a more skilled practitioner of a trade. But if one looks closely at the examples typically given to illustrate each paradigm, one sees that there is usually a small group at work in the specific learning situation. In a healthy classroom there are likely to be cliques of students learning together in subtle ways, even if the lesson is not organized as collaborative learning with formal group work. Their group practices may or may not be structured in ways that help individual participants learn as the group builds knowledge. In apprenticeship training, a master is likely to work with a few apprentices, and they work together in various ways as a small group; it is not as though all the apprentice tailors or carpenters or architects in a city are being trained together. The community of practice functions through an effective division into small working groups.

Some theories, like activity theory, insist on viewing learning at both the individual and the community level. Although their examples again typically feature small groups, the general theory highlights the individual and the large community, but has no theoretical representation of the critical small groups, in which the individuals carry on their concrete interactions and into which the community is hierarchically structured (see chapter 21).

My own experience during the studies reported here and in my apprenticeships in philosophy and computer science that preceded them impressed upon me the importance of working groups, reading circles and informal professional discussion occasions for the genesis of new ideas and insights. The same can be seen on a world-historical scale. Quantum jumps in human knowledge building emerge from centers of group interaction: the Bauhaus designers at Weimar, the post-impressionist artists in Paris salons, the Vienna Circle, the Frankfurt School—in the past, these communities were necessarily geographic locations where people could come together in small groups at the same time and place.

The obvious question once we recognize the catalytic role of small groups in knowledge building is: can we design computer-supported environments to create effective groups across time and space? Based on my experiences, documented in part I, I came to the conclusion that in order to achieve this goal we need a degree of understanding of small-group cognition that does not currently exist. In order to design effective media, we need to develop a theory of mediated collaboration through a design-based research agenda of analysis of small-group cognition. Most theories of knowledge building in working and learning have focused primarily on the two extreme scales: the individual unit of analysis as the acquirer of knowledge and the community unit of analysis as the context within which participation takes place. We now need to focus on the intermediate scale: the small-group unit of analysis as the discourse in which knowledge actually emerges.

The size of groups can vary enormously. This book tends to focus on small groups of a few people (say, three to five) meeting for short periods. Given the seeming importance of this scale, it is surprising how little research on computer-supported collaboration has focused methodologically on units of this size. Traditional approaches to learning—even to collaborative learning in small groups—measure effects on individuals. More recent writings talk about whole communities of practice. Most of the relatively few studies of collaboration that do talk of groups look at dyads, where interactions are easier to describe, but qualitatively different from those in somewhat larger groups. Even in triads, interactions are more complex and it is less tempting to attribute emergent ideas to individual members than in dyads.

The emphasis on the group as unit of analysis is definitive of this book. It is not just a matter of claiming that it is time to focus software development on groupware. It is also a methodological rejection of individualism as a focus of empirical analysis and cognitive theory. The book argues that software should support cooperative work and collaborative learning; it should be assessed at the group level and it should be designed to foster group cognition.

This book provides different perspectives on the concept of group cognition, but the concept of group cognition as discourse is not fully or systematically worked out in detail. Neither are the complex layers of mediation presented, by which interactions at the small-group unit of analysis mediate between individuals and social structures. This is because it is premature to attempt this—much empirical analysis is needed first. The conclusions of this book simply try to prepare the way for future studies of group cognition.

The Promise of Collaborating with Technology

Online workgroups are becoming increasingly popular, freeing learners and workers from the traditional constraints of time and place for schooling and employment. Commercial software offers basic mechanisms and media to support collaboration. However, we are still far from understanding how to work with technology to support collaboration in practice. Having borrowed their technologies, research methodologies and theories from allied fields, the sciences of collaboration may now need to forge their own tools and approaches, honed to the specifics of the field.

This book tries to explore how to create a science of collaboration support grounded in a fine-grained understanding of how people act, work, learn and think together. It approaches this by focusing the discussion of software design, interaction analysis and conceptual frameworks on central, paradigmatic phenomena of small-group collaboration, such as multiple interpretive perspectives, intersubjective meaning making and knowledge building at the group unit of analysis.

The view of group cognition that emerges from the following essays is one worth working hard to support with technology. Group cognition is presented in stronger terms than previous descriptions of distributed cognition. Here it is argued that high-level thinking and other cognitive activities take place in group discourse, and that these are most appropriately analyzed at the small-group unit of analysis. The focus on mediation of group cognition is presented more explicitly than elsewhere, suggesting implications for theory, methodology, design, and future research generally.

Technology in social contexts can take many paths of development in the near future. Globally networked computers provide a promise of a future of world-wide collaboration, founded upon small-group interactions. Reaching such a future will require overcoming the ideologies of individualism in system design, empirical methodology and collaboration theory, as well as in everyday practice.

This is a tall order. Today, many people react against the ideals of collaboration and the concept of group cognition based on unfortunate personal experiences, the inadequacies of current technologies and deeply ingrained senses of competition. Although so much working, learning and knowledge building takes place through teamwork these days, goals, conceptualizations and reward structures are still oriented toward individual achievement. Collaboration is often feared as something that might detract from individual accomplishments, rather than valued as something that could facilitate a variety of positive outcomes for everyone. The specter of “group-think”—where crowd mentality overwhelms individual rationality—is used as an argument against collaboration, rather than as a motivation for understanding better how to support healthy collaboration.

We need to continue designing software functionality and associated social practices; continue analyzing the social and cognitive processes that take place during successful collaboration; and continue theorizing about the nature of collaborative learning, working and acting with technology. The studies in this book are attempts to do just that. They are not intended to provide final answers or to define recipes for designing software or conducting research. They do not claim to confirm the hypotheses, propose the theories or formulate the methodologies they call for. Rather, they aim to open up a suggestive view of these bewildering realms of inquiry. I hope that by stimulating group efforts to investigate proposed approaches to design, analysis and theory, they can contribute in some modest measure to our future success in understanding, supporting and engaging in effective group cognition.


Part I. Design of Computer Support for Collaboration


 

Introduction to Part I: Studies of Technology Design

The 21 chapters of this book were written over a number of years, while I was finding my way toward a conception of group cognition that could be useful for CSCL and CSCW. Only near the end of that period, in editing the essays into a unified book, did the coherence of the undertaking become clear to me. In presenting these writings together, I think it is important to provide some guidance to the readers. Therefore, I will provide brief introductions to the parts and the chapters, designed to re-situate the essays in the book’s mission.

Theoretical Background to Part I

The fact that the theory presented in this book comes at the end, emanating out of the design studies and the empirical analysis of collaboration, does not mean that the work described in the design studies of the first section had no theoretical framing. On the contrary, in the early 1990s, when I turned my full-time attention to issues of CSCL, my academic training in computer science, artificial intelligence (AI) and cognitive science, which immediately preceded these studies, was particularly influenced by two theoretical orientations: situated cognition and domain-oriented design environments.

Situated cognition. As a graduate student, I met with a small reading group of fellow students for several years, discussing the then recent works of situated cognition (Brown & Duguid, 1991; Donald, 1991; Dreyfus, 1991; Ehn, 1988; Lave & Wenger, 1991; Schön, 1983; Suchman, 1987; Winograd & Flores, 1986), which challenged the assumptions of traditional AI. These writings proposed the centrality of tacit knowledge, implicitly arguing that AI’s reliance on capturing explicit knowledge was inadequate for modeling or replacing human understanding. They showed that people act based on their being situated in specific settings with particular activities, artifacts, histories and colleagues. Shared knowledge is not a stockpile of fixed facts that can be represented in a database and queried on all occasions, but an on-going accomplishment of concrete groups of people engaged in continuing communication and negotiation. Furthermore, knowing is fundamentally perspectival and interpretive.

Domain-oriented design environments. I was at that time associated with the research lab of the Center for Life-Long Learning & Design (L3D) directed by Gerhard Fischer, which developed the DODE (domain-oriented design environment) approach to software systems for designers (Fischer et al., 1993; Fischer, 1994; Fischer et al., 1998). The idea was that one could build a software system to support designers in a given domain—say, kitchen design—by integrating such components as a drawing sketchpad, a palette of icons representing items from the domain (stovetops, tables, walls), a set of critiquing rules (sink under a window, dishwasher to the right), a hypertext of design rationale, a catalog of previous designs or templates, a searching mechanism, and a facility for adding new palette items, among others. My dissertation system, Hermes, allowed one to put together a DODE for a given domain and to structure different professional perspectives on the knowledge in the system. I adapted Hermes to create a DODE for lunar habitat designers. The software designs in the studies of part I more or less start from this approach: TCA was a DODE for teachers designing curriculum and CIE was a DODE for computer network designers.
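To make this component view concrete, here is a minimal sketch, in Python purely for illustration, of how such DODE components might be composed in software. None of the class or field names come from the actual Hermes or L3D systems; they simply mirror the component list above.

```python
# Hypothetical composition of a domain-oriented design environment (DODE).
# Names are illustrative only; they do not reflect the actual Hermes code.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DesignEnvironment:
    domain: str                                              # e.g., "kitchen design"
    palette: List[str] = field(default_factory=list)         # domain items (stovetop, sink, ...)
    critics: List[Callable] = field(default_factory=list)    # critiquing rules applied to a sketch
    rationale: Dict[str, str] = field(default_factory=dict)  # hypertext of design rationale
    catalog: List[dict] = field(default_factory=list)        # previous designs and templates

    def critique(self, sketch) -> List[str]:
        """Run every critiquing rule over the current sketch and collect messages."""
        messages = []
        for rule in self.critics:
            message = rule(sketch)
            if message:
                messages.append(message)
        return messages
```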

This theoretical background is presented primarily in chapter 4. Before presenting that, however, I wanted to give a feel for the problematic nature of CSCL and CSCW by providing examples of designing software to support constructivist education (chapter 1), computational support for learning (chapter 2) or algorithms for selecting group members (chapter 3).

The Studies in Part I

The eight case studies included in part I provide little windows upon illustrative experiences of designing software for collaborative knowledge building. They are not controlled experiments with rigorous conclusions. These studies hang together rather like the years of a modern-day life, darting off in unexpected directions, but without ever losing the connectedness of one’s identity, one’s evolving, yet enduring personal perspective on the world.

Each study contains a parable: a brief, idiosyncratic and inscrutable tale whose moral is open to—indeed begs for—interpretation and debate. They describe fragmentary experiments that pose questions and that, in their specificity and materiality, allow the feedback of reality to be experienced and pondered.

Some of the studies include technical details that may not be interesting or particularly meaningful to all readers. Indeed, it is hard to imagine many readers with proper backgrounds for easily following in detail all the chapters of this book. This is an unavoidable problem for interdisciplinary topics. The original papers for part I were written for specialists in computer science, and their details remain integral to the argumentation of the specific study, but not necessarily essential to the larger implications of the book.

The book is structured so that readers can feel free to skip around. There is an intended flow to the argument of the book—summarized in these introductions to the three parts—but the chapters are each self-contained essays that can largely stand on their own or be visited in accordance with each reader’s particular needs.

Part I explores, in particular ways, some of the major forms of computer support that seem desirable for collaborative knowledge building, shared meaning making and group cognition. The first three chapters address the needs of individual teachers, students and group members, respectively, as they interact with shared resources and activities. The individual perspective is then systematically matched with group perspectives in the next three chapters. The final chapters of part I develop a mechanism for moving knowledge among perspectives. Along the way, issues of individual, small-group and community levels are increasingly distinguished and supported. Support for group formation, perspectives and negotiation is prototyped and tested.

Study 1, TCA. The book starts with a gentle introduction to a typical application of designing computer support for collaboration. The application is the Teachers Curriculum Assistant, a system for helping teachers to share curriculum that responds to educational research’s recommendation of constructivist learning. It is a CSCW system in that it supports communities of professional teachers cooperating in their work. At the same time, it is a CSCL system that can help to generate, refine and propagate curriculum for collaborative learning by students, either online or otherwise. The study is an attempt to design an integrated knowledge-based system that supports five key functions associated with the development of innovative curriculum by communities of teachers. Interfaces for the five functions are illustrated.

Study 2, Essence. The next study turns to computer support for students, either in groups or singly. The application, State the Essence, is a program that gives students feedback on summaries they compose from brief essays. The software uses a statistical analysis of natural language semantics to evaluate and compare texts, significantly increasing students’ or groups’ time-on-task and encouraging them to create multiple drafts of their summaries before submitting them to a teacher. Rather than focusing on student outcomes, the study describes some of the complexity of adapting an algorithmic technique to a classroom educational tool.

Study 3, CREW. The question in this study is: how can software predict the behavior of a group of people working together under special conditions? Developed for the American space agency to help it select groups of astronauts for the International Space Station, the Crew software modeled a set of psychological factors for subjects participating in a prolonged space mission. Crew was designed to take advantage of psychological data being collected on outer-space, under-sea and Antarctic winter-over missions confining small groups of people in restricted spaces for prolonged periods. The software combined a number of statistical and AI techniques.

Study 4, Hermes. This study was actually written earlier than the preceding ones, but it is probably best read following them. It describes at an abstract level the theoretical framework behind the design of the systems discussed in the other studies—it is perhaps also critical of some assumptions underlying their mechanisms. It develops a concept of situated interpretation that arises from design theories and writings on situated cognition. These sources raised fundamental questions about traditional AI, based as it was on assumptions of explicit, objective, universal and rational knowledge. Hermes tried to capture and represent tacit, interpretive, situated knowledge. It was a hypermedia framework for creating domain-oriented design environments. It provided design and software elements for interpretive perspectives, end-user programming languages and adaptive displays, all built upon a shared knowledge base.

Study 5, CIE. A critical transition occurs in this study, away from software that is designed to amplify human intelligence with AI techniques. It turns instead toward the goal of software designed to support group interaction by providing structured media of communication, sharing and collaboration. While TCA attempted to use an early version of the Internet to allow communities to share educational artifacts, CIE aimed to turn the Web into a shared workspace for a community of practice. The specific community supported by the CIE prototype was the group of people who design and maintain local area computer networks (LANs), for instance at university departments.

Study 6, WebGuide. WebGuide was a several-year effort to design support for interpretive perspectives, focusing on the key idea proposed by Hermes, computational perspectives, and trying to adapt the perspectivity concept to asynchronous threaded discussions. The design study was situated within the task of providing a shared guide to the Web for small workgroups and whole classrooms of students, including the classroom where Essence was developed. Insights gained from adoption hurdles with this system motivated a push to better understand collaboration and computer-mediated communication, resulting in a WebGuide-supported seminar on mediation, which is discussed in this study. This seminar began the theoretical reflections that percolate through part II and then dominate in part III. The WebGuide system was a good example of trying to harness computational power to support the dynamic selection and presentation of information in accordance with different user perspectives.

Study 7, Synergeia. Several limitations of WebGuide led to the Synergeia design undertaking. The WebGuide perspectives mechanism was too complicated for users, and additional collaboration supports were needed, in particular support for group negotiation. An established CSCW system was re-designed for classroom usage, including a simplified system of class, group and individual perspectives, and a mechanism for groups to negotiate agreement on shared knowledge-building artifacts. The text of this study began as a design scenario that guided development of Synergeia and then morphed into its training manual for teachers.

Study 8, BSCL. This study takes a closer look at the design rationale for the negotiation mechanism of the previous study. The BSCL system illustrates designs for several important functions of collaborative learning: formation of groups (by the teacher); perspectives for the class, small work groups and individuals; and negotiation of shared knowledge artifacts. These functions are integrated into the mature BSCW software system, with support for synchronous chat and shared whiteboard, asynchronous threaded discussion with note types, social awareness features, and shared workspaces (folder hierarchies for documents). The central point of this study is that negotiation is not just a matter of individuals voting based on their preconceived ideas; it is a group process of constructing knowledge artifacts and then establishing a consensus that the group has reached a shared understanding of this knowledge, and that it is ready to display it for others.

The chapters of part I demonstrate a progression that was not uncommon in CSCL and CSCW around the turn of the century. A twentieth century fascination with technological solutions reached its denouement in AI systems that required more effort than expected and provided less help than promised. In the twenty-first century, researchers acknowledged that systems needed to be user-centric and should concentrate on taking the best advantage of human and group intelligence. In this new context, the important thing for groupware was to optimize the formation of effective groups, help them to articulate and synthesize different knowledge-building perspectives, and support the negotiation of shared group knowledge. This shift should become apparent in the progression of software studies in part I.

 


1

Share Globally, Adapt Locally

For this project, I worked with several colleagues in Boulder, Colorado, to apply what we understood of educational theory and of computer support for collaboration to the plight of classroom teachers. Constructivist approaches to learning were well established as being favored by most educational researchers. The problem was to disseminate this approach to teachers in actual classrooms. Even when teachers were trained in the theory, they had no practical instructional materials to implement the new approach on a daily basis. There were few textbooks or other resources available; even if materials were located, the teachers would still have to spend vast amounts of time they did not have to integrate them into classroom practices and institutional requirements.

The Internet was just starting to reach public schools, so we tried to devise computer-based supports for disseminating constructivist resources and for helping teachers to practically adapt and apply them. We prototyped a high-functionality design environment for communities of teachers to construct innovative lesson plans together, using a growing database of appropriately structured and annotated resources. This was an experiment in designing a software system for teachers to engage in collaborative knowledge building.

This study provides a nice example of a real-world problem confronting teachers. It tries to apply the power of AI and domain-oriented design environment technologies to support collaboration at a distance. The failure of the project to go forward beyond the design phase indicates the necessity of considering more carefully the institutional context of schooling and the intricacies of potential interaction among classroom teachers.

Introduction

Many teachers yearn to break through the confines of traditional textbook-centered teaching and present activities that encourage students to explore and construct their own knowledge. But this requires developing innovative materials and curriculum tailored to local students. Teachers have neither the time nor the information to do much of this from scratch.

The Internet provides a medium for globally sharing innovative educational resources. School districts and teacher organizations have already begun to post curriculum ideas on Internet servers. However, just storing unrelated educational materials on the Internet does not by itself solve the problem. It is too hard to find the resources to meet specific needs. Teachers need software for locating material-rich sites across the network, searching the individual curriculum sources, adapting retrieved materials to their classrooms, organizing these resources in coherent lesson plans and sharing their experiences across the Internet.

In response to these needs, I designed and prototyped a Teacher’s Curriculum Assistant (TCA) that provides software support for teachers to make effective use of educational resources posted to the Internet. TCA maintains information for finding educational resources distributed on the Internet. It provides query and browsing mechanisms for exploring what is available. Tools are included for tailoring retrieved resources, creating supplementary materials and designing innovative curriculum. TCA encourages teachers to annotate and upload successfully used curriculum to Internet servers in order to share their ideas with other educators. In this chapter I describe the need for such computer support and discuss what I have learned from designing TCA.

The Internet’s Potential for Collaboration Support

The Internet has the potential to transform educational curriculum development beyond the horizons of our foresight. In 1994, the process was just beginning, as educators across the country started to post their favorite curriculum ideas for others to share. Already, this first tentative step revealed the difficulties inherent in using such potentially enormous, loosely structured sources of information. As the Internet becomes a more popular medium for sharing curricula, teachers, wandering around the Internet looking for ideas to use in their classrooms, confront a set of problems that will not go away on its own; on the contrary:

1.      Teachers have to locate sites of curriculum ideas scattered across the network; there is currently no system for announcing the locations of these sites.

2.      They have to search through the offerings at each site for useful items. While some sites provide search mechanisms for their databases, each has different interfaces, tools and indexing schemes that must be learned before the curricula can be accessed.

3.      They have to adapt items they find to the needs of their particular classroom: to local standards, the current curriculum, their own teaching preferences and the needs or learning styles of their various students.

4.      They have to organize the new ideas within coherent curricula that build toward long-term pedagogical goals.

5.      They have to share their experiences using the curriculum or their own new ideas with others who use the resources.

In many fields, professionals have turned to productivity software—like spreadsheets for accountants—to help them manage tasks involving complex sources of information. I believe that teachers should be given similar computer-based tools to meet the problems listed above. If this software is designed to empower teachers, perhaps in conjunction with their students, in open-ended ways, opportunities will materialize that we cannot now imagine.

In this chapter, I consider how the sharing of curriculum ideas over the Internet can be made more effective in transforming education. I advance the understanding of specific issues in the creation of software designed to help classroom teachers develop curricula and increase productivity, and introduce the Teacher’s Curriculum Assistant (TCA) that I built for this purpose. First, I discuss the nature of constructivist curriculum, contrasting it with traditional approaches based on behaviorist theory. Then I present an example of a problem-solving environment for high school mathematics students. The example illustrates why teachers need help to construct this kind of student-centered curriculum. I provide a scenario of a teacher developing a curriculum using productivity software like TCA, and conclude by discussing some issues I feel will be important in maximizing the effectiveness of the Internet as a medium for the dissemination of innovative curricula for educational reform.

The Problem of Curriculum in Educational Reform

The distribution of curriculum over the Internet and the use of productivity software for searching and adapting posted ideas could benefit any pedagogical approach. However, it is particularly crucial for advancing reform in education.

The barriers to educational reform are legion, as many people since John Dewey have found. Teachers, administrators, parents and students must all be convinced that traditional schooling is not the most effective way to provide an adequate foundation for life in the future. They must be trained in the new sensitivities required. Once everyone agrees and is ready to implement the new approach, there is still a problem: what activities and materials should be presented on a day-to-day basis? This concrete question is the one that Internet sharing can best address. I generalize the term curriculum to cover this question.

Consider curricula for mathematics. Here, the reform approach is to emphasize the qualitative understanding of mathematical ways of thinking, rather than to stress rote memorization of quantitative facts or “number skills.” Behaviorist learning theory supported the view that one method of training could work for all students; reformers face a much more complex challenge. There is a growing consensus among educational theorists that different students in different situations construct their understandings in different ways (Greeno, 1993). This approach is often called constructivism or constructionism (Papert, 1993). It implies that teachers must creatively structure the learning environments of their students to provide opportunities for discovery and must guide the individual learners to reach insights in their own ways.

Behaviorism and constructivism differ primarily in their views of how students build their knowledge. Traditional, rationalist education assumed that there was a logical sequence of facts and standard skills that had to be learned successively. The problem was simply to transfer bits of information to students in a logical order, with little concern for how students acquire knowledge. Early attempts at designing educational software took this approach to its extreme, breaking down curricula into isolated atomic propositions and feeding these predigested facts to the students. This approach to education was suited to the industrial age, in which workers on assembly lines performed well-defined, sequential tasks.

According to constructivism, learners interpret problems in their environments using conceptual frameworks that they developed in the past (Roschelle, 1996). In challenging cases, problems can require changes in the frameworks. Such conceptual change is the essence of learning: one’s understanding evolves in order to comprehend one’s environment. To teach a student a mathematical method or a scientific theory is not to place a set of propositional facts into her mind, but to give her a new tool that she can make her own and use in her own ways in comprehending her world.

Constructivism does not entail the rejection of a curriculum. Rather, it requires a more complex and flexible curriculum. Traditionally, a curriculum consisted of a textual theoretical lesson, a set of drills for students to practice and a test to evaluate if the students could perform the desired behaviors. In contrast, a constructivist curriculum might target certain cognitive skills, provide a setting of resources and activities to serve as a catalyst for the development of these skills and then offer opportunities for students to articulate their evolving understandings (NCTM, 1989). The cognitive skills in math, for example, might include qualitative reasoning about graphs, number lines, algorithms or proofs.

My colleagues on the project and I believe that the movement from viewing a curriculum as fact-centered to viewing it as cognitive-tool-centered is appropriate for the post-modern (post-industrial, post-rationalist, post-behaviorist) period. Cognitive tools include, importantly, alternative knowledge representations (Norman, 1993). As researchers in artificial intelligence, we know that knowledge representations are key to characterizing or modeling cognition. We have also found that professionals working in typical contemporary occupations focus much of their effort on developing and using alternative knowledge representations that are adapted to their tasks (Sumner, 1995). Curricula to prepare people for the next generation of jobs would do well to familiarize students with the creation and use of alternative conceptual representations.

A Diverse Learning Ecology

Teachers need help to create learning environments that stimulate the construction and evolution of understanding through student exploration using multiple conceptual representations. A stimulating learning environment is one with a rich ecology, in which many elements interact in subtle ways. In this section I present an illustration of a rich ecology for learning mathematical thinking that includes: inductive reasoning, recursive computation, spreadsheet representation, graphing, simultaneous equations and programming languages.

Figure 1-1. Regions of a circle; n = 8.

A typical curriculum suggestion that might be posted on an educational resources listing on the Internet is the problem of regions of a circle: Given n points on the circumference of a circle, what is the maximum number of regions one can divide the circle into by drawing straight lines connecting the points? (See figure 1-1.) For instance, connecting two points divides the circle into two regions; connecting three points with three lines creates four regions. This is a potentially fascinating problem because its subtleties can be explored at length using just algebra and several varieties of clear thinking.

 

Figure 1-1 goes approximately here

 

The problem with this curriculum offering as an Internet posting is that it has not been placed in a rich setting. To be useful, a fuller curriculum providing a set of conceptual tools is needed. For instance, a discussion of inductive reasoning brings out some of the character of this particular problem. If one counts the number of regions, R(n), for n = 1 to 6, one obtains the doubling series: 1, 2, 4, 8, 16, 31. Almost! One expects the last of these numbers to be 32, but that last region is nowhere to be found. For larger n, the series diverges completely from the powers of 2. Why? Here, inductive reasoning can come to the rescue of the hasty inductive assumption—if, that is, the problem is accompanied by a discussion of inductive reasoning.

Consider the general case of n points. Assume that the answer is known for n-1 points and think about how many new regions are created by adding the n-th point and connecting it to each of the n-1 old points. There is a definite pattern at work here. It may take a couple of days of careful thought to work it out. It would also help if the sigma notation for sums of indexed terms were explained as a representational tool for working on the problem. Perhaps a collaborative group effort will be needed to check each step and avoid mistakes.

At this point, a teacher might introduce the notion of recursion and relate it to induction. If the students can program in Logo or Pascal (programming languages that can represent recursive processes), they could put the general formula into a simple but powerful program that could generate results for hundreds of values of n very quickly without the tedious and error-prone process of counting regions in drawings. It would be nice to formalize the derivation of this result with a deductive proof, if the method of formulating proofs has been explained.

Now that students are confident that they have the correct values for many n, they can enter these values in a spreadsheet to explore them. The first representation they might want to see is a graph of R(n) vs. n. On the spreadsheet they could make a column that displays the difference between each R(n) and its corresponding R(n-1). Copying this column several times, they would find that the fourth column of differences is constant. This result means that R(n) follows a fourth order equation, which can be found by solving simultaneous equations.
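For readers who want to check these numbers, the following short program, a minimal sketch in Python rather than the Logo, Pascal or spreadsheet imagined for the classroom, generates R(n) from the standard closed form for this problem and prints the successive columns of differences; it is offered only as an illustration, not as part of any TCA material.

```python
from math import comb

def regions(n):
    """Maximum regions formed by chords joining n points on a circle.
    Standard closed form for this classic problem: C(n,4) + C(n,2) + 1."""
    return comb(n, 4) + comb(n, 2) + 1

values = [regions(n) for n in range(1, 11)]
print(values[:6])  # [1, 2, 4, 8, 16, 31] -- the "almost" doubling series

# The spreadsheet exercise: take successive columns of differences.
diffs = values
for level in range(1, 5):
    diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    print(f"level {level} differences: {diffs}")
# The fourth-level differences are constant, so R(n) is a fourth-order polynomial.
```

Solving the resulting simultaneous equations for the quartic's coefficients gives R(n) = (n^4 - 6n^3 + 23n^2 - 18n + 24)/24, which reproduces the values above.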

 

Figure 1-2. A number of multimedia resources related to the “regions of a circle” problem. These include textual documents, drawings, equations, spreadsheets, graphs and computer program source code.

 

The point of this example is that sharing the isolated statement of the problem is not enough. The rich learning experience involves being introduced to alternative representations of the problem: induction, recursion, spreadsheet differences, graphs, computer languages, simultaneous equations, etc. There is not one correct method for tackling a problem like this; a mathematically literate person needs to be able to view the problem’s many facets through several conceptual frameworks.

A curriculum in the new paradigm typically consists of stimulating problems immersed in environments with richly interacting ecologies, including: cognitive skills, knowledge representations, computational tools, related problems and reference materials. Perhaps a creative teacher with unlimited preparation time could put these materials together. However, the reality is that teachers deserve all the support they can get if they are to prepare and present the complex learning ecologies that constructivist reforms call for. Computer support for curriculum development should make the kinds of resources shown in figure 1-2 readily available.

 

Figure 1-2 goes approximately here

From Database to Design Environment

Curriculum planning for learning ecologies is not a simple matter of picking consecutive pages out of a standard textbook or of working out a sequential presentation of material that builds up to fixed learning achievements. Rather, it is a matter of design. To support teachers in developing curriculum that achieves this, we must go beyond databases of isolated resources to provide design environments for curriculum development.

It may seem to be an overwhelming task to design an effective learning environment for promoting the development of basic cognitive skills. However, dozens of reform curricula have already been created. The problem now is to disseminate these in ways that allow teachers to adapt them to their local needs and to reuse them as templates for additional new curricula. It is instructive to look at a recent attempt to make this type of curriculum available. The “MathFinder CD-ROM: a collection of resources for mathematics reform” excerpts materials from thirty new math curricula (Kreindler & Zahm, 1992). Like the posting of curriculum ideas at several Internet sites, this is an important early step at electronic dissemination.

Unfortunately, MathFinder has a number of serious limitations due to its CD-ROM (read-only) format. It relies on a fixed database of resources that allows resources to be located but not expanded or revised. Its indexing is relatively simple, primarily oriented toward illustrating a particular set of math standards, yet its search mechanism is cumbersome for many teachers. Because its resources are stored in bitmap images, they cannot be adapted in any way by teachers or students. Moreover, MathFinder provides no facility for organizing resources into curricula, despite the fact that most of the resources it includes are excerpted from carefully constructed curricula. Because it is sold as a read-only commodity, MathFinder does not allow teachers to share their experiences with annotations or to add their own curricular ideas. Thus, of the five issues listed in the Introduction of this study, MathFinder only provides a partial solution to the issues of location and search.

An alternative approach is suggested by our work on domain-oriented design environments (Fischer et al., 1993; Fischer et al., 1998; Repenning & Sumner, 1995; Stahl, McCall, & Peper, 1992; Stahl, 1993). A software design environment provides a flexible workspace for the construction of artifacts, and places useful design tools and materials close at hand. A design environment for curriculum development goes substantially beyond a database of individual resources. Based on this approach, we built a prototype version of a Teacher’s Curriculum Assistant (TCA). TCA includes a catalog of previously designed curricula that can be reused and modified. It has a gallery of educational resources that can be inserted into partial curriculum designs. There is a workspace, into which curricula from the catalog can be loaded and resources from the gallery inserted. It is also possible for a teacher to specify criteria for the desired curriculum. Specifications are used for searching the case-base of curricula, adapting the resources and critiquing new designs.

TCA allows teachers to download curricular resources from the Internet and to create coherent classroom activities tailored to local circumstances. In particular, TCA addresses the set of five issues identified in the Introduction:

1.      TCA is built on a database of information about educational resources posted to the Internet, so it provides a mechanism for teachers to locate sources of curriculum ideas at scattered Internet sites.

2.      The TCA database indexes each resource in a uniform way, allowing teachers to search for all items meeting desired conditions.

3.      TCA includes tools to help teachers adapt items they find to the needs of their classroom.

4.      TCA provides a design workspace for organizing retrieved ideas into lesson plans that build toward long-term goals.

5.      TCA lets teachers conveniently share their experiences back through the Internet.

The TCA Prototype

Based on preliminary study of these issues, a TCA prototype has been developed. Six interface screens have been designed for teacher support: Profiler, Explorer, Versions, Editor, Planner, and Networker.

 

Figure 1-3. The teacher-client software interface for locating, searching and selecting resources and curricula: the Profiler, Explorer and Versions.

 

The Profiler, Explorer and Versions interfaces work together for information retrieval (figure 1-3). The Profiler helps teachers define classroom profiles and locates curricula and resources that match the profile. The Explorer displays these items and allows the teacher to search through them to find related items. Versions then helps the teacher select from alternative versions that have been adapted by other teachers. Through these interfaces, teachers can locate the available materials that most closely match their personal needs; this makes it easier to tailor the materials to individual requirements.

 

Figure 1-3 goes approximately here

 

The Planner, Editor and Networker help the teacher to prepare resources and curricula, and to share the results of classroom use (figure 1-4). The Planner is a design environment for reusing and reorganizing lesson plans. The Editor allows the teacher to modify and adapt resources. This is a primary means of personalizing a curriculum to individual classroom circumstances. Finally, the Networker supports interactions with the Internet, providing a two-way medium of communication with a global community of teachers. Using the Networker, a teacher can share personalized versions of standard curricula with other teachers who might have similar needs.

 

Figure 1-4. The teacher-client interface for adapting, organizing and sharing resources and curricula: the Planner, Editor and Networker.

 

Figure 1-4 goes approximately here

To illustrate how TCA works, each of the five issues will be discussed in the following sections. These sections present a scenario of a teacher using TCA to locate resources, search through them, adapt selected resources, organize them into a curriculum and share the results with other teachers.

Scenario Step 1: Locating Curriculum

Imagine a high school mathematics teacher using TCA. In the coming year she has to introduce some geometric concepts like Pythagoras’ Theorem and deductive proofs. More generally, she might want to discuss the ubiquity of patterns and ways to represent them mathematically. TCA lets her browse for semester themes and their constituent weekly units and lesson plans related to these topics.

TCA distinguishes four levels of curricula available on the Internet (see the sketch following this list):

·        A theme is a major curriculum, possibly covering a semester or a year of school and optionally integrating several subjects. A theme consists of multiple teaching units.

·        A weekly unit is part of a theme, typically one week of lessons for a single subject. A unit is described by its constituent daily lesson plans.

·        A plan is one day’s lesson for a class. A lesson plan might include a number of resources, such as a lecture, a reading, an exercise or project, and perhaps a quiz and a homework assignment.

·        A resource is an element of a lesson plan. It might be a text, available as a word processing document. It could also be a video clip, a spreadsheet worksheet, a graphic design or a software simulation. Resources are the smallest units of curricula indexed by TCA.
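As a rough illustration of this four-level containment hierarchy, the following Python sketch uses invented class and field names; it is not the actual TCA data schema.

```python
# Hypothetical sketch of TCA's four curriculum levels; names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:                # smallest indexed unit of curriculum
    title: str
    media_type: str            # e.g., "text", "spreadsheet", "video clip"
    class_minutes: int = 0
    homework_minutes: int = 0

@dataclass
class LessonPlan:              # one day's lesson for a class
    title: str
    resources: List[Resource] = field(default_factory=list)

@dataclass
class WeeklyUnit:              # one week of lessons for a single subject
    title: str
    plans: List[LessonPlan] = field(default_factory=list)

@dataclass
class Theme:                   # a semester- or year-long curriculum
    title: str
    units: List[WeeklyUnit] = field(default_factory=list)
```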

TCA lets the teacher locate relevant curricula by analyzing information stored on her computer about items available on the Internet. Along with the TCA software on her computer there is a case-base of summaries (indexes) of curricula and resources that can be downloaded. These summary records reference curricula and resources that have been posted to Internet nodes around the world. In addition to containing the Internet address information needed for downloading an item, a record contains a description of the item, so that the teacher can decide whether or not it is of interest.

After a set of interesting items has been selected based on the information in the case-base, TCA downloads the items to the teacher’s computer. This happens without her having to know where they were located or how to download them. The items are then available for modification, printing or distribution to her students. If Internet traffic is slow, she may opt to download batches of curriculum and resources overnight and then work with them the next day.[1]

Scenario Step 2: Searching for Resources

TCA provides a combination of query and browsing mechanisms to help a teacher select curricula of interest and to find resources that go with them. She can start in the Profiler (figure 1-3) by specifying that she wants a curriculum for ninth grade mathematics. Then she can browse through a list of themes in the Explorer that meet the specification. If the list is too long, she can narrow down her search criteria.

The theme named “A Look at the Greek Mind” is summarized as: “This is an integrated curriculum that explores myth, patterns and abstract reasoning.” It emphasizes patterns and is likely to include Pythagoras’ theorem. The teacher can click on this theme in the list. Her computer now displays summaries of the units that make up the curriculum for that theme. This list shows three weekly units. She selects week 1, described as “Abstract thinking: number theory and deductive reasoning.”

She now sees summaries of that week’s five daily lesson plans. She looks at the geometry example for day 3, “Inductive reasoning example: regions of a circle.” She selects that one and the screen changes to show the lesson plan in the Planner (figure 1-4). It lists all the resources suggested for that period: two lecture topics, a class exercise, several alternative activities for small groups and a homework assignment.

The screenshot of Explorer illustrates how a teacher can browse from a given resource, like “chart of regions on a circle” up to all the lesson plans, units and themes that include that resource and then back down to all the associated units, plans and resources. This is one way to locate related resources within curricular contexts. The teacher can also turn to the Versions component to find variations on a particular resource and comments about the resource and its different versions by teachers who have used it.

Notice resource #2 in the Planner, where students create a spreadsheet chart: “Group activity: Chart of ratios on a circle.” When the teacher selects it with the mouse, the Editor shows the detail for that resource, including its index values.

The description contained in the case-base for each posted resource is organized as a set of 24 indexes and annotations, such as: recommended grade level, content area, pedagogical goal, instructional mode, prerequisites, materials used, required time and the like. Note that total class time and homework time are computed and teacher preparations for the resources are listed below the workspace.

 The TCA Profiler allows a teacher to specify her curricular needs using combinations of these indexes. Resources are also cross referenced so that she can retrieve many different resources that are related to a given one. Thus, once she has found the “problem of regions of a circle”, she can easily locate discussions of inductive reasoning, formal proofs, recursion, simultaneous equations, sample programs in Logo or Pascal, spreadsheet templates for analyzing successive differences and graphing tools. She can also find week-long units that build on geometric problems like this one, with variations for students with different backgrounds, learning styles or interests. TCA allows her to search both top-down from themes to resources and bottom-up from resources to curricula.
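To suggest how such index records and a Profiler-style query might look in code, here is a hypothetical sketch; the few fields shown stand in for TCA's 24 indexes, and all titles and values are invented for illustration.

```python
# Illustrative summary records from the local case-base; real records would also
# carry the Internet address needed to download the full resource.
RESOURCE_INDEX = [
    {"title": "Inductive reasoning example: regions of a circle",
     "grade": 9, "content_area": "geometry",
     "instructional_mode": "class exercise", "class_minutes": 30},
    {"title": "Group activity: chart of regions on a circle",
     "grade": 9, "content_area": "geometry",
     "instructional_mode": "spreadsheet activity", "class_minutes": 20},
]

def profiler_query(index, **criteria):
    """Return every summary record whose index values match all the given criteria."""
    return [record for record in index
            if all(record.get(key) == value for key, value in criteria.items())]

matches = profiler_query(RESOURCE_INDEX, grade=9, content_area="geometry")
```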

Scenario Step 3: Adapting to Local Needs

Adaptation tools are available in TCA for resources that have been downloaded from the Internet. The Planner component provides a design workspace for assembling a custom lesson plan and the Editor helps a teacher to adapt individual resources to her local needs. The TCA system can often make automated suggestions for adapting a resource to the specification given in the search process. For instance, if she retrieves a resource that was targeted for 11th grade when she is looking for 10th grade material, then TCA might suggest allowing her students more time to do the tasks or might provide more supporting and explanatory materials for them. In general, she will need to make the adaptations; even where the software comes up with suggestions, she must use her judgment to make the final decision.

While TCA can automate some adaptation, most tailoring of curricula requires hands-on control by an experienced teacher. Sometimes TCA can support her efforts by displaying useful information. For instance, if she is adapting resources organized by national standards to local standards she might like her computer to display both sets of standards and to associate each local standard with corresponding national standards. In other situations, perhaps involving students whose first language is not English, TCA might link a resource requiring a high level of language understanding to a supplementary visual presentation.

The adaptation process relies on alternative versions of individual resources being posted. The TCA Versions component helps a teacher adjust to different student groups, teaching methods and time constraints by retrieving alternative versions of resources that provide different motivations, use different formats or go into more depth. She can substitute these alternative resources into lesson plans; they can then be modified with multimedia editing software from within TCA.

Included in the Editor is a reduced image of the spreadsheet itself. If the teacher clicks on this image, TCA brings up the commercial software application in which the document was produced. She can then edit and modify the copy of the document that appears on her screen without leaving TCA, and she can print out her revised version for her students or distribute it directly to their computers. In this way, she can use her own ideas or those of her students to modify and enhance curricular units found on the Internet.

Just as it is important for teachers to adapt curricula to their needs, it is desirable to have resources that students can tailor. Current software technology makes this possible, as illustrated by a number of simulations in the Agentsheets Exploratorium (Ambach, Perrone, & Repenning, 1995; Stahl, Sumner, & Repenning, 1995).

Scenario Step 4: Organizing Resources into Lesson Plans

The lesson plan is a popular representation for a curriculum. It provides teachers a system for organizing classroom activities. TCA uses the lesson plan metaphor as the basis for its design workspace. A teacher can start her planning by looking at downloaded lesson plans and then modifying them to meet her local needs.

The TCA Planner workspace for designing lesson plans was shown in figure 1-4. In addition to summaries of each resource, the workspace lists the time required by each resource, both in class and at home. These times are totaled at the bottom of the list of resources in the Planner. This provides an indication of whether there is too much or too little instructional material to fill the period. The teacher can then decide to add or eliminate resources or adjust their time allowances. The total homework time can be compared to local requirements concerning homework amounts.

TCA incorporates computational critics (Fischer et al., 1993; Fischer et al., 1998). Critics are software rules that monitor the curriculum being constructed and verify that specified conditions are maintained. For instance, critics might automatically alert the teacher if the time required for a one-day curriculum exceeds or falls short of the time available.
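As one concrete illustration, a time-budget critic might be written as follows; this is a sketch that assumes the simple data model outlined earlier in this chapter, and the period length and tolerance are invented values.

```python
def class_time_critic(plan, period_minutes=50, tolerance=5):
    """Fire a critique if a one-day lesson plan over- or under-fills the class period."""
    total = sum(resource.class_minutes for resource in plan.resources)
    if total > period_minutes + tolerance:
        return (f"Planned activities require {total} minutes, "
                f"more than the {period_minutes}-minute period.")
    if total < period_minutes - tolerance:
        return (f"Planned activities fill only {total} minutes "
                f"of the {period_minutes}-minute period.")
    return None  # no critique fires
```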

Scenario Step 5: Sharing New Experiences

Once a teacher has developed curricula and used them successfully in the classroom, she may want to share her creations with other teachers. This way, the pool of ideas on the Internet will grow and mature. TCA has facilities for her to annotate individual resources and curricular units at all levels with descriptions of how they worked in her classroom. This is part of the indexing of the resource or unit.

Assume that a teacher downloaded and used the “regions of a circle” resource and modified it based on her classroom experience. Now she wants to upload her version back to the Internet. The TCA Networker component automates that process, posting the new resource to an available server and adding the indexes for it to the server used for distributing new indexes. Because the indexing of her revision would be similar to that of the original version of the resource, other teachers looking at the “regions of a circle” resource would also find her version with her comments. In this way, the Internet pool of resources serves as a medium of communication among teachers about the specific resources. It is in such ways that I hope the use of the Internet for curriculum development will go far beyond today’s first steps.

What I Have Learned

I conceptualize the understanding I have reached through my work on TCA in five principles:

1.      Most resources should be located at distributed sites across the Internet, but carefully structured summaries (indexes) of them should be maintained on teachers’ local computers or in centralized catalogs.

2.      The search process should be supported through a combination of query and browsing tools that help teachers explore what is available.

3.      Adaptation of tools and resources to teachers and students is critical for developing and benefiting from constructivist curriculum.

4.      Resources must be organized into carefully designed curriculum units to provide effective learning environments.

5.      The Internet should become a medium for sharing curriculum ideas, not just accessing them.

A system to assist teachers in developing curricula for educational reform has been designed and prototyped. All aspects of the system must now be refined by working further with classroom teachers and curriculum developers. While the approach of TCA appeals to teachers who have participated in its design, its implementation must still be tuned to the realities of the classroom.

The distribution of resources and indexes prototyped in TCA has attractive advantages. Because the actual multimedia resources (text, pictures, video clips, spreadsheet templates, HyperCard stacks, software applications) are distributed across the Internet, there is no limit to the quantity or size of these resources and no need for teachers to have large computers. Resources can be posted on network servers maintained by school districts, regional educational organizations, textbook manufacturers and other agencies. Then the originating agency can maintain and revise the resources as necessary.

However, the approach advocated here faces a major institutional challenge: the standardization of resource indexing. The difficulty with this approach is the need to index every resource and to distribute these indexes to every computer that runs TCA. This involves (a) implementing a distribution and updating system for the case-base index records and (b) establishing the TCA indexing scheme as a standard.

The distribution and updating of indexes can be handled by tools within TCA and support software for major curriculum contributors. However, the standardization requires coordination among interested parties. Before any teachers can use TCA there must be useful indexed resources available on the network, with comprehensive suggested lesson plans. It is necessary to establish cooperation among federally-funded curriculum development efforts, textbook publishers, software publishers and school districts. If successful, this will establish a critical mass of curriculum on the Internet accessible by TCA. Then the Internet can begin to be an effective medium for the global sharing of locally adaptable curriculum.


2

Evolving a Learning Environment

Chapter 2 offers another fairly typical attempt to use the power of computer technology to support learning. Students need iterative practice with timely expert feedback for developing many skills, but computer-based drill and practice is not easy to implement in ways that are fun to use and educationally effective when the task involves interpreting semantics of free text. The State the Essence software used latent semantic analysis (LSA) to solve this problem. It shows how a computer can provide a partial mentoring function, relieving teachers of some of the tedium while increasing personalized feedback to students.

The software evolved through a complex interplay with its user community during classroom testing to provide effective automated feedback to students learning to summarize short texts. It demonstrates the collaboration among researchers, teachers and students in developing educational innovations. It also suggests collaborative group use of such software.

This case study is interesting not only for describing software design, implementation and adoption within a social context involving researchers, teachers and students, but also for its assessment of LSA, which is often proposed as a panacea for automated natural language understanding in CSCW and CSCL systems. It is an idea that at first appears simple and powerful, but turns out to require significant fine-tuning and a very restricted application. Success also depends upon integration into a larger activity context in which the educational issues have been carefully taken into account. In this case, the summarization skills of individual students are well defined and fairly well understood, making success possible.

Interactive learning environments promise to significantly enrich the experience of students in classrooms by allowing them to explore information under their own intrinsic motivation and to use what they discover to construct knowledge in their own words. To date, a major limitation of educational technology in pursuing this vision has been the inability of computer software to interpret unconstrained free text by students in order to interact with students without limiting their behavior and expression.

In a project at the University of Colorado’s Institute of Cognitive Science, a research group I worked in developed a system named State the Essence that provides feedback to students on summaries that they compose in their own words from their understanding of assigned instructional texts. This feedback encourages the students to revise their summaries through many drafts, to reflect on the summarization process, to think more carefully about the subject matter, and to improve their summaries prior to handing them in to the teacher. Our software uses a technology called latent semantic analysis (LSA) to compare the student summary to the original text without having to solve the more general problem of computer interpretation of free text.

LSA has frequently been described from a mathematical perspective and the results of empirical studies of its validity are widely available in the psychological literature.[2] This report on our experience with State the Essence is not meant to duplicate those other sources, but to convey a fairly detailed sense of what is involved in adapting LSA for use in interactive learning environments. To do this I describe how our software evolved through a two-year development and testing period.

In this chapter I explain how our LSA-based environment works. There is no magic here. LSA is a statistical method that has been developed by tuning a numeric representation of word meanings to human judgments. Similarly, State the Essence is the result of adapting computational and interface techniques to the performance of students in the classroom. Accordingly, this chapter presents an evolutionary view of the machinery we use to encourage students to evolve their own articulations of the material they are reading.

Section 1 of this chapter discusses the goals and background of our work. Section 2 looks at our interactive learning environment from the student perspective: the evolving student-computer interface. Section 3, the central one, “lifts the hood” to examine the multiple ways in which LSA is used to assess a student summary and formulate feedback. This raises questions about how the semantic representation that LSA maintains within our software itself evolved to the point where it can support decisions comparable to human judgments; these questions are addressed in the concluding section 4, which also summarizes our process of software design in use as a co-evolution and suggests directions for continuing development.

1. Evolution of Student Articulations

Educational theory emphasizes the importance of students constructing their own understanding in their own terms. Yet most schooling software that provides automatic feedback requires students to memorize and repeat exact wordings. Whereas the new educational standards call for developing the ability of students to engage in high-level critical thinking involving skills such as interpretation and argumentation, current software tools to tutor and test students still look for the correct answer to be expressed by a particular keyword. In the attempt to assess learning more extensively without further over-burdening teachers, schools increasingly rely upon computer scoring, typically involving multiple-choice or single-word answers. While this may be appropriate under certain conditions, it fails to assess more open-ended communication and reflection skills—and may deliver the wrong implicit message about what kind of learning is important. Because we are committed to encouraging learners to be articulate, we have tried to overcome this limitation of computer support.

The underlying technical issue involves, of course, the inability of computer software to understand normal human language. While it is simple for a program to decide if a multiple choice selection or a word entered by a student matches an option or keyword stored in the program as the correct answer, it is in general not possible for software to decide if a paragraph of English is articulating a particular idea. This is known as the problem of “natural language understanding” in the field of artificial intelligence (AI). While some researchers have been predicting since the advent of computers that the solution to this problem is just around the corner (Turing, 1950), others have argued that the problem is in principle unsolvable (Dreyfus, 1972; Searle, 1980).

The software technique known as latent semantic analysis (LSA) promises a way to finesse the problem of natural language understanding in many situations. In a number of restricted contexts, LSA has proven to be almost as good as human graders in judging the similarity of meaning of two school-related texts in English. Thus, we can use LSA to compare a student text to a standard text for semantic similarity without having to interpret the meaning of either text explicitly.

The technique underlying LSA was originally developed in response to the “vocabulary problem” in information retrieval (Furnas et al., 1987). The retrieval problem arises whenever information may be indexed using different terms that mean roughly the same thing. When one does a search using one term, it would be advantageous to retrieve the information indexed by that term’s synonyms as well. LSA maintains a representation of what words are similar in meaning to each other, so it can retrieve information that is about a given topic regardless of which related index terms were used. The representation of what words are similar in meaning may be extended to determine what texts (sentences, paragraphs, essays) are similar in topic. The way that LSA does all this should become gradually clearer as this chapter unfolds.

Because LSA has often proven to be effective in judging the similarity in meaning between texts, it occurred to us that it could be used for judging student summaries. The idea seemed startlingly simple: Submit two texts to LSA—an original essay and a student attempt to summarize that essay. The LSA software returns a number whose magnitude represents how “close” the two texts are semantically (how much they express what humans would judge as similar meanings). All that was needed was to incorporate this technique in a motivational format where the number is displayed as a score. Students would see the score and try to revise their summaries to increase their scores.
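A minimal sketch of this initial idea, in Python; the lsa_similarity parameter is a hypothetical stand-in for the LSA comparison function described later in the chapter:

from typing import Callable

def essence_score(original: str, summary: str,
                  lsa_similarity: Callable[[str, str], float]) -> float:
    # The comparison function is passed in; it stands in for the LSA
    # machinery described later in this chapter and is assumed to return
    # a cosine-like value between -1.0 and 1.0.
    cosine = lsa_similarity(original, summary)
    return round(10 * max(cosine, 0.0), 1)  # shown to the student as a 0-10 score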

In 1996, we (see Notes at end of book) were a group of cognitive scientists who had been funded to develop educational applications of LSA to support articulate learners. We were working with a team of two teachers at a local middle school. We recognized that summarization skills were an important aspect of learning to be articulate and discovered that the teachers were already teaching these skills as a formal part of their curriculum. We spent the next two years trying to implement and assess this simple sounding idea. We initially called our application “State the Essence” to indicate the central goal of summarization.

A companion paper (Kintsch et al., 2000) reports on the learning outcomes of middle school students using our software during two years of experimentation. Here I will just give one preliminary result of a more recent, informal experiment of my own, in order to indicate the potential of this approach in a different context: collaborative learning at the college level. This experiment was conducted in an undergraduate computer science course on AI. The instructor wanted to give the students a hands-on feel for LSA, so we held a class in a computer lab with access to State the Essence. Prior to class, the students were given a lengthy scholarly paper about LSA (Landauer, Foltz, & Laham, 1998) and were asked to submit summaries of two major sections of the paper as homework assignments. Once in the lab, students worked both individually and in small teams. First they submitted their homework summary to State the Essence and then revised it for about half an hour. The students who worked on part I individually worked on part II in groups for the second half hour, and vice versa.

Of course, I cannot compare the number of drafts done on-line with the original homework summaries because the latter were done without feedback and presumably without successive drafts. Nor have I assessed summary quality or student time-on-task. However, informal observation during the experiment suggests that engagement with the software maintained student focus on revising the summaries, particularly in the collaborative condition. In writing summaries of part I, collaborative groups submitted 71% more drafts than individual students—an average of 12 compared to 7. In part II (which was more difficult and was done when the students had more experience with the system) collaborative groups submitted 38% more drafts—an average of 22 drafts as opposed to 16 by individuals. Interaction with the software in the collaborative groups prompted stimulating discussions about the summarization process and ways of improving the final draft—as well as the impressive number of revisions. Computer support of collaboration opens up a new dimension for the evolution of student articulations beyond what we have focused on in our research to date. It would be important to develop interface features, feedback mechanisms and communication supports for collaboration to exploit the potential of collaborative learning.

2. Evolution of the Student-Computer Interface

What did the students view on the computer screen that was so motivating that they kept revising their summaries? The companion paper discusses in detail our shifting rationale for the design of the State the Essence interface. However, it may be useful to show here what the screen looked like after a summary draft was submitted. In the first year of our testing we built up a fairly elaborate display of feedback. Figure 2-1 shows a sample of the basic feedback.

 

Figure 2-1 goes approximately here

 

Note that the main feedback concerns topic coverage. The original text was divided into five sections with headings. The feedback indicates which sections the students’ summaries cover adequately or inadequately. A link points to the text section that needs the most work. Other indications show which sentences are considered irrelevant (off topic for all sections), and which are redundant (repeating content covered in other sentences of the student summary). In addition, spelling problems are noted. Finally, warnings are given if the summary is too long or too short. The focus of the feedback is an overall score, with a goal of getting 10 points.

The evolution of the interface was driven primarily by the interplay of two factors:

1. Our ideas for providing helpful feedback (see next section).

2. The students’ cognitive ability to take advantage of various forms of feedback (see the companion paper).

We found that there was a thin line between feedback that provides too little help and feedback that is overwhelming. The exact location of this line depends heavily upon such factors as student maturity, level of writing skills, class preparations for summarization tasks, classroom supports, and software presentation styles.

Figure 2-1. View of the early interface showing feedback from a draft summary at the bottom of the screen.

Figure 2-2. View of the later interface showing feedback from a draft summary.

For our second year, we simplified the feedback, making it more graphical and less detailed. Following a student suggestion, we renamed the system SummaryStreet. Figure 2-2 is a sample of feedback to a student summary: here the dominant feature is a series of bars, whose length indicates how well the summary covers each of the original text’s sections. The solid vertical line indicates the goal to be achieved for coverage of each section. Dashed lines indicate the results of the previous trial, to show progress. Spelling errors are highlighted within the summary text for convenient correction. The detailed information about irrelevant and redundant sentences has been eliminated and the length considerations are not presented until a student has achieved the coverage goals for every section (these different forms of feedback will be described in the next section).

 

Figure 2-2 goes approximately here

 

Naturally, the AI college students in our recent experiment were curious about how the system computed its feedback. They experimented with tricks to tease out the algorithms and to try to foil LSA. What is surprising is that many of the sixth graders did the same thing. In general, learning to use the system involves coming to an understanding of what is behind the feedback. Interacting across an interface means attributing some notion of agency to one’s communication partner. Even sixth graders know that there is no little person crouching in their computer and that it is somehow a matter of manipulating strings of characters.

3. Evolution of Feedback Techniques

So how does State the Essence figure out such matters as topic coverage? In designing the software we assumed that we had at our disposal a technology—the LSA function—that could judge the similarity in meaning between any two texts about as well as humans can agree in making such judgments. Let us accept that assumption for this section of the chapter; in the following section I will investigate the primary factors underlying this technology. When given any two texts of English words the function returns a number between –1.0 and 1.0, such that the more similar the meaning of the two texts, the higher the result returned. For instance, if we submit two identical copies of the same essay, the function will return 1.0. If we submit an essay and a summary of that essay, the function will return a number whose value is closer to 1.0 the better the summary expresses the same composite meaning as the essay itself. This section will report on how our use of the LSA function in State the Essence evolved during our research. This provides a detailed example of how the LSA technology can be adapted to an educational application.

In the course of our research we had to make a number of key strategic design decisions—and revise them periodically. (a) One was how to structure the software’s feedback to provide effective guidance to the students; the feedback had to help them think critically about their summaries, recognize possible weaknesses and discover potential improvements to try. (b) Another was how to measure the overlap in meaning between a summary and the original essay; for this we had to somehow represent the essence of the essay that we wanted the summaries to approach. (c) This led to the issue of determining “thresholds,” or cut-off values for saying when a summary had enough overlap to be accepted. (d) Finally, we had to define a feedback system that indicated clearly to the students how good their summaries were and how much they were improving. I will now review each of these design decisions and discuss how they affected the student process of refining the summary.

a. Providing Guidance

Given the LSA function, we could have developed a simple form on the Web that accepts the text of a student’s summary, retrieves the text of the original essay, submits the two texts to the function, multiplies the result of the function by 10 and returns that as the student’s score. Unfortunately, such a system would not be of much help to a student who is supposed to be learning how to compose summaries. True, it would give the student an objective measure of how well the summary expressed the same thing as the essay, but it would not provide any guidance on how to improve the summary. Providing guidance—scaffolding the novice student’s attempt to craft a summary—is the whole challenge to the educational software designer.

To design our software, we had to clearly define our pedagogical focus. We operationalized the goal of summary writing to be “coverage.” That is, a good summary is one that faithfully captures the several major points of an essay. Secondarily, a summary should cover these points concisely: in perhaps a quarter the number of words of the original.

There are other factors that we considered and tried in various versions of the software. For instance, students should progress beyond the common “copy and delete” strategy where they excerpt parts of the original verbatim and then erase words to be more concise; learning to be articulate means saying things in your own words. However, even learning to manipulate someone else’s words can be valuable. We generally felt that the most important thing was for students to be able to identify the main points in an essay. It is also necessary that students learn to use the words that they come across in an essay. For instance, a technical article on the heart and lungs has many medical terms that must be learned and that should probably be used in writing a summary. So for sixth graders, avoiding plagiarism and reducing redundancy were secondary goals behind the focus on coverage.

Spelling is always a concern, although we would not want a focus on spelling to inhibit articulation and creativity. In a software feedback system, correct spelling is effectively required, if only because misspelled words will not be recognized by the software. Other issues of composition had to be ignored in our software design. We made no attempt to provide feedback on logic or coherence of argument, literary or rhetorical style, and other aspects of expository writing. These were left for the teacher. Our system focused on helping students to “state the essence” of a given text by optimizing their coverage of the main points prior to submitting their compositions to a teacher for more refined and personal feedback. The power of LSA is limited and the limitations must be taken into account when designing its use context, balancing automated and human feedback appropriately.

b. Representing the Essence

The first question in defining our system algorithm was how to represent the main points of an essay so that we would have a basis for comparison with student summaries. Most educational essays are already fairly well structured: pages are divided into paragraphs, each of which expresses its own thought; an essay that is a couple pages long is generally divided into sections that discuss distinct aspects of the topic. For our classroom interventions, we worked closely with the teachers to select or prepare essays that were divided into four or five sections, clearly demarcated with headings. We avoided introduction or conclusion sections and assumed that each section expressed one or more of the major points of the essay as a whole. This allowed us to have the software guide the students by telling them which sections were well covered by their summaries and which were not. That is the central heuristic of our design.

So the idea is to compare a student summary with the main points of each section of the original text and then provide feedback based on this. The question is how to formulate the main points of a section for LSA comparison. There are several possible approaches (a sketch of the resulting per-section comparison follows this list):

(1) Use previously graded student summaries of text sections and determine how close a new summary is to any of the high-ranked old summaries. This method obviously only works when a text has been previously summarized by comparable students and has been carefully graded. This was not possible for most of our experiments.

(2) Have adults (researchers and/or teachers) laboriously hand-craft a “golden” summary of each section. This was our original approach. Typically, we had two summaries by the teachers and a couple by researchers; we then created one golden summary for each section that synthesized all of the ideas contained in the adult summaries. We would then use this summary as a section target. In addition, each adult’s complete set of section summaries was conglomerated for use as a target for the summary as a whole. The software compared the entire student summary to the “golden” target summary for each section and selected the highest LSA score to determine how well the student covered that section’s points. Similarly, it also compared the entire student summary to each of the expert whole summaries to compute the student’s score. That gave students a number of alternative adult summaries to target. This approach worked well. However, it required too much preparatory work. Each time we wanted to use a new essay in a classroom we would have to carefully prepare between a dozen and two dozen section summaries. This used too much teacher and researcher time and clearly would not scale up.

(3) Use the original text for comparison. This did not allow for feedback on coverage of each section.

(4) Use each section of the original text for a series of comparisons. The problem with this was setting thresholds. It is much easier to write a summary that gets a high LSA rating for some texts than it is for others. How do we know what score to consider good enough to praise or bad enough to criticize? Where adults hand-crafted expert target summaries we understood roughly what a 0.75 versus a 0.30 LSA score meant, but this was not the case for an arbitrary page of text. This led to our other major challenge: how to set standards of achievement in cases where we did not have a large base of experience.
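Here is the promised sketch of the per-section comparison common to approaches (2) and (4): the entire student summary is compared with each target text for a section (hand-crafted golden summaries in approach 2, the section itself in approach 4) and the best cosine is kept. The helper name lsa_cosine and the data layout are illustrative assumptions:

from typing import Callable, Dict, List

def section_coverage(student_summary: str,
                     section_targets: Dict[str, List[str]],
                     lsa_cosine: Callable[[str, str], float]) -> Dict[str, float]:
    # For each section, take the best match between the entire student
    # summary and any of that section's target texts.
    return {
        section: max(lsa_cosine(student_summary, target) for target in targets)
        for section, targets in section_targets.items()
    }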

c. Setting Standards

Setting thresholds is always an issue. The easier the method of defining comparison texts, the harder it is to set effective thresholds for them.

One idea we considered was to use past student summaries as a statistical basis for scoring new attempts. But that only worked for essays that had been used in past trials, and most of our experiments introduced new texts. So as an alternative to past summaries, we tried comparing hundreds of randomly selected short texts to the essay section to gain a measure of how hard the essay is to summarize (the random texts were selected from the corpus used for the LSA scaling space—see next section). We found that if a student summary does, say, four or five standard deviations better than a random text, it is probably fairly good. This approach was easy to automate and we adopted it. However, there were sometimes significant discrepancies in how hard it was for students to reach these thresholds for one essay section compared to another. We could adopt the attitude that life is just that way, and students need to learn that some things are harder to say than others. But we have some ideas on how to address this, and we will revisit the issue in section 4 as part of our plans for future work.
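A sketch of this automated thresholding, assuming random passages drawn from the scaling-space corpus and the four-to-five standard deviation criterion just mentioned; the function and parameter names are illustrative:

import statistics
from typing import Callable, List

def coverage_threshold(section_text: str,
                       random_passages: List[str],
                       lsa_cosine: Callable[[str, str], float],
                       num_deviations: float = 4.5) -> float:
    # Baseline: how well do arbitrary corpus passages "cover" this section
    # by chance?  The bar for students is set several standard deviations
    # above that chance level.
    baseline = [lsa_cosine(passage, section_text) for passage in random_passages]
    return statistics.mean(baseline) + num_deviations * statistics.pstdev(baseline)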

d. Computing the Basic Feedback

Whatever approach we use to represent the sections and the whole text for LSA comparisons and whatever method we use to set the thresholds for what is considered an adequate or an inadequate comparison, we always compare a given student draft to each section and to the whole text in order to derive a score.

In our early version of State the Essence, we took the best LSA result from comparing the student summary to each expert whole summary. We multiplied this by 10 to give a score from 0 to 10. In addition to calculating this score, we computed feedback on coverage of individual sections. For each essay section, we took the best LSA result from comparing the student summary to each expert section summary. We compared this to thresholds to decide whether to praise, accept, or require more work on the section. Praised sections increased the student’s score; criticized sections decreased it. We made additional adjustments for problems with summary length, redundancy, irrelevance, and plagiarism.
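A hedged sketch of how these comparisons might have been combined into the early feedback; the threshold values and score adjustments shown here are placeholders rather than the values actually used:

from typing import Dict, Tuple

def early_feedback(best_whole_cosine: float,
                   section_cosines: Dict[str, float],
                   praise_threshold: float = 0.7,
                   accept_threshold: float = 0.5) -> Tuple[float, Dict[str, str]]:
    score = 10 * best_whole_cosine          # base score from the whole-summary comparison
    verdicts = {}
    for section, cosine in section_cosines.items():
        if cosine >= praise_threshold:
            verdicts[section] = "praise"
            score += 0.5                     # placeholder bonus for a praised section
        elif cosine >= accept_threshold:
            verdicts[section] = "accept"
        else:
            verdicts[section] = "needs more work"
            score -= 0.5                     # placeholder penalty for a criticized section
    return max(0.0, min(10.0, score)), verdicts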

In the later version of the system, SummaryStreet, we compared the student summary draft with each section of the original text, as well as with the whole essay. The results of the LSA evaluations of the sections are compared to the automatically generated thresholds for the sections and the results are displayed graphically.

e. Refining the Summary

For a human, constructing a summary is a complex design problem with manifold constraints and sub-goals. Sixth graders vary enormously in their ability to do this and to respond to standardized guidance feedback. Telling a student that a particular section has not been covered adequately provides some guidance, but does not specify very clearly what has to be done. How does the student identify the main points of the section that are not yet covered in the summary? Primarily, the feedback points the student back to a confined part of the text for further study. The system even provides a hypertext link to that section so the student can reread it on the computer screen. The student can then try adding new sentences to the summary and resubmitting to see what happens. By comparing the results of subsequent trials, the student can learn what seems to work and what does not. The principle here is that instant and repeated feedback opportunities allow for learning through student-directed trial, with no embarrassing negative social consequences to the student for experimenting.

Repeated additions of material by a student, driven by the coverage requirement, inevitably lead to increasing length, soon exceeding the boundaries of a concise summary. In our early system, we continuously gave length feedback: a word count and a warning if the maximum length was being approached or exceeded. The composite score was also affected by excessive length, so it fluctuated in complex ways as more material was added. Dealing with the trade-off that was implicitly required between coverage and conciseness seemed to be more than most sixth graders could handle—although it might be appropriate for older students. So in our later system, SummaryStreet, we withheld the length feedback until the coverage thresholds were all met, letting the students pursue one goal at a time.

To help with the conciseness goal, we gave additional, optional feedback on relevance and repetition at the sentence level. This provided hints for the students about individual sentences in their summaries. They could view a list of sentences—or see them highlighted in their summary—that were considered irrelevant to the original essay or were considered redundant with other sentences in the summary. These lists were computed with many more LSA comparisons.

For the relevance check, each sentence in the student draft summary was compared (using LSA) with each section of the essay. A sentence whose comparison was well above the threshold for a section was praised as contributing significantly to the summary of that section. A sentence whose comparison was below the thresholds for all the sections was tagged as irrelevant.

To check for overlapping, redundant content, each sentence in the student draft summary was compared with every other sentence of the summary. Where two sentences were very highly correlated, they were declared redundant. Similarly, one could compare summary sentences with each sentence in the original to check for plagiarism, where the correlation approaches 1.0. Again, this detailed level of feedback is very difficult for most sixth graders to use effectively.
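A sketch of these sentence-level checks; the helper name lsa_cosine and the redundancy cutoff are illustrative assumptions:

from typing import Callable, Dict, List, Tuple

def sentence_level_checks(summary_sentences: List[str],
                          section_texts: Dict[str, str],
                          section_thresholds: Dict[str, float],
                          lsa_cosine: Callable[[str, str], float],
                          redundancy_cutoff: float = 0.9) -> Tuple[List[str], List[Tuple[str, str]]]:
    # A sentence is irrelevant if it falls below the threshold for every section.
    irrelevant = [
        sentence for sentence in summary_sentences
        if all(lsa_cosine(sentence, section_texts[name]) < section_thresholds[name]
               for name in section_texts)
    ]
    # Two summary sentences with a very high mutual cosine are flagged as redundant.
    redundant = [
        (first, second)
        for i, first in enumerate(summary_sentences)
        for second in summary_sentences[i + 1:]
        if lsa_cosine(first, second) > redundancy_cutoff
    ]
    return irrelevant, redundant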

A final form of feedback concerns spelling. This does not make use of the LSA function, but merely checks each word to see if it is in the lexicon that LSA uses. Because the LSA vocabulary combines a general K-12 textual corpus with documents related to the essay being summarized, most correctly spelled words used in student summaries are included in it.

As the preceding review indicates, the techniques for computing feedback in State the Essence evolved considerably over a two-year period. Perhaps most interesting is the variety of LSA computations that can be integrated into the feedback. From the original idea of doing a single LSA comparison of student summary to original essay, the system evolved to incorporate hundreds or even thousands of LSA computations. These comparisons are now used to automatically set a variety of system thresholds and to evaluate summaries at the sentence, section and holistic levels.

At least at the current state of the technology, testing and fine tuning of many factors are always necessary. The final product is an opaque system that returns reasonable feedback in about a second and seems simple. But to get to that point each component of the system had to be carefully crafted by the researchers, reviewed by the teachers and tested with students. This includes the style of the text, its division into sections, the representation of the essence of each section, the values of multiple thresholds, the presentation of the feedback and various factors discussed in the next section, including the composition of the scaling space and the choice of its dimensionality.

Another conclusion to be drawn from the history of the evolution of our techniques is the importance of tuning system feedback to the needs and abilities of the audience, rather than trying to exploit the full power that is computationally possible. I will reflect on this process in the next section, as well as take a closer look at how the LSA function can do what it does in the computations just described.

4. Co-Evolution of the Software in Use

This chapter adopts an evolutionary view of software development. The experience of our project with State the Essence can be summed up by saying that a co-evolution has taken place among the various participants. The research goals, the software features, the teacher pedagogy, the student attitudes and the classroom activities have changed remarkably over the two years. They have each changed in response to the other factors so as to adapt to each other effectively. Such an effective structural coupling (Maturana & Varela, 1987) between the development of the software and the changing behavior of the user community may constitute a significant indicator for a successful research effort.

Some of these changes and interactions among the researchers, teachers and students were documented elsewhere (Kintsch et al., 2000). The present chapter focuses more on the software development process in relation to student cognition. Section 1 argued that the educational point of the project is to promote evolution at the level of the individual student’s ability to articulate his or her understanding of instructional texts. Preliminary impressions from an experiment discussed in that section suggest that collaborative uses of the software may be even more powerful than individual uses. At a larger scale, significant changes in the classroom as a community were informally observed in the interactions during single classroom interventions as well as during the school year, even when the software use was nominally being conducted by students on an individual basis. Students tended to interact with friends around use of the software, helping each other and sharing experiences or insights. Section 2 reviewed the evolution of the software interface as it adjusted to student difficulties, and section 3 traced this back to shifts in approaches at the level of the underlying algorithms. One can go a step deeper and see the use of the basic LSA technology in our software as a product of a similar evolutionary adaptation.

Evolution of the Semantic Representation

At one level, the semantic representation at the heart of LSA is the result of a learning process. It is equivalent to the connections in AI neural networks that learn to adjust their values based on experience with training data. It can be argued that an LSA analysis of a corpus of text has learned from that corpus much of what a child learns from the corpus of text that the child is exposed to (Landauer & Dumais, 1997). One difference is that LSA typically analyzes the corpus all at once rather than sequentially, but that is not an essential difference. In certain applications it might be important for LSA to continually revise its values—to continue learning. For instance, in State the Essence it might be helpful to add new student summaries to the corpus of analyzed text as the system is used, to take into account the language of the user community as it becomes available.

The mathematical details of LSA have been described elsewhere, as have the rigorous evaluations of its effectiveness. For purposes of understanding the workings of State the Essence in a bit more depth and for appreciating both the issues that we addressed as well as those issues that remain open, it is necessary to review some of the central concepts of LSA at a descriptive level. These concepts include: scaling space, co-occurrence, dimensionality reduction, cosine measure and document representation.

Scaling space. The representation of meaning in LSA consists of a large matrix or high-dimensionality mathematical space. Each word in the vocabulary is defined as a point in this space—typically specified by a vector of about 300 coordinates. The space is a “semantic” space in the sense that words which people would judge to have similar meanings are located proportionately near to each other in the space. This space is what is generated by LSA’s statistical analysis of a corpus of text. For State the Essence, we use a large corpus of texts similar to what K-12 students encounter in school. We supplement this with texts from the domain of the essays being summarized, such as encyclopedia articles on the heart or on Aztec culture. The semantic space is computed in advance and then used as a “scaling space” for determining the mathematical representations of the words, sentences and texts of the student summaries. It may seem counter-intuitive that a mathematical analysis of statistical relations among words in written texts could capture what people understand as the meaning of those words—akin to learning language from circular dictionary definitions alone. Yet experiments have shown that across a certain range of applications, LSA-based software produces results comparable to those of foreign students, native speakers or even expert graders.

Co-occurrence. The computation of semantic similarity or nearness (in the space) of two words is based on an analysis of the co-occurrence of the two words in the same documents. The corpus of texts is defined as a large number of documents, usually the paragraphs in the corpus. Words that co-occur with each other in a large number of these documents are considered semantically related, or similar. The mathematical analysis does not simply count explicit co-occurrences, but takes full account of “latent” semantic relationships—such as two words that may never co-occur themselves but that both co-occur with the same third word or set of words. Thus, synonyms, for instance, rarely occur together but tend to occur in the same kinds of textual contexts. The LSA analysis not only takes full advantage of latent relationships hidden in the corpus as a whole, but scales similarities based on relative word frequencies. The success of LSA has shown that co-occurrence can provide an effective measure of semantic similarity for many test situations, when the co-occurrence relationships are manipulated in sophisticated ways.

Dimensionality reduction. The raw matrix of co-occurrences has a column for every document and a row for every unique word in the analyzed corpus. For a small corpus this might be 20,000 word rows x 2,000 document columns. An important step in the LSA analysis is dimensionality reduction. The representation of the words is transformed into a matrix of, say 20,000 words x 300 dimensions. This compression is analogous to the use of hidden units in AI neural networks. That is, it eliminates a lot of the statistical noise from the particular corpus selection and represents each word in terms of 300 abstract summary dimensions. The particular number 300 is somewhat arbitrary and is selected by comparing LSA results to human judgments. Investigations show that about 300 dimensions usually generate significantly better comparisons than either higher or lower numbers of dimensions. This seems to be enough compression to eliminate noise without losing important distinctions.
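A minimal sketch of this reduction as a truncated singular value decomposition, the standard machinery behind LSA; the weighting normally applied to the raw counts before the decomposition is omitted here:

import numpy as np

def reduced_word_vectors(term_document_counts: np.ndarray, k: int = 300) -> np.ndarray:
    # Rows are words, columns are documents.  Keeping only the k largest
    # singular values yields a k-dimensional vector for every word.
    u, s, _vt = np.linalg.svd(term_document_counts, full_matrices=False)
    return u[:, :k] * s[:k]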

Cosine measure. If one visualizes the LSA representation of words as a high-dimensionality mathematical space with 300 coordinate axes, then the vector representing each word can be visualized as a line from the origin to a particular point in the space. The semantic similarity of any two words can be measured as the angle between their vectors. In LSA applications like State the Essence, this angle is measured by its cosine. For two vectors pointing in nearly the same direction, with a very small angle between them, the cosine is close to 1.0. The larger the angle between two word vectors, the lower the cosine. While it might seem that nearness in a multi-dimensional space should be measured by Euclidean distance between the points, experience with LSA has shown that the cosine measure is generally the most effective. In some cases, vector length is also used (the combination of cosine and vector length is equivalent to Euclidean distance). We are considering adopting vector length measures in State the Essence as well, to avoid problems we have encountered—discussed in the next section.

Document representation. In our software, LSA is not used to compare the meanings of individual words but to assess content overlap between two documents (sentences, summaries, essay sections, whole essays). It is standard practice in LSA applications to represent the semantics of a document with the vector average of the representations of the words in the document, massaged by some factors that have proven effective empirically. Thus the two documents we are comparing are taken to be at the centroid (vector average) of their constituent words within the same scaling space as their individual words. We then use the cosine between these two centroid points as the measure of their semantic content similarity. On language theoretic grounds this may be a questionable way to compute sentence semantics. One might, for instance, argue that “there is no way of passing from the word as a lexical sign to the sentence by mere extension of the same methodology to a more complex entity” (Ricoeur, 1976, p. 7), because while words may just have senses defined by other words, sentences refer to the world outside text and express social acts. In response to such an argument, one might conjecture that the confines of our experiment protect us from the theoretical complexities. State the Essence is only looking for overlapping topic coverage between two documents. Because of this operational focus, one might speculate that it is the simple similar inclusion of topical words (or their synonyms) that produces the desired experimental effect. However, we have done some informal investigations that indicate that it is not just a matter of topical words that influences LSA’s judgments; the inclusion of the proper mix of “syntactic glue” words is important as well. Nevertheless, it may be that the LSA-computed centroid of a well-formed sentence performs on average adequately for practical purposes in the tasks we design for them because these tasks need not take into account external reference (situated deixis) or interactional social functions. For instance, we do not expect LSA to assess the rhetorical aspects of a summary.
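A sketch of the document representation and cosine measure just described, assuming a word_vectors mapping from vocabulary words to their reduced-space vectors; the empirical weighting factors mentioned above are left out:

import numpy as np
from typing import Dict

def document_centroid(text: str, word_vectors: Dict[str, np.ndarray]) -> np.ndarray:
    # Vector average of the known words in the document; words outside the
    # LSA vocabulary are simply skipped.
    vectors = [word_vectors[word] for word in text.lower().split() if word in word_vectors]
    return np.mean(vectors, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))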

This overview of the key concepts of the LSA technology suggests that LSA is not an approach that came ready-made based on some a priori principle and that can be applied automatically to every situation. Quite to the contrary, the method itself has evolved through iterative refinement, under the constant criterion of successful adaptation to comparison with human judgment. The force driving the evolution of the LSA technology as well as that of our application has always been the statistical comparison with human judgments at a performance level comparable to inter-human reliability. In its application to summarization feedback, our use of LSA has evolved significantly, into a complex combination of many LSA-based measures, blended into an interaction style carefully tuned to the intended audience through repeated user trials.

Evolution into the Future

Nor is the use of LSA in State the Essence fixed now as a result of our past work. There are a number of technical issues that must be further explored. There are also practical improvements needed if this software is to be deployed for classroom use beyond the research context.

At least four technical issues that have already been mentioned in passing need further attention: space composition, threshold automation, vector length measurement and plagiarism flagging.

Space composition. As noted, our scaling spaces for the middle school students were based on a corpus of documents that included both generic K-12 texts and domain-specific texts related to the essay being summarized. It is still not clear what the optimal mix of such texts is and the best way of combining them. Clearly, it is important to include some domain-specific material so that the space includes meaningful representations of technical terms in the essay. It is also important to have the general vocabulary of the students well represented in order to give valid feedback when they express things in their own words. The problem is that two distinct corpora of text are likely to emphasize different senses of particular words, given the considerable polysemy of English words. Mathematical techniques have been proposed for combining two LSA spaces without disrupting the latent relationships determined for each space, and we must explore these techniques under experimental conditions. The creation and testing of an LSA scaling space is the most computationally intensive and labor intensive part of preparing an intervention with State the Essence. If we are to make this learning environment available for a wide range of essays in classrooms, we must find a way of preparing effective scaling spaces more automatically.

Threshold automation. The other technical aspect that needs to be further automated is the setting of reasonable thresholds for a diversity of texts and for different age levels of students. We have already experimented with some approaches to this as described above. Yet we still find unacceptable divergences in how easy it is for students to exceed the automatically generated thresholds of different texts. We have noticed that some texts lend themselves to high LSA cosines when compared to a very small set of words—sometimes even a summary a couple of words long. These are texts whose centroid representation is very close to the representation of certain key words from the text. For instance, a discussion of Aztecs or solar energy might include primarily terms and sentences that cluster around the term “Aztec” or “solar energy.” According to LSA measurements, these texts are well summarized by an obvious word or two.

Vector length measurement. We suspect that the use of both vector lengths and cosines to measure overlapping topic coverage between two texts will address the threshold problem just discussed—at least partially. But we need to experiment with this. The rationale for this approach is that vector length corresponds to how much a text has to say on a given topic, whereas cosine corresponds to what the topic is. Thus, a document consisting of the single word “Aztec” might be close to the topic of an essay on the Aztecs and therefore have a high cosine, but it would not be saying much about the topic and thus would have a small vector length. The inclusion of vector lengths within LSA-based judgments would allow State the Essence to differentiate between a quick answer and a more thoughtful or complete summary. Here, again, the software must evolve in response to tricks that students might use to achieve high scores without formulating quality summaries.
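A small illustration of this rationale, reusing the centroid and cosine sketch above; how the two measures would actually be combined in State the Essence remains an open design question:

import numpy as np

def topic_and_substance(summary_vec: np.ndarray, essay_vec: np.ndarray):
    # Cosine: is the summary on the essay's topic?  Vector length: a rough
    # indicator, per the rationale above, of how much the summary has to
    # say about that topic.
    topic_match = float(np.dot(summary_vec, essay_vec) /
                        (np.linalg.norm(summary_vec) * np.linalg.norm(essay_vec)))
    substance = float(np.linalg.norm(summary_vec))
    return topic_match, substance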

Plagiarism flagging. Of course, the simplest way to get a good LSA score is to just copy the whole essay as one’s summary. This is a winning strategy for topic coverage. The length is too long, so one must then cut the unnecessary details. Here, the sixth grader faces a task that requires distinguishing essential points from inessential details—a task that many sixth graders must still learn. A related alternative approach is to copy topic sentences from each section or paragraph of the original and use them for one’s summary. Again, this requires an important skill that State the Essence is intended to help teach: identifying topic ideas. So, it is probably a decision best left to the teacher to decide how much copying of vocabulary, phrases and even whole sentences is acceptable in a given exercise. Perhaps for older students, such as college undergraduates, the system should object to any significant level of plagiarism. It is still necessary to define the boundaries of what one considers to be plagiarism, such as reusing and/or reordering sentence clauses. In the end, such matters may have to be automatically flagged for subsequent teacher review and judgment.

Of course, there is still much else to do before State the Essence is ready for widespread deployment. In addition to wrapping up these open research issues and continuing to refine the system’s functionality and interface, there is the whole matter of packaging the software for easy use by teachers and of integration with curriculum. Another possibility is to include State the Essence as a tool within larger interactive learning environments like CSILE (van Aalst et al., 1999) or WebGuide (see chapter 6). Perhaps all that can be said now is that we have taken State the Essence far enough to suggest its potential educational utility and to demonstrate how LSA technology can be integrated into an interactive, constructivist, student-centered approach to facilitating student articulation.

 

 


3

Armchair Missions to Mars

The matching of people to form groups that will work together closely over periods of time is a subtle task. The Crew software described in this study aimed to advise NASA planners on the selection of teams of astronauts for long missions. The problem of group formation is an important one for computer support of collaboration in small groups, but one that has not been extensively investigated.

This study explores the application of case-based reasoning to this task. This software adapted a variety of AI techniques in response to this complex problem entailing high levels of uncertainty. Like the previous chapter’s task of analyzing student writing and the following chapter’s task of managing intertwined hypertext perspectives, this involved tens of thousands of calculations—illustrating how computers can provide computational support that would not otherwise be conceivable.

1. Modeling a Team of Astronauts

The prospect of a manned mission to Mars has been debated for 25 years—since the first manned landing on the moon (American Astronomical Society, 1966). It is routinely argued that this obvious next step in human exploration is too costly and risky to undertake, particularly given our lack of experience with lengthy missions in space (McKay, 1985).

Around 1993, planners at NASA (the National Aeronautics and Space Administration—the US space agency) were concerned about interpersonal issues in astronaut crew composition. The nature of astronaut crews was beginning to undergo significant change. In the past, astronauts had been primarily young American males with rigorous military training; missions were short and crews were small. Prior to a mission, a crew trained together for about a year, so that any interpersonal conflicts could be worked out in advance. The future, however, promised crews that would be far less homogeneous and regimented: international crews speaking different languages, mixed gender, inter-generational, larger crews and longer missions. This was the start of Soviet-American cooperation and planning for an International Space Station. While there was talk of a manned expedition to Mars, the more likely scenario was the creation of an international Space Station with six-month crew rotations.

There was not much experience with the psychology of crews confined in isolated and extreme conditions for months at a time. Social science research on the effects of such missions on crew members had focused on experience in analog missions under extreme conditions of isolation and confinement, such as Antarctic winter-overs, submarine missions, orbital space missions and deep sea experiments (Harrison, Clearwater, & McKay, 1991). This research had produced few generalized guidelines for planning a mission to Mars or an extended stay aboard a space station (Collins, 1985).

The data from submarines and Antarctic winter-overs was limited, inappropriately documented and inconsistent. NASA was beginning to conduct some experiments where they could collect the kinds of data they needed. But they required a way of analyzing such data, generalizing it and applying it to projected scenarios.

Computer simulation of long missions in space can provide experience and predictions without the expense and risk of actual flights. Simulations are most helpful if they can model the behavior of key psychological factors of the crew over time, rather than simply predicting overall mission success. Because of the lack of experience with interplanetary trips and the problems of generalizing and adapting data from analog missions, it was not possible to create a set of formal rules adequate for building an expert system to model an extended mission such as this.

NASA wanted a way of predicting how a given crew—with a certain mix of astronauts—might respond to mission stress under different scenarios. This would require a complex model with many parameters. There would never be enough relevant data to derive the parameter values statistically. Given the modest set of available past cases, the method of case-based reasoning suggested itself (Owen, Holland, & Wood, 1993). A case-based system requires (1) a mechanism for retrieving past cases similar to a proposed new case and (2) a mechanism for adapting the data of a retrieved case to the new case based on the differences between the two (Riesbeck & Schank, 1989).

For the retrieval mechanism, my colleagues at Owen Research and I defined a number of characteristics of astronauts and missions. The nature of our data and these characteristics raised several issues for retrieval and we had to develop innovative modifications of the standard case-based reasoning algorithms, as described in detail below.

For the adaptation mechanism, I developed a model of the mission based on a statistical approach known as interrupted time series analysis (McDowall et al., 1980). Because there was too little empirical data to differentiate among all possible options, the statistical model had to be supplemented with various adaptation rules. These rules of thumb were gleaned from the social science literature on small-group interactions under extreme conditions of isolation and confinement. The non-quantitative nature of these rules lends itself to formulation and computation using a mathematical representation known as fuzzy logic (Cox, 1994).

The application domain presented several technical issues for traditional case-based reasoning: there is no natural hierarchy of parameters to use in optimizing installation and retrieval of cases, and there are large variations in behavior among similar missions. These problems were addressed by custom algorithms to keep the computations tractable and plausible. Thus, the harnessing of case-based reasoning for this practical application required the crafting of a custom, hybrid system.

We developed a case-based reasoning software system named Crew. Most of the software code consisted of the algorithms described in this chapter. Because Crew was intended to be a proof-of-concept system, its data entry routines and user interface were minimal. The user interface consisted of a set of pull-down menus for selecting a variety of testing options and a display of the results in a graph format (see figure 3-1). Major steps in the reasoning were printed out so that one could study the automated reasoning process.

 

Figure 3-1 goes approximately here

 

We were working with staff at the psychology labs of NASA’s astronaut support division, so we focused on psychological factors of the crew members, such as stress, morale and teamwork. NASA had begun to collect time series psychological data on these factors by having crew members in space and analog missions fill out a survey on an almost daily basis. As of the conclusion of our project (June 1995), NASA had analyzed data from an underwater mission designed to test their data collection instrument, the IFRS (Individualized Field Recording System) survey, and was collecting data from several Antarctic traverses. The IFRS survey was scheduled to be employed on a joint Soviet-American shuttle mission. Its most likely initial use would be as a tool for helping to select crews for the international Space Station.

Our task was to design a system for incorporating eventual IFRS survey results in a model of participant behavior on long-term missions. Our goal was to implement a proof-of-concept software system to demonstrate algorithms for combining AI techniques like case-based reasoning and fuzzy logic with a statistical model of IFRS survey results and a rule-base derived from the existing literature on extreme missions.

By the end of the project, we successfully demonstrated that the time series model, the case-based reasoning and the fuzzy logic could all work together to perform as designed. The system could be set up for specific crews and projected missions and it would produce sensible predictions quickly. The next step was to enter real data that NASA was just beginning to collect. Because of confidentiality concerns, this had to be done within NASA, and we turned over the software to them for further use and development.

This chapter reports on our system design and its rationale. After (1) this introduction, I present (2) the time series model, (3) the case-based reasoning system, (4) the case retrieval mechanism, (5) the adaptation algorithm, (6) the fuzzy logic rules and (7) our conclusions. The Crew system predicts how crew members in a simulated mission would fill out their IFRS survey forms on each day of the mission; that is, how they would self-report indicators of stress, motivation, etc. As NASA collects and analyzes survey data, the Crew program can serve as a vehicle for assembling and building upon the data—entering empirical cases and tuning the rule-base. Clearly, the predictive power of Crew will depend upon the eventual quantity and quality of the survey data.

 

Figure 3-1. A view of the Crew interface. Upper left allows selection of mission characteristics. Menu allows input of data. Lower left shows magnitude of a psychological factor during 100 points in the simulated mission. To the right is a listing of some of the rules taken into account.

2. Modeling the Mission Process

NASA is interested in how psychological factors such as those tracked in the IFRS surveys evolve over time during a projected mission’s duration. For instance, it is not enough to know what the average stress level will be of crew members at the end of a nine-month mission; we need to know if any crew member is likely to be particularly stressed at a critical point in the middle of the mission, when certain actions must be taken. To obtain this level of prediction detail, I created a time series model of the mission.

The model is based on standard statistical time series analysis. McDowall, et al. (1980) argue for a stochastic ARIMA (Auto Regressive Integrated Moving Average) model of interrupted time series for a broad range of phenomena in the social sciences. The most general model takes into account three types of considerations: (1) trends, (2) seasonality effects and (3) interventions. An observed time series is treated as a realization of a stochastic process; the ideal model of such a process is statistically adequate (its residuals are white noise) and parsimonious (it has the fewest parameters and the greatest number of degrees of freedom among all statistically equivalent models).

(1) Trends. The basic model takes into account a stochastic component and three structural components. The stochastic component conveniently summarizes the multitude of factors that produce the variation observed in a series, which cannot be accounted for by the model. At each time t there is a stochastic component a(t) which cannot be accounted for any more specifically. McDowall, et al. claim that most social science phenomena are properly modeled by first-order ARIMA models. That is, the value Y(t) of the time series at time t may be dependent on the value of the time series or of its stochastic component at time t-1, but not (directly) on the values at any earlier times. The first-order expressions for the three structural components are:

autoregressive: Y(t) = a(t) + f·Y(t-1)

differenced: Y(t) = a(t) + Y(t-1)

moving average: Y(t) = a(t) + q·a(t-1)

I have combined these formulae to produce a general expression for all first-order ARIMA models:

Y(t) = a(t) + f·Y(t-1) + q·a(t-1)

This general expression makes clear that the model can take into account trends and random walks caused by the inertia (or momentum) of the previous moment’s stochastic component or by the inertia of the previous moment’s actual value.

(2) Seasonality. Many phenomena (e.g., in economics or nature) have a cyclical character, often based on the 12-month year. It seems unlikely that such seasonality effects would be significant for NASA missions; the relevant cycles (daily and annual) would be too small or too large to be measured by IFRS time series data.

(3) Interventions. External events are likely to impact upon modeled time series. Their duration can be modeled as exponential decay, where the nth time period after an event at time e will have a continuing impact of Y(e+n) = d^n · w, where 0 <= d <= 1. Note that if d = 0 then there is no impact after the event and if d = 1 then there is a permanent impact. Thus, d is a measure of the rate of decay and w is a measure of the intensity of the impact.

I have made some refinements to the standard time series equations, in order to tune them to our domain and to make them more general. First, the stochastic component, a_i(t), consists of a mean value, m_i(t), and a normal distribution component governed by a standard deviation, s_i(t). Second, mission events often have significant effects of anticipation. In general, an event j of intensity w_ij at time t_j will have a gradual onset at a rate e_ij during times t < t_j as well as a gradual decay at a rate d_ij during times t > t_j. The following equation incorporates these considerations:

where:

Y_i(t) = value of factor i for a given actor in a given mission at mission time t

t_j = time of occurrence of the jth of n intervening events in the mission

a = noise: a value is generated randomly with mean m and standard deviation s

m = mean of noise value                                    0 <= m <= 10

s = standard deviation of noise                         0 <= s <= 10

f = momentum of value                                    -1 <= f <= 1

q = momentum of noise                                    -1 <= q <= 1

e = rise rate of interruption                                0 <= e <= 1

d = decay rate of interruption                            0 <= d <= 1

w = intensity of interruption                               -10 <= w <= 10

The model works as follows: using IFRS survey data for a given question answered by a given crew member throughout a given mission, and knowing when significant events occurred, one can use standard statistical procedures to derive the parameters of the preceding equation: m, s, f and q as well as e, d and w for each event in the mission. Then, conversely, one can use these parameters to predict the results of a new proposed mission. Once one has obtained the parameters for a particular psychological factor, a crew member and each event, one can predict the values that crew member would enter for that survey question i at each time period t of the mission by calculating the equation with those parameter values.
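Since the full model equation is not reproduced above, the following Python sketch reconstructs its apparent form from the parameter definitions and the description of anticipation and decay; it is an interpretation for illustration, not the system’s actual code, and all names are illustrative.

```python
import random

def crew_factor_series(periods, m, s, f, q, events):
    """Sketch of the factor model described above: first-order stochastic terms
    plus intervention terms that rise at rate e before each event and decay at
    rate d after it.  events is a list of (t_j, w, e, d) tuples."""
    series = []
    prev_y, prev_a = m, 0.0
    for t in range(periods):
        a = random.gauss(m, s)              # noise with mean m, std deviation s
        y = a + f * prev_y + q * prev_a     # momentum of value and of noise
        for (t_j, w, e, d) in events:
            if t < t_j:
                y += w * (e ** (t_j - t))   # anticipation of the coming event
            else:
                y += w * (d ** (t - t_j))   # decaying aftermath of the event
        y = max(0.0, min(10.0, y))          # survey factors are rated 0 to 10
        series.append(y)
        prev_y, prev_a = y, a
    return series

# e.g., a 100-period mission with one mid-mission emergency:
# crew_factor_series(100, m=4, s=1, f=0.3, q=0.2, events=[(50, 3, 0.5, 0.8)])
```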

This model allows us to enter empirical cases into a case base by storing the parameters for each factor (i.e., a psychological factor for a given crew member during a given mission) or event (i.e., an intervention event in the given factor time series) with a description of that factor or event. To make a time series prediction of a proposed factor with its events, I retrieve a similar case, adapt it for differences from the proposed case, and compute its time series values from the model equation.

3. Using Case-Based Reasoning

The time series model is quite complex in terms of the number of variables and factors. It must produce different results for each time period, each kind of mission, each crew member personality, each question on the IFRS survey and each type of intervention event. To build a rule-based expert system, we would need to acquire thousands of formal rules capable of computing predictive results for all these combinations. But there are no experts on interplanetary missions who could provide such a set of rules. Nor is there data that could be analyzed to produce these rules. So we took a case-based reasoning approach. We take actual missions—including analog missions—and compute the parameters for their time series.

Each survey variable requires its own model (values for parameters m, s, f and q), as does each kind of event (values for parameters e, d and w). Presumably, the 107 IFRS survey questions can be grouped into several factors—although this is itself an empirical question. We chose six psychological factors that we thought underlay the IFRS questionnaire: crew teamwork, physical health, mental alertness, psychological stress, psychological morale and mission effectiveness. In addition, we selected a particular question from the survey that represented each of these factors. The Crew system currently models these twelve factors: six composites and six specific IFRS questions.

There is no natural taxonomy of events. Our approach assumes that there are categories of events that can be modeled consistently as interventions with exponential onsets and decays at certain impact levels and decay rates. Based on the available data, we decided to model eight event types: start of mission, end of mission, emergency, conflict, contact, illness, discovery and failure.

The case-base consists of instances of the 12 factors and the 8 event types. Each instance is characterized by its associated mission and crew member, and is annotated with its parameter values. Missions are described by 10 characteristics (variables), each rated from 0 to 10. The mission characteristics are: harshness of environment, duration of mission, risk level, complexity of activities, homogeneity of crew, time of crew together, volume of habitat, crew size, commander leadership and commander competence. Crew member characteristics are: role in crew, experience, professional status, commitment, social skills, self reliance, intensity, organization, sensitivity, gender, culture and voluntary status. In addition, events have characteristics: event type, intensity and point in mission.

Because only a small handful of cases of actual IFRS data is available at present, additional cases are needed to test and to demonstrate the system. Approximate models of time series and interventions can be estimated based on space and analog missions reported in the literature, even if raw time series data is not available to derive the model statistically. Using these, we generated and installed supplemental demo cases by perturbing the variables in these cases and adjusting the model parameters in accordance with rules of thumb gleaned from the literature on analog missions. This database is not rigorously empirical, but it should produce plausible results during testing and demos. Of course, the database can be recreated at a later time when sufficient real data is available. At that point, NASA might change which factor and event types to track in the database, or the set of variables to describe them. Then the actual case data would be analyzed using interrupted time series analysis to derive empirical values for m, s, f and q for the factors.

Users of Crew enter a scenario of a proposed mission, including crew composition and mission characteristics. They also enter a series of n anticipated events at specific points in the mission period. From the scenario, the system computes values for m, s, f and q for each behavioral factor. For events j = 1 through n, it computes values for dj, ej and wj. The computation of parameters is accomplished with case-based reasoning rather than statistically. The missions or events in the case-base that most closely match the hypothesized scenario are retrieved. The parameters associated with the retrieved cases are then adjusted for differences between the proposed and retrieved cases, using rules of thumb formulated in a rule-base for this purpose. Then, using the model equation, Crew computes values of Yt for each behavioral factor at each time slice t in the mission. These values can be graphed to present a visual image of the model’s expectations for the proposed mission. Users can then modify their descriptions of the crew, the mission scenario and/or the sequence of events and re-run the analysis to test alternative mission scenarios.

Crew is basically a database system, with relational files storing variable values and parameter values for historical cases, along with rules for case adaptation. For this reason it was developed in the FoxPro database management system rather than in Lisp, as originally planned. FoxPro is extremely efficient at retrieving items from indexed database files, so Crew can be scaled up to arbitrarily large case-bases with virtually no degradation in processing speed. Crew runs on Macintosh and Windows computers.

4. The Case Retrieval Mechanism

A key aspect of case-based reasoning (CBR) is its case retrieval mechanism. The first step in computing predictions for a proposed new case is to retrieve one or more similar cases from the case base. CBR adopts the dynamic memory approach to human recall proposed by Schank (1982).

As demonstrated in exemplary CBR systems (Riesbeck & Schank, 1989), this involves a hierarchical storage and retrieval arrangement. Thus, to retrieve the case most similar to a new case, one might, for instance, follow a tree of links that begins with the mission characteristic “harshness of environment.” Once the link corresponding to the new case’s environment was chosen, the link for the next mission characteristic would be chosen, and so on until one arrived at a particular case. The problem with this method is that not all domains can be meaningfully organized in such a hierarchy. Kolodner (1993) notes that some CBR systems need to define non-hierarchical retrieval systems. In the domain of space missions, there is no clear priority of characteristics for establishing similarity of cases.

A standard non-hierarchical measure of similarity is the n-dimensional Euclidean distance, which compares two cases by summing the squares of the differences between corresponding variable values. The problem with this method is that it is intractable for large case-bases, because one must compare a new case with every case in the database.

Crew adopts an approach that avoids the need to define a strict hierarchy of variables as well as the ultimately intractable inefficiency of comparing a new case to each historic case. It prioritizes which variables to compare initially in order to narrow down to the most likely neighbors using highly efficient indices on the database files. But it avoids strict requirements even at this stage.

The retrieval algorithm also responds to another problem of the space mission domain that is discussed in the section on adaptation below; namely, the fact that there are large random variations among similar cases. This problem suggests finding several similar cases instead of just one to adapt to a new case. The case retrieval algorithm in Crew returns n nearest neighbors, where n is a small number specified by the user. Thus, parameters for new cases can be computed using adjusted values from several near neighbors, rather than just from the one nearest neighbor as is traditional in CBR. This introduces a statistical flavor to the computation in order to soften the variability likely to be present in the empirical case data.

The case retrieval mechanism consists of a procedure for finding the n most similar factors and a procedure for finding the n most similar events, given a proposed factor or event, a number n and the case-base file. These procedures, in turn, call various sub-procedures. Each of the procedures is of computational order n, where n is the number of neighbors sought, so it will scale up with no problem for case bases of arbitrary size. Here are outlines of typical procedures:

 

nearest_factor(new_factor, n, file)

1. find all factor records with the same factor type, using a database index

2. of these, find the 4n with the nearest_mission

3. of these, find the n with the nearest_actor

 

nearest_mission (new_mission, n, file)

1. find all mission records with environment = new mission’s environment ± 1 using an index

2. if less than 20n results, then find all mission records with environment = new mission’s environment ± 2 using an index

3. if less than 20n results, then find all mission records with environment = new mission’s environment ± 3 using an index

4. of these, find the 3n records with minimal |mission’s duration - new mission’s duration| using an index

5. of these, find the n records with minimal Σ dif_i²

 

nearest_actor (new_actor, n, file)

1. find up to n actor records with minimal Σ dif_i²

 

Note that in these procedures there is a weak sense of hierarchical ordering. It is weak in that it includes only a couple of levels and usually allows values that are not exactly identical, depending on how many cases exist with identical matches. Note, too, that the n-dimensional distance approach is used (indicated by “minimal Σ dif_i²”), but only with 3n cases, where n is the number of similar cases sought. The only operations that perform searches on significant portions of the database are those that can be accomplished using file indexes. These operations are followed by procedures that progressively narrow down the number of cases. Thereby, a balance is maintained that avoids both rigid prioritizing and intractable computations.
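A minimal in-memory sketch of this multi-stage narrowing strategy follows; the real system works against indexed FoxPro files, so the dictionary-based cases and function names here are assumptions for illustration only.

```python
def squared_distance(case_a, case_b, variables):
    """Sum of squared differences over the given descriptive variables."""
    return sum((case_a[v] - case_b[v]) ** 2 for v in variables)

def nearest_missions(new_mission, n, missions, variables):
    """Narrow by environment (widening the tolerance if too few matches), then
    by duration, then by full squared distance -- mirroring the outline above."""
    for tolerance in (1, 2, 3):
        candidates = [m for m in missions
                      if abs(m["environment"] - new_mission["environment"]) <= tolerance]
        if len(candidates) >= 20 * n:
            break
    candidates.sort(key=lambda m: abs(m["duration"] - new_mission["duration"]))
    candidates = candidates[:3 * n]
    candidates.sort(key=lambda m: squared_distance(m, new_mission, variables))
    return candidates[:n]
```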

Case-based reasoning often imposes a hierarchical priority to processing that is hidden behind the scenes. It makes case retrieval efficient without exposing the priorities to scrutiny. The preceding algorithms employ a minimum of prioritizing. In each instance, priorities are selected that make sense in the domain of extreme missions based on our understanding of the relevant literature and discussions with domain experts at NASA. Of course, as understanding of the domain evolves with increased data and experience, these priorities will have to be reviewed and adjusted.

5. The Adaptation Algorithm

Space and analog missions exhibit large variations in survey results due to the complexity and subjectivity of the crew members’ perceptions as recorded in survey forms. Even among surveys by different crew members on relatively simple missions with highly homogeneous crews, the recorded survey ratings varied remarkably. To average out these effects, Crew retrieves n nearest neighbors for any new case, rather than the unique nearest one as is traditional in CBR. The value of n is set by the user.

The parameters that model the new case are computed by taking a weighted average of the parameters of the n retrieved neighbors. The weight used in this computation is based on a similarity distance of each neighbor from the new case. The similarity distance is the sum of the squares of the differences between the new and the old values of each variable. So, if the new case and a neighbor differed only in that the new case had a mission complexity rating of 3 while the retrieved neighbor had a mission complexity rating of 6, then the neighbor’s distance would be (6-3)2 = 9.

The weighting actually uses a term called importance that is defined as (sum - distance)/(sum * (n-1)), where distance is the distance of the current neighbor as just defined, and sum is the sum of the distances of the n neighbors. This weighting gives a strong preference to neighbors that are very near to the new case, while allowing all n neighbors to contribute to the adaptation process.
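A small sketch of this weighting scheme, assuming the distances of the n retrieved neighbors have already been computed (function names are illustrative):

```python
def importance_weights(distances):
    """Weights from the 'importance' formula above:
    importance_k = (sum - distance_k) / (sum * (n - 1)).
    The weights of the n neighbors add up to 1, with nearer neighbors
    weighted more heavily."""
    n = len(distances)
    total = sum(distances)
    if n < 2 or total == 0:
        return [1.0 / n] * n          # degenerate cases: weight all neighbors equally
    return [(total - d) / (total * (n - 1)) for d in distances]

def adapted_parameter(neighbor_values, distances):
    """Weighted average of one model parameter (e.g., f or q) over the n neighbors."""
    weights = importance_weights(distances)
    return sum(w * v for w, v in zip(weights, neighbor_values))

# e.g., neighbors at distances 9 and 1 contribute with weights 0.1 and 0.9:
# adapted_parameter([0.4, 0.8], [9, 1]) -> 0.76
```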

6. Rules and Fuzzy Logic

Once n similar cases have been found, they must be adapted to the new case. That is, we know the time series parameters for the similar old cases and we now need to adjust them to define parameters for the new case, taking into account the differences between the old and the new cases. Because the database is relatively sparse, it is unlikely that we will retrieve cases that closely match a proposed new case. Adaptation rules play a critical role in spanning the gap between the new and the retrieved cases.

The rules have been generated by our social science team, which has reviewed much of the literature on analog missions and small-group interactions under extreme conditions of isolation and confinement, e.g., (Radloff & Helmreich, 1968). They have determined what variables have positive, negligible or negative correlations with which factors. They have rated these correlations as either strong or weak. The Crew system translates the ratings into percentage correlation values. For instance, the rule, “teamwork is strongly negatively correlated with commander competence” would be encoded as a -80% correlation between the variable commander competence and the factor teamwork.

What follow are examples of the general way that the rules function in Crew. One rule, for instance, is used to adjust predicted stress for a hypothetical mission of length new-duration from the stress measured in a similar mission of length old-duration. Suppose that the rule states that the correlation of psychological stress to mission duration is +55%. All mission factors, such as stress, are coded on a scale of 0 to 10. Suppose that the historic mission had its duration variable coded as 5 and a stress factor rating of 6, and that the hypothetical mission has a duration rating of 8. We use the rule to adapt the historic mission’s stress rating to the hypothetical mission given the difference in mission durations (assuming all other mission characteristics to be identical). Now, the maximum that stress could be increased and still be on the scale is 4 (from 6 to 10); the new-duration is greater than the old by 60% (8 - 5 = 3 of a possible 10 - 5 = 5); and the rule states that the correlation is 55%. So the predicted stress for the new case is greater than the stress for the old case by: 4 x 60% x 55% = 1.32—for a predicted stress of 6 + 1.32 = 7.32. Using this method of adapting outcome values, the values are proportional to the correlation value, to the difference between the new and old variable values and to the old outcome value, without ever exceeding the 0 to 10 range.
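The arithmetic of this worked example can be expressed as a short sketch (covering only the increase case spelled out in the text; the function name is illustrative):

```python
def adapt_factor_upward(old_value, old_var, new_var, correlation, scale_max=10):
    """Reproduce the adaptation arithmetic of the worked example: the change is
    proportional to the remaining headroom on the 0-10 scale, to the relative
    increase in the descriptive variable, and to the rule's correlation."""
    headroom = scale_max - old_value                              # 10 - 6 = 4
    relative_diff = (new_var - old_var) / (scale_max - old_var)   # 3 / 5 = 0.6
    return old_value + headroom * relative_diff * correlation     # 6 + 1.32 = 7.32

# adapt_factor_upward(6, 5, 8, 0.55) returns 7.32, matching the example above.
```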

There are many rules needed for the system. Rules for adapting the four parameters (m, s, f and q) of the 12 factors are needed for each of the 22 variables of the mission and actor descriptions, requiring 1056 rules. Rules for adapting the three parameters (e, d and w) of the 8 event types for each of the 12 factors are needed for each of the 24 variables of the mission, actor and intervention descriptions, requiring 6912 rules. Many of these 7968 required rules have correlations of 0, indicating that a difference in the given variable has no effect on the particular parameter.

The rules gleaned from the literature are rough descriptions of relationships rather than precise functions. Because so many rules are applied in a typical simulation, it was essential to streamline the computations. We therefore made the simplifying assumption that all correlations are linear across the full range of possible differences between old and new variable values, with only the strength of the correlation varying from rule to rule.

However, it is sometimes the case that such rules apply more or less depending on values of other variables. For instance, the rule “teamwork is strongly negatively correlated with commander competence” might be valid only if “commander leadership is very low and the crew member’s self reliance is low.” This might capture the circumstance in which a commander who can do everything himself is weak at leading others to do the work, while the crew remains reliant on him. It might generally be good for a commander to be competent, but problematic under the special condition that he is a poor leader and that the crew lacks self reliance.

Note that the original rule has to do with the difference of a given variable (commander competence) between the old and the new cases, while the condition on the rule has to do with the absolute value of variables (commander leadership, crew member’s self-reliance) in the new case. Crew uses fuzzy logic (Cox, 1994) to encode the conditions. This allows the conditions to be stated in English-language terms, using values like “low,” “medium” or “high,” modifiers like “very” or “not,” and the connectives “and” and “or.” The values like “low” are defined by fuzzy set membership functions, so that if the variable is 0 it is considered completely low, but if it is 2 it is only partially low. Arbitrarily complex conditions can be defined. They compute to a numeric value between 0 and 1. This value of the condition is then multiplied by the value of the rule, so that the rule is only applied to the extent that the condition holds.
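A minimal sketch of how such a fuzzy condition might be evaluated is given below; the membership breakpoints and the treatment of “very” follow common fuzzy-logic conventions and are assumptions, not Crew’s actual definitions.

```python
def low(x):
    """Fuzzy membership for 'low' on a 0-10 scale: fully low at 0, not low at 5+.
    (The breakpoints are illustrative guesses.)"""
    return max(0.0, min(1.0, (5.0 - x) / 5.0))

def very(membership):
    """Intensifier: 'very X' as X squared, a common fuzzy-logic convention."""
    return membership ** 2

def fuzzy_and(*memberships):
    """Conjunction as the minimum of its operands."""
    return min(memberships)

# "commander leadership is very low and the crew member's self reliance is low"
def condition(leadership, self_reliance):
    return fuzzy_and(very(low(leadership)), low(self_reliance))

# The conditional rule's -80% correlation is applied only to the degree that
# the condition holds, e.g.:
# effective_correlation = -0.80 * condition(leadership=1, self_reliance=2)  # -0.48
```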

The combination of many simple linear rules and occasional arbitrarily complex conditions on the rules provides a flexible yet computationally efficient system for implementing the rules found in the social science literature. The English language statements by the researchers are translated reasonably into numeric computations by streamlined versions of the fuzzy logic formalism, preserving sufficient precision considering the small effect that any given rule or condition has on the overall simulation.

7. Conclusions and Future Work

The domain of space missions poses a number of difficulties for the creation of an expert system:

·        Too little is known to generalize formal rules for a rule-based system.

·        A model of the temporal mission process is needed more than just a prediction of final outcomes.

·        The descriptive variables cannot be put into a rigid hierarchy to facilitate case-based retrieval.

·        The case-base is too sparse and too variable for reliable adaptation from one nearest neighbor case.

·        The rules that can be gleaned from available data or relevant literature are imprecise.

Therefore, we have constructed a hybrid system that departs in several ways from traditional rule-based as well as classic case-based systems. Crew creates a time series model of a mission, retrieving and adapting the parameters of the model from a case base. The retrieval uses a multi-stage algorithm to maintain both flexibility and computational tractability. An extensive set of adaptation rules overcomes the sparseness of the case base, with the results of several nearest neighbors averaged together to avoid the unreliability of individual cases.

Our proof-of-concept system demonstrates the tractability of our approach. For testing purposes, Crew was loaded with descriptions of 50 hypothetical missions involving 62 actors. This involved 198 intervention parameters, 425 factor parameters and 4,047 event parameters. Based on our reading of the relevant literature, 7,968 case adaptation rule correlation figures were entered. A number of fuzzy logic conditions were also included for the test cases. Given a description of a crew member and a mission, the Crew system predicts a series of one hundred values of a selected psychological factor in a minute or two on a standard desktop computer.

Future work includes expanding the fuzzy logic language syntax to handle more subtle rules. Our impression from conflicting conclusions within the literature is that it is unlikely that many correlation rules hold uniformly across entire ranges of their factors.

We would also like to enhance the explanatory narrative provided by Crew in order to increase its value as a research assistant. We envision our system serving as a tool to help domain experts select astronaut crews, rather than as an automated decision maker. People will want to be able to see and evaluate the program’s rationale for its predictions. This would minimally involve displaying the original sources of cases and rules used by the algorithms. The most important factors should be highlighted. In situations strongly influenced by case adaptation rules or fuzzy logic conditions derived from the literature, it would be helpful to display references to the sources of the rules if not the relevant excerpted text itself.

Currently, each crew member is modeled independently; it is undoubtedly important to take into account interactions among them as well. While crew interactions indirectly affect survey results of individual members (especially to questions like: How well do you think the crew is working together today?), additional data would be needed to model interactions directly. Two possible approaches suggest themselves: treating crew interaction as a special category of event or subjecting data from crew members on a mission together to statistical analyses to see how their moods, etc. affect one another. Taking interactions into account would significantly complicate the system and would require data that is not currently systematically collected.

Use of the system by NASA personnel will suggest changes in the variables tracked and their relative priority in the processing algorithms; this will make end-user modifiability facilities desirable. In order to quickly develop a proof-of-concept system, we hard-coded many of the algorithms described in this chapter. However, some of these algorithms make assumptions about, for instance, what are the most important factors to sort on first. As the eventual system users gain deeper understanding of mission dynamics, they will want to be able to modify these algorithms. Future system development should make that process easier and less fragile.

Data about individual astronauts, about group interactions and about mission progress at a detailed level is not public information. For a number of personal and institutional reasons, such information is closely guarded. Combined with the fact that NASA was just starting to collect the kind of time series data on which Crew is based, this made it impossible for us to use empirical data in our case base. Instead, we incorporated the format of the IFRS surveys and generated plausible data based on the statistical results of completed IFRS surveys and the public literature on space and analog missions. When NASA has collected enough empirical cases to substitute for our test data, they will have to enter the new parameters, review the rule base, and reconsider some of the priorities embedded in our algorithms based on their new understanding of mission dynamics. However, they should be able to do this within the computational framework we have developed, confident that such a system is feasible. As NASA collects more time series data, the Crew database will grow and become increasingly plausible as a predictive tool that can assist in the planning of expensive and risky interplanetary missions.

 


4

Supporting Situated Interpretation

This chapter opens up themes of computer support for collaboration, design theory and situated cognition. It also introduces the importance of interpretive perspectives as stressed in the hermeneutic tradition. Anticipating the book’s recurrent discussion of perspectives, it argues that collaboration software should support multiple interpretive design perspectives, as well as representing the context of work and providing shared language elements. The Hermes software that illustrates these principles was part of my dissertation research on CSCW support for NASA lunar habitat designers.

Specifically, the chapter discusses the role of interpretation in innovative design, and proposes an approach to providing computer support for interpretation in design. According to situated cognition theory, most of a designer’s knowledge is normally tacit. Situated interpretation is the process of explicating something that is tacitly understood, within its larger context.

The centrality of interpretation to non-routine design is demonstrated by a review of the design methodology of Alexander, Rittel and Schön; a protocol analysis of a lunar habitat design session; and a summary of Heidegger’s philosophy of interpretation. These show that the designer’s articulation of tacit knowledge takes place on the basis of an understanding of the design situation, a focus from a particular perspective and a shared language.

As knowledge is made explicit through the interpretive processes of design, it can be captured for use in computer-based design support systems. A prototype software system is described for representing design situations, interpretive perspectives, and domain terminology to support interpretation by designers.

This chapter introduces the concept of interpretation, which will play a central role in each part of this book: (I) software support for interpretive perspectives, (II) interpretation as a rigorous methodology for CSCL and (III) interpretation as integral to collaborative meaning making and knowledge building. The hermeneutic philosophy of interpretation introduced here reappears in the later, more theoretical essays.

The Need for Computer Support

The volume of information available to people is increasing rapidly. For many professionals this means that the execution of their jobs requires taking into account far more information than they can possibly keep in mind. Consider lunar habitat designers, who serve as a key example in this chapter. In working on their high-tech design tasks, they must take into account architectural knowledge, ergonomics, space science, NASA regulations and lessons learned in past missions. Computers seem necessary to store these large amounts of data. However, the problem is how to capture and encode information relevant to novel future tasks and how to present it to designers in formats that support their mode of work.

A framework for clarifying the respective roles for computers and people in tasks like lunar habitat design is suggested by the theory of situated cognition. Several influential recent books (Dreyfus, 1991; Ehn, 1988; Schön, 1983; Suchman, 1987; Winograd & Flores, 1986) argue that human cognition is fundamentally different from computer manipulations of formal symbol systems. These differences imply that people need to retain control of the processes of non-routine design, although computers can provide valuable computational, visualization and external memory aids for the designers, and support interpretation by them.

From the viewpoint of situated cognition, the greatest impediment to computer support of innovative design is that designers make extensive use of tacit knowledge while computers can only use explicit representations of information. This chapter discusses the role of tacit understanding in design, in order to motivate an approach to computer support of design tasks. It focuses on three themes: (a) the need to represent novel design situations; (b) the importance of viewing designs from multiple perspectives; and (c) the utility of formulating tacit knowledge in explicit language.

The following sections discuss how these three themes figure prominently in analyses of interpretation in design methodology and in a study of interpretation in lunar habitat design. Following a discussion of the tacit basis of understanding, the philosophy of interpretation defines interpretation as the articulation of tacit understanding. Then consequences for computer support for interpretation are drawn, and they are illustrated by the Hermes system, a prototype for supporting interpretation in the illustrative task of lunar habitat design.

Interpretation in Design Methodology

The centrality of interpretation to design can be seen in seminal writings of design methodologists. The following summaries highlight the roles of appropriate representations of the design situation, alternative perspectives and linguistic explications of tacit understanding within the processes of interpretation in design.

Alexander (1964) pioneered the use of computers for designing. He used them to compute diagrams or patterns that decomposed the structural dependencies of a given problem into relatively independent substructures. In this way, he developed explicit interpretations for understanding a task based on an analysis of the unique design situation.

For Rittel & Webber (1973), the heart of design is the deliberation of issues from multiple perspectives. Interpretation in design is “an argumentative process in the course of which an image of the problem and of the solution emerges gradually among the participants, as a product of incessant judgment, subjected to critical argument” (p. 162). Rittel’s idea of using computers to keep track of the various issues at stake and alternative positions on those issues led to the creation of issue-based information systems.

Schön (1983) argues that designers constantly shift perspectives on a problem by bringing various professionally trained tacit skills to bear, such as visual perception, graphical sketching and vicarious simulation. By experimenting with tentative design moves within the tacitly understood situation, the designer discovers consequences and makes aspects of the structure of the problem explicit. Certain features of the situation come into focus and can be named or characterized in language. As focus subsequently shifts, what has been interpreted may slip back into an understanding that is once more tacit, but is now more developed.

Interpretation in Lunar Habitat Design

As part of an effort at developing computer support for lunar habitat designers working for NASA (the National Aeronautics and Space Administration—the US space agency), I videotaped thirty hours of design sessions (Stahl, 1993). The specified task was to accommodate four astronauts for 45 days on the moon in a cylindrical module 23 feet long and 14 feet wide.

Analysis of the designers’ activities shows that much of the design time consisted of processes of interpretation, i.e., the explication of previously tacit understanding. As part of this interpretation, representations were developed for describing pivotal features of the design situation that had not been included in the original specification; perspectives were evolved for looking at the task; and terminology was defined for explicitly naming, describing and communicating shared understandings.

The designers felt that a careful balance of public and private space would be essential given the crew’s long-term isolation in the habitat. An early design sketch proposed private crew areas consisting of a bunk above a workspace for each astronaut. Space constraints argued against this. The traditional conception of private space as a place for one person to get away was made explicit and criticized as taking up too much room. As part of the interpretive designing process, this concept was revised into a reinterpretation of privacy as a gradient along the habitat from quiet sleep quarters to a public activity area. This notion of degrees of privacy permitted greater flexibility in designing.

In another interchange related to privacy, the conventional American idea of a bathroom was subjected to critical deliberation when it was realized that the placement of the toilet and that of the shower were subject to different sets of constraints based on life in the habitat. The tacit acceptance of the location of the toilet and shower together was made explicit by comparing it to alternative European perspectives. The revised conception, permitting a separation of the toilet from the shower, facilitated a major design reorganization.

In these and other examples, the designers needed to revise their representations for understanding the design situation. They went from looking at privacy as a matter of individual space to reinterpreting the whole interior space as a continuum of private to public areas.

The conventional American notion of a bathroom was compared with other cultural models and broken down into separable functions that could relate differently to habitat usage patterns. Various perspectives were applied to the problem, suggesting new possibilities and considerations. Through discussion, the individual perspectives merged and novel solutions emerged.

In this interpretive process, previously tacit features of the design became explicit by being named and described in the language that developed. For instance, the fact that quiet activities were being grouped toward one end of the habitat design and interactive ones at the other became a topic of conversation at one point and the term “privacy gradient” was proposed to clarify this emergent pattern.

The Tacit Basis of Understanding

Situated cognition theory disputes the prevalent view that all human cognition is based on explicit mental representations such as goals and plans. Winograd & Flores (1986) hold that “experts do not need to have formalized representations in order to act” (p. 99). Although manipulation of such representations is often useful, there is a background of preunderstanding that cannot be fully formalized as explicit symbolic representations subject to rule-governed manipulation. This tacit preunderstanding underlies people’s ability to understand representations when they do make use of them. Suchman (1987) concurs that goals and plans are secondary phenomena in human behavior, usually arising only after action has been initiated: “when situated action becomes in some way problematic, rules and procedures are explicated for purposes of deliberation and the action, which is otherwise neither rule-based nor procedural, is then made accountable to them” (p. 54).

Philosophers like Polanyi (1962), Searle (1980), and Dreyfus (1991) suggest a variety of reasons why tacit preunderstanding cannot be fully formalized as data for computation. First, it is too vast: background knowledge includes bodily skills and social practices that result from immense histories of life experience and that are generally transparent to us. Second, it must be tacit to function: we cannot formulate, understand or use explicit knowledge except on the basis of necessarily tacit pre-understandings.

This is not to denigrate conceptual reasoning and rational planning. Rather, it is to point out that the manipulation of formal representations alone cannot provide a complete model of human understanding. Rational thought is an advanced form of cognition that distinguishes humans from other organisms. Accordingly, an evolutionary theorist of consciousness such as Donald (1991) traces the development of symbolic thought from earlier developmental stages of tacit knowing, showing how these earlier levels persist in rational human thought as the necessary foundation for advanced developments, including language, writing and computer usage.

The most thorough formulation of a philosophical foundation for situated cognition theory is given by Heidegger (1927/1996), the first to point out the role of tacit pre-understanding and to elaborate its implications. For Heidegger, we are always knowledgeably embedded in our world; things of concern in our situations are already meaningful in general before we engage in cognitive activity. We know how to behave without having to think about it. For instance, without having to actively think about it, an architect designing a lunar habitat knows how to lift a pencil and sketch a line, or how to look at a drawing and see the rough relationships of various spaces pictured there. The architect understands what it is to be a designer, to critique a drawing, to imagine being a person walking through the spaces of a floor plan.

Heidegger defines the situation as the architect’s context—the physical surroundings, the available tools, the circumstances surrounding the task at hand, the architect’s own personal or professional aims, etc. The situation constitutes a network of significance in terms of which each part of the situation is already meaningful (Stahl, 1975a). That is, the architect has tacit knowledge of the situation as a whole; if something becomes a focus for the architect, it is perceived as already understood and its meaning is defined by its relation to the rest of the situation.

To the architect, a rectangular arrangement of lines on a piece of paper is not perceived as meaningless lines, but, given the design situation, it is already understood as a bunk for astronauts. The bunk is implicitly defined as such by the design task, the shared intentions of the design team, the other elements of the design, the sense of space conveyed by the design, and so on indefinitely. This network of significance is background knowledge that allows the architect to think about features of the design, to make plans for changes and to discover problems or opportunities in the evolving design. At any given moment, the background is already tacitly understood and does not need to be an object of rational thought manipulating symbolic representations.

At some point the architect might realize that the bunk is too close to a source of potential noise, like the flushing of the toilet. The explicit concern about this physical adjacency arises and becomes something important against the background of relationships of the pre-understood situation. Whereas a commonsensical view might claim that the bunk and toilet were already present and therefore their adjacency was always there by logical implication, Heidegger proposes a more complex reality in which things are ordinarily hidden from explicit concern. In various ways, they can become uncovered and discovered, only to re-submerge soon into the background as our focus moves on.

In this way, our knowledge of the world does not consist primarily in mental models that represent an objective reality. Rather, our understanding of things presupposes a tacit pre-understanding of our situation. Only as situated in our already interpreted world can we discover things and construct meaningful representations of them. Situated cognition is not a simplistic theory that claims our knowledge lies in our physical environment like words on a sign post: it is a sophisticated philosophy of interpretation.

The Philosophy of Interpretation

Human understanding develops through interpretive explication. According to Heidegger, interpretation provides the path from tacit, uncritical pre-understandings to reflection, refinement and creativity. The structure of this process of interpretation reflects the inextricable coupling of the interpreter with the situation, i.e., of people with their worlds. Our situation is not reducible to our pre-understanding of it; it offers untold surprises, which may call for reflection, but which can only be discovered and comprehended thanks to our pre-understanding. Often, these surprise occasions signal breakdowns in our skillful, transparent behavior, although we can also make unexpected discoveries in the situation through conversation, exploration, natural events and other occurrences.

A discovery breaks out of the pre-understood situation because it violates or goes beyond the network of tacit meanings that make up the pre-understanding of the situation. To understand what we have discovered, we must explicitly interpret it as something, as having a certain significance, as somehow fitting into the already understood background. Then it can merge into our comprehension of the meaningful situation and become part of the new background. Interpretation of something as something is always a reinterpretation of the situated context.

For instance, the lunar habitat designers discovered problems in their early sketches that they interpreted as issues of privacy. Although they had created the sketches themselves, they were completely surprised to discover certain conflicts among the interactions of adjacent components, like the bunks and the toilet. Of course, the discoveries could only occur because of their understanding of the situation, represented in their drawings. The designers paused in their sketching to discuss the new issues. First they debated the matter from various perspectives: experiences of previous space missions, cultural variations in bathroom designs, technical acoustical considerations. Then they considered alternative conceptions of privacy, gradually developing a shared vocabulary that guided their revisions and became part of their interpretation of their task. They reinterpreted their understanding of privacy and represented their new view as a “privacy gradient.”

These themes of representing the situation, changing perspectives and using explicit language correspond to the three-fold structure of interpretation in Heidegger’s philosophy. He articulates the preconditions of interpretation as: (a) prepossession of the situation as a network of pre-understood significance; (b) preview or expectations of things in the world as being structured in certain ways; and (c) preconception, a language for expressing and communicating.

In other words, interpretation never starts from scratch or from an arbitrary assignment of representations, but is an evolution of tentative pre-understandings and anticipations. One necessarily starts with sets of “prejudices” that have been handed down historically; the interpretive process allows one to reflect upon these pre-understandings methodically and to refine new meanings, perspectives and terminologies for understanding things more appropriately.

Computer Support for Interpretation

The theory of situated cognition and the philosophy of interpretation stress how different human understanding is from computer manipulations of arbitrary symbols. These theories suggest the approach of augmenting (rather than automating) human intelligence. According to this approach, software can at best provide computer representations for people to interpret based on their tacit understanding of what is represented.

Representations used in computer programs must be carefully structured by human programmers who thoroughly understand the task being handled, because the computer itself simply follows the rules it has been given for manipulating symbols, with no notion of what these symbols represent. People who understand the domain must sufficiently codify their background knowledge into software rules in order to make the computer algorithms generate results that will be judged correct when interpreted by people. Only if a domain can be strictly delimited and its associated knowledge exhaustively reduced to rules can it be completely automated.

Many tasks, like lunar habitat design, that call for computer support do not have such strictly delimited domains with fully catalogued and formalized knowledge bases. These domains may require exploration of problems never before considered, assumption of creative viewpoints or formulation of innovative concepts. Software to support designers in such tasks should provide facilities for the creation of new representations and flexible modification of old ones. As the discussion of Alexander emphasized, the ability to develop appropriate representations dynamically is critical. Because they capture understandings of the situation that evolve through processes of interpretation, representations need to be modifiable during the design process itself and cannot adequately be anticipated in advance or provided once and for all.

The concept of an objective, coherent body of domain knowledge is misleading. As Rittel said, non-routine design is an argumentative process involving the interplay of unlimited perspectives, reflecting differing and potentially conflicting technical concerns, personal idiosyncrasies and political interests. Software to support design should capture these alternative deliberations on important issues, as well as document specific solutions. Furthermore, because all design knowledge may be relative to perspectives, the computer should be used to define a network of over-lapping perspectives by which to organize issues, rationale, sketches, component parts and terminology.

As Schön emphasized, interpretive design relies on moving from tacit skills to explicit conceptualizations. Additionally, design work is inherently communicative and increasingly collaborative, with high-tech designs requiring successive teams of designers, implementers and maintainers. Software to support collaborative design should provide a language facility for designers to develop a formal vocabulary for expressing their ideas, for communicating them to future collaborators, and for formally representing them within computer-executable software. An end-user language is needed that provides an extensible domain vocabulary, is usable by non-programmers and encourages reuse and modification of expressions.

Heidegger’s analysis of interpretation suggests that most of the information that would be useful to designers may be made explicit at some moment of interpretation during designing. One strategy for accumulating a useful knowledge base is to have the software capture knowledge that becomes explicit while the software is being used. As successive designs are developed on a system, issues and alternative deliberations can accumulate in its issue base; new perspectives can be defined containing their own modifications of terminology and critic rules; the language can be expanded to include more domain vocabulary, conditional expressions and query formulations. In this way, potentially relevant information is captured in formats useful for designers, because it is a product of human interpretation.

This is an evolutionary, bootstrap approach, where the software can not only support individual design projects, but simultaneously facilitate the accumulation of expertise and viewpoints in open-ended, exploratory domains. This means that the software should make it easy for designers to formalize their knowledge as it becomes explicit, without requiring excessive additional effort. The software should reward its users for increasing the computer knowledge base by performing useful tasks with the new information, like providing documentation, communicating rationale and facilitating reuse or modification of relevant knowledge.

The Hermes System

In Greek mythology, Hermes supported human interpretation by providing the gift of spoken and written language and by delivering the messages of the gods. A prototype software system named Hermes has been designed to support the preconditions of interpretation (a) by representing the design construction situation for prepossession, (b) by providing alternative perspectives for preview and (c) by including an end-user language for preconception.

It supports tacit knowing by encapsulating (a) mechanisms for analyzing design situations using interpretive critics (Fischer et al., 1993), (b) alternative sets of information organized in named perspectives (Stahl, 1993), and (c) hypermedia computations expressed in language terms (Stahl et al., 1992). In each of these cases, the hidden complexities can be made explicit upon demand, so the designer can reflect upon the information and modify (reinterpret) it.

Hermes is a knowledge-representation substrate for building computer-based design assistants (like that in figure 4-1). It provides various media for designers to build formal representations of design knowledge. The hypermedia network of knowledge corresponds to the design situation. Nodes of the knowledge representation can be textual statements for the issue base, CAD graphics for sketches, or language expressions for critics and queries.

 

Figure 4-1 goes approximately here

 

Figure 4-1. A view of the Hermes design environment, showing (left to right) a dialogue for browsing, a view of the issue base, a critic message, a construction area and a button for changing interpretive perspectives.

 

Hermes supports the collaborative nature of design by multiple teams through its perspectives mechanism. This allows users to organize knowledge in the system into over-lapping collections. Drawings, definitions of domain terms in the language, computations for critic rules, and annotations in the issue base can all be grouped together for a project, a technical specialty, an individual, a team or an historical version. Every action in Hermes takes place within some defined perspective, which determines what versions of information are currently accessible.
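The following toy sketch suggests one way such a perspectives mechanism could be realized, with each perspective resolving an item to the version visible from it; this is an illustration of the idea only, not Hermes’ actual implementation, and all names are hypothetical.

```python
class Perspective:
    """A named collection of knowledge-item versions that can inherit from other
    perspectives (e.g., a team perspective inheriting from a project perspective)."""

    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)
        self.items = {}                 # item id -> this perspective's version

    def define(self, item_id, content):
        self.items[item_id] = content

    def resolve(self, item_id):
        """Return the version visible from this perspective: its own version if one
        exists, otherwise the first version found among inherited perspectives."""
        if item_id in self.items:
            return self.items[item_id]
        for parent in self.parents:
            found = parent.resolve(item_id)
            if found is not None:
                return found
        return None

# project = Perspective("lunar habitat project")
# acoustics = Perspective("acoustics team", parents=[project])
# project.define("privacy", "private bunk and workspace for each astronaut")
# acoustics.define("privacy", "privacy gradient from quiet sleep quarters to public areas")
# acoustics.resolve("privacy")  -> the acoustics team's version
# project.resolve("privacy")    -> the project-wide version
```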

The Hermes language pervades the system, defining mechanisms for browsing, displaying and critiquing all information. This means that designers can refine the representations, views and expressions of all forms of domain knowledge in the system. Vocabulary in the language is modifiable and every expression can be encapsulated by a name. The syntax is English-like, in an effort to make statements in the language easily interpretable. The language is declarative, so users need not be bothered with explicit sequential programming concerns. Combined with the perspectives mechanism, the language permits designers to define and refine their own interpretations. This allows the Hermes substrate to support multiple situated interpretations.

 

Conclusion

The theory of situated cognition argues that only people’s tacit pre-understanding can make data meaningful in context. Neither people nor computers alone can take advantage of huge stores of data; such information is valueless unless designers use it in their interpretations of design situations. The data handling capabilities of computers should be used to support the uniquely human ability to understand. The philosophy of interpretation suggests that several aspects of human understanding and collaboration can be supported with mechanisms like those in Hermes, such as refining representations of the design situation, creating alternative perspectives on the task and sharing linguistic expressions. Together, situated cognition theory and Heidegger’s philosophy of interpretation provide a theoretical framework for a principled approach to computer support for designers’ situated interpretation in the information age.

 


5

Collaboration Technology for Communities

In the age of information-overload, lifelong learning and collaboration are essential aspects of most innovative work. Fortunately, the computer technology that drives the information explosion also has the potential to help individuals and groups learn, on demand, much of what they need to know. In particular, applications on the Internet can be designed to capture knowledge as it is generated within a community of practice and to deliver relevant knowledge when it is useful.

Computer-based design environments for skilled domain workers have recently graduated from research prototypes to commercial products, supporting the learning of individual designers. Such systems do not, however, adequately support the collaborative nature of work or the evolution of knowledge within communities of practice. If innovation is to be supported within collaborative efforts, these domain-oriented design environments (DODEs) must be extended to become collaborative information environments (CIEs), capable of providing effective community memories for managing information and learning within constantly evolving collaborative contexts. In particular, CIEs must provide functionality that facilitates the construction of new knowledge and the shared understanding necessary to use this knowledge effectively within communities of practice.

This chapter reviews three stages of work on artificial (computer-based and Web-based) systems that augment the intelligence of people and organizations. NetSuite illustrates the DODE approach to supporting the work of individual designers with learning-on-demand. WebNet extends this model to CIEs that support collaborative learning by groups of designers. Finally, WebGuide shows how a computational perspectives mechanism for CIEs can support the construction of knowledge and of shared understanding within groups. According to recent theories of cognition, human intelligence is the product of tool use and of social mediations as well as of biological development; CIEs are designed to enhance this intelligence by providing computationally powerful tools that are supportive of social relations.

Thereby, this chapter carries out a transition from systems that use AI techniques and computational power to computer-based media that support communication and collaboration. In part, this is a difference of emphasis, as the media may still incorporate significant computation. However, it is also a shift in the locus of intelligence from clever software to human group cognition.

1. Introduction: The Need for Computer Support of Lifelong Collaborative Learning

The creation of innovative artifacts and helpful knowledge in our complex world—with its refined division of labor and its flood of information—requires continual learning and collaboration. Learning can no longer be conceived of as an activity confined to the classroom and to an individual’s early years. Learning must continue while one is engaged with other people as a worker, a citizen and an adult learner for many reasons:

·        Innovative tasks are ill-defined; their solution involves continual learning and the creative construction of knowledge whose need could not have been foreseen (Rittel & Webber, 1984).

·        There is too much knowledge, even within specific subject areas, for anyone to master it all in advance or on one’s own (Zuboff, 1988).

·        The knowledge in many domains evolves rapidly and often depends upon the context of one’s task situation, including one’s support community (Senge, 1990).

·        Frequently, the most important information has to do with a work group’s own structure and history, its standard practices and roles and the details and design rationale of its local accomplishments (Orr, 1990).

·        People’s careers and self-directed interests require various new forms of learning at different stages as their roles in communities change (Argyris & Schön, 1978).

·        Learning—especially collaborative learning—has become a new form of labor, an integral component of work and organizations (Lave & Wenger, 1991).

·        Individual memory, attention and understanding are too limited for today’s complex tasks; divisions of labor are constantly shifting, and learning is required to coordinate and respond to the changing demands on community members (Brown & Duguid, 1991).

·        Learning necessarily includes organizational learning: social processes that involve shared understandings across groups. These fragile understandings are both reliant upon and in tension with individual learning, although they can also function as the cultural origin of individual comprehension (Vygotsky, 1930/1978).

The pressure on individuals and groups to continually construct new knowledge out of massive sources of information strains the abilities of unaided human cognition. Carefully designed computer software promises to enhance the ability of communities to construct, organize and share knowledge by supporting these processes. However, the design of such software remains an open research area.

The contemporary need to extend the learning process from schooling into organizational and community realms is known as lifelong learning. Our past research at the University of Colorado’s Center for LifeLong Learning and Design explored the computer support of lifelong learning with what we call domain-oriented design environments (DODEs). This chapter argues for extending that approach to support work within communities of practice with what it terms collaborative information environments (CIEs), applied both to design tasks and to the construction of shared knowledge. It traces the three stages through which our efforts evolved during the 1990s, illustrated by representative software systems.

Section 2 of this chapter highlights how computer support for lifelong learning has already been developed for individuals such as designers. It argues, however, that DODEs—such as the commercial product NetSuite—that deliver domain knowledge to individuals when it is relevant to their task are not sufficient for supporting innovative work within collaborative communities. Section 3 sketches a theory of how software productivity environments for design work by individuals can be extended to support organizational learning in collaborative work structures known as communities of practice; a scenario of a prototype system called WebNet illustrates this. Section 4 of this chapter discusses the need for mechanisms within CIEs to help community members construct knowledge in their own personal perspectives while also negotiating shared understanding about evolving community knowledge; this is illustrated by the perspectives mechanism in WebGuide, discussed in terms of three learning applications. A concluding section locates this discussion within the context of broader trends in computer science.

2. Augmenting the Work of Individual Designers

In this section I discuss how our DODE approach, which has now emerged in commercial products, provides support for individual designers. However, because design (such as the layout, configuration and maintenance of computer networks) now typically takes place within communities of practice, it is desirable to provide computer support at the level of these communities as well as at the individual designer’s level and to include local community knowledge as well as domain knowledge. Note that much of what is described in this section about our DODE systems applies to a broad family of design critiquing systems developed by others for domains such as medicine (Miller, 1986), civil engineering (Fu, Hayes, & East, 1997) and software development (Robbins & Redmiles, 1998).

2.1 Domain-Oriented Design Environments

Many innovative work tasks can be conceived of as design processes: elaborating a new idea, planning a presentation, balancing conflicting proposals or writing a visionary report, for example. While designing can proceed on an intuitive level based on tacit expertise, it periodically encounters breakdowns in understanding where explicit reflection on new knowledge may be needed (Schön, 1983). Thereby, designing entails learning.

For the past decade, we have explored the creation of DODEs to support workers as designers. These systems are domain-oriented: they incorporate knowledge specific to the work domain. They are able to recognize when certain breakdowns in understanding have occurred and can respond to them with appropriate information (Fischer et al., 1993). They support learning-on-demand.

To go beyond the power of pencil-and-paper representations, software systems for lifelong learning must “understand” something of the tasks they are supporting. This is accomplished by building knowledge of the domain into the system, including capturing design objects and design rationale. A DODE typically provides a computational workspace within which a designer can construct an artifact and represent components of the artifact being constructed. Unlike a CAD system, in which the software only stores positions of lines, a DODE maintains a representation of objects that are meaningful in the domain. For instance, an environment for local-area network (LAN) design (a primary example in this chapter) allows a designer to construct a network’s design by selecting items from a palette representing workstations, servers, routers, cables and other devices from the LAN domain, and configuring these items into a system design. Information about each device is represented in the system.

A DODE can contain domain knowledge about constraints, rules of thumb and design rationale. It uses this information to respond to a current design state with active advice. Our systems use a mechanism we call critiquing (Fischer et al., 1998). The system maintains a representation of the semantics of the design situation: usually the two-dimensional location of palette items representing design components. Critic rules are applied to the design representation; when a rule “fires,” it posts a message alerting the designer that a problem might exist. The message includes links to information such as design rationale associated with the critic rule.

For instance, a LAN DODE might notice that the length of a cable in a design exceeds the specifications for that type of cable; that a router is needed to connect two subnets; or that two connected devices are incompatible. At this point, the system could signal a possible design breakdown and provide domain knowledge relevant to the cited problem. The evaluation of the situation and the choice of action is up to the human designer, but now the designer has been given access to information relevant to making a decision (Fischer et al., 1996).
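
To make the critiquing idea concrete, the following minimal sketch (in Java, with class and field names invented for illustration; it is not code from our systems or from NetSuite) shows how a single critic rule might inspect a design representation and fire an advisory message that points to the associated design rationale.

// A minimal, hypothetical sketch of the critiquing mechanism described above.
// All class and field names are invented for illustration.
import java.util.ArrayList;
import java.util.List;

public class CableLengthCritic {

    // Simplified stand-in for one element of the design representation.
    static class Cable {
        String id;
        String type;          // e.g. "10BASE-T"
        double lengthMeters;
        double maxLengthMeters;
        String rationaleUrl;  // link to design rationale for this constraint
        Cable(String id, String type, double len, double max, String url) {
            this.id = id; this.type = type;
            this.lengthMeters = len; this.maxLengthMeters = max; this.rationaleUrl = url;
        }
    }

    // The critic rule: "fires" when a cable exceeds the specification for its type.
    static List<String> critique(List<Cable> design) {
        List<String> messages = new ArrayList<>();
        for (Cable c : design) {
            if (c.lengthMeters > c.maxLengthMeters) {
                messages.add("Cable " + c.id + " (" + c.type + ") is " + c.lengthMeters
                        + " m; the specification allows " + c.maxLengthMeters
                        + " m. Rationale: " + c.rationaleUrl);
            }
        }
        return messages;
    }

    public static void main(String[] args) {
        List<Cable> design = List.of(
                new Cable("c1", "10BASE-T", 120.0, 100.0, "http://example.org/rationale/10base-t"));
        critique(design).forEach(System.out::println);
    }
}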

2.2 NetSuite: A Commercial Product

Many of the ideas in our DODEs are now appearing in commercial products, independently of our efforts. In particular, there are several environments for designing LANs. As an example, consider NetSuite, a highly rated system that illustrates current best practices in LAN design support. This is a high-functionality system for skilled domain professionals who are willing to make the effort required to learn to use its rich set of capabilities (see Figure 5-1). NetSuite contains a wealth of domain knowledge. Its palette of devices, which can be placed in the construction area, numbers over 5,000, with more available for download from the vendor every month. Each device has associated parameters defining its characteristics, limitations and compatibilities—domain knowledge used by the critics that validate designs.

 

Figure 5-1 goes approximately here

 

In NetSuite, one designs a LAN from scratch, placing devices and cables from the palette. As the design progresses, the system validates it, critiquing it according to rules and parameters stored in its domain knowledge. The designer is informed about relevant issues in a number of ways: lists of devices to substitute into a design are restricted by the system to compatible choices; limited design rationale is displayed, with the option of linking to further details; and technical terms are defined with hypertext links. In addition to the construction area, there are LAN tools, such as an automated IP address generator and utilities for reporting on physically existing LAN configurations. When a design is completed, a bill-of-materials can be printed out and an HTML page of it can be produced for display on the Internet. NetSuite is a knowledgeable, well-constructed system to support an individual LAN designer.

2.3 The Need to Go Further

Based on our understanding of organizational learning and our investigation of LAN design communities, we believe that in a domain like LAN management no closed system will suffice. The domain knowledge required to go beyond the functionality of NetSuite is too open-ended, too constantly changing and too dependent upon local circumstances. The next generation of commercial DODEs will have to support extensibility by end-users and collaboration within communities of practice. While a system like NetSuite has its place in helping to design complex networks from scratch, most work of LAN managers involves extending existing networks, debugging breakdowns in service and planning for future technologies.

Figure 5-1. Two views of NetSuite. In the top view, the system has noted that a cable-length specification for an FDDI network has been exceeded in the design and has delivered information about the specification and the affected devices. In the lower view, parts of the network are shown connected in both physical and logical representations.

Many LAN management organizations rely on home-grown information systems because they believe that critical parts of their local information are unique. Each community of practice has its own ways of doing things. Generally, these local practices are understood tacitly and are propagated through apprenticeship (Lave & Wenger, 1991). This causes problems when the old-timer who set things up is gone and when a newcomer does not know who to ask or even what to ask. A community memory is needed that captures local knowledge when it is generated (e.g., when a device is configured) and delivers knowledge when it is needed (when there is a problem with that device) without being explicitly queried.

The burden of entering all this information in the system must be distributed among the people doing the work and must be supported computationally to minimize the effort required. This means:

·        The DODE knowledge base should be integrated with work practices in ways that capture knowledge as it is created.

·        The benefits of maintaining the knowledge base have to be clearly experienced by participants.

·        There may need to be an accepted distribution of roles related to the functioning of the organizational memory.

·        The software environment must be thoroughly interactive so that users can easily enter data and comments.

·        The information base should be seeded with basic domain knowledge so that users do not have to enter everything and so that the system is useful from the start.

·        As the information space grows, there should be ways for people to restructure it so that its organization and functionality keep pace with its evolving contents and uses (Fischer et al., 1999).

DODEs must be extended in these ways to support communities of practice, and not just isolated designers. This reflects a shift of emphasis from technical domain knowledge to local, socially-based community knowledge.

3. Supporting Communities of Practice

In this section, I briefly define “community of practice”—a level of analysis increasingly important within discussions of computer-supported cooperative work (CSCW)—and suggest that these communities need group memories to carry on their work. The notion of DODEs must be extended to support the collaborative learning that needs to take place within these communities. A scenario demonstrates how a CIE prototype named WebNet can do this.

3.1 Community Memories

3.1.1 Communities of Practice

All work within a division of labor is social (Marx, 1867/1976). The job that one person performs is also performed similarly by others and relies upon vast social networks. That is, work is defined by social practices that are propagated through socialization, apprenticeship, training, schooling and culture (Bourdieu, 1972/1995; Giddens, 1984b; Lave & Wenger, 1991), as well as by explicit standards. Often, work is performed by collaborating teams that form communities of practice within or across organizations (Brown & Duguid, 1991). These communities evolve their own styles of communication and expression, or genres (Bakhtin, 1986a; Yates & Orlikowski, 1992).

For instance, interviews we conducted showed that computer network managers in different departments at our university work in concert. They need to share information about what they have done and how it is done with other team members and with other LAN managers elsewhere. For such a community, information about their own situation and local terminology may be even more important than generic domain knowledge (Orr, 1990). Support for LAN managers must provide memory about how individual local devices have been configured, as well as offer domain knowledge about standards, protocols, compatibilities and naming conventions.

Communities of practice can be co-located within an organization (e.g., at our university) or across a discipline (e.g., all managers of university networks). Before the World Wide Web existed, most computer support for communities of practice targeted individuals with desktop applications. The knowledge in the systems was mostly static domain knowledge. With intranets and dynamic Web sites, it is now possible to support distributed communities and also to maintain interactive and evolving information about local circumstances and group history. Communities of practice need to be able to maintain their own memories. The problem of adoption of organizational memory technologies by specific communities involves complex social issues beyond the scope of this chapter. For a review of common adoption issues and positive and negative examples of responses, see (Grudin, 1990; Orlikowski, 1992; Orlikowski et al., 1995).

3.1.2 Digital Memories for Communities of Practice

Human and social evolution can be viewed as the successive development of increasingly effective forms of memory for learning, storing and sharing knowledge. Biological evolution gave us episodic, mimetic and mythical memory; then cultural evolution provided oral and written (external and shared) memory; finally modern technological evolution generates digital (computer-based) and global (Internet-based) memories (Donald, 1991; Norman, 1993).

At each stage, the development of hardware capabilities must be followed by the definition and adoption of appropriate skills and practices before the potential of the new information technology can begin to be realized. External memories, incorporating symbolic representations, facilitated the growth of complex societies and sophisticated scientific understandings. Their effectiveness relied upon the spread of literacy and industrialization. Similarly, while the proliferation of networked computers ushers in the possibility of capturing new knowledge as it is produced within work groups and delivering relevant information on demand, the achievement of this potential requires the careful design of information systems, software interfaces and work practices. New computer-based organizational memories must be matched with new social structures that produce and reproduce patterns of organizational learning (Giddens, 1984b; Lave & Wenger, 1991).

Community memories are to communities of practice what human memories are to individuals. They embody organizational memory in external repositories that are accessible to community members. They make use of explicit, external, symbolic representations that allow for shared understanding within a community. They make organizational learning possible within the group (Ackerman & McDonald, 1996; Argyris & Schön, 1978; Borghoff & Parechi, 1998; Buckingham Shum & Hammond, 1994; Senge, 1990).

3.1.3 Integrative Systems for Community Memory

Effective community memory relies on integration. Tools for representing design artifacts and other work tasks must be related to rich repositories of information that can be brought to bear when needed. Communications about an artifact under development should be tied to that artifact so that they retain their context of significance and their association with each other. Also, members of the community of practice must be integrated with each other in ways that allow something one member learned in the past to be delivered to other members when they need it in the future. One model for such integration—on an individual level—is the human brain, which stores a wealth of memories over a lifetime of experience, thought and learning in a highly inter-related associative network that permits effective recall based on subjective relevance. This—and not the traditional model of computer memory as an array of independent bits of objective information—is the model that must be extended to community memories.

Of course, we want to implement community memories using computer memory. Perhaps the most important goal is integration, in order to allow the definition of associations and other inter-relationships. For instance, in a system using perspectives, like those to be discussed in section 4, it is necessary for all information to be uniformly structured with indications of perspective and linking relationships. A traditional way to integrate information in a computer system is with a relational database. This allows associations to be established among arbitrary data. It also provides mechanisms like SQL queries to retrieve information based on specifications in a rather comprehensive language. Integrating all the information of a design environment in a unified database makes it possible to build bridges from the current task representation to any other information. Certainly, object-oriented or hybrid databases and distributed systems that integrate data on multiple computers can provide the same advantages. Nor does an underlying query language like SQL have to be exposed to users; front-end interfaces can be much more graphical and domain-oriented (Buckingham Shum, 1998).
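
As an illustration of this kind of integration, the following hypothetical sketch (Java with JDBC; the table names, column names and the in-memory HSQLDB connection string are all assumptions made for the example, not part of any of our systems) shows how uniformly structured items tagged with a perspective can be linked to one another and then retrieved by bridging from a design element to related community memory.

// A hypothetical sketch of a unified community-memory database: every item carries its
// perspective, and links between items let a query bridge from the current task to related
// information. Assumes an in-memory HSQLDB driver on the classpath; all names are invented.
import java.sql.*;

public class CommunityMemoryQuery {
    public static void main(String[] args) throws SQLException {
        try (Connection db = DriverManager.getConnection("jdbc:hsqldb:mem:ciedemo", "sa", "")) {
            try (Statement s = db.createStatement()) {
                s.execute("CREATE TABLE item (id INT PRIMARY KEY, perspective VARCHAR(64), "
                        + "kind VARCHAR(32), content VARCHAR(1024))");
                s.execute("CREATE TABLE link (from_id INT, to_id INT, relation VARCHAR(32))");
                s.execute("INSERT INTO item VALUES (1, 'team', 'design_element', 'router R1 on subnet A')");
                s.execute("INSERT INTO item VALUES (2, 'team', 'diary_entry', 'R1 configured 1999-03-02')");
                s.execute("INSERT INTO link VALUES (1, 2, 'documented_by')");
            }
            // Bridge from a design element to any linked community-memory items.
            String sql = "SELECT i2.kind, i2.content FROM item i1 "
                    + "JOIN link l ON l.from_id = i1.id JOIN item i2 ON i2.id = l.to_id "
                    + "WHERE i1.id = ?";
            try (PreparedStatement q = db.prepareStatement(sql)) {
                q.setInt(1, 1);
                try (ResultSet rs = q.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("kind") + ": " + rs.getString("content"));
                    }
                }
            }
        }
    }
}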

Communities themselves must also be integrated. The Web provides a convenient technology for integrating the members of a community of practice, even if they are physically dispersed or do not share a homogeneous computer platform. In particular, intranets are Web sites designed for communication within a specific community rather than world-wide. WebNet, for instance, is intranet-based software that we prototyped for LAN management communities. It includes a variety of communication media as well as community memory repositories and collaborative productivity tools. It will be discussed later in this section.

Dynamic Web pages can be interactive in the sense that they accept user inputs through selection buttons and text entry forms. Unlike most forms on the Web that only provide information (like product orders, customer preferences, or user demographics) to the webmaster, intranet feedback may be made immediately available to the user community that generated it. For instance, the WebNet scenario below includes an interactive glossary. When someone modifies a glossary definition, the new definition is displayed to anyone looking at the glossary. Community members can readily comment on the definitions or change them. The history of the changes and comments made by the community is shared by the group. In this way, intranet technology can be used to build systems that are CIEs in which community members deposit knowledge as they acquire it so that other members can learn when they need or want to, and can communicate with others about their learning. This model illustrates computer support for collaborative learning with digital memories belonging to communities of practice.
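
The behavior of such an interactive glossary can be suggested in a few lines. The following hypothetical Java model (the class and method names are my own, not WebNet’s implementation) captures the essential idea: an edited definition immediately becomes what every reader sees, while the history of definitions and comments remains available to the community.

// A minimal, hypothetical model of the interactive glossary behavior described above.
// All names are invented for illustration.
import java.util.*;

public class InteractiveGlossary {
    static class Revision {
        final String author, text;
        final Date when = new Date();
        Revision(String author, String text) { this.author = author; this.text = text; }
    }

    private final Map<String, List<Revision>> definitions = new HashMap<>();
    private final Map<String, List<Revision>> annotations = new HashMap<>();

    // "Edit Definition": the newest revision is what the community sees.
    void editDefinition(String term, String author, String text) {
        definitions.computeIfAbsent(term, t -> new ArrayList<>()).add(new Revision(author, text));
    }

    // "Make Annotations": comments accumulate alongside the definition.
    void annotate(String term, String author, String comment) {
        annotations.computeIfAbsent(term, t -> new ArrayList<>()).add(new Revision(author, comment));
    }

    String currentDefinition(String term) {
        List<Revision> revs = definitions.getOrDefault(term, List.of());
        return revs.isEmpty() ? "(undefined)" : revs.get(revs.size() - 1).text;
    }

    List<Revision> history(String term) {
        return definitions.getOrDefault(term, List.of());
    }

    public static void main(String[] args) {
        InteractiveGlossary glossary = new InteractiveGlossary();
        glossary.editDefinition("router", "old-timer", "A device that forwards packets between subnets.");
        glossary.editDefinition("router", "Jay", "A device that forwards packets between subnets; "
                + "routing can sometimes be performed by server software.");
        glossary.annotate("router", "Jay", "See the modified simulation for a software-routing example.");
        System.out.println(glossary.currentDefinition("router"));
        System.out.println(glossary.history("router").size() + " revisions on record");
    }
}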

3.2 Extending the DODE Approach to CIEs for Design

To provide computer support for collaborative learning with CIEs, we first have to understand the process of collaborative learning. Based on this analysis, we can see how to extend the basic characteristics of a DODE to create a CIE.

3.2.1 The Process of Collaborative Learning

The ability of designers to proceed based on their existing tacit expertise (Polanyi, 1962) periodically breaks down and they have to rebuild their understanding of the situation through explicit reflection (Schön, 1983). This reflective stage can be helped if they have good community support and effective computer support to bring relevant new information to bear on their problem. When they have comprehended the problem and incorporated the new understanding in their personal memories, we say they have learned. The process of design typically follows this cycle of breakdown and reinterpretation in learning (see Figure 5-2, cycle on left).

 

Figure 5-2 goes approximately here

 

Figure 5-2. Cycles of design, computer support and organizational learning. Adapted from (Stahl, 1993).

When design tasks take place in a collaborative context, the reflection results in articulation of solutions in language or in other symbolic representations. The articulated new knowledge can be shared within the community of practice. Such knowledge, created by the community, can be used in future situations to help a member overcome a breakdown in understanding. This cycle of collaboration is called organizational learning (see Figure 5-2, upper cycle). The personal reflection and the collaborative articulation of shared perspectives interacting together make innovation possible (Boland & Tenkasi, 1995; Tomasello, Kruger, & Ratner, 1993).

Organizational learning can be supported by computer-based systems of organizational memory if the articulated knowledge is captured in a digital symbolic representation. The information must be stored and organized in a format that facilitates its subsequent identification and retrieval. In order to provide computer support, the software must be able to recognize breakdown situations when particular items of stored information might be useful to human reflection (see Figure 5-2, lower cycle). DODEs provide computer support for design by individuals. They need to be extended to collaborative information environments (CIEs) to support organizational learning in communities of practice.

3.2.2 Extending the DODE Approach to CIEs for Design

The key to active computer support that goes significantly beyond printed external memories is to have the system deliver the right information at the right time in the right way (Fischer et al., 1998). To do this, the software must be able to analyze the state of the work being undertaken, identify likely breakdowns, locate relevant information and deliver that information in a timely manner.

Systems like NetSuite and our older prototypes used critics based on domain knowledge to deliver information relevant to the current state of a design artifact being constructed in the design environment work space (see Figure 5-3, left).

 

Figure 5-3 goes approximately here

 

One can generalize from the critiquing approach of these DODEs to arrive at an overall architecture for organizational memories. The core difference between a DODE and a CIE is that a DODE focuses on delivering domain knowledge, conceived of as relatively static and universal, while a CIE is built around forms of community memory, treated as constantly evolving and largely specific to a particular community of practice. Where DODEs relied heavily on a set of critic rules predefined as part of the domain knowledge, CIEs generalize the function of the critiquing mechanisms.

In a CIE, it is still necessary to maintain some representation of the task as a basis for the software to take action. This task representation plays the role of the design artifact in a DODE, triggering critics and generally defining the work context in order to decide what is relevant. This is most naturally accomplished if work is done within the software environment. For instance, if communication about designs takes place within the system where the design is constructed, then annotations and email messages can be linked directly to the design elements they discuss. This reduces problems of deixis (comments referring to “that” object “over there”). It also allows related items to be linked together automatically. In an information-rich space, there may be many relationships of interest between new work artifacts and items in the organizational memory. For instance, when a LAN manager debugs a network, links between network diagrams, topology designs, LAN diary entries, device tables and an interactive glossary of local terminology can be browsed to discover relevant information.

Figure 5-3. Generalization of the DODE architecture (left) to a CIE (right).

The general problem for a CIE is to define analysis mechanisms that can bridge the gap from task representation to relevant community memory information items in order to support learning on demand (see Figure 5-3, right).

To take a very different example, suppose a student is writing a paper within a software environment that includes a digital library of papers written by her and her colleagues. An analysis mechanism to support her learning might compare sentences or paragraphs in her draft (which functions as a task representation) to text from other papers and from email discussions (the community memory) to find excerpts of potential interest to her. We use latent semantic analysis (Landauer & Dumais, 1997) to mine our email repository (Lindstaedt & Schneider, 1997), and are exploring similar uses of this mechanism to link task representations to textual information to support organizational learning. Other retrieval mechanisms might be appropriate for mining catalogs of software agents or components, design elements and other sorts of organizational memories.
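
The retrieval step can be suggested with a deliberately simplified sketch. The Java toy below ranks stored passages by plain term-vector cosine similarity to the task text; real latent semantic analysis additionally applies a singular-value decomposition to a term-document matrix in order to capture indirect semantic relationships, so this should be read only as an illustration of bridging from a task representation to textual community memory.

// A much-simplified, hypothetical sketch of semantic-relatedness retrieval.
// Not LSA itself: it uses raw term-vector cosine similarity for illustration only.
import java.util.*;

public class RelatedPassageFinder {

    static Map<String, Integer> termCounts(String text) {
        Map<String, Integer> counts = new HashMap<>();
        for (String term : text.toLowerCase().split("\\W+")) {
            if (!term.isEmpty()) counts.merge(term, 1, Integer::sum);
        }
        return counts;
    }

    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            normA += e.getValue() * e.getValue();
        }
        for (int v : b.values()) normB += v * v;
        return (normA == 0 || normB == 0) ? 0 : dot / Math.sqrt(normA * normB);
    }

    public static void main(String[] args) {
        String task = "need a router to connect two subnets";
        List<String> memory = List.of(
                "we chose the Cisco router for the third-floor subnet last year",
                "printer queue problems on the new subnet",
                "glossary entry on FDDI cable length limits");
        // Passages most related to the task representation are listed first.
        memory.stream()
              .sorted(Comparator.comparingDouble(
                      (String m) -> cosine(termCounts(task), termCounts(m))).reversed())
              .forEach(System.out::println);
    }
}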

Using our example of LAN design, I next show how a CIE might function in this domain. I present a scenario of use of WebNet, a prototype I developed to extend our DODE concept to explicitly support communities of LAN designers.

3.3 WebNet: Scenario of a CIE for Design

3.3.1 Critiquing and Information Delivery

Kay is a graduate student who works part-time to maintain her department’s LAN. The department has a budget to extend its network and has asked Kay to come up with a design. Kay brings up WebNet in her Web browser. She opens up the design of her department’s current LAN in the LAN Design Environment, an Agentsheets (Repenning, 1994) simulation applet. Kay starts to add a new subnet. Noticing that there is no icon for an Iris graphics workstation in her palette, Kay selects the WebNet menu item for the Simulations Repository Web page (see Figure 5-4, left frame). This opens a Web site that contains simulation agents that other Agentsheets users have programmed. WebNet opens the repository to display agents that are appropriate for WebNet simulations. Kay locates a simulation agent that someone else has created with the behavior of an Iris workstation. She adds this to her palette and to her design.

 

Figure 5-4 goes approximately here

 

When Kay runs the LAN simulation, WebNet proactively inserts a router (see Figure 5-4, upper right) and informs Kay that a router is needed at the intersection of the two subnets. WebNet displays some basic information about routers and suggests several Web sites with details about different routers from commercial vendors (see Figure 5-4, lower right). Here, WebNet has signaled a breakdown in Kay’s designing and provided easy access to sources of information for her to learn what she needs to know on demand. This information includes generic domain knowledge like definitions of technical terms, current equipment details like costs and community memory from related historical emails.

WebNet points to several email messages from Kay’s colleagues that discuss router issues and how they have been handled locally. The Email Archive includes all emails sent to Kay’s LAN management workgroup in the past. Relevant emails are retrieved and ordered by the Email Archive software (Lindstaedt, 1996) based on their semantic relatedness to a query. In Kay’s situation, WebNet automatically generates a query describing the simulation context, particularly the need for a router. The repository can also be browsed, using a hierarchy of categories developed by the user community.

Figure 5-4. The WebNet LAN design and simulation workspace (upper-right frame) and information delivered by a critic (lower-right frame). Note the table of contents for the Web site (left frame).

Kay reviews the email to find out which routers are preferred by her colleagues. Then she looks up the latest specs, options and costs on the Web pages of router suppliers. Kay adds the router she wants to the simulation and re-runs the simulation to check it. She saves her new design in a catalog of local LAN layouts. Then she sends an email message to her co-workers telling them to take a look at the new design in WebNet’s catalog. She also asks Jay, her mentor at Network Services, to check her work.

3.3.2 Interactive and Evolving Knowledge

Jay studies Kay’s design in his Web browser. He realizes that the Iris computer that Kay has added is powerful enough to perform the routing function itself. He knows that this knowledge has to be added to the simulation in order to make this option obvious to novices like Kay when they work in the simulation. Agentsheets includes an end-user programming language that allows Jay to reprogram the Iris workstation agent (Repenning, 1994). To see how other people have programmed similar functionality, Jay finds a server agent in the Simulations Repository and looks at its program. He adapts it to modify the behavior of the Iris agent and stores this agent back in the repository. Then he redefines the router critic rule in the simulation. He also sends Kay an email describing the advantages of doing the routing in software on the Iris; WebNet may make this email available to people in situations like Kay’s in the future.

When he is finished, Jay tests his changes by going through the process that Kay followed. This time, the definition of router supplied by WebNet catches his eye. He realizes that this definition could also include knowledge about the option of performing routing in workstation software. The definitions that WebNet provides are stored in an interactive glossary. Jay goes to the WebNet glossary entry for “router” and clicks on the “Edit Definition” button. He adds a sentence to the existing definition, noting that routing can sometimes be performed by server software. He saves this definition and then clicks on “Make Annotations.” This lets him add a comment suggesting that readers look at the simulation he has just modified for an example of software routing. Other community members may add their own comments, expressing their views of the pros and cons of this approach. Any glossary user can quickly review the history of definitions and comments—as well as contribute their own thoughts.

3.3.3 Community Memory

It is now two years later. Kay has graduated and been replaced by Bea. The subnet that Kay had added crashed last night due to print queue problems. Bea uses the LAN Management Information component of WebNet to trace back through a series of email trouble reports and entries in LAN diaries. The LAN Management Information component of WebNet consists of four integrated information sources: a Trouble Queue of reported problems, a Host Table listing device configurations, a LAN Diary detailing chronological modifications to the LAN and a Technical Glossary defining local hardware names and aliases. These four sources are accessed through a common interface that provides for interactivity and linking of related items.

The particular problem that Bea is working on was submitted to her through the Trouble Queue. Bea starts her investigation with the Host Table, reviewing how the printer, routers and servers have been configured. This information includes links to LAN Diary entries dating back to Kay’s work and providing the rationale for how decisions were made by the various people who managed the LAN. Bea also searches the Trouble Queue for incidents involving the print queue and related device configurations. Many of the relevant entries in the four sources are linked together, providing paths that guide Bea insightfully through the community history. After successfully debugging the problem using the community memory stored in WebNet, Bea documents the solution by making entries and new cross links in the LAN Management Information sources: the Trouble Queue, Host Table, LAN Diary and Glossary.

In this scenario, Kay, Jay and Bea have used WebNet as a design, communication and memory system to support both their immediate tasks and the future work of their community. Knowledge has been constructed by people working on their own, but within a community context. Their knowledge has been integrated within a multi-component community memory that provides support for further knowledge building. This scenario—in which simulations, various repositories, electronic diaries, communication media and other utilities are integrated with work processes—suggests how complexly integrated CIEs can support communities of practice.

4. Perspectives on Shared, Evolving Knowledge Construction

In this section I propose a mechanism designed to make a CIE, like WebNet, more effective in supporting the interactions between individuals and groups in communities of practice. I call this mechanism “perspectives.” The perspectives mechanism permits a shared repository of knowledge to be structured in ways that allow for both individual work and the negotiation of shared results. To illustrate this approach to collaboration, I describe a CIE called WebGuide, which is an example of computer-supported collaborative learning (CSCL) (Crook, 1994; Koschmann, 1996b; O’Malley, 1995). The approach of interpretive, computational perspectives was proposed in chapter 4; the description of WebGuide continues in chapter 6.

4.1 Perspectives: A Collaboration Support Mechanism

The concept of perspectives comes from the hermeneutic philosophy of interpretation of Heidegger (1927/1996) and Gadamer (1960/1988). According to this philosophy, all understanding is situated within interpretive perspectives: knowledge is fundamentally perspectival. This is in accord with recent work in cognitive science that argues for theories of socially situated activity (Lave & Wenger, 1991; Winograd & Flores, 1986). These theories extend the hermeneutic approach to take into account the role of social structures in contributing to molding the construction of knowledge (Vygotsky, 1930/1978). Communities of practice play an important role in the social construction of knowledge (Brown & Duguid, 1991).

Knowledge here is the interpretation of information as meaningful within the context of personal and/or group perspectives. Such interpretation by individuals is typically an automatic and tacit process of which people are not aware (see chapter 4). It is generally supported by cultural habits (Bourdieu, 1972/1995) and partakes of processes of social structuration (Giddens, 1984b). This tacit and subjective personal opinion evolves into shared knowledge primarily through communication and argumentation within groups (Habermas, 1981/1984).

Collaborative work typically involves both individual and group activities. Individuals engage in personal perspective-making and also collaborate in perspective-taking (Boland & Tenkasi, 1995). That is, individuals construct not only elements of domain knowledge, but also their own “take” on the domain, a way of understanding the network of knowledge that makes up the domain. An essential aspect of creating one’s perspective on a domain of knowledge is to take on the perspectives of other people in the community. Learning to interpret the world through someone else’s eyes and then adopting this view as part of one’s own intellectual repertoire is a fundamental mechanism of learning. Collaborative learning can be viewed as a dialectic between these two processes of perspective making and perspective taking. This interaction takes place at both the individual and group units of analysis—and it is a primary mode of interchange between the two levels.

While the Web provides an obvious medium for collaborative work, it provides no support for the interplay of individual and group understanding that drives collaboration. First, we need ways to find and work with information that matches our personal needs, interests and capabilities. Then we need means for bringing our individual knowledge together to build shared understanding and collaborative products. Enhancing the Web with perspectives may be an effective way to accomplish this.

As a mechanism for computer-based information systems, the term perspective means that a particular, restricted segment of an information repository is being considered, stored, categorized and annotated. This segment consists of the information that is relevant to a particular person or group, possibly personalized in its display or organization to the needs and interests of that individual or team. Computer support for perspectives allows people in a group to interact with a shared community memory; everyone views and maintains their own perspective on the information without interfering with content displayed in the perspectives of other group members.

One problem that typically arises is that isolated perspectives of group members tend to diverge instead of converge as work proceeds. Structuring perspectives to encourage perspective-taking, sharing and negotiation offers a solution to this by allowing members of a group to communicate about what information to include as mutually acceptable. The problem with negotiation is generally that it delays work on information while potentially lengthy negotiations are underway. Here, a careful structuring of perspectives provides a solution, allowing work to continue within personal perspectives while the contents of shared perspectives are being negotiated. I believe that perspectives structured for negotiation is an important approach that can provide powerful support for collaborative use of large information spaces on the Web.

The idea of computer-based perspectives traces its lineage to hypertext ideas like “trail blazing” (Bush, 1945), “transclusion” (Nelson, 1981) and “virtual copies” (Mittal, Bobrow, & Kahn, 1986)—techniques for defining and sharing alternative views on large hypermedia spaces. At the University of Colorado, we have been building desktop applications with perspectives for the past decade (see (McCall et al., 1990) and chapters 1 and 4) and are now starting to use perspectives on the Web.

Earlier versions of the perspectives mechanism defined different contexts associated with items of information. For instance, in an architectural DODE, information about electrical systems could be grouped in an “electrical context” or “electrician’s perspective.” In a CIE, this mechanism is used to support collaboration by defining personal and group perspectives in which collaborating individuals can develop their own ideas and negotiate shared positions. These informational contexts can come to represent perspectives on knowledge. While some collaboration support systems provide personal and/or group workspaces (Scardamalia & Bereiter, 1996), the perspectives implementation described below is innovative in supporting hierarchies or graphs of perspective inheritance.

This new model of perspectives has the important advantage of letting team members inherit the content of their team’s perspective and other information sources without having to generate it from scratch. They can then experiment with this content on their own without worrying about affecting what others see. This is advantageous as long as one only wants to use someone else’s information to develop one’s own perspective. It has frequently been noted in computer science literature (Boland & Tenkasi, 1995; Floyd, 1992) that different stakeholders engaged in the development and use of a system (e.g., designers, testers, marketing, management, end-users) always think about and judge issues from different perspectives and that these differences must be taken into account.
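
A minimal sketch can indicate how such inheritance might be structured. In the hypothetical Java model below (all names are invented, and the structure is simplified to an acyclic graph), a perspective displays its own notes together with everything inherited, read-only, from its parent perspectives, so that local work never alters what others see.

// A hypothetical sketch of perspective inheritance as described above.
// All names are invented; inheritance is read-only, so local edits stay local.
import java.util.*;

public class Perspective {
    private final String name;
    private final List<Perspective> parents = new ArrayList<>();   // e.g. team or class perspectives
    private final List<String> ownNotes = new ArrayList<>();       // content created locally

    Perspective(String name, Perspective... parents) {
        this.name = name;
        this.parents.addAll(Arrays.asList(parents));
    }

    void addNote(String note) { ownNotes.add(note); }

    // The visible content: inherited notes first, then local ones.
    List<String> visibleNotes() {
        List<String> all = new ArrayList<>();
        for (Perspective p : parents) all.addAll(p.visibleNotes());
        all.addAll(ownNotes);
        return all;
    }

    public static void main(String[] args) {
        Perspective team = new Perspective("team");
        team.addNote("negotiated team summary");
        Perspective kay = new Perspective("Kay", team);
        kay.addNote("Kay's draft idea");
        System.out.println(kay.visibleNotes());   // team note + Kay's draft
        System.out.println(team.visibleNotes());  // team note only; Kay's draft is invisible here
    }
}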

However, if one wants to influence the content of team members’ perspectives, then this approach is limited because one cannot change someone else’s content directly. It is of course important for supporting collaborative work that the perspectives maintain at least a partial overlap of their contents in order to reach successful mutual understanding and coordination. The underlying subjective opinions must be intertwined to establish intersubjective understanding (Habermas, 1981/1984; Tomasello et al., 1993). In the late 1990s, our research explored how to support the intertwining of perspectives using the perspectives mechanism for CIEs.

4.2 Designing a System for Collaborative Knowledge Construction

We designed a system of computational support for interpretive perspectives in which the content of one perspective can be automatically inherited into perspectives connected in a perspective hierarchy or graph. This sub-section recounts the motivation and design history of integrating the perspectives mechanism into a CIE named WebGuide. It discusses a context in which student researchers in middle school learn how to engage in collaborative work and how to use computer technologies to support their work.

In summer 1997 we decided to apply our vision of intertwining personal and group perspectives to middle school classrooms of 12-year-old sixth graders. The immediate presenting problem was that students could not keep track of website addresses they found during their Web research. The larger issue was how to support team projects. We focused on a project-based curriculum (Blumenfeld et al., 1991) on ancient civilizations of Latin America (Aztec, Inca, Maya) used at the school.

In compiling a list of requirements for WebGuide, we focused on how computer support can help structure the merging of individual ideas into group results. Such support should begin early and continue throughout the student research process. It should scaffold and facilitate the group decision-making process so that students can learn how to build consensus. WebGuide combines displays of individual work with the emerging group view. Note that the topic on Aztec Religion in figure 5-5 was added to the team perspective by another student (Bea). Also note that Kay has made a copy of a topic from Que’s perspective so she can keep track of his work related to her topic. The third topic is an idea that Kay is preparing to work on herself. Within her personal electronic workspace, Kay inherits information from other perspectives (such as her team perspective) along with her own work.

 

Figure 5-5 goes approximately here

 

It soon became clear to us that each student should be able to view the notes of other team members as they work on common topics, not only after certain notes are accepted by the whole team and copied to the team perspective. Students should be able to adopt individual items from the work of other students into their own perspective, in order to start the collaboration and integration process. From early on, they should be able to make proposals for moving specific items from their personal perspective (or from the perspective of another) into the team perspective, which will eventually represent their team product, the integration of all their work.

Figure 5-5. Part of Kay’s personal perspective. There are three topics visible in this view. Within each topic are short subheadings or comments, as well as Web bookmarks and search queries. At the bottom is access to search engines.

The requirement that items of information can be copied, modified and rearranged presupposes that information can be collected and presented in small pieces—at the granularity of a paragraph or an idea. This is also necessary for negotiating which pieces should be accepted, modified, or deleted. We want the CIE to provide extensive support for collecting, revising, organizing and relating ideas as part of the collaborative construction of knowledge.

The Web pages of a student’s personal perspective should not only contain live link bookmarks and search queries, but also categories, comments and summaries authored by the student. Comments can optionally be attached to any information item. Every item is tagged with the name of the person who created or last modified it. Items are also labeled with perspective information and time stamps.
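
Under the assumption that such items are stored as small structured records, one item might be sketched as the following hypothetical Java class (the field and enum names are mine, for illustration only, not WebGuide’s implementation).

// A hypothetical sketch of one workspace item: each piece of content carries its author,
// its perspective of origin and a time stamp, and optional comments can be attached.
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class WorkspaceItem {
    enum Kind { BOOKMARK, SEARCH_QUERY, CATEGORY, COMMENT, SUMMARY }

    final Kind kind;
    final String content;          // a URL, a query string, or authored text
    final String author;           // who created or last modified the item
    final String perspective;      // e.g. a personal, team, or class perspective
    final Date lastModified = new Date();
    final List<WorkspaceItem> comments = new ArrayList<>();

    WorkspaceItem(Kind kind, String content, String author, String perspective) {
        this.kind = kind; this.content = content;
        this.author = author; this.perspective = perspective;
    }

    void attachComment(WorkspaceItem comment) { comments.add(comment); }

    public static void main(String[] args) {
        WorkspaceItem bookmark = new WorkspaceItem(Kind.BOOKMARK,
                "http://example.org/aztec-religion", "Kay", "Kay");
        bookmark.attachComment(new WorkspaceItem(Kind.COMMENT,
                "Good overview; propose for the team perspective?", "Bea", "Bea"));
        System.out.println(bookmark.kind + " by " + bookmark.author
                + " in perspective " + bookmark.perspective);
    }
}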

Students each enter notes in their personal perspectives using information available to them: the Web, books, encyclopedia, CD-ROM, discussions, or other sources. Students can review the notes in the class perspective, their team perspective and the personal perspectives of their team mates. All of these contents are collected in comparison perspectives, where they are labeled by their perspective of origin. Students extract from the group research those items which are of interest to them. Then, within their personal perspectives they organize and develop the data they have collected by categorizing, summarizing, labeling and annotating. The stages of investigating, collecting and editing can be repeated as many times as desired. Team members then negotiate which notes should be promoted to the team perspective to represent their collaborative product.

The class project ends with each team producing an organized team perspective on one of the civilizations. These perspectives can be viewed by members of the other teams to learn about the civilizations that they did not personally research. The team perspectives can also provide a basis for additional class projects, like narrative reports and physical displays. Finally, this year’s research products can be used to create next year’s class perspective starting point, so new researchers can pick up where the previous generation left off—within a Web information space that will have evolved substantially in the meantime.

4.3 Supporting Perspective-Making

The application of a CIE to the problem of supporting middle school students conducting Web research on the Aztec, Maya and Inca civilizations drove the original concept of WebGuide. Since then, the basic functionality of the CIE has been implemented as a Java applet and applied in two other applications: (1) Gamble Gulch: a set of middle school teams constructing conflicting perspectives on a local environmental problem and (2) Readings ‘99: a university research group exploring cognitive science theories that have motivated the WebGuide approach. These two applications further illustrate how perspective-making and perspective-taking can be supported within a CIE. They are briefly discussed here, but will be described in more detail in chapter 6.

We first used an early implementation of WebGuide in a classroom at the Logan School for Creative Learning in Denver (see figure 5-6). For the previous five years, this class of middle school students had researched the environmental damage done to mountain streams by “acid mine drainage” from deserted gold mines in the Rocky Mountains above Denver. They actually solved the problem at the source of a stream coming into Boulder from the Gamble Gulch mine site by building a wetlands area to filter out heavy metals. Now they were investigating the broader ramifications of their past successes; they were looking at the issue of acid mine drainage from various alternative—and presumably conflicting—perspectives. The students interviewed adult mentors to obtain opinions representing specific perspectives: environmental, governmental, mine-owner and local landowner.

 

Figure 5-6 goes approximately here

Figure 5-6. WebGuide for negotiating environmental perspectives.

 

As an initial field test of the WebGuide system, this trial resulted in valuable experience in the practicalities of deploying such a sophisticated program to young students over the Web. The students were enthusiastic users of the system and offered (through WebGuide) many ideas for improvements to the interface and the functionality. Consequently, WebGuide benefited from rapid cycles of participatory design. The differing viewpoints, expectations and realities of the software developers, teachers and students provided a dynamic field of constraints and tensions within which the software, its goals and the understanding of the different participants co-evolved within a complex structural coupling.

The Readings ‘99 application of WebGuide the following year stressed the use of perspectives for structuring collaborative efforts to build shared knowledge. The goal of the graduate seminar was to evolve sophisticated theoretical views on computer mediation within a medium that supports the sharing of tentative positions and documents the development of ideas and collaboration over time. A major hypothesis to be explored by the course was that software environments with perspectives—like WebGuide—can provide powerful tools for coordinated intellectual work and collaborative learning. For instance, it explored how the use of a shared persistent knowledge construction space can support more complex discussions than ephemeral face-to-face conversations.

This is not the place to evaluate the effectiveness of the WebGuide perspective mechanism. The story of its development will be continued in chapter 6. Here, I wanted simply to suggest the possibility of computational support for collaboration that goes beyond what is now commercially available. The perspectives mechanism allows people to work collaboratively by intertwining their personal and group perspectives on shared ideas.

5. Extending Human Cognition

Our early work on domain-oriented design environments (DODEs)—reviewed in section 2 of this chapter—was an effort to augment human intelligence within the context of professional design activities. At a practical level, our focus on building systems for experts (rather than expert systems) contrasted with much research at the time that emphasized either (1) artificial intelligence heuristics intended to automate design tasks or (2) user-friendly, idiot-proof, walk-up-and-use systems that were oriented toward novices. In theoretical terms, we acted upon the view that human intelligence is not some biologically fixed system that can be modeled by and possibly even replaced by computationally analogous software systems. Rather, human intelligence is an open-ended involvement in the world that is fundamentally shaped by the use of tools (Donald, 1991; Heidegger, 1927/1996; Vygotsky, 1930/1978). In this view, computer-based systems can extend the power of human cognition. Like any effective tools, software systems like DODEs mediate the cognitive tasks, transforming both the task and the cognitive process (Norman, 1993; Winograd & Flores, 1986). In addition, computer-based systems enhance the capabilities of their users by encapsulating the derived human intentionality of their developers (Stahl, 1993). In this light, we saw the emergence of the Web as offering an enabling technology for allowing communities of DODE users to embed their own collective experience in the critics and design rationale components of DODE knowledge bases.

The movement in our work from DODEs to collaborative information environments (CIEs)—reviewed in section 3—was not only driven by the potential of Web technology. It was also motivated by the increasing awareness of the socially situated character of contemporary work, including the important role of communities of practice (Brown & Duguid, 1991; Lave & Wenger, 1991; Orr, 1990). The fact that much work and learning is overtly collaborative these days is not accidental (Marx, 1867/1976). Just as the cognitive processes that are engaged in work and learning are fundamentally mediated by the tools that we use to acquire, store and communicate knowledge, they are equally mediated by social phenomena (Giddens, 1984b; Habermas, 1981/1984). In fact, tools, too, have a social origin, so that the mediation of human cognition results from complex interactions between the artifactual and the social (Orlikowski et al., 1995; Vygotsky, 1930/1978). CIEs are designed to serve as socially-imbued, computationally powerful tools. They make the social character of knowledge explicit and they support collaborative knowledge building.

The purpose of a perspectives mechanism such as the one prototyped in WebGuide—reviewed in section 4—is to provide tool affordances that support the social nature of mediated cognition. Collaborative work and learning involve activities at two units of analysis: the individual and the group (Boland & Tenkasi, 1995; Orlikowski, 1992). Personal perspectives and team perspectives provide a structure for distinguishing these levels and create workspaces in which the different activities can take place. Of course, the crux of the problem is to facilitate interaction between these levels: the perspectives mechanism lets individuals and teams copy notes from one space to another, reorganize the ideas and modify the content. Communities of practice are not simple, fixed structures, and so the graph of perspective inheritance must be capable of being interactively extended to include new alliances and additional levels of intermediate sub-teams.

The perspectives mechanism (more fully discussed in chapter 6) has not been proposed as a complete solution; it is meant to be merely suggestive of computationally intensive facilities to aid collaboration. Systematic support for negotiating consensus building and for the promotion of agreed upon ideas up the hierarchy of sub-teams is an obvious next step (see chapters 7 & 8). Collaborative intelligence places a heavy cognitive load on participants; any help from the computer in tracking ideas and their status would free human minds for the tasks that require interpretation of meaning (see chapter 16).

The concept of intelligence underlying the work discussed in this chapter views human cognition, software processing and social contexts as complexly and inseparably intertwined. In today’s workplaces and learning milieus, neither human nor machine intelligence exists independently of the other. Social concerns about AI artifacts are not secondary worries that arise after the fact, but symptoms of the fundamentally social character of all artifacts and of all processes of material production and knowledge creation (Marx, 1867/1976; Vygotsky, 1930/1978). I am trying to explore the positive implications of this view by designing collaborative information environments to support knowledge construction by small groups within communities.

 


6

Perspectives on Collaborative Learning

After the exploration of computer support for personal and small-group perspectives described in chapters 4 and 5, I tried to push as hard as I could a model of threaded discussion with perspectives. I developed a Web-based tool called WebGuide, designed to mediate and structure collaborative learning. This software defined a flexible system of perspectives on a shared knowledge construction space. WebGuide provides an electronic and persistent workspace for individuals and teams to develop and share distinctive points of view on a topic. The software and associated usage practices were designed by trials in a middle school classroom and in an advanced graduate seminar. Experience in these use-situations raised a range of questions concerning theoretical and practical issues, which drove further research. This chapter is a reflection on what was collaboratively learned about how software artifacts can mediate learning and shared cognition.

This chapter’s multi-faceted discussion of the design of the WebGuide collaboration system reflects on the intricate relationship between theory, design and usage evaluation. It demonstrates the interplay of considerations required in designing support for the intertwining of personal and group perspectives, arguably that aspect of collaboration most in need of computational support. This design study reflects on the emergence of abstract theory from practical implementation issues.

WebGuide was probably my most intensive software development effort. I tried to create a system that would support group cognition in the sense of collaborative knowledge building. Threaded discussion seemed to be an appropriate medium, but its use always tended to be limited to the exchange of personal opinions, at best. I developed a computational system to support the sharing of interpretive perspectives, but it turned out to be too complicated to use fluidly, despite repeated attempts to make its interface more intuitive. Theoretical reflections related to WebGuide led me to bring in communication specialists and to undertake the analyses of part II of this book and the reflections of part III.

The main section of this chapter was written for the April 1999 AERA Conference. A year later, it was peer-reviewed in the online Journal of Interactive Media in Education; materials from the interaction with the reviewers have been appended to the paper.

1. Introductory Narrative

For some years now I have been interested in how to personalize the delivery of information from knowledge repositories to people based on their preferred perspectives on the information (Stahl, 1995, 1996). For instance, designers often critique an evolving design artifact from alternative technical points of view; different designers have different personal concerns and styles, requiring considerations based upon access to different rules of thumb, rationale, constraints, standards and other forms of domain knowledge. Computer design environments should support these important interpretive perspectives. I am now primarily interested in applying similar mechanisms of perspectival computer support within contexts of collaborative learning.

In 1997, Ted Habermann—an information architect at the National Oceanic and Atmospheric Administration (NOAA) who makes geophysical data available to school children over the Web—suggested to me that we try to develop some computer support for a project at his son’s middle school. Dan Kowal, the environmental sciences teacher at the Logan School for Creative Learning in Denver, was planning a year-long investigation of alternative perspectives on the issue of “acid mine drainage” (AMD)—the pollution of drinking water supplies by heavy metals washed out of old gold mines. The fact that Dan and I were interested in “perspectives” from different perspectives seemed to provide a basis for fruitful collaboration. Ted obtained NSF funding for the project and we all spent the summer of 1998 planning the course and its perspectives-based software. Each of us brought in colleagues, and together we created a Java application (WebGuide) and a set of auxiliary web pages, assembled a group of adult mentors representing different perspectives on AMD, and developed a course curriculum.

The class started in September and the software was deployed in October. The students in Dan’s class were aware of the experimental nature of the software they were using and were encouraged to critique it and enter their ideas into WebGuide. Feedback from these twelve-year-old students provided initial experience with the usability of WebGuide and resulted in a re-implementation of the interface and optimization of the algorithms over the school’s Christmas vacation.

Figure 6-1. The Gamble Gulch version of WebGuide viewed in a Web browser. The top part is a Java applet displaying an outline view of note titles. The content of the selected note is displayed in an HTML frame below. To the right are buttons for navigating the outline and changing the content in the shared knowledge space. The view shown is from the personal perspective of one student.
In January 1999, I organized an interdisciplinary seminar of doctoral students from cognitive, educational and computational sciences to study theoretical texts that might provide insight into how to support collaborative learning with perspectives-based software. The seminar used WebGuide as a major medium for communication and reflection, including reflection on our use of the software. This provided a second source of experience and raised a number of issues that needed to be addressed in software redesign.

In this chapter I would like to begin a reflection on the issues that have arisen through our WebGuide experiences because I think they are critical to the ability to support collaborative learning with computer-based environments. The potential for computer mediation of collaboration seems extraordinary, but our experience warns us that the practical barriers are also enormous. Certainly, our experiences are not unique, and similar projects at the universities of Toronto, Michigan, Berkeley, Northwestern, Vanderbilt, Georgia Tech, etc. have run into significant obstacles for years. Indeed, we observed many of these issues in a seminar in the year prior to the implementation of WebGuide (dePaula, 1998; Koschmann & Stahl, 1998). However, I believe that perspectives-based software addresses or transforms some of the issues and raises some of its own.

Let me describe how computer support for perspectives has evolved in WebGuide. I will first discuss the preliminary implementation as used in Dan’s middle school environmental course and explain how perspectives are supported in that version. A number of design issues led to an extended attempt to bring theory to the aid of reflection on practice. This included the graduate seminar that used a revised version of WebGuide. Finally, following the original part of this chapter is a condensed version of the dialog that took place between the Journal of Interactive Media in Education (JIME) reviewers and me, where responses from winter 2000 and spring 2001 bring in reflections from subsequent design iterations.

2. Practice I: Environmental Perspectives

An early implementation of WebGuide was in use in Dan’s classroom at the Logan School. For the previous five years, his class had researched the environmental damage done to mountain streams by the Gamble Gulch mine site. Then they investigated the social issue of acid mine drainage from various perspectives: environmental, governmental, mine-owner and local landowner. Working in teams corresponding to each of these perspectives, they articulated the position of their perspective on a set of shared questions.

The “Gamble Gulch” application of WebGuide served as the medium through which the students collaboratively researched these issues with their mentors and with teammates. Each student and mentor had a personal display perspective, which inherited from one of the content-based team perspectives (environmental protection, governmental regulation, etc.), depending upon which intellectual perspective they were working on constructing.

Figure 6-1 shows one student’s (Blake) personal perspective on the class discourse. The tree of discussion threads was “seeded” with question categories, such as “Environmental Analysis Questions.” Within these categories, the teacher and I posted specific questions for the students to explore, like, “Do you believe that AMD is a serious threat to the environment?” Here, Blake has sent an email to a mentor asking for information related to this question. Email interactions happen through WebGuide and are retained as notes in its display perspectives. When replies are sent back, they are automatically posted to the discussion outline under the original email. When someone clicks on a title, the contents of that note are displayed in an HTML frame below the applet (as is the body of the student’s email in figure 6-1).

 

Figure 6-1 goes approximately here

 

Figure 6-2. The web of perspectives in Gamble Gulch. Information is automatically inherited downward in the diagram. Blake’s perspective includes all the notes entered in the Gulch class, Landowner and Student perspectives. His notes also show up in the Landowner, Student and Gulch class comparison perspectives.

Blake is working in his personal perspective, which inherits from the Class, Student team and Landowner team perspectives (see the dashed red arrows in figure 6-2). Note that the display of his personal perspective (in figure 6-1) includes notes that Dan and I entered in the Student perspective to structure the work of all the students. Blake can add, edit and delete ideas in his perspective, as well as send email from within it. Because he is a member of the landowner team and the student group as well as the class, he can browse ideas in the Student comparison, the Landowner comparison and the Gamble Gulch class comparison perspectives (see the list of perspectives accessible to him on the right of figure 6-1).

 

Figure 6-2 goes approximately here

 

For this application, the teacher has decided that perspective comparison and negotiation will take place in live classroom discussions, rather than in WebGuide. After a team or the whole class reaches a consensus, the teacher will enter the statements that they have agreed upon into the team or class perspective.

The goal of the year-long course is not only to negotiate within teams to construct the various positions, but also to negotiate among the positions to reach consensus or to clarify differences. Dan designed this class—with its use of WebGuide—to teach students that knowledge is perspectival, that different people construct different views, and that compilations of facts and arguments differ depending upon the social situation from which they arise. He hopes that his students will not only learn to evaluate statements as deriving from different perspectives, but also learn to negotiate the intertwining of perspectives to the extent that this is possible.

3. Computer Support of Perspectives

The term “perspectives” is overloaded with meanings; this frequently produces confusion, even when the intent is to tacitly exploit in one domain aspects of the perspective metaphor drawn from a different domain. It may be helpful at this point to distinguish three types of perspectives: literal, figurative and computational.

·        Literal perspectives are optical or perceptual orientations: one sees objects from the specific angle or vantage point of the physical location of one’s eyes.

·        Figurative perspectives take metaphorical license and refer to, for instance, different ways of conceptualizing a theme, as in adopting a skeptical view of a conversational claim.

·        Computational perspectives are the result of software mechanisms that classify elements in a database for selective display. In WebGuide, for example, if I enter a note in my personal perspective then that note will be displayed whenever my perspective is displayed but not when someone else’s personal perspective is displayed.

WebGuide implements a system of computational (i.e., computer-supported, automated) perspectives designed to utilize the perspective metaphor in order to support characteristics of collaboration and collaborative learning. It is unique in a number of ways that distinguish it from other software systems that may use the term “perspectives”:

·        Other systems refer to different representations of information as perspectives. They might have a graphical and a textual view of the same data. In WebGuide, different data is literally displayed in different perspectives while using the same representation—hierarchically structured titles of textual notes.

·        In WebGuide, the perspectives mechanism is neither a simple tagging of data nor a database view, but is a dynamic computation that takes into account a web of inheritance among perspectives. Thus, Blake’s perspective includes not only information that he entered in his perspective, but also information inherited from the Class, Student and Landowner perspectives.

·        The web of perspectives can be extended by users interactively, and the inheritance of information is always computed on-the-fly, based on the current configuration of this web.

·        The information in a perspective has a user-maintained structure in which each note has one or more “parent” notes and may have “child” notes, creating a web of notes within each perspective. The order of child notes displayed under a parent note is user defined and maintained so that WebGuide can be used to organize ideas within outline structures.

The computational perspectives mechanism we have been exploring incorporates the following features:

·        Individual community members have access to their own information source. This is called their personal perspective. It consists of notes from a shared central information repository that are tagged for display within that particular perspective (or in any perspective inherited from that perspective).

·        Notes can be created, edited, rearranged, linked together or deleted by users within their own personal perspective without affecting the work of others.

·        Another student, Annie, can integrate a note from Blake’s perspective into her own personal perspective by creating a link or virtual copy of the note. If Blake modifies the original note, then it changes in Annie’s perspective as well. However, if Annie modifies the note, a new note is actually created for her, so that Blake’s perspective is not changed. This arrangement generally makes sense because Annie wants to view (or inherit) Blake’s note, even if it evolves. However, Blake should not be affected by the actions of someone who copied one of his notes.

·        Alternatively, Annie can physically copy the contents of a note from Blake’s perspective. In this case, the copies are not linked to each other in any way. Since Annie and Blake are viewing physically distinct notes now, either can make changes without affecting the other’s perspective.

·        There is an inheritance web of perspectives; descendant perspectives inherit the contents of their ancestor perspectives. Changes (additions, edits, deletions) in the ancestor are seen in descendant perspectives, but not vice versa. New perspectives can be created by users. Perspectives can inherit from existing perspectives. Thus, a team comparison perspective can be created that inherits and displays the contents of the perspectives of the team members. A hierarchy of team, sub-team, personal and comparison perspectives can be built to match the needs of a particular community (figure 6-2, above).

This model of computational perspectives has the important advantage of letting team members inherit the content of their team’s perspective and other information sources without having to generate it from scratch. They can then experiment with this content on their own without worrying about affecting what others see.
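
To make this mechanism more concrete, here is a minimal, self-contained sketch of how an inheritance web with virtual copies might be modeled. It is illustrative only: the class and method names (Perspective, Note, link, edit) are invented for this sketch rather than taken from the WebGuide source, and the real system stores notes in a central database rather than in memory.

```java
import java.util.*;

/** A minimal sketch of a perspectives store with an inheritance web and
 *  copy-on-write "virtual copies"; illustrative only, not the WebGuide source. */
class PerspectiveStore {

    static class Note {
        String id;          // identity of the note in the central repository
        String text;        // the note's content (unused while it is a pure link)
        Note original;      // non-null while this note is a virtual copy (link)
        Note(String id, String text) { this.id = id; this.text = text; }
        String text() { return original != null ? original.text : text; }
    }

    static class Perspective {
        final String name;
        final List<Perspective> parents = new ArrayList<>(); // inheritance web (a DAG)
        final List<Note> notes = new ArrayList<>();          // notes entered or linked here
        Perspective(String name, Perspective... parents) {
            this.name = name;
            this.parents.addAll(Arrays.asList(parents));
        }

        /** Notes visible here: everything entered in this perspective plus everything
         *  inherited from ancestor perspectives, computed on the fly, not duplicated. */
        List<Note> visibleNotes() {
            List<Note> result = new ArrayList<>();
            collect(this, result, new HashSet<>());
            return result;
        }
        private static void collect(Perspective p, List<Note> out, Set<Perspective> seen) {
            if (!seen.add(p)) return;                         // guard against shared ancestors
            for (Perspective parent : p.parents) collect(parent, out, seen);
            out.addAll(p.notes);
        }

        /** Link (virtual copy): the copy tracks later edits to the original note. */
        Note link(Note source) {
            Note copy = new Note(source.id, null);
            copy.original = source;
            notes.add(copy);
            return copy;
        }

        /** Editing a virtual copy detaches it (copy-on-write), so the original
         *  author's perspective is never affected by the copier's changes. */
        void edit(Note note, String newText) {
            note.original = null;
            note.text = newText;
        }
    }

    public static void main(String[] args) {
        Perspective studentTeam = new Perspective("Student team");
        Perspective blake = new Perspective("Blake", studentTeam);
        Perspective annie = new Perspective("Annie", studentTeam);

        studentTeam.notes.add(new Note("q1", "Is AMD a serious threat to the environment?"));
        Note blakesNote = new Note("n1", "Ask a mentor about heavy metals.");
        blake.notes.add(blakesNote);

        System.out.println(blake.visibleNotes().size());  // 2: the team's seed question plus his own note

        Note anniesLink = annie.link(blakesNote);          // virtual copy into Annie's perspective
        blakesNote.text = "Ask a mentor about heavy metals in Gamble Gulch.";
        System.out.println(anniesLink.text());             // Annie sees Blake's later edit

        annie.edit(anniesLink, "My own take on heavy metals.");
        System.out.println(blakesNote.text);                // Blake's note remains unchanged
    }
}
```

The point mirrored from the description above is that a virtual copy tracks later edits by the original author, while an edit by the copier detaches the copy, so the original perspective is never affected.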

4. Types of Perspectives

WebGuide provides several levels of perspectives (see figure 6-2) within a web of perspective inheritance to help students compile their individual and joint research:

·        The class perspective is created by the teacher to start each team off with an initial structure and some suggested topics. It typically establishes a framework for classroom activities and defines a space for collecting the products of collaborative intellectual work.

·        The team perspective contains notes that have been accepted by a team. This perspective can be pivotal; it gradually collects the products of the team effort.

·        The student’s personal perspective is an individual’s work space. It inherits a view of everything in the student’s team’s perspective. Thus, it displays the owner’s own work within the context of notes proposed or negotiated by the team and class—as modified by the student. Students can each modify (add, edit, delete, rearrange, link) their virtual copies of team notes in their personal perspectives. They can also create completely new material there. This computational perspective provides a personal workspace in which a student can construct his or her own figurative perspective on shared knowledge. Other people can view the student’s personal perspective, but they cannot modify it.

·        The comparison perspective combines all the personal perspectives of team members and the team perspective, so that anyone can compare all the work that is going on in the team. It inherits from personal perspectives and, indirectly, from the team and class perspectives. Students can go here to get ideas and copy notes into their own personal perspective or propose items for the team perspective.

Of course, there is not really a duplication of information in the community memory. The perspectives mechanism merely displays the information differently in the different perspectival views, in accordance with the relations of inheritance.
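
To make the inheritance relations among these perspective types concrete, the following small sketch treats the web of perspectives as a directed graph and computes, on the fly, which perspectives contribute notes to a given view. The names and the deliberately simplified web are invented for illustration (the actual Gamble Gulch configuration also includes a Student team and further members); this is not WebGuide code.

```java
import java.util.*;

/** Sketch of a perspective web as a directed graph; names are illustrative only. */
public class PerspectiveWeb {
    // parent links: a perspective inherits the contents of its parents
    static final Map<String, List<String>> PARENTS = Map.of(
        "Gulch class",            List.of(),
        "Landowner team",         List.of("Gulch class"),
        "Blake (personal)",       List.of("Landowner team"),
        "Annie (personal)",       List.of("Landowner team"),
        "Landowner comparison",   List.of("Blake (personal)", "Annie (personal)"),
        "Gulch class comparison", List.of("Landowner comparison"));

    /** All perspectives whose notes appear in the given view, computed on the fly. */
    static Set<String> contributors(String view) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> todo = new ArrayDeque<>(List.of(view));
        while (!todo.isEmpty()) {
            String p = todo.pop();
            if (seen.add(p)) todo.addAll(PARENTS.getOrDefault(p, List.of()));
        }
        return seen;
    }

    public static void main(String[] args) {
        // Blake's display pulls in his own notes plus team and class notes;
        // the comparison views aggregate the personal perspectives below them.
        System.out.println(contributors("Blake (personal)"));
        System.out.println(contributors("Gulch class comparison"));
    }
}
```

Because the contributing perspectives are computed per view, a note stored once in the repository can appear in many views without ever being duplicated.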

5. Issues for Perspectives

The first issues to hit home when we deployed WebGuide were the problems of response time and screen real estate. The student computers were slower, had smaller monitors, lacked good Internet connections and were further from the server than the computers of the developers. We were, of course, already familiar with these issues from other Web applications, but one never knows quite how things will work out and how they will be accepted until one tests them under classroom conditions.

A pre-release prototype of WebGuide used dynamic HTML pages. This meant that each time a student expanded a different part of the outline of titles it was necessary to wait for a new page to be sent across the Internet. The dynamic HTML pages also greatly constrained the interface functionality. However, when we moved to a Java applet, we had to wait several minutes to download the applet code to each student computer. Furthermore, it entailed running all the perspectives computations on the slow student computers. In order to reduce the download time significantly, we first rewrote the interface using standard Java Swing classes that can be stored on the student machines. Then we split the applet into a client (the interface) and a server (the perspectives computations and database access). By downloading only the client part to the classroom, we not only reduced the download time further, but also ran the time-consuming computations on our faster server computers.

Such technical problems can be solved relatively easily by optimizing algorithms or by adjusting tradeoffs based on local conditions. Issues of social practice are much more intractable. There seem to be two major problems for threaded discussion and collaborative knowledge-construction software like WebGuide:

  1. Lack of convergence among the ideas developed in the supported discussions.
  2. Avoidance of system use in favor of email, face-to-face conversation or inaction.

WebGuide introduces its computational perspectives mechanism as a structural feature to facilitate the articulation of convergent ideas, and it even incorporates email. In attempting to address the problems posed above, it raises a new set of issues:

  3. Is the perspectives metaphor a natural one (or can it be made natural) so that people will use computational perspectives to construct their figurative perspectives?
  4. Can the web of perspectives be represented in a convenient and understandable format?

In our trials of WebGuide we have tried to create learning situations that would encourage the use of the software, yet we have observed low levels of usage and under-utilization of the system’s full functionality. This raises the following additional issues:

  5. How can learning situations be structured to take better advantage of the presumed benefits of the software?
  6. How can the system’s various capabilities be distinguished, such as its support for threaded discussions and for perspective-making?

In order to answer questions of this magnitude it was necessary to gather more experience, to be more closely involved in the daily usage of the system and to develop a deeper theoretical understanding of collaborative learning and of computer mediation. Having defined these goals, I announced a seminar on the topic of “computer mediation of collaborative learning,” open to interested researchers from a number of disciplines—primarily education, cognitive psychology and computer science. The goal of the seminar was explicitly stated to be an experiment in the use of WebGuide to construct knowledge collaboratively, based on careful reading of selected texts. The texts traced the notion of computer mediation (Boland & Tenkasi, 1995; Caron, 1998; Hewitt, Scardamalia, & Webb, 1998; Scardamalia & Bereiter, 1996) back to situated learning theory (Bruner, 1990; Cole, 1996; Lave, 1991; Lave & Wenger, 1991; Lave, 1996)—and from there back to the notion of mediated consciousness in Vygotsky (1930/1978) and its roots in Hegel (Habermas, 1971; Hegel, 1807/1967; Kojève, 1947/1969) and Marx (1844/1967; 1845/1967; 1867/1976).

In section 8 of this chapter I will comment on our current understanding of the six issues listed above. But first it is necessary to describe the ways in which the seminar attempts to make use of WebGuide and the conceptualization of the theory of computer mediation that is arising in the seminar.

6. Practice II: Theoretical Perspectives

The seminar on computer mediation of collaborative learning is designed to use WebGuide in several ways:

·        As the primary communication medium for internal collaboration. The seminar takes place largely on-line. Limited class time is used for people to get to know each other, to motivate the readings, to introduce themes that will be followed up on-line, and to discuss how to use WebGuide within the seminar.

·        As an example collaboration support system to analyze. Highly theoretical readings on mediation and collaboration are made more concrete by discussing them in terms of what they mean in a system like WebGuide. The advantage of using a locally-developed prototype like WebGuide as our example is that we not only know how it works in detail, but we can modify its functionality or appearance to try out suggestions that arise in the seminar.

·        As an electronic workspace for members to construct their individual and shared ideas. Ideas entered into WebGuide persist there, where they can be revisited and annotated at any time. Ideas that arise early in the seminar will still be available in full detail later so that they can be related to new readings and insights. The record of discussions over a semester or a year will document how perspectives developed and interacted.

·        As a glossary and reference library. This application of WebGuide is seeded with a list of terms that are likely to prove important to the seminar and with the titles of seminar readings. Seminar members can develop their own definitions of these terms, modifying them based on successive readings in which the terms recur in different contexts and based on definitions offered by other members. Similarly, the different readings are discussed extensively within WebGuide. This includes people giving their summaries of important points and asking for help interpreting obscure passages. People can comment on each other’s entries and also revise their own. Of course, new terms and references can easily be added by anyone.

·        As a brainstorming arena for papers. The application has already been seeded with themes that might make interesting research papers drawing on seminar readings and goals. WebGuide allows people to link notes to these themes from anywhere in the information environment and to organize notes under the themes. Thus, both individuals and groups can use this to compile, structure and refine ideas that may grow into publishable papers. Collaborative writing is a notoriously difficult process that generally ends up being dominated by one participant’s perspective or being divided up into loosely connected sections, each representing somewhat different perspectives. WebGuide may facilitate a more truly collaborative approach to organizing ideas on a coherent theme.

·        As a bug report mechanism or feature request facility. Seminar participants can communicate problems they find in the software as well as propose ideas they have for new features. By having these reports and proposals shared within the WebGuide medium, they are communicated to other seminar participants, who can then be aware of the bugs (and their fixes) and can join the discussion of suggestions.

The seminar version of WebGuide incorporates a built-in permissions system that structures the social practices surrounding the use of the system. Seminar participants each have their own personal perspective in which they can manipulate notes however they like without affecting the views in other perspectives. They can add quick discussion notes or other kinds of statements. They can edit or delete anything within their personal perspective. They can also make multiple copies or links (virtual copies) from notes in their personal perspective to other notes there. Anyone is free to browse in any perspective. However, if one is not in one’s own perspective then one cannot add, edit or delete notes there (as in figure 6-3). To manipulate notes freely, one must first copy or link the note into one’s own personal perspective. The copy or link can optionally include copying (or virtual copying) all the notes below the selected note in the tree as well. These rules are enforced by the user interface, which checks whether or not someone is in their personal perspective and only allows the legal actions.
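
A rough sketch of the kind of check such an interface might perform before enabling its buttons is given below; the class, enum and method names are invented for illustration and are not WebGuide’s actual API.

```java
import java.util.Set;

/** Illustrative sketch of the permission rule described above:
 *  full rights only within one's own personal perspective. */
public class PerspectivePermissions {

    enum Action { ADD, EDIT, DELETE, LINK, COPY, BROWSE }

    /** Actions allowed anywhere: browsing, plus copying or linking a note
     *  into one's own personal perspective for further work there. */
    private static final Set<Action> ALWAYS_ALLOWED =
        Set.of(Action.BROWSE, Action.LINK, Action.COPY);

    /** Adding, editing and deleting require being "at home" in one's own perspective. */
    static boolean allowed(String loggedInUser, String ownerOfCurrentPerspective, Action action) {
        if (ALWAYS_ALLOWED.contains(action)) return true;
        return loggedInUser.equals(ownerOfCurrentPerspective);
    }

    public static void main(String[] args) {
        System.out.println(allowed("blake", "blake", Action.EDIT));   // true: own perspective
        System.out.println(allowed("annie", "blake", Action.EDIT));   // false: someone else's
        System.out.println(allowed("annie", "blake", Action.BROWSE)); // true: browsing is open to all
    }
}
```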

 

Figure 6-3 goes approximately here

 

Figure 6-3. The version of WebGuide used in the seminar. Note that some of the control buttons on the right are not functional when the logged-in author is not working in his own personal perspective. This enforces certain social practices. Also note that many headings have been inserted to structure the discussion space.
Students in the class can form sub-groups either within or across their different disciplines. They develop ideas in their personal perspectives. They debate the ideas of other people by finding notes of interest in the class comparison perspective (or in a subgroup comparison perspective) and copying these notes into their own personal perspective, where they can comment on them. The clash of perspectives is visible in the comparison perspectives, while the personal perspectives allow for complete expression and organization of a single perspective. This supports the “taking” of other people’s perspectives and the use of shared ideas in the “making” of one’s own perspectives (Boland & Tenkasi, 1995).

The seminar application of WebGuide stresses the use of perspectives for structuring collaborative efforts to build shared knowledge. The goal of the seminar is to evolve theoretical views on computer mediation—and to do so within a medium that supports the sharing of tentative positions and documents the development of ideas and collaboration over time. A major hypothesis investigated by the seminar is that software environments with perspectives—like WebGuide—can provide powerful tools for coordinated intellectual work and collaborative learning. It explores how the use of a shared persistent knowledge construction space can support more complex discussions than ephemeral face-to-face conversation. Many of the desires and concerns in this chapter originated as seminar notes in WebGuide. In particular, the seminar’s focus on theory has actually problematized our understanding of the role of theory.

7. Theory in Practice

Our initial application of WebGuide in the middle school environmental course raised a number of issues that led us to seek theoretical understanding through a seminar, which is serving as a second application of WebGuide. We have begun to see our research differently as a result of the theories we are incorporating into our discussions within the seminar. One thing that has changed is the relation we see of this theory to our research practice.

In my paper proposal to the American Educational Research Association (AERA)—the first draft of this chapter—written prior to our recent explorations, I described our approach by following the narrative order implied by conventional wisdom about the relation of theory to practice. After stating the goal or purpose of the work, I provided a theoretical framework, followed by sections on techniques, evidence, conclusions and educational/scientific import. The assumption here was that when one had a problem one turned first to theory for the solution and then “applied” the theory to some situation—either the problem situation or an experimental test context. After designing the solution based on the pre-existing theory and applying it to the test situation, one gathered evaluative data and analyzed it to measure success. The evaluation then indicated whether or not the solution had general import.

But such an approach is in keeping neither with our current experience nor with our emerging theory. We started last summer with an opportunity to explore some vague notions we had about something we called “perspectives.” We experimented with ever-evolving techniques through a complex collaborative process involving many people, each with their own concerns, understanding and insights. As part of this process some of us turned to theory—but the selection of theoretical texts and our interpretations of them were determined by the processes and issues we observed in our practical strivings.

In this draft of the chapter—still not considered a static final document, but a recapitulation from one particular moment in an on-going process—I am trying to narrate a story about how theory and practice have been co-mingled in our research. We began with an idea for a concrete classroom curriculum and worked on designing tools and structures to support the practical needs of that curriculum. Once we had a working software prototype that could be used over the Web, we deployed it in the middle school classroom. We immediately confronted the problems of response speed and monitor screen real estate that we had been worried about from the start. Students started asking for new functionality and it became clear that they were not using the implemented functions the way they were designed to be used. A dance commenced between the technicians, the educators, the students, the curriculum and the software; as we circled each other, we all changed and became more compatible.

There was no point in trying to evaluate the success of our experiment by gathering data under controlled conditions. It was clear that we needed to figure out how to make things work better, not to measure precisely how well they were (or were not) already working. Beyond the relatively clear technical usability issues there were deeper questions of how software can mediate interpersonal and cognitive relations within collaboration (Hewitt et al., 1998). This led us to look for a theory of computer mediation—and for that matter a theory of collaborative learning—in the graduate seminar. Of course, it turned out that there are no adequate theories on these topics sitting on the bookshelf for us to simply apply. Rather, we had to undertake the construction of such theory, building upon hints strewn about in texts from many disciplines and guided by the problematic in which we are involved first-hand.

Trusting in our intuition that software like WebGuide could facilitate group theory building, we set out to use WebGuide in our theoretical investigations, and thereby further drive the development of the software through additional practical experience even as we were developing theoretical justifications for our design. In reflecting on our experience, I have tried to organize this draft of the chapter in accordance with a non-traditional theory about the relation of theory and practice—an understanding of this relationship more in keeping not only with our practice but with our hermeneutic, dialectical, socially situated activity theory.

Thus, we started out from our vague, only partially articulated background understanding of perspectives as an interesting and promising concept for learning and for computer support (see chapter 4). We set up a real-world situation in which we could explore what happens when this idea is implemented. In this situation we nurtured a process of “structural coupling” (Maturana & Varela, 1987) in which the different actors evolve toward a workable synthesis or homeostasis. Rapid prototyping cycles and participatory design sessions help facilitate this process. As breakdowns in the intention of our design are recognized, we engage in reflection-in-action (Schön, 1983) to make our tacit pre-understanding explicit, to understand what has happened and to project corrective actions. This process of explication raises broad issues and calls for theory. But despite the generality of the issues, the theory is not understood in a completely abstract way, but in terms of its relevance to our situation and to the specific barriers we have uncovered in that concrete situation.

Theory—like everyday thought—often arises after the fact (or well into the complex process of practical investigations) in order to justify situations that would otherwise be too messy to comprehend and remember. Then, at the first chance it gets, theory reverses the order of things and presents itself as a guiding a priori principle. As Hegel (1807/1967) said, “the owl of Minerva flies only at night”: the wisdom of theory arrives on the scene only after the practical events of the day (which theory retroactively captures in concepts) have been put to bed. Theory is a cherished way to capture an understanding of what has been learned, even if it distorts the picture by claiming that the practice out of which theory arose was a simple application of the theory’s pre-existing abstract principles.

But, as is pointed out by the analyses of mediated cognition that our seminar is studying, there are other artifacts in which experience can be captured, preserved and transmitted (Cole, 1996). Narrative is one (Bruner, 1990). In this chapter, I have tried to project a voice which does not redefine the temporality of the experience I am reporting.

Sculpture is another way in which people impose meaningful form on nature and, as Hegel would say, externalize their consciousness through the mediation of wood, clay, plaster or stone, sharing it with others and preserving it as part of their culture’s spirit. Sculptures like that in figure 6-4 are such artifacts. They create spaces that project their own perspectives while at the same time being perceived from observational vantage points. Of course, Moore’s sculptures are not the result of some primordial experience of self-consciousness interacting with unmediated nature. They are late twentieth century explorations of form and material. Here, organic three-dimensional forms are showcased to contrast with socially prevalent two-dimensional representations and with the geometric shapes produced by machinery. The characteristics of the materials of nature are brought forth, in contrast to the plastic substances that retreat from our consciousness as commodities. Also, the pragmatic representational function of symbolic objects is sublimated in the study of their abstracted physical forms and materiality. In negating the commonplace characteristics of signs—which point away from themselves—the non-representational sculptures obtrusively confront their creator and viewers with the nature of the artifact as intentionally formed material object.

 

Figure 6-4 goes approximately here

 

Figure 6-4. Henry Moore, Three Piece Sculpture: Vertebrae, 1968-69, bronze, Hirshhorn Sculpture Garden, Washington, DC. Photo by G. Stahl, 2004.
Polished software is a very different way of objectifying experience. Buried in the source code and affordances of a software artifact are countless lessons and insights—not only those of the particular software developer, but of the traditions (congealed labor) of our technological world upon which that developer built (Marx, 1867/1976). This is as true of the current version of WebGuide as it is of any software application. So the software application is such an artifact, one that mediates classroom collaboration. But WebGuide strives to preserve insights explicitly as well, within the notes displayed in its perspectives and within their organization, including their organization into personal and group perspectives. The discussions that evolve within this medium are also artifacts, captured and organized by the perspectives.

Perhaps when we understand better how to use WebGuide in collaborative learning contexts it will maintain the knowledge that people construct through it in a way that preserves (in the sense of Hegel’s synthesis or aufheben) the construction process as well as the resultant theory. Then we may have a type of artifact or a medium that does not reify and alienate the process by which it developed—that permits one to reconstruct the origin of collaborative insights without laboriously deconstructing artifacts that are harder than stone. Eventually, collaborative practice and software design may co-evolve to the point where they can integrate the insights of multiple perspectives into group views that do not obliterate the insights of conflicting perspectives into the multifaceted nature of truth.

8. Issues for Mediation

We conclude the main part of this chapter with an attempt to sort out what we are collaboratively learning through our use of WebGuide. The six issues for perspectives-based software like WebGuide that arose during the middle school application (section 5) appeared in the graduate seminar’s usage of the software as well—and were articulated by seminar participants in their notes in WebGuide. These are important and complex issues that other researchers have raised as well. They are not problems that we have solved, but rather foci for future work. They define central goals for our redesign of WebGuide and goals for structuring the mediation of collaborative practices.

Here is a summary of our current understanding of these issues, based on our two practical experiences and our reflections on the theory of computer mediation of collaborative learning:

8.1 Divergence Among Ideas

In his review of computer-mediated collaborative learning, dePaula (1998) identified divergence of ideas as a common problem. He argued that the tree structure imposed by standard threaded discussion support was inappropriate for collaboration. The idea of a threaded discussion is that one contribution or note leads to another, so each new note is connected to its “parent” in order to preserve this connection. The problem is that there is often no effective way to bring several ideas together in a summary or synthesis, because that would require a particular note to be tied to several parent notes—something that is typically not supported by discussion software. The result is that discussions proceed along ever diverging lines as they branch out, and there is no systematic way to promote convergence (Hewitt, 1997). It seems clear, however, that collaboration requires both divergence (e.g., during brainstorming) and convergence (e.g., during negotiation and consensus building).

WebGuide tries to avoid this common structural problem of threaded discussion media at three levels:

·        The note linking mechanism in WebGuide allows notes to be linked to multiple parents, so that they can act to bring together and summarize otherwise divergent ideas. As in threaded discussions, every note is situated in the workspace by being identified and displayed as the child of some other note. However, WebGuide allows multiple parents, so that the web of notes is not restricted to a tree.

·        Similarly, the graph of perspectives allows for multiple inheritance, so that “comparison” perspectives can be defined that aggregate or converge the contents of multiple perspectives. The Logan School application was seeded with comparison perspectives corresponding to the class and subgroup perspectives, so that the overall perspectives graph has a structure in which the inheritance of notes first diverges from the class to the subgroup and then to the personal perspectives, and then converges through the subgroup comparison perspectives to the class comparison perspective, as shown in figure 6-2. The web of perspectives forms a directed acyclic graph rather than a strict hierarchy.

·        Another effective way to encourage a well-structured discussion is to seed the workspace with a set of headings to scaffold the discourse. By introducing carefully conceived headings high in the perspective inheritance network, a facilitator (such as a teacher) can define an arrangement of topics that will be shared by the participants and will encourage them to arrange related ideas close to each other.

Although WebGuide provided these three convergence mechanisms in both of our usage situations, most participants were not adept at using any of them. This is probably related to the other issues below and is something that needs to be explored further in the future.
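
As a concrete illustration of the first of these mechanisms (notes that can have more than one parent), a summary note that converges two otherwise divergent threads might be modeled roughly as follows. This is a hedged sketch with invented names, not the WebGuide implementation.

```java
import java.util.*;

/** Sketch: discussion notes that may have several parents, so a single
 *  summary note can converge otherwise divergent threads. */
public class ConvergentThreads {

    static class Note {
        final String title;
        final List<Note> parents = new ArrayList<>();
        final List<Note> children = new ArrayList<>();  // display order is user-maintained
        Note(String title) { this.title = title; }
        void attachTo(Note parent) {                    // unlike a strict tree, repeatable
            parents.add(parent);
            parent.children.add(this);
        }
    }

    public static void main(String[] args) {
        Note envThread   = new Note("AMD is a serious environmental threat");
        Note ownerThread = new Note("Remediation costs would bankrupt small mine owners");

        // A synthesis note is displayed as a child under *both* threads.
        Note synthesis = new Note("Negotiated position: phased clean-up with cost sharing");
        synthesis.attachTo(envThread);
        synthesis.attachTo(ownerThread);

        System.out.println(synthesis.parents.size() + " parents"); // 2: a DAG, not a tree
    }
}
```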

8.2 Avoidance of System Use

Media competition poses a barrier to acceptance of new communication software. People are naturally hesitant to adopt yet another communication technology. In a world inundated with pagers, cell phones, voicemail, email, fax, etc. people are forced to limit their media or be overwhelmed. They must calculate how much of a burden the new medium will impose in terms of learning how to use it, acquiring the equipment, checking regularly for incoming messages and letting people know that they are communicating through it. Clearly, a critical mass of adoption by one’s communication partners is necessary as well.

In a classroom context, some of these problems are minimized: all one’s partners are required to use WebGuide and the hardware is made available. Yet, it is not so simple. The Logan School students have to communicate with mentors who may not have Internet access or the proper hardware. Communication with classmates is much easier face-to-face than by typing everything (knowing it has to be carefully done for grading). In the graduate seminar, most participants do not have convenient access to the necessary equipment and have to go out of their way to a special lab. This means that they are lucky to communicate through WebGuide once a week, and therefore cannot enter into lively on-going interchanges.

We will have to make WebGuide more accessible by increasing the number of platforms/browsers that it can run on and making it work over slow modems from home. Further, we need to improve its look-and-feel to increase people’s comfort level in wanting to use it: speed up response time, allow drag-and-drop rearrangement of notes, permit resizing of the applet and fonts for different monitors and different eyes, support searching and selective printouts, and provide graphical maps of the webs of perspectives and nodes.

8.3 Naturalness of the Perspectives Metaphor

Despite the fact that WebGuide has been designed to make the perspectives metaphor seem natural and simple to navigate, people express confusion about how to use the perspectives: which perspective should I be working in, which should I browse for other people’s ideas, and which should I enter for discussions? The metaphor of perspectives as a set of alternative (yet linked and overlapping) textual workspaces is a new notion when made operational, as in WebGuide.

The fact that an individual note may have different edited versions and different linking structures in different perspectives, that notes may have multiple parents within the discussion threads, and that new perspectives can be added dynamically and may inherit from multiple other perspectives sets WebGuide apart from simple threaded discussion media. It also makes the computations for displaying notes extremely complex. This is a task that definitely requires computers. By relieving people of the equivalent of these display computations, computer support may allow people to collaborate more fluidly. This is the goal of WebGuide. Although the software now hides much of the complexity, it is not yet at the point where people can operate smoothly without worrying about the perspectives.

8.4 Representation of the Web of Perspectives

One problem that hampers acceptance of the perspectives metaphor is that the web of inheritance of content from perspective to perspective is hard to represent visually within WebGuide. The WebGuide interface relies on an outline display, in which sub-notes can be hidden or expanded. This has many advantages, allowing users to navigate to and view notes of interest in an intuitive way that is already familiar. However, an outline display assumes a strictly hierarchical tree of information. Because the web of perspectives has multiple inheritance, its structure is not visible in an outline, which always shows a perspective under just one of its parents at a time. Thus, for instance, there is no visual representation of how a comparison perspective inherits from several personal perspectives.

The same is true at the level of notes. A note that has been linked to several other notes that it may summarize is always displayed as the child of just one of those notes at a time.

Two solutions suggest themselves for future exploration. One is to provide an alternative representation such as a graphical map in place of the outline view. As appealing as this idea sounds, it may be technically difficult to do on-the-fly. A bigger problem is that graphical maps are notoriously poor at scaling up. Already in our two trial situations—in which there are on the order of twice as many perspectives as participants—it would be hard to clearly label a graphical node for every perspective within the applet’s confined display area. The second alternative is to indicate additional links with some kind of icon within the outline view. This would require more understanding on the part of the users in interpreting and making use of this additional symbolic information.

8.5 Structuring of Learning Situations

We have argued based on previous experience that the crucial aspect of supporting collaborative learning has to do with structuring social practices (Koschmann, Ostwald, & Stahl, 1998). Practice, in the sense of Bourdieu’s concept of habitus (Bourdieu, 1972/1995), is the set of generally tacit procedures that are culturally adopted by a community. In introducing WebGuide into its two user communities, we have tried to establish certain usage practices, both by instruction and by enforcement in the software. Looking back at figure 6-1, one can see that Logan students are only allowed to navigate to certain perspectives—namely their personal perspective and those group perspectives that inherit from that perspective. Seminar participants were originally given permission to navigate throughout the system and to make changes anywhere. That was subsequently modified (as shown in figure 6-3) to restrict their abilities when not in their personal perspective. The governing principle was that everyone should be able to do anything they want within their personal perspective, but no one should be able to affect the display of information in someone else’s personal perspective.

When the ability to enter notes everywhere was restricted, facilities for copying and linking notes from other computational perspectives into one’s own computational perspective were introduced. This was intended to encourage people to integrate the ideas from other figurative perspectives into their own figurative perspective by making a conscious decision as to where the new note should go in their existing web of notes. However, this added a step to the process of communication. One could no longer simply select a note that one wanted to comment on and press the “add discussion” button.

In order to facilitate discussion of notes that one did not necessarily want to integrate into one’s own perspective, the “add discussion” (annotation) button was then made active in all comparison perspectives. This led to minor problems, in that one could then not edit discussion notes that one had contributed in these perspectives. This could be fixed at the cost of additional complexity in the rules by allowing the author of a note to edit it in comparison perspectives.

More significantly, our experiments with changing permission rules pointed out that people were using WebGuide primarily as a threaded discussion medium for superficial opinions and socializing—and rarely as a knowledge construction space. Furthermore, their ability to construct shared group perspectives on discussion topics was severely hampered by the lack of support for negotiation in the system.

8.6 Distinguishing the System’s Capabilities

In iterating the design of WebGuide it became increasingly clear that what the system “wanted to be” (the design vision) was a medium for construction of knowledge. Yet, users were more familiar with discussion forums and tended to ignore the perspectives apparatus in favor of engaging in threaded discussion. These are very different kinds of tasks: collaborative knowledge construction generally requires a prolonged process of brainstorming alternative ideas, working out the implications of different options and negotiating conclusions; discussion can be much more spontaneous.

This suggests that more clarity is needed on the question: what is the task? If people are going to use WebGuide for collaborative knowledge construction then they need to have a clear sense of pursuing a shared knowledge construction task. The Logan students have such a task in articulating positions on acid mine drainage. However, much of their knowledge construction takes place in classroom discussion. They use WebGuide largely as a repository for their ideas. The seminar has been concerned with understanding a series of readings, so its participants have been more interested in exchanging isolated questions or reactions than in formulating larger integrative positions.

Our experience to date already suggests the complexity of trying to support collaborative learning. We should probably distinguish the software interface functions that support discussion from those that support knowledge construction. But this should be done in such a way that spontaneously discussed ideas can later be readily integrated into longer-term knowledge construction processes. Similarly, additional functionality—most notably support for group negotiation—must be added, differentiated and integrated. New capabilities and uses of WebGuide can increase its value, as long as confusions and conflicts are not introduced. For instance, providing facilities for people to maintain lists of annotated Web bookmarks, things-to-do, favorite references, up-coming deadlines, etc. within their personal perspectives might not only give them familiarity with using the system, but would also build toward that critical mass of usage necessary for meaningful adoption.

It has become a cliché that computer mediation has the potential to revolutionize communication just like the printing press did long ago. But the real lesson in this analogy is that widespread literacy required gradual changes in the skills and practices of the populace in order to take full advantage of the technological affordances of the printing press. In fact, the transition from oral tradition to literacy involved a radical change in how the world thinks and works (Ong, 1998). Although social as well as technical changes can be propagated much faster now, it is still necessary to evolve suitable mixes of practices and systems to support the move from predominantly individual construction of knowledge to a new level of collaborative cognition.

Our investigation of the above six issues will guide the next stage of our on-going exploration of the potentials and barriers of perspectives-based computer mediated collaborative learning on the Web.

9. Dialog with JIME Reviewers

In fall 2000, the preceding part of this chapter was reviewed through the JIME on-line review process. I thought the reviews nicely brought out what the paper was trying to do. In a generally supportive way, they confirmed one person’s experiences from the vantage of much broader backgrounds. The reflections on key issues significantly enriched the discussion.

Rather than disrupting the narrative flow of the report above, situated as it was in its particular phases of WebGuide development, responses to the reviewer comments and inquiries will be presented in question/response format below. This may serve as another layer of reflection, from a somewhat later vantage point.

Question:

A slight doubt, which I think it would be hard to understand without using the system for a while, would be if it could feel/be restrictive. When computer-mediated collaboration is “well used,” users systematically attend to convergence (using the divergent discussion as a resource) by writing summaries and essays based on the shared material. Would WebGuide confine learner freedom to synthesize/converge because of the complexity of its linking systems… just a doubt.

Response:

While WebGuide’s interface has improved considerably since its first usage, problems remain with trying to think about ideas on a computer monitor. It is still a less convivial environment than paper for playing with complexly interrelated ideas. There is also a difficult trade-off between keeping the interface simple and clear and supporting complicated functionality. The mechanisms to support convergence are only partly automatic, transparent and natural. And yet, if we want to think and write collaboratively, then paper will not suffice.

Question:

Does this WebGuide software provide a more straightforward discussion area to be used alongside of the work on perspectives? Somewhere here there seems to be confusion between a virtual collaborative discussion space and a tool to aid collaborative work. Another point which confused me was the idea of the software as artifact in the same way as a piece of sculpture or a narrative... even if as the author points out, software “represents a very different way of objectifying experience.” Can various perspectives be represented with a single graphical image? Perhaps Cubist painting rather than sculpture makes a better analogy?

Response:

As detailed below, I have subsequently added a “discussion perspective” that provides a space for threaded discussion. Previously, threaded discussion took place directly in the comparison perspectives—leading people to ignore their personal perspectives and aggravating the conflict between discussion and construction. One of the hardest things I have had to figure out as a designer is how to integrate this into the perspectives framework, so that ideas entered one place would be available for the rest of the knowledge-building process. I have just now implemented this and have not yet released it to my users. I have still not implemented the sorely needed negotiation procedures. Discussions with Thomas Herrmann and his colleagues in Germany have helped me to understand the issues related to these new perspectives, and why the system should include explicit discussion and negotiation perspectives.

An artifact is never a simple object. A sculpture, for instance, opens up a rich world: it not only structures physical space and offers a sensuous surface, it also evokes other objects, meanings and works. Software is yet harder to characterize: what is its form and substance, where are its resistances and affordances? A communication and collaboration artifact like WebGuide makes possible new forms of interaction and knowledge building—but how do people learn how to take advantage of this without being overloaded? The artifact here is not so much the buttons and windows of the user interface as the discussion content that gets built up through the interface. These issues have led me to another iteration of theory with a seminar in fall 2000 on how artifacts embody meaning and subsequent analysis of empirical data on how people learn to understand and use meaningful artifacts (see chapter 12 as well).

I like the cubist image. But sculptures also encourage and facilitate viewing from different visual perspectives. I have thought of replacing the mono-perspectival pictures in the chapter with video clips that could be run in the JIME publication. Perhaps I could just use animated GIFs of each sculpture that cycle through several views—creating an effect that cubism anticipated before perspectival technology was available.

Question:

This article does give us a lot of good, clear, qualitative description of the two situations in which this software is being looked at. At some point, though, there seems to be a need for firmer ground and a few numbers.

Response:

The middle school classroom had 12 students. During the several months of sporadic usage, 835 notes were entered (including revisions of old notes). This count includes guiding questions and organizing headings that the teacher and I entered.

The graduate seminar had 8 active students. During the semester, 473 notes were entered.

This semester (which is half over as I draft this response), there are 11 active participants. We have entered 497 notes already, but many of these are headings, modifications or entries of data to be shared. This probably represents an average of two entries per week per student. While I work on some technical problems that have arisen, I am not encouraging heavy use of WebGuide. Most entries are comments and questions on the class readings, with some follow-on discussion. If I defined some collaborative tasks, we might get much higher usage.

I try to hold class in a computer lab at the beginning of the semester so that we can learn the systems together and students can help each other. Most students can now access WebGuide from home, although this remains problematic. When we all use WebGuide at the same time in the lab, the worst technical problems come up (multi-user issues that are hard to test without class usage). Problems also arise concerning how the entries are organized (how to find what one’s neighbor just said she put in) and how discussion relates to one’s personal perspective. The main beneficiary of class usage of WebGuide is still the designer, who sees what problems need to be solved and what new functionality is desirable. For the students this is a glimpse into the future, but not yet a powerful cognitive and collaborative tool. In each class that uses WebGuide the students participate in reflecting on the process of designing the software artifact—and this is integral to the course curriculum as an experiment in collaborative learning.

Question:

I see an advantage in being able to see the work of other roles in progress. Figure 2 shows that joining of perspectives takes place way (too) late, namely in the Gulch class comparison. It means students do not have enough time to prepare counterarguments, and it also means that students miss out on constructing their perspectives along the same lines as those of other groups. In addition, I doubt whether it is desirable to have students think only about their own role or perspective, since this is rather unrealistic. (I may have this wrong; I am not sure how the system was actually used in practice.)

Response:

I fear there is still some confusion about how perspectives work. The inheritance diagrammed in figure 2 takes place continually as notes are added, not just when perspectives are somehow complete. Every user of WebGuide can visit every perspective and read what is there at any time. The restriction is that you can only modify (edit, delete, rearrange) notes in your own perspective. Recently, I have added “private” notes that you can add in any perspective but that are viewable only by you. This way, you can annotate any note in the system privately.

I have also added “discussion” notes that you can add in any perspective; rather than staying in that perspective (and thereby modifying someone else’s perspective), the discussion note and the note it is discussing are copied to a new “discussion perspective.” The new discussion perspective (and a new negotiation perspective) provides a space for interpersonal discussions to take place. Your contributions in the discussion perspective are also copied into your personal perspective, so that you have a complete record of all the ideas you have entered into WebGuide and so that you can integrate these ideas with others in your working perspective.
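To make these access rules concrete, here is a minimal sketch in Java (with hypothetical class names of my own; this is not WebGuide’s actual code) of the behavior just described: anyone may read any perspective, only the owner may modify the notes in a perspective, and a private note added anywhere is visible only to its author.

// A hypothetical sketch (not WebGuide's code) of the access rules described above.
import java.util.ArrayList;
import java.util.List;

class User {
    final String name;
    User(String name) { this.name = name; }
}

class Note {
    final User author;
    final String content;
    final boolean isPrivate;    // a private annotation, visible only to its author
    Note(User author, String content, boolean isPrivate) {
        this.author = author;
        this.content = content;
        this.isPrivate = isPrivate;
    }
}

class Perspective {
    final User owner;           // null for shared perspectives (group, comparison, ...)
    final List<Note> notes = new ArrayList<>();
    Perspective(User owner) { this.owner = owner; }

    // Anyone may read a perspective; private notes are filtered out
    // for everyone except their author.
    List<Note> visibleTo(User viewer) {
        List<Note> visible = new ArrayList<>();
        for (Note n : notes) {
            if (!n.isPrivate || n.author == viewer) {
                visible.add(n);
            }
        }
        return visible;
    }

    // Only the owner may edit, delete or rearrange notes here;
    // other users may still add private (or discussion) notes.
    boolean canModify(User user) {
        return owner != null && owner == user;
    }
}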

These changes are part of a rather radical re-design—or at least extension—of the WebGuide perspectives system that has not yet been tried out by users. However, it is worth presenting here in some detail because it shows my response to the worrisome issues that have come up about conflicts between discussion and knowledge building (as discussed especially in point 5 of section 8 above). It brings the presentation up to date as of spring 2001.

Figure 6-5 shows the new interface of WebGuide. It now consists of two separate windows, a Java applet interface and an HTML window. Previous interfaces included an HTML frame within a fixed-size main interface window. The user can now resize and overlap the two windows to optimize and personalize the use of screen real estate. The main interface consists of (a) an expandable hierarchy of notes (either their titles or the first line of their content is displayed in the hierarchy; the full content of the currently selected note is displayed in the HTML window), (b) a bar of buttons across the top for selecting a perspective, and (c) a control panel of function buttons on the right side.

Figure 6-5. The new interface to WebGuide 2000.

 

Figure 6-6. The new bar of perspectives buttons in WebGuide 2000.

 

Figure 6-5 goes approximately here

 

Figure 6-6 shows a close-up of the perspectives buttons, providing direct access to the most common perspectives and a pull-down list of all defined perspectives in the current database. Note that in addition to the group (or class) perspective, the current user’s personal perspective and the (group or class) comparison perspective, there are now perspectives for discussion, negotiation and archive. We will see how these are inter-related in figure 6-8 below.

 

Figure 6-6 goes approximately here

 

 

Figure 6-7 goes approximately here

 

Figure 6-7 shows a close-up of the function controls, with restricted options grayed out. The comment button allows a user to enter a quick comment below the selected note. The new note button is similar to comment, but it allows the user to choose a label for the kind of note and to position the new note after the selected note (i.e., at the same level of the hierarchy) rather than indented below it (i.e., as a child of it). Subsequent buttons let the user edit, delete, move, and copy or link a selected note. Copy to home or link to home is used when one has selected a note that is not in one’s personal perspective and wants to create a physical or virtual copy of it there. Email lets one send an email and have the content of the email and its responses inserted below the selected note. Search conducts a simple string search across all notes (their author, title and content) in the database and displays the resulting notes in the HTML window (where they can easily be printed out). Private note is similar to comment, except that one can insert it in any perspective and that it is displayed only when its author is logged in as the current user. Discuss and promote create notes in the discussion and negotiation perspectives; they are described below. The vote, website and graphic buttons are for adding votes on negotiation issues, live links to URLs, and graphic (multimedia) URLs to be displayed in the HTML window—these functions are not yet implemented. The print displayed button causes all notes whose titles are currently displayed in the hierarchy to have their content shown in the HTML window for printing. The print selected button lets a user select multiple, non-sequential notes and have their content displayed in the HTML window. Finally, the print recent button displays in the HTML window the content of all notes created in the past N days, where a value for N is selected below this button. These search and print buttons are important steps toward providing tools for more effective knowledge management—offering convenient access to selected notes.
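As a rough illustration of these knowledge management functions, the following sketch (again in Java, with invented names rather than WebGuide’s actual programming interface) implements the two simplest ones: a string search over author, title and content, and the filter behind print recent, which selects notes created in the past N days.

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// A rough sketch of two knowledge management functions; the Note fields
// and method names are hypothetical, not WebGuide's actual API.
class KnowledgeManagementSketch {
    static class Note {
        final String author, title, content;
        final Date created;
        Note(String author, String title, String content, Date created) {
            this.author = author;
            this.title = title;
            this.content = content;
            this.created = created;
        }
    }

    // Simple case-insensitive string search across author, title and content.
    static List<Note> search(List<Note> allNotes, String query) {
        String q = query.toLowerCase();
        List<Note> hits = new ArrayList<>();
        for (Note n : allNotes) {
            if (n.author.toLowerCase().contains(q)
                    || n.title.toLowerCase().contains(q)
                    || n.content.toLowerCase().contains(q)) {
                hits.add(n);
            }
        }
        return hits;
    }

    // Notes created within the past nDays, for the "print recent" display.
    static List<Note> recent(List<Note> allNotes, int nDays) {
        long cutoff = System.currentTimeMillis() - nDays * 24L * 60 * 60 * 1000;
        List<Note> hits = new ArrayList<>();
        for (Note n : allNotes) {
            if (n.created.getTime() >= cutoff) {
                hits.add(n);
            }
        }
        return hits;
    }
}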

Figure 6-7. The new knowledge management control panel in WebGuide 2000.
How should the discuss and propose buttons work? A user should be able to start a discussion based on any other user’s note found in the system. The resulting discussion should be available to everyone in the group. The two perspectives available to everyone are the group and the comparison perspectives. The comparison perspective quickly becomes overcrowded and confusing, so I decided to create a new discussion perspective derived from the group perspective. Similarly, proposals for negotiation should be able to build on anyone’s notes and should be generally available, so I also created a negotiation perspective linked to the group perspective. Recall that the group (or class) perspective contains notes agreed to by the group at large (or seeded by the teacher to provide a shared starting point). The group perspective therefore provides an overall context for collaborative discussion and negotiation, as well as for individual efforts at knowledge building. So, while we do not want discussion and negotiation notes that have not yet been adopted by the whole group to show up directly in the group perspective (and therefore to be inherited into all other perspectives), we do want the discussion and negotiation perspectives to inherit from the group perspective in order to provide some context and structure. Moreover, we want the negotiation perspective to inherit from the discussion perspective, so that a note in a discussion thread can be proposed for negotiation and so that discussion threads can be viewed in relation to negotiation proposals. As shown in figure 6-8, individual personal perspectives should inherit from the group perspective but not from the discussion or negotiation perspectives.
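The inheritance structure argued for here can also be sketched in code. In the following fragment (again with hypothetical names, and simplified relative to figure 6-8), a perspective displays its own notes plus everything inherited from its parent perspectives; running it shows that a personal perspective sees the group’s seeded content but not an unadopted discussion thread, while the negotiation perspective sees both.

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// A simplified sketch of perspective inheritance (hypothetical names, not WebGuide's code).
class InheritanceSketch {
    static class Perspective {
        final String name;
        final List<Perspective> inheritsFrom = new ArrayList<>();
        final List<String> ownNotes = new ArrayList<>();
        Perspective(String name, Perspective... parents) {
            this.name = name;
            for (Perspective parent : parents) {
                inheritsFrom.add(parent);
            }
        }
        // The notes displayed in a perspective: inherited notes first, then its own.
        Set<String> displayedNotes() {
            Set<String> all = new LinkedHashSet<>();
            for (Perspective parent : inheritsFrom) {
                all.addAll(parent.displayedNotes());
            }
            all.addAll(ownNotes);
            return all;
        }
    }

    public static void main(String[] args) {
        Perspective group = new Perspective("group");
        Perspective discussion = new Perspective("discussion", group);
        Perspective negotiation = new Perspective("negotiation", discussion);
        Perspective personalA = new Perspective("personal A", group);   // not from discussion
        Perspective personalB = new Perspective("personal B", group);
        Perspective comparison = new Perspective("comparison", personalA, personalB);

        group.ownNotes.add("note seeded by the teacher");
        discussion.ownNotes.add("discussion thread not yet adopted by the group");
        personalA.ownNotes.add("A's working note");

        // Personal perspectives inherit the group seed but not the discussion thread.
        System.out.println(personalA.displayedNotes());
        // The negotiation perspective inherits both group and discussion content.
        System.out.println(negotiation.displayedNotes());
        // The comparison perspective gathers the personal perspectives.
        System.out.println(comparison.displayedNotes());
    }
}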

 

Figure 6-8 goes approximately here

 

Figure 6-8. The old inheritance structures for perspectives in WebGuide (on the left) and the new structures (on the right).

The trick with putting notes in the discussion and negotiation perspectives is to situate them meaningfully in the hierarchy with at least some context. Suppose you have entered a note that I want to comment on and to present for group discussion. Your note is in your personal perspective, and I may have found it in the comparison perspective. So I either select your note in the comparison perspective or go to your personal perspective and select it there. I click on the discuss button. The system then wants to start a thread in the discussion perspective, beginning with your note and followed by my note. To do this, the system finds the parent of your note (the one I want to comment on)—the note from which your note is threaded in the hierarchy of your personal perspective—and designates that note the anchor note. If the anchor note already appears in the discussion perspective (which inherits the whole group perspective), then everything is simple: the system makes a copy of your note below the anchor in the discussion perspective and attaches my note below that. Alternatively, if an ancestor of your note appears in the notes hierarchy of the discussion perspective, then that closest ancestor is used as the anchor. Otherwise, the system attaches a copy of your note to a special “Discussions” heading note in the discussion perspective and then attaches my note below that. Either way, we end up with a discussion thread that anyone can add to in the discussion perspective.

In addition to setting up the new thread in the discussion perspective, the system makes a copy of your note with mine attached to it and places them below the anchor note (which I inherit from the group) in my own personal perspective. This is so that my personal perspective contains all of my contributions to discussions and negotiations. That way, I see all of my ideas and can conveniently manipulate them in my own workspace. The dotted line in figure 6-8 from the negotiation perspective to the viewer’s perspective indicates that these entries will appear in my perspective when I am viewing it.
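The anchoring behavior just described can be summarized in one more sketch (hypothetical classes, not the production code): starting from the discussed note’s parent, walk up the hierarchy until an ancestor is found that already appears in the discussion perspective; if none is found, fall back to the special “Discussions” heading; attach a copy of the discussed note and the new comment there; and mirror the same pair into the commenter’s personal perspective.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A sketch of the discuss button's anchoring logic (hypothetical classes, not WebGuide's code).
class DiscussSketch {
    static class Note {
        final String text;
        final Note parent;                          // parent in the note hierarchy
        final List<Note> children = new ArrayList<>();
        Note(String text, Note parent) {
            this.text = text;
            this.parent = parent;
            if (parent != null) {
                parent.children.add(this);
            }
        }
    }

    static class Perspective {
        final Set<Note> displayed = new HashSet<>();    // notes shown here (own plus inherited)
        final Note discussionsHeading;                  // special heading for unanchored threads
        Perspective(Note discussionsHeading) {
            this.discussionsHeading = discussionsHeading;
            displayed.add(discussionsHeading);
        }
    }

    // Start a discussion thread on someone else's note, with my comment attached.
    static void discuss(Note discussed, String myComment,
                        Perspective discussion, Perspective myPerspective) {
        // 1. Find the anchor: the discussed note's parent or, failing that, its
        //    closest ancestor that already appears in the discussion perspective.
        Note anchor = discussed.parent;
        while (anchor != null && !discussion.displayed.contains(anchor)) {
            anchor = anchor.parent;
        }
        // 2. If no ancestor is displayed there, use the special "Discussions" heading.
        if (anchor == null) {
            anchor = discussion.discussionsHeading;
        }
        // 3. Copy the discussed note below the anchor and attach the comment to the copy.
        Note copyInDiscussion = new Note(discussed.text, anchor);
        Note commentInDiscussion = new Note(myComment, copyInDiscussion);
        discussion.displayed.add(copyInDiscussion);
        discussion.displayed.add(commentInDiscussion);

        // 4. Mirror the pair into the commenter's personal perspective, below the anchor
        //    as it appears there through inheritance, so that the personal perspective
        //    records all of the commenter's contributions to discussions.
        Note copyInMine = new Note(discussed.text, anchor);
        Note commentInMine = new Note(myComment, copyInMine);
        myPerspective.displayed.add(copyInMine);
        myPerspective.displayed.add(commentInMine);
    }
}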