10

Rediscovering the Collaboration

This chapter was originally published as a commentary on chapters 3 and 4 of CSCL2: Carrying Forward the Conversation. It is argued that these two examples of leading-edge research in CSCL lose sight of the real phenomena of collaboration due to their use of particular research methodologies that are prevalent in CSCL work. An alternative approach, which analyzes the details of collaborative interactions, is recommended as a supplement to such approaches.

In this essay, I began to reflect on the limitations of CSCL methodologies that are derived from related fields, like education and psychology. It struck me that their drive to quantify data as grist for the statistical mill reduced the richness of the data and eliminated some of the most interesting information for understanding collaboration. While the methodology allowed one to make statistically significant tests of specific hypotheses, it obstructed any attempt to follow the processes proposed in chapter 9's model. This insight allowed me to anticipate the proposal of chapter 11 and its implementation in chapters 12 and 13.

The CSCL2 volume (Koschmann, Hall, & Miyake, 2002) provides a follow-up to the collection that largely defined the field of CSCL six years earlier (Koschmann, 1996b). It compiled a number of key papers from the CSCL 1997 conference and was structured with commentaries written in 2001 to spark a knowledge-building conversation within the research community. The present chapter's commentary attempted to suggest a methodological turn for the field of CSCL. That suggestion will be further developed and implemented in later chapters of this book.

Chapter 3 of CSCL2 was written especially for that edited volume and is entitled "Computer-supported collaborative learning in university and vocational education" by Frank P. C. M. de Jong, Else Veldhuis-Diermanse and Gaby Lutgens from the Netherlands. Chapter 4 of CSCL2 was presented at CSCL '97 and is entitled "Epistemology of inquiry and computer-supported collaborative learning" by Kai Hakkarainen, Lasse Lipponen and Sanna Järvelä from Finland.

The Ambiguity of CSCL

In the penultimate sentence of their paper, Hakkarainen, Lipponen and Järvelä correctly point out that CSCL researchers have a complex challenge because they "attempt to promote the educational use of the new information/communication technology while simultaneously trying to implement new pedagogical and cognitive practices of learning and instruction" (p. 153f). The naïve, technology-driven view was that tools such as CSILE (the software system used in both studies) would, on their own, make a significant difference in the classroom. The subsequent experience has been that the classroom culture bends such tools to its own interests, and that this culture must be transformed before new media can mediate learning the way we had hoped they would. So CSCL research has necessarily and properly shifted from the affordances and effects of the technology to concerns with the instructional context. Thus, the central conclusions of both papers focus on the teacher's role and say little that pertains directly to the role of CSILE, let alone to the consequences of specific features of its design.

Moreover, the papers are concerned with exploring the presence of deep knowledge building within groups, as opposed to more superficial exchange of existing personal opinions or individual offerings of off-the-cuff reactions. Both papers investigate the teacher's role in making a difference to the depth of collaboration and learning. Again, this is an important theme for research, but the methodology seems to miss the core phenomenon of interest to CSCL: instances of collaborative learning and details of their computer support.

The two papers have a similar structure: first, they discuss abstract pedagogical issues from the educational or scientific research literature (e.g., the learner-as-thinker or the scientist-as-questioner paradigm); second, they present a statistical analysis of the notes in specific CSILE databases; finally, they conclude that certain kinds of individual student learning took place.

However, in both cases, one could imagine that the same learning might have taken place in these particular classrooms with their particular teacher without any computer support and without any collaboration! While there is no doubt that the concerns expressed and supported in these papers are of vital importance to CSCL research, one wonders what happened to the computer-supported collaboration in CSCL.

The high-level concern of these papers, which ends up ignoring the roles of both collaboration and technology, plays itself out at a methodological level. To see this requires reviewing the analysis undertaken in these papers.

CSCL in the University

The paper by de Jong, Veldhuis-Diermanse and Lutgens raises three central questions for CSCL environments such as CSILE:

1. Can these environments be integrated into curriculum at the university level?

2. Does their use promote knowledge building?

3. What should the role of the teacher be?

Each of these questions would require a book to answer with any completeness—assuming one knew the answers. Research today is really just starting to pose the questions. Any answers proposed either supply the writer's intuitive sense of what took place in an experiment or they rely on a methodology whose limitations become obvious in the very process of being applied to these questions. Let us consider each of these questions in turn.

The Cultural, Educational, Learning and Pedagogical Context

Can CSILE (to use this prototypical system as a representative of the class of possible software systems for supporting collaborative knowledge building) be integrated into curriculum? The first issue that the paper implicitly poses by raising this question is: in what cultural and educational setting could a program like CSILE be integrated? The studies presented here took place in the Netherlands, within the context of a larger European project including Finland, Belgium, Italy, and Greece. Most of the earlier studies of CSILE (Computer-Supported Intentional Learning Environment) were, of course, conducted in Canada, where the system was developed (Scardamalia & Bereiter, 1996). However, there is no evidence presented in the paper to say that national culture makes any difference in the adoption of CSILE.

A second aspect of context is: at what educational level is CSILE effective? The paper reports studies at the university level and at a vocational agricultural school at the same age level. The related European studies focused on primary school children 9-11 years old. Systems such as CSILE are most frequently used in primary and middle school classes, although they are increasingly being used in college classes as well. The studies in this paper are not contrasted with other age groups and there is no reason given to think that educational level makes any significant difference. This is actually a surprising non-result, because one might assume that collaborative knowledge building requires mature cognitive skills. It may be that within modern schooling systems college students have not developed collaborative inquiry skills beyond an elementary school level.

A third aspect has to do with the learning styles of the individual students. This issue is explicitly raised by the methodology of the first (university) study. Here the students were given tests on cognitive processing strategies, regulation strategies, mental models of learning, and learning orientation. Based on these scores, they were classified as having one of four learning styles: application-directed, reproduction-directed, meaning-directed, or undirected. A statistically significant correlation was found between the application-directed learners and the number of notes entered into CSILE. This was the only significant correlation involving learning styles. This may just mean that students who are generally more inclined to engage in tasks were in fact the ones who engaged more in the note creation task of the study—not a very surprising result.

A fourth aspect involves the incorporation of collaboration software into a particular curriculum or classroom culture. As the paper makes clear, CSILE is not intended for a traditional teacher-centered classroom with delivery of facts through lecture. The use of such a technology as a centerpiece of classroom learning raises the most complex issues of educational transformation. Not only do the teacher and student roles have to be rethought, but the curricular goals and the institutional framework need to be as well. If collaborative knowledge building is really going to become the new aim, what happens to the whole competitive grading system that functions as a certification system integral to industrial society? Is it any wonder that "students are not used to sharing their knowledge"? What will it take to change this?

Promoting Collaborative Knowledge Building

The paper's conclusion cites two arguments for the claim that CSILE resulted in much more collaborative learning by the students. First, it contrasts the study with "past courses in which students were directed through the course by closed tasks." No attempt beyond this half sentence is made to draw out the contrast. Clearly, by definition, a course that has been restructured to centrally include collaborative discussion will at least appear to be more collaborative than its teacher-centered predecessor. But it is then important to go on and consider concretely what took place collaboratively and what specific kinds of knowledge were built collaboratively.

The second piece of evidence for collaborative knowledge building comes from an activity that apparently took place outside of CSILE in a non-collaborative manner: the rewriting of educational policy notes. This seems like precisely the kind of collaborative task that could have pulled the whole course together as a joint project. Students could have collected and shared ideas from their readings with the goal of building a group external memory of ideas that would be used in collectively rewriting the educational policy. Instead, the individual students had to retain whatever the group learned using CSILE, combine it with individualized learning from readings and "transfer" this knowledge to the final individual "authentic" task. Thus, the paper concludes that the use of CSILE "resulted in sufficient transfer of the acquired understanding to work within an authentic problem." There is no evidence of learning or transfer other than a general judgment that the final product was of "high quality."

The remaining evidence for collaborative knowledge building is given by two standard statistical measures of online discussions. The first measure is a graph of the number of notes posted by students and teachers during each week of the course. In the university study, this chart shows a large peak at the beginning and a smaller one at the end—for both students and teachers. There is virtually no addition of new notes for the central half of the course, and only a minimal reading of the notes occurs during that time. This is extraordinary, given that the paper calls this period the "knowledge-deepening phase." This is precisely when one would hope to see collaborative knowledge building taking place. As students read, research and deepen their ideas they should be sharing and interacting. Clearly, they know how to use the technology at this point. If CSILE truly promotes student-directed collaboration, then why is this not taking place? (Raising this question is in no way intended to criticize anyone involved in this particular experiment, as this is an all too common finding in CSCL research.)

The vocational study also presents a graph of the number of notes posted each week. Here, there are peaks in the middle of the course. But, as the paper points out, the peaks in student activity directly follow the peaks in teacher activity. This indicates a need for continuing teacher intervention and guidance. The apparently causal relation between teacher intervention and student activity raises the question of the nature of the student activity. Are students just creating individual notes to please the teacher, or has the teacher stimulated collaborative interactions among the student notes? Because the graph only shows the number of created notes, such a question cannot be addressed.
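To make concrete how much information such a measure discards, consider a minimal sketch of the aggregation behind these note-count graphs. The record format and all the numbers are invented for illustration; they are not data from either study.

```python
from collections import Counter

# Hypothetical log records: (author role, week of course) for each posted
# note. Invented values; real data would come from the CSILE database log.
notes = [
    ("student", 1), ("student", 1), ("teacher", 1), ("student", 2),
    ("teacher", 5), ("student", 6), ("student", 12), ("teacher", 12),
]

counts = Counter(notes)  # number of notes per (role, week) pair

for week in range(1, 13):
    print(f"week {week:2d}: "
          f"students={counts[('student', week)]}, "
          f"teachers={counts[('teacher', week)]}")
```

By the time the data reach this form, everything about who responded to whom, and what was said, is already gone.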

The second statistical measure for the university study is a table of correlations among several variables of the threaded discussion: notes created, notes that respond to earlier notes, notes linked to other notes, notes revised and notes read by students. The higher correlations in the table indicate that many notes were responses to other notes and that these were read often. This is taken as evidence for a high level of collaboration taking place in CSILE. A nice sample of such collaboration is given in figure 2 of the study. Here one student, Elske, has posted a statement of her theory. A discussion ensues, mostly over three days, but with a final contribution nine days later. This collection of 10 linked notes represents a discussion among four people about Elske's theory. It might be informative to look at the content of this discussion to see what form—if any—of knowledge building is taking place.
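A correlation table of this kind can be computed directly from per-student activity counts, as in the following sketch. The column names mirror the variables just listed, but the values are illustrative inventions, not the paper's data.

```python
import pandas as pd

# Hypothetical per-student counts for the five discussion variables.
activity = pd.DataFrame({
    "created":   [12, 5, 9, 3, 7],
    "responses": [8, 2, 6, 1, 4],
    "linked":    [7, 3, 5, 1, 4],
    "revised":   [2, 0, 3, 0, 1],
    "read":      [40, 15, 35, 10, 22],
})

# Pairwise Pearson correlations among the discussion variables.
print(activity.corr().round(2))
```

Note that the correlations relate counts to counts; nothing in the computation touches what any note said.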

The Teacher's Role

The paper ends with some important hints about how CSILE classrooms need to be different from lecture-dominated contexts: The use of the collaboration technology must be highly structured, with a systematic didactic approach, continuing teacher involvement and periodic face-to-face meetings to trouble-shoot problems and reflect on the learning process. These suggestions are not specific to the studies presented; they should only surprise people—if there still are any—who think that putting a computer box in a classroom will promote learning by itself. These are generic recommendations for any form of learner-as-thinker pedagogy, regardless of whether or not there is collaboration or computer support.

The paper by Hakkarainen et al. comes to a similar conclusion by a somewhat different, though parallel, route. Some of the preceding comments apply to it as well. But it also represents a significant advance in uncovering the quality of the discussion that takes place. In their discussion section, the authors are clearly aware of the limitations of their approach, but in their actual analysis they too fail to get at the collaboration or the computer support.

Hakkarainen et al. are interested in the "epistemology of inquiry" in CSCL classrooms. That is, they want to see what kinds of knowledge are being generated by the students in three different classrooms—two in Canada and one in Finland—using CSILE. To analyze the kinds of knowledge, they code the ideas entered into the CSILE database along a number of dimensions. For instance, student knowledge ideas were coded as either (a) scientific information being introduced into the discussion or (b) a student's own view. Ideas of both these kinds were then rated as to their level of explanatory power: (a) statement of isolated facts, (b) partially organized facts, (c) well-organized facts, (d) partial explanation or (e) explanation.
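One way to picture such a coding scheme is as a pair of categorical scales applied to each unit idea. The following sketch is my own reconstruction of the scheme's structure, not the authors' instrument; the type names and the example idea are invented.

```python
from dataclasses import dataclass
from enum import Enum


class KnowledgeType(Enum):
    SCIENTIFIC_INFORMATION = "scientific information introduced"
    OWN_VIEW = "student's own view"


class ExplanatoryLevel(Enum):  # ordinal scale, 1 = shallowest
    ISOLATED_FACTS = 1
    PARTIALLY_ORGANIZED_FACTS = 2
    WELL_ORGANIZED_FACTS = 3
    PARTIAL_EXPLANATION = 4
    EXPLANATION = 5


@dataclass
class CodedIdea:
    text: str  # the unit "idea" segmented out of a CSILE note
    kind: KnowledgeType
    level: ExplanatoryLevel


# An invented example of a single coded idea.
idea = CodedIdea("Warm air rises because it is less dense.",
                 KnowledgeType.OWN_VIEW,
                 ExplanatoryLevel.PARTIAL_EXPLANATION)
```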

Statistical analysis of the coded ideas provides strong evidence that the epistemology of inquiry was different in the three classrooms. In particular, one of the Canadian classrooms showed a significantly deeper explanatory understanding of the scientific phenomena under discussion. This was attributed by the authors to a difference in the classroom culture established by the teacher, including the extent of the teacher's interactions with students via CSILE. Thus, the approach of coding ideas achieved the authors' goal of showing the importance of the classroom culture in determining the character of collaborative knowledge building.

The Epistemology of Science

Hakkarainen et al. review certain philosophers of science and characterize the enterprise of science in terms of posing specific kinds of questions and generating particular kinds of statements. This may be a valid conceptualization of scientific inquiry, but let us consider a different perspective more directly related to collaboration and computer support.

In his reconstruction of the Origins of the Modern Mind, Donald (1991) locates the birth of science in the discovery by the ancient Greeks that "by entering ideas, even incomplete ideas, into the public record, they could later be improved and refined" (p. 342). In this view, what drives scientific advance is collaboration that is facilitated by external memory—precisely the promise of CSCL. Significantly, this framing of scientific knowledge building focuses on the social process and its mediation by technologies of external memory (from written language to networked digital repositories). According to this approach, we should be analyzing not so much the individual questions and statements of scientific discourse as the sequences of their improvement and refinement. Similarly, we can look at the effects of the affordances of technologies for expressing, communicating, relating, organizing and retaining these evolving ideas.

Reification of Data and Its Consequences for CSCL

Unfortunately, Hakkarainen et al. focus exclusively on individual statements. They relate their categorization of statements to CSILE in terms of that system's "thinking types," which the CSILE designers selected to scaffold the discourse of a community of learners. However, the thinking type categories that students select to label their statements in CSILE were designed precisely to facilitate the interconnection of notes—to indicate to students reading the discussion which notes were responses and refinements of other notes.

For purposes of analyzing the use of CSILE in different classrooms, the authors operationalize their view of science. They systematically break down all the notes that students communicated through CSILE into unit "ideas" and categorize these textual ideas according to what kind of question or statement they express. This turns out to be a useful approach for deriving qualitative and quantitative answers to certain questions about the kind of scientific discussions taking place in the classrooms. Indeed, this is a major advance over the analysis in de Jong et al., which could not differentiate different kinds of notes from each other at all.

However, the reduction of a rich discussion in a database of student notes into counts of how many note fragments ("ideas") fall into each of several categories represents a loss of much vital information. The notes—which were originally subtle acts of communication, interaction and knowledge building within a complexly structured community of learners—are now reified into a small set of summary facts about the discussion. For all the talk in CSCL circles about moving from fact-centered education to experiential learning, CSCL research (by no means just the paper under review here, but most of the best in the field) remains predominantly fact-reductive.

Of course, the methodology of coding statements is useful for answering certain kinds of questions—many of which are undeniably important. And the methodology can make claims to scientific objectivity: wherever subjective human interpretations are made they are verified with inter-rater reliability, and wherever claims are made they are defended with statistical measures of reliability.
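Inter-rater reliability in such studies is typically reported with a statistic like Cohen's kappa, which corrects the raw agreement between two coders for agreement expected by chance. Here is a minimal sketch; the codes and ratings are invented.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each rater's marginal category frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two raters independently coding the same six idea units (invented data).
a = ["fact", "fact", "explanation", "partial", "fact", "explanation"]
b = ["fact", "partial", "explanation", "partial", "fact", "fact"]
print(round(cohens_kappa(a, b), 2))  # 0.48: moderate agreement
```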

However, it becomes clear here that the coding process has removed not only all the semantics of the discussion so that we can no longer see what scientific theories have been developed or what critical issues have been raised, but it has also removed any signs of collaboration. We do not know what note refined what other note, how long an important train of argument was carried on, or how many students were involved in a particular debate. We cannot even tell if there were interactions among all, some, or none of the students.

To their credit, Hakkarainen et al. recognize that their (and de Jong's) measures capture only a small part of what has taken place in the classrooms. In their paper they are just trying to make a single focused point about the impact of the teacher-created classroom culture upon the scientific level of the CSILE-mediated discourse. Furthermore, in their discussion section they note the need for different kinds of analysis to uncover the "on-line interactions between teacher and students" that form a "progressive discourse," which is central to knowledge building according to Bereiter (2002). For future work, they propose social network analysis, which graphically represents who interacted with whom, revealing groups of collaborators and non-collaborators. Although this would provide another useful measure, note that it too discards both the content and the nature of any knowledge building that may have taken place in the interactions. Methodologically, they still situate knowledge in the heads of individual students and then seek relations among these ideas, rather than seeking knowledge as an emergent property of the collaboration discourse itself.
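To see what social network analysis would and would not capture, consider a minimal sketch: an interaction graph built from reply links, with the note contents discarded. The names and links are invented for illustration.

```python
import networkx as nx

# Hypothetical reply links: (responder, author of the note replied to).
replies = [("Anna", "Ben"), ("Ben", "Anna"),
           ("Carla", "Anna"), ("Anna", "Carla")]

G = nx.DiGraph()
G.add_nodes_from(["Anna", "Ben", "Carla", "Dion"])  # Dion never interacts
G.add_edges_from(replies)

# Groups of collaborators and isolates, ignoring edge direction.
for group in nx.weakly_connected_components(G):
    print(sorted(group))  # ['Anna', 'Ben', 'Carla'] and ['Dion'], in some order
```

The graph shows who interacted with whom; whether any knowledge was built in those interactions is exactly what it cannot show.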

Where to Rediscover CSCL

These two papers represent typical studies of CSCL. The first type provides graphs of note distributions and argues that this demonstrates computer-supported collaboration that is more or less intense at different points represented in the graph. Sometimes, additional analyses of discussion thread lengths provide some indication of processes of refinement, although without knowing what was said and how ideas evolved through interactions during those processes it is impossible to judge the importance of the collaboration. The second type of analysis codes the semantics of the notes in order to draw conclusions about the character of the discussion without really knowing what the discussion was about. It has generally been assumed that the only alternative is to make subjective and/or anecdotal observations from actually observing some of the discussion and understanding its content—and that this would be impractical and unscientific.

A major problem that we have just observed with the prevalent CSCL assessment approaches is that they throw out the actual computer-supported collaborative learning along with the richness of the phenomenon when they reduce everything to data for statistics.

What we need to do now is to look at examples of CSCL and observe the collaboration taking place. Collaborative knowledge building is a complex and subtle process that cannot adequately be reduced to a simple graph or coding scheme, however much those tools may help to illustrate specific parts of the picture. One central question that needs to be seriously addressed has to do with our claim that collaboration is important for knowledge building. We need to ask where there is evidence that knowledge emerged from the CSCL-mediated process that would not have emerged from a classroom of students isolated at their desks, quietly hunched over their private pieces of paper. Beyond that, we should be able to trace the various activities of collaborative knowledge building: where one person's comment stimulates another's initial insight or question, one perspective is taken over by another, a terminological confusion leads to clarification, a set of hypotheses congeals into a theory, and a synergistic group understanding emerges thanks to the power of computer-supported collaborative learning.

Before we had systems such as CSILE, collaboration across a classroom was not feasible. How could all the students simultaneously communicate their ideas in a way to which others could respond whenever they had the time and inclination? How could all those ideas be captured for future reflection, refinement and reorganization? CSCL proposes that this is now possible. We have to demonstrate, in showcase classrooms, that it has become a reality—that CSCL systems really can support this and that, thanks to this technology, exciting things really are taking place that would not otherwise have been possible. Only when our analyses demonstrate this will we have rediscovered CSCL in our analysis of classroom experiments.

Making Collaborative Learning Visible

Statistical analysis of outcomes has dominated educational research because it was assumed that learning takes place inside people's heads, and since Descartes it has been assumed that we have only indirect access to those processes. Much work in the cognitive sciences, including artificial intelligence, assumes that we can, at best, model the mental representations that are somehow formed or instilled by learning. Whatever we may think of these assumptions as applied to individual cognition, they surely do not apply to collaborative learning. By definition, this is an intersubjective achievement; it takes place in observable interactions among people in the world.

The point is that for two or more people to collaborate on learning, they must display to each other enough that everyone can judge where there are agreements and disagreements, conflicts or misunderstandings, confusions and insights. In collaborating, people typically establish conventional dialogic patterns of proposing, questioning, augmenting, mutually completing, repairing, and confirming each other's expressions of knowledge. Knowledge here is not so much the ownership by individuals of mental representations in their heads as it is the ability to engage in appropriate displays within the social world. Thus, to learn is to become a skilled member of communities of practice (Lave & Wenger, 1991) and to become competent at using their resources (Suchman, 1987), artifacts (Norman, 1993), speech genres (Bakhtin, 1986a) and cultural practices (Bourdieu, 1972/1995). The state of evolving knowledge must be continually displayed by the collaborating participants to each other. The stance of each participant to that shared and disputed knowledge must also be displayed.

This opens an important opportunity to researchers of collaborative learning that traditional educational studies lacked: what is visible to the participants may be visible to researchers as well. Assuming that the researchers can understand the participant displays, they can observe the building of knowledge as it takes place. They do not have to rely on statistical analyses of reified outcomes data and after-the-fact reconstructions (interviews, surveys, talk-alouds), which are notoriously suspect.

Koschmann (1999a) pointed out this potential, derived from the nature of dialog as analyzed by Bakhtin, and also cited several studies outside of CSCL that adopted a discourse analytic approach to classroom interactions. According to Bakhtin (1986a), a particular spoken or written utterance is meaningful in terms of its references back to preceding utterances and forward to anticipated responses of a projected audience. These situated sequences of utterances take advantage of conventional or colloquial "speech genres" that provide forms of expression that are clearly interpretable within a linguistic community. Explicit cross-references and implicit selections of genres mean that sequences of dialogic utterances display adoptions, modifications and critiques of ideas under discussion, providing an intersubjectively accessible and interpretable record of collaborative knowledge building.

In order for collaborative learning processes to be visible to researchers, the participant interactions must be available for careful study and the researchers must be capable of interpreting them appropriately. In CSCL contexts, learning may take place within software media that not only transmit utterances but also preserve them; the information preserved for participants may be supplemented with computer logging of user actions for the researchers. If communications cannot otherwise be captured, such as in face-to-face collaboration, they can be videotaped; the tapes can be digitized and manipulated to aid in detailed analysis. In either case, it may be possible for researchers to obtain an adequate record of the interaction that includes most of the information that was available to participants. In face-to-face interaction, this generally includes gesture, intonation, hesitation, turn-taking, overlapping, facial expression, bodily stance, as well as textual content. In computer-mediated collaboration, everyone is limited to text, temporal sequence and other relationships among distinct utterances—but the number of relevant interrelated utterances may be much higher. To avoid being swamped with data that requires enormous amounts of time to analyze, researchers have to set up or focus on key interactions that span only a couple of minutes (see chapters 12 and 21).

The problem of researchers being capable of appropriately interpreting the interactions of participants is a subtle one, as anthropologists have long recognized (Geertz, 1973). A family of sciences has grown up recently to address this problem; these include conversation analysis (Sacks, 1992), ethnomethodology (Garfinkel, 1967; Heritage, 1984), video analysis (Heath, 1986), interaction analysis (Jordan & Henderson, 1995) and micro-ethnography (Streeck, 1983). These sciences have made explicit many of the strategies that are tacitly used by participants to display their learning to each other. Researchers trained in these disciplines know where to look and how to interpret what is displayed. Researchers should also have an insider's understanding of the culture they are observing. They should be competent members of the community or should be working with such members when doing their observation and analysis. For this reason, as well as to avoid idiosyncratic and biased interpretations, an important part of the analysis of interaction is usually conducted collaboratively. At some point, the interpretation may also be discussed with the actual participants. Collaboration is an intersubjective occurrence and its scientific study requires intersubjective confirmation rather than statistical correlations to assure its acceptability.

Observing Computer-Supported Collaborative Learning

If collaborative learning is visible, then why haven't more researchers observed and reported it? Perhaps the answer is because collaborative knowledge building is so rare today. I have tried to use systems similar to CSILE in several classrooms and have failed to see them used for knowledge building (see chapter 6). They may be used by students to express their personal opinions and raise questions but rarely to engage in the kind of ongoing dialog that Donald (1991) saw as the basis for a theoretic culture, or to engage in the investigation of "conceptual artifacts" (e.g., theories) that Bereiter (2002) identifies as central to knowledge building. Of the five classrooms reviewed in the two papers featured here, probably only one of them, a Canadian classroom, advanced significantly beyond the level of chat to more in-depth knowledge building. The exchange of superficial opinions and questions is just the first stage in a complex set of activities that constitute collaborative knowledge building (see chapter 9). Even simple statistics on thread lengths in threaded discussion systems (Guzdial & Turns, 2000; Hewitt & Teplovs, 1999) indicate that communication does not usually continue long enough to get much beyond chatting. Hence, the reviewed papers are correct that the classroom culture and pedagogy are critical, but they do not go far enough.
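Thread-length statistics of the kind just mentioned can be computed from nothing more than each note's reply link, as the following sketch illustrates. The data structure and the example thread are invented, not drawn from the cited studies.

```python
from collections import Counter

# Hypothetical threaded discussion: note id -> id of the note it responds
# to (None marks a thread-starting note). Invented data.
parent = {1: None, 2: 1, 3: 2, 4: None, 5: None, 6: 5, 7: None}

def chain_length(note_id):
    """Notes on the chain from this note back to its thread start."""
    length = 1
    while parent[note_id] is not None:
        note_id = parent[note_id]
        length += 1
    return length

# Terminal notes (ones that received no reply); for linear threads their
# chain length is the thread length.
leaves = set(parent) - {p for p in parent.values() if p is not None}
print(Counter(chain_length(n) for n in sorted(leaves)))
# Counter({1: 2, 3: 1, 2: 1}) -- most threads stop after one or two notes
```

Such a distribution can confirm that exchanges rarely continue, but it cannot say anything about why, or about what the longer threads accomplished.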

It is probably important for researchers to set up special learning contexts, in which students are guided to engage in collaborative knowledge building. Too much of this was left up to the teachers in the studies we have just reviewed, despite the fact that teachers in CSILE classrooms are explicitly trained to foster collaborative learning. Student activities must be carefully designed that will require collaboration and that will take advantage of computer support for it. For instance, in the Dutch university case, it sounds like the wrong tasks were made the focus of collaboration and computer support. Very few notes were entered into the computer system during the long "knowledge-deepening phase" when students were reading. Perhaps through a different definition of tasks, the students would have used the system more while they were building their knowledge by collecting relevant ideas and facts in the computer as a repository for shared information. The final product—the educational policy note—could have been made into the motivating collaborative task, giving meaning to the collection and analysis of all the issues surrounding it.

A nice success story of a researcher setting up a CSCL situation is related by Roschelle (1996). He designed a series of tasks in physics for pairs of students to work on using a computer simulation of velocity and acceleration vectors. He videotaped their interactions at the computer and in subsequent interviews. Through word-by-word analysis of their interactions, Roschelle was able to observe and interpret their collaboration and to demonstrate the degrees to which they had or had not learned about the physics of motion. He did the equivalent of looking seriously at the actual content of the thread of notes between Elske and her fellow students in the Netherlands. Through his micro-analysis, he made the learning visible.

It is true that Roschelle analyzed face-to-face communication, and this is in some ways a richer experience than computer-mediated interaction using software such as CSILE. But conversation analysis was originally developed in the context of telephone interactions (Schegloff & Sacks, 1973), so it is possible to interpret interactions in which bodily displays are excluded. Computer-mediated collaboration will turn out to look quite different from face-to-face interaction, but we should still be able to observe learning and knowledge building taking place by working out the ways in which people make and share meaning across the network. By making visible in our analysis what is already visible to the participants, we can rediscover the collaborative learning and the effects of computer support in CSCL contexts.