by Gerry Stahl
The ambiguity of CSCL
In their penultimate sentence, Hakkarainen, Lipponen & Järvelä correctly point out that CSCL researchers have a complex challenge because the educational use of new information/communication technologies is inextricably bound up with new pedagogical and cognitive practices of learning and instruction. The naïve, technology-driven view was that tools like CSILE would make a significant difference on their own. The subsequent experience has been that the classroom culture bends such tools to its own interests, and that this culture must be transformed before new media can mediate learning the way we had hoped they would. So CSCL research has necessarily and properly shifted from the affordances and effects of the technology to concerns with the instructional context. Thus, the central conclusions of Chapters 3 and 4 focus on the teacher’s role and say little that pertains to the presence of CSILE as such.
The two chapters have a similar structure: first they discuss abstract pedagogical issues from the educational or scientific research literature – e.g., the learner-as-thinker or the scientist-as-questioner paradigm. Then they present a statistical analysis of the notes in specific CSILE databases. Finally, they conclude that certain kinds of learning took place.
However, in both cases, one could imagine that the same learning might have taken place in these particular studied classrooms with their particular teacher guidance, without any computer support and without any collaboration! While there is no doubt that the concerns expressed and supported in these chapters are of vital importance to CSCL research, one wonders what happened to the CSCL.
The high-level concern of these chapters, which ends up ignoring the role of collaboration and technology, plays itself out at a methodological level. To see this requires reviewing the analysis undertaken in these chapters.
CSCL in the university
The chapter by de Jong, Diermanse & Lutgens raises three central questions for CSCL environments like CSILE:
Each of these questions would require a book to answer with any completeness – if we knew the answers. Research today is really just starting to pose the questions. Any answers proposed either supply the writer’s intuitive sense of what took place in an experiment or they rely on a methodology whose limitations become obvious in the very process of being applied to these questions. Let us consider each of these questions in turn.
The cultural, educational, learning and pedagogical context
Can CSILE (to use this prototypical system as a representative of the class of possible software systems for supporting collaborative knowledge building) be integrated into curriculum? The first issue implicitly posed by raising this question in the chapter was: in what cultural and educational setting? The studies presented here took place in the Netherlands, within the context of a larger European project including Finland, Belgium, Italy, and Greece. Much of the earlier work on CSILE was of course conducted in Canada, where the system was developed. There is no evidence presented in the chapter to say that national culture makes any difference in the adoption of CSILE.
A second aspect of context is: at what educational level is CSILE effective? The chapter reports studies at the university level and at a vocational agricultural school at the same age level. The related European studies focused on primary school children 9-11 years old. Systems like CSILE are most frequently used in primary and middle school classes, although they are increasingly being used in college classes as well. The studies in this chapter are not contrasted with other age groups and there is no reason given to think that educational level makes any significant difference. This is actually a surprising non-result, since one might assume that collaborative knowledge building requires mature cognitive skills. It may be that within modern schooling systems college students have not developed collaborative inquiry skills beyond an elementary school level.
A third aspect has to do with the learning styles of the individual students. This issue is explicitly raised by the methodology of the first (university) study. Here the students were given tests on cognitive processing strategies, regulation strategies, mental models of learning, and learning orientation. Based on these scores, they were classified as having one of four learning styles: application-directed, reproduction-directed, meaning-directed, or un-directed. A statistically significant correlation was found between the application-directed learners and the number of notes entered into CSILE. This was the only significant correlation involving learning styles. This may just mean that students who are generally more inclined to engage in tasks were in fact the ones who engaged more in the note creation task of the study – not a very surprising result.
A fourth aspect involves the incorporation of collaboration software into a particular curriculum or classroom culture. As the chapter makes clear, CSILE is not intended for a traditional teacher-centered classroom with delivery of facts through lecture. The use of such a technology as a centerpiece of classroom learning raises the most complex issues of educational transformation. Not only the teacher and student roles, but also the curricular goals and the institutional framework have to be re-thought. If collaborative knowledge building is really going to become the new aim, what happens to the whole individual/competitive grading system that functions as a certification system integral to industrial society? Is it any wonder that “students are not used to sharing their knowledge”? What will it take to change this?
Promoting collaborative knowledge building
The chapter’s conclusion section cites two arguments for the claim that CSILE resulted in much more collaborative learning by the students. First, it contrasts the study with “past courses in which students were directed through the course by closed tasks.” No attempt beyond this half sentence is made to draw out the contrast. Clearly, by definition, a course that has been restructured to centrally include collaborative discussion will at least appear to be more collaborative than its teacher-centered predecessor. But it is then important to go on and consider concretely what took place collaboratively and what specific kinds of knowledge were built collaboratively.
The second evidence for collaborative knowledge building comes from an activity that apparently took place outside of CSILE in a non-collaborative manner: the rewriting of educational policy notes. This seems like precisely the kind of collaborative task that could have pulled the whole course together as a joint project. Students could have collected and shared ideas from their readings with the goal of building a group external memory of ideas that would be used in collectively rewriting the educational policy. Instead, the individual students had to retain whatever the group learned using CSILE, combine it with individualized learning from readings, and “transfer” this knowledge to the final individual “authentic” task. Thus, the chapter concludes that the use of CSILE “resulted in sufficient transfer of the acquired understanding to work with in an authentic problem.” There is no evidence of learning or transfer other than a general judgment that the final product was of “high quality.”
The remaining evidence for collaborative knowledge building is given by two standard statistical measures of on-line discussions. The first measure is a graph of the number of notes posted each week of the course by students and by teachers. In the university study, this chart shows a large peak at the beginning and a smaller one at the end – for both students and teachers. There is virtually no addition of new notes for the central half of the course, and only minimal reading of notes then. This is extraordinary, given that the chapter calls this period the “knowledge deepening phase.” This is precisely when one would hope to see collaborative knowledge building taking place. As students read, research, and deepen their ideas they should be sharing and interacting. Clearly, they know how to use the technology at this point. If CSILE truly promotes student-directed collaboration, then why is this not taking place? Raising this question is in no way intended to criticize anyone involved in this particular experiment, for this is an all-too-common finding in CSCL research.
The vocational study also presents a graph of the number of notes posted each week. Here, there are peaks in the middle of the course. But, as the chapter points out, the peaks in student activity directly follow the peaks in teacher activity. This indicates a need for continuing teacher intervention and guidance. The apparently causal relation between teacher intervention and student activity raises the question of the nature of the student activity. Are students just creating individual notes to please the teacher, or has the teacher stimulated collaborative interactions among the student notes? Since the graph only shows the number of created notes, such a question cannot be addressed; the required information is no longer present in the data.
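The information loss can be made concrete. In the following sketch – the note records, field names, and values are invented for illustration and do not come from the CSILE databases – aggregating notes into weekly counts per role reproduces the kind of graph the chapter presents, while the reply links one would need to distinguish interactional from merely parallel posting are discarded in the aggregation:

```python
from collections import Counter

# Hypothetical note records: (author_role, week_posted, note_id, replies_to_id).
# All names and values are illustrative, not taken from the CSILE data.
notes = [
    ("teacher", 1, "n1", None),
    ("student", 1, "n2", "n1"),   # reply to the teacher's note
    ("student", 2, "n3", None),   # isolated student note
    ("student", 2, "n4", "n2"),   # reply extending another student's note
]

# The published graph reduces the data to weekly counts per role ...
weekly_counts = Counter((role, week) for role, week, _, _ in notes)

# ... which discards the reply structure needed to ask whether student
# activity was collaborative or merely parallel individual posting.
replies_to_students = [
    n for n in notes
    if n[3] is not None and any(m[0] == "student" and m[2] == n[3] for m in notes)
]

print(weekly_counts)             # the counts survive the aggregation
print(len(replies_to_students))  # this does not appear in the graph at all
```

Only the first quantity is recoverable from a weekly-count graph; the second, which bears directly on the question of collaboration, is gone.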
The second statistical measure for the university study is a table of correlations among several variables of the threaded discussion: notes created, notes that respond to earlier notes, notes linked to other notes, notes revised, notes read by students. The higher correlations in the table indicate that many notes were responses to other notes and that these were read a lot. This is taken as evidence for a high level of collaboration taking place in CSILE. A nice sample of such collaboration is given in Figure 2. Here one student (Elske) has posted a statement of her theory (MT). A discussion ensues, mostly over three days, but with a final contribution 9 days later. This collection of 10 linked notes represents a discussion among four people about Elske’s theory. It might be informative to look at the content of this discussion to see what form – if any – of knowledge building is taking place.
The teacher’s role
The chapter ends with some important hints about how CSILE classrooms need to be different from lecture-dominated contexts: the use of the collaboration technology must be highly structured, with a systematic didactic approach, continuing teacher involvement, and periodic face-to-face meetings to trouble-shoot problems and reflect on the collaborative learning process. These suggestions are not specific to the studies presented; they should only surprise people – if there still are any – who think that putting a computer box in a classroom will promote learning by itself. These are generic recommendations for any form of learner-as-thinker pedagogy, regardless of whether or not there is collaboration or computer support.
The chapter by Hakkarainen et al. comes to a similar conclusion, by a somewhat different, though parallel route. Some of the preceding comments apply to it as well. But it also represents a significant advance in uncovering the quality of the discussion that takes place. In their discussion section, the authors are clearly aware of the limitations of their approach, but in their actual analysis they too fail to get at the collaboration or the computer support.
Hakkarainen et al. are interested in the “epistemology of inquiry” in CSCL classrooms. That is, they want to see what kinds of knowledge are being generated by the students in three different classrooms – two in Canada and one in Finland – using CSILE. To analyze the kinds of knowledge, they code the ideas entered into the CSILE database along a number of dimensions. For instance, student knowledge ideas were coded as either (a) scientific information being introduced into the discussion or (b) a student’s own view. Ideas of both these kinds were then rated as to their level of explanatory power: (a) statement of isolated facts, (b) partially organized facts, (c) well-organized facts, (d) partial explanation, or (e) explanation.
Statistical analysis of the coded ideas provides strong evidence that the epistemology of inquiry was different in the three classrooms. In particular, one of the Canadian classrooms showed a significantly deeper explanatory understanding of the scientific phenomena under discussion. This was attributed by the authors to a difference in the classroom culture established by the teacher, including through the teacher’s interactions with students via CSILE. Thus, the approach of coding ideas achieved the authors’ goal of showing the importance of the classroom culture to the character of collaborative knowledge building.
The epistemology of science
Hakkarainen et al. review certain philosophers of science and characterize the enterprise of science in terms of posing specific kinds of questions and generating specific kinds of statements. This may be a valid conceptualization of scientific inquiry. But let us consider a different perspective more directly related to collaboration and computer support.
In his reconstruction of the Origins of the Modern Mind, Donald (1991) locates the birth of science in the discovery by the ancient Greeks that “by entering ideas, even incomplete ideas, into the public record, they could later be improved and refined” (p. 342). In this view, what drives scientific advance is collaboration that is facilitated by external memory – precisely the promise of CSCL.
Significantly, this framing of scientific knowledge building focuses on the social process and its mediation by technologies of external memory (from written language to networked digital repositories). According to this approach, we should be analyzing not so much the individual questions and statements of scientific discourse, but the sequences of their improvement and refinement. Relatedly, we can look at the effects of the affordances of technologies for expressing, communicating, relating, organizing, and retaining these evolving ideas.
Reification of data and its consequences for CSCL
Unfortunately, Hakkarainen et al. focus exclusively on the individual statements. They relate their categorization of statements to CSILE in terms of that system’s “thinking types,” which the CSILE designers selected to scaffold the discourse of a community of learners. However, the thinking type categories that label statements in CSILE were designed precisely to facilitate the interconnection of notes – to indicate to students which notes were responses and refinements of other notes.
For purposes of analyzing the use of CSILE in different classrooms, the authors operationalize their view of science. They systematically break down all the notes that students communicated through CSILE into unit “ideas” and categorize these textual ideas according to what kind of question or statement they express. This turns out to be a useful approach for deriving qualitative and quantitative answers to certain questions about the kind of scientific discussions taking place in the classrooms. Indeed, this is a major advance over the analysis in de Jong et al., which cannot differentiate different kinds of notes from each other at all.
However, the reduction of a rich discussion in a database of student notes into counts of how many note fragments (“ideas”) fall into each of several categories represents a loss of much vital information. The notes – which were originally subtle speech acts within a complexly structured community of learners – are now reified into a small set of summary facts about the discussion. For all the talk in CSCL circles about moving from fact-centered education to experiential learning, CSCL research (by no means just the chapter under review here, but most of the best in the field) remains predominantly fact-reductive.
Of course, the methodology of coding statements is useful for answering certain kinds of questions – many of which are undeniably important. And the methodology can make claims to scientific objectivity: wherever subjective human interpretations are made they are verified with inter-rater reliability, and wherever claims are made they are defended with statistical measures of reliability.
However, it becomes clear here that the coding process has removed not only all the semantics of the discussion – so that we can no longer see what scientific theories were developed or what critical issues were raised – but also any signs of collaboration. We do not know which note refined which other note, how long an important train of argument was carried on, or how many students were involved in a particular debate. We cannot even tell whether there were interactions among all, some, or none of the students.
To their credit, Hakkarainen et al. recognize that their (and de Jong’s) measures capture only a small part of what has taken place in the classrooms. In their chapter they are just trying to make a single focused point about the impact of the teacher-created classroom culture upon the scientific level of the CSILE-mediated discourse. Furthermore, in their discussion section they note the need for different kinds of analysis to uncover the “on-line interactions between teacher and students” that form a “progressive discourse,” which is central to knowledge building according to Bereiter (2000). For future work they propose social network analysis, which graphically represents who interacted with whom, revealing groups of collaborators and non-collaborators. While this would provide another useful measure, note that it too discards both the content and the nature of any knowledge building that may have taken place in the interactions. Methodologically, they still situate knowledge in the heads of individual students and then seek relations among these ideas, rather than seeking knowledge as an emergent property of the collaboration discourse itself.
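As a rough illustration of what social network analysis would add – and of what it would still omit – the following sketch (with invented student names and reply pairs) builds a who-replied-to-whom graph and identifies the non-collaborators. Note that nothing in the graph carries any trace of what was actually said:

```python
from collections import defaultdict

# Hypothetical reply records (replier, replied_to); the names are invented.
# Social network analysis needs only this who-to-whom relational data.
replies = [
    ("Anna", "Ben"), ("Ben", "Anna"), ("Carl", "Anna"),
    ("Anna", "Carl"), ("Ben", "Carl"),
]
students = {"Anna", "Ben", "Carl", "Dina"}

# Build an undirected interaction graph from the reply links.
graph = defaultdict(set)
for replier, replied_to in replies:
    graph[replier].add(replied_to)
    graph[replied_to].add(replier)

collaborators = {s for s in students if graph[s]}
non_collaborators = students - collaborators
print(non_collaborators)  # students who interacted with no one
```

The analysis reveals that one student never interacted, but it cannot say whether the others were building knowledge together or merely exchanging greetings – the content has been discarded along with everything else but the links.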
Where to rediscover CSCL
Chapters 3 and 4 represent typical studies of CSCL. The first type provides graphs of note distributions and argues that this demonstrates computer-supported collaboration that is more or less intense at different points represented in the graph. Sometimes, additional analyses of discussion thread lengths provide some indication of processes of refinement, although without knowing what was said and how ideas evolved through interactions during that process it is impossible to judge the importance of the collaboration. The second type of analysis codes the semantics of the notes to draw conclusions about the character of the discussion without really knowing what the discussion was about. It has generally been assumed that the only alternative is to make subjective and/or anecdotal observations from actually observing some of the discussion and understanding its content – and that this would be impractical and unscientific.
A major problem that we have just observed with the prevalent assessment approaches is that they throw out the CSCL with the richness of the phenomenon when they reduce everything to data for statistics.
What we need to do now is to look at examples of CSCL and observe the collaboration taking place. Collaborative knowledge building is a complex and subtle process that cannot adequately be reduced to a simple graph or coding scheme, however much those tools may help to paint specific parts of the picture. One central question that needs to be addressed seriously has to do with our claim that collaboration is important for knowledge building. We need to ask where there is evidence that knowledge emerged from the CSCL-mediated process that would not have emerged from a classroom of students isolated at their desks, quietly hunched over their private pieces of paper. Beyond that, we should be able to trace the various activities of collaborative knowledge building: where one person’s comment stimulates another’s initial insight or question, one perspective is taken over by another, a terminological confusion leads to clarification, a set of hypotheses congeals into a theory, . . . , and a synergistic understanding emerges thanks to the power of computer-supported collaborative learning.
Before we had systems like CSILE, collaboration across a classroom was not feasible. How could all the students simultaneously communicate their ideas in a way that others could respond to whenever they had the time and inclination? How could all those ideas be captured for future reflection, refinement, and reorganization? CSCL promises that this is now possible. We have to show that it has become a reality in showcase classrooms – that CSCL systems really do support this and that exciting things really are taking place thanks to this technology that could not otherwise. Only when our analyses demonstrate this will we have rediscovered CSCL in our analysis of classroom experiments.
Making collaborative learning visible
Statistical analysis of outcomes has dominated educational research because it was assumed that learning takes place in people’s heads, and since Descartes it has been assumed that we have only indirect access to processes in there. Much work in the cognitive sciences, including artificial intelligence, assumes that we can at best model the mental representations that are somehow formed or instilled by learning. Whatever we may think of these assumptions, they surely do not apply to collaborative learning. By definition, this is an intersubjective achievement; it takes place in observable interactions among people in the world.
The point is that for two or more people to collaborate on learning, they must display to each other enough that everyone can judge where there are agreements and disagreements, conflicts or misunderstandings, confusions and insights. In collaborating, people typically establish conventional dialogic patterns of proposing, questioning, augmenting, mutually completing, repairing, and confirming each other’s expressions of knowledge. Knowledge here is not so much the ownership by individuals of mental representations in their heads as it is the ability to engage in appropriate displays within the social world. Thus, to learn is to become a skilled member of communities of practice (Lave & Wenger, 1991) and to be competent at using their resources (Suchman, 1987), artifacts (Norman, 1993), speech genres (Bakhtin, 1986), and cultural practices (Bourdieu, 1972/1995). The state of evolving knowledge must be continually displayed by the collaborating participants to each other. The stance of each participant to that shared and disputed knowledge must also be displayed.
This opens an important opportunity to researchers of collaborative learning that traditional educational studies lacked: what is visible to the participants may be visible to researchers as well. Assuming that the researchers can understand the participant displays, they can observe the building of knowledge as it takes place. They do not have to rely on statistical analysis of reified outcomes data and after-the-fact reconstructions that are notoriously suspect. Koschmann (1999) pointed out this potential, which derives from the nature of dialog as analyzed by Bakhtin, and also cited several studies outside of CSCL that adopted a discourse analytic approach to classroom interactions.
According to Bakhtin (1986), a particular spoken or written utterance is meaningful in terms of its references back to preceding utterances and forward to responses of a projected audience. These situated sequences of utterances take advantage of conventional or colloquial “speech genres” that provide forms of expression that are clearly interpretable within a linguistic community. Explicit cross-references and implicit selections of genres mean that sequences of dialogic utterances display adoptions, modifications, and critiques of ideas under discussion, providing an intersubjectively accessible and interpretable record of collaborative knowledge building.
In order for collaborative learning processes to be visible to researchers, the participant interactions must be available for careful study and the researchers must be capable of interpreting them appropriately. In CSCL contexts, learning may take place within software media that not only transmit utterances but also preserve them; the information preserved for participants may be supplemented with computer logging of user actions for the researchers. If communications are not otherwise captured, as in face-to-face collaboration, they can be videotaped; the tapes can be digitized and manipulated to aid detailed analysis. In either case, it may be possible for researchers to obtain an adequate record of the interaction that includes most of the information that was available to participants. In face-to-face interaction, this generally includes gesture, intonation, hesitation, turn-taking, overlapping, facial expression, bodily stance, as well as textual content. In computer-mediated collaboration, everyone is limited to text, temporal sequence, and other relationships among distinct utterances – but the number of relevant interrelated utterances may be much higher. To avoid being swamped with data that requires enormous amounts of time to analyze, researchers have to set up or focus on key interactions that span only a couple of minutes.
The problem of researchers being capable of appropriately interpreting the interactions of participants is a subtle one, as anthropologists have long recognized (Geertz, 1973). A family of sciences has grown up recently to address this problem, including conversation analysis (Sacks, 1992), ethnomethodology (Garfinkel, 1967; Heritage, 1984), video analysis (Heath, 1986), interaction analysis (Jordan & Henderson, 1995), and microethnography (Streeck, 1983). These sciences have made explicit many of the strategies that are tacitly used by participants to display their learning to each other. Researchers trained in these disciplines know where to look and how to interpret what is displayed. Researchers should also have an intimate understanding of the culture they are observing. They should be competent members of the community or should be working with such members when doing their observation and analysis. For this reason, as well as to avoid idiosyncratic and biased interpretations, an important part of the analysis of interaction is usually conducted collaboratively. At some point, the interpretation may also be discussed with the actual participants to confirm its validity. Collaboration is an intersubjective occurrence and its scientific study requires intersubjective confirmation rather than statistical correlations to assure its acceptability.
Observing computer-supported collaborative learning
If collaborative learning is visible, then why haven’t more researchers observed and reported it? Perhaps because collaborative knowledge building is so rare today. I have tried to use systems similar to CSILE in several classrooms and have failed to see them used for knowledge building (Stahl, 1999). They may be used by students to express their personal opinions and raise questions, but rarely to engage in the kind of on-going dialog that Donald (1991) saw as the basis for a theoretic culture or to engage in the investigation of “conceptual artifacts” (e.g., theories) that Bereiter (2000) identifies as central to knowledge building. Of the five classrooms reviewed in chapters 3 and 4, probably only one of the Canadian classrooms advanced significantly beyond the level of chat to more in-depth knowledge building. The exchange of superficial opinions and questions is just the first stage in a complex set of activities that constitute collaborative knowledge building (Stahl, 2000). Even simple statistics on thread lengths in threaded discussion systems (Guzdial & Turns, 2000; Hewitt & Teplovs, 1999) indicate that communication does not usually continue long enough to get much beyond chatting. So the reviewed chapters are right that the classroom culture and pedagogy are critical, but they do not go far enough.
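The thread-length statistics mentioned here are simple to compute. In this sketch – the note identifiers and parent links are invented for illustration – each note points to the note it responds to, and a thread’s length is the number of notes tracing back to the same root; a distribution dominated by very short threads would indicate that exchanges rarely continue beyond an opening move:

```python
from collections import Counter

# Hypothetical (note_id, parent_id) pairs; parent_id None marks a thread root.
posts = [("a", None), ("b", "a"), ("c", "b"), ("d", None), ("e", None), ("f", "e")]

parent = dict(posts)

def root(note_id):
    # Follow reply links upward until a thread-starting note is reached.
    while parent[note_id] is not None:
        note_id = parent[note_id]
    return note_id

# Thread length = number of notes that share the same root.
thread_lengths = Counter(root(n) for n, _ in posts)
print(sorted(thread_lengths.values(), reverse=True))  # → [3, 2, 1]
```

Even this measure, of course, only shows how long exchanges lasted, not whether any refinement of ideas took place within them.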
It is probably important for researchers to set up special learning contexts in which students are guided to engage in collaborative knowledge building. Too much of this was left up to the teachers in the studies we have just reviewed – despite the fact that teachers in CSILE classrooms are explicitly trained to foster collaborative learning. Student activities must be carefully designed that will require collaboration and that will take advantage of computer support for it. For instance, in the Dutch university case it sounds like the wrong tasks were made the focus of collaboration and computer support. Very few notes were entered into the computer system during the long “knowledge deepening phase” when students were reading. Perhaps through a different definition of tasks, the students would have used the system more while they were building their knowledge by collecting relevant ideas and facts in the computer as a repository for shared information. The final product – the educational policy note – could have been made into the motivating collaborative task that would have made the collection and analysis of all the surrounding issues meaningful.
A nice success story of a researcher setting up a CSCL situation is related by Roschelle (1996) . He designed a series of tasks in physics for pairs of students to work on using a computer simulation of velocity and acceleration vectors. He videotaped their interactions at the computer and in subsequent interviews. Through word-by-word analysis of their interactions, Roschelle was able to observe and interpret their collaboration and to demonstrate the degrees to which they had or had not learned about the physics of motion. He did the equivalent of looking seriously at the actual content of the thread of notes between Elske and her fellow students in the Netherlands. Through his micro-analysis, he made the learning visible.
It is true that Roschelle analyzed face-to-face communication and this is in some ways a richer experience than computer-mediated interaction using software like CSILE. But conversation analysis was originally developed in the context of telephone interactions (Schegloff & Sacks, 1973), so it is possible to interpret interactions where bodily displays are excluded. Computer-mediated collaboration will turn out to look quite different from face-to-face interaction, but we should still be able to observe learning and knowledge building taking place there by working out the ways in which people make and share meaning across the network. By making visible in our analysis what is already visible to the participants, we can rediscover the collaborative learning and the effects of computer support in CSCL contexts.
The view of collaborative learning as visible in interaction is itself a collaborative product that has emerged in interactions of the author with Timothy Koschmann, Curtis LeBaron, Alena Sanusi and other members of a Fall 2000 seminar in CSCL.
Bakhtin, M. (1986) Speech Genres and Other Late Essays, (V. McGee, Trans.), University of Texas Press, Austin, TX.
Bereiter, C. (2000) Education and Mind in the Knowledge Age. Available at: http://csile.oise.utoronto.ca/edmind/main.html.
Bourdieu, P. (1972/1995) Outline of a Theory of Practice, (R. Nice, Trans.), Cambridge University Press, Cambridge, UK.
Donald, M. (1991) Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition, Harvard University Press, Cambridge, MA.
Garfinkel, H. (1967) Studies in Ethnomethodology, Prentice-Hall, Englewood Cliffs, NJ.
Geertz, C. (1973) The Interpretation of Cultures, Basic Books, New York, NY.
Guzdial, M. & Turns, J. (2000) Sustaining discussion through a computer-mediated anchored discussion forum, Journal of the Learning Sciences.
Heath, C. (1986) Video analysis: Interactional coordination in movement and speech. In Body Movement and Speech in Medical Interaction, Cambridge University Press, Cambridge, UK, pp. 1-24.
Heritage, J. (1984) Garfinkel and Ethnomethodology, Polity Press, Cambridge, UK.
Hewitt, J. & Teplovs, C. (1999) An analysis of growth patterns in computer conferencing threads, In: Proceedings of Computer Supported Collaborative Learning (CSCL '99), Palo Alto, CA, pp. 232-241.
Jordan, B. & Henderson, A. (1995) Interaction analysis: Foundations and practice, Journal of the Learning Sciences, 4 (1), pp. 39-103. Available at: http://lrs.ed.uiuc.edu/students/c-merkel/document4.HTM.
Koschmann, T. (1999) Toward a dialogic theory of learning: Bakhtin's contribution to learning in settings of collaboration, In: Proceedings of Computer Supported Collaborative Learning (CSCL '99), Palo Alto, CA, pp. 308-313. Available at: http://kn.cilt.org/cscl99/A38/A38.HTM.
Lave, J. & Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation, Cambridge University Press, Cambridge, UK.
Norman, D. A. (1993) Things That Make Us Smart, Addison-Wesley Publishing Company, Reading, MA.
Roschelle, J. (1996) Learning by collaborating: Convergent conceptual change. In T. Koschmann (Ed.) CSCL: Theory and Practice of an Emerging Paradigm, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 209-248.
Sacks, H. (1992) Lectures on Conversation, Blackwell, Oxford, UK.
Schegloff, E. A. & Sacks, H. (1973) Opening up closings, Semiotica, 8, pp. 289-327.
Stahl, G. (1999) WebGuide: Guiding collaborative learning on the Web with perspectives, In: Proceedings of Annual Conference of the American Educational Research Association (AERA '99), Montreal, Canada. Available at: http://GerryStahl.net/publications/conferences/1999/aera99/ -- and -- http://www-jime.open.ac.uk/00/stahl/stahl-t.html.
Stahl, G. (2000) A model of collaborative knowledge-building, In: Proceedings of Fourth International Conference of the Learning Sciences (ICLS 2000), Ann Arbor, MI, pp. 70-77. Available at: http://GerryStahl.net/publications/conferences/2000/icls/ -- and -- http://www.umich.edu/~icls/proceedings/abstracts/ab70.html.
Streeck, J. (1983) Social Order in Child Communication: A Study in Microethnography, Benjamins, Amsterdam, NL.
Suchman, L. (1987) Plans and Situated Actions: The Problem of Human-Machine Communication, Cambridge University Press, Cambridge, UK.