In Proceedings of the Computer Support for Collaborative Learning (CSCL) 1999 Conference, C. Hoadley & J. Roschelle (Eds.) Dec. 12-15, Stanford University, Palo Alto, California. Mahwah, NJ: Lawrence Erlbaum Associates.

Collaborative Scientific Inquiry in a Simulated Laboratory

Anandi Nagarajan, Cindy E. Hmelo

Department of Educational Psychology, Rutgers University

Roger S. Day

Pittsburgh Cancer Institute, University of Pittsburgh Medical Center

Abstract: Designing clinical trials is a complex, ill-structured task. Medical students are equipped with neither the relevant experience nor the appropriate schemas to design these experiments effectively. Providing novice learners with real-world experience is not always feasible because of safety, time, and cost constraints. However, simulated experimental design environments can make these experiences possible. Computer models can promote new ways of thinking, facilitate multiple representations of problems, model the inquiry process, and provide explicit external representations. The Oncology Thinking Cap is one such modeling environment; it provides students with experience in designing and running simulated clinical trials. In this paper, we present a case study of a group of fourth-year medical students collaboratively designing Phase II clinical trials to test a cancer drug, Pittamycin. The trials were designed with a special interface that served to promote task-centered discussion and the construction of a shared problem space. The students became aware of the factors involved in this complex task and used the feedback to revise their understanding.
Keywords: Distributed Intelligence, Simulation, Cognitive Analysis

Introduction

Effective scientific inquiry skills are difficult to learn. Medical students are expected to have general as well as domain-specific inquiry skills to participate in clinical research (Hmelo et al., 1998). For example, they need to understand clinical trial designs. While expert scientists have the relevant experience and appropriate schemas to effectively design experiments, novices need to develop these skills through case-based practice and experience (Kolodner, 1993). Providing students with real-life learning experiences is not always feasible or safe. External artifacts and computing tools can be used to provide novices with opportunities to build and refine their mental models as well as to develop problem-solving skills (Kozma, 1991; Pea, 1993).

This paper presents a case study of a group of fourth-year medical students working collaboratively to design Phase II clinical trials to test a cancer drug, Pittamycin. The students designed their trials using the Oncology Thinking Cap (OncoTCAP), a computer-based modeling laboratory for conducting experiments in cancer biology. To make OncoTCAP easier to use, a special-purpose interface, the Phase II clinical trial wizard, was developed. The wizard served as a tool to focus task-centered discussion and plan clinical trials. This paper presents a qualitative overview of how the software served as a tool to support distributed learning across a group of medical students as they designed a Phase II clinical trial.

Distributed learning

Learning is not an isolated process. Students depend on interaction with teachers and peers to construct knowledge. Interaction is even more important in learning about ill-structured domains (Koschmann et al., 1996). However, if students do not have adequate prior domain knowledge, they are not likely to construct knowledge through collaboration alone. Computer models or simulations can provide support because they offer 1) a context for learning and 2) software-realized scaffolding (de Jong & van Joolingen, 1998; Pea, 1993) that can communicate the inquiry process by structuring and sometimes simplifying the task (Hmelo & Guzdial, 1996). Computer tools can be used to offload lower-level tasks so the learner can concentrate on constructing a high-level understanding. Used in this manner, computers are no longer seen as isolated tools to individualize learning but can provide a context for discourse (Reusser, 1993).

Salomon (1993) distinguished between the role of computers as pedagogic tools versus performance-oriented tools. For example, tools that help guide physicians to become better diagnosticians are, and should be, more pedagogically oriented because the emphasis is on developing the physician's solo diagnostic skills. Computer tools can be used to promote new ways of thinking that allow learners to participate in activities and accomplish tasks they could not do otherwise (Pea, 1993). They can help learners represent problems in a more expert manner, test out their understanding and receive feedback to revise their thinking, and model appropriate problem-solving and inquiry processes. Because they provide an external representation, computer tools support learning as students' ideas become objects of inquiry.

Representing the problem

Prior to planning a problem solution, learners need to have a clear representation of the goals of the task. Reusser (1993) proposed that a problem-solving process consists of a "representation-construction phase, followed by problem-solving operations that act upon the created representation." In a collaborative learning situation, learners have to establish a shared conceptual understanding of the task, referred to as a Joint Problem Space (JPS) (Teasley & Roschelle, 1993). The JPS is created through a shared understanding of goals, knowledge of the current problem state, and awareness of problem-solving actions that could be used to move from the current state to the final goal. While the creation of a joint problem space is facilitated by the social interaction among group members, it is this shared understanding that provides a context for further dialogue and problem solving. Koschmann et al. (1996) note that a single approach or problem representation is insufficient for understanding the complexity of an ill-structured problem, and they suggest the use of multiple perspectives, representations, and strategies.

Problem representation is often a difficult task for learners. Although a group of students might be expected to bring multiple perspectives to a problem and thus generate a broader understanding, similarity in their backgrounds can mean that they share much the same knowledge base (Dunbar, 1995). In addition, the complexity of the problem, interactions among variables, and social conflict can add to the learner's cognitive load. Pea argued that computational tools can be used to provide multiple representations and avoid the cognitive overload of tasks with complex goal structures (Pea, 1993). The use of "cognitively plausible and pedagogically useful representation systems" can help students create mental models of a problem while capturing its essential structural features (Reusser, 1993, p. 149).

Computational support for experimental design

One example of an ill-structured problem is experimental design. There are many different types of designs that might be appropriate in different domains and for different purposes. Computer tools can be designed to support the experimental process by providing slots for input variables (Baker & Dunbar, 1996), proceduralizing information (Kozma, 1991), executing plans (Pea, 1993), and providing learners with representational tools for discussion and interpretation (Reusser, 1993).

Computer models are especially useful for constructing links between symbolic domains and the phenomena represented by real-world situations (Kozma, 1991). Manipulating a computer model by changing the values of input variables and observing changes in output variables are basic steps in scientific discovery learning (de Jong & van Joolingen, 1998). While computer environments can provide additional support to guide discourse in the form of hints, menus, and elaborative feedback, it is important to ensure that computer tools assist learners in developing internal mental models rather than providing easy answers. Salomon (1993) argued that while technology can assist learners in understanding concepts and solving problems, transfer of knowledge and strategies is possible only if learners internalize the knowledge constructed with the help of pedagogic tools.

Designing clinical trials

Medicine is a domain that is both complex and ill structured (Koschmann et al., 1996). Patients can exhibit considerable variability, making it difficult to apply a single diagnosis or solution across patients. Training medical students in diagnosis through a case-based approach facilitates learning in real-world situations. Similarly, providing students with real-world experience in designing clinical trials should improve their understanding of the complex cognitive processes needed for effective trial design. Because this kind of testing takes a long time, is expensive, and exposes people to health risks, a computer-based simulated laboratory can provide students with abundant learning opportunities, ample practice, and an understanding of how a particular drug works overall as well as of individual patient variability.

A drug typically goes through several stages of laboratory and clinical testing before it is made available for general use (Simon, 1993). Phase I clinical testing involves a small group of patients, and its aim is to identify a safe maximum tolerated dose (MTD). A Phase II trial is subsequently conducted to see if there is any clinical response to the drug. In the design of a Phase II trial, researchers specify a single dose and dosing schedule for the drug along with conditional rules for dose modification following occurrences of toxicity. There are no control conditions in Phase II trials. Instead, factors such as the patient's age and stage of disease are used to determine the statistical criteria for declaring a drug worthy of further study (Simon, 1993). The goals of Phase II studies are twofold: assessing the positive effects of the drug in terms of clinical responses, and determining negative effects in terms of toxicity.
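To make these design components concrete, the sketch below (in Python, with hypothetical names; it is illustrative only and not OncoTCAP's actual data model) shows one way to represent the parameters a Phase II design specifies: a single dose and schedule, conditional dose-modification rules triggered by toxicity, off-treatment criteria, and statistical criteria for declaring the drug worthy of further study.

from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only; field names are hypothetical, not OncoTCAP's.
@dataclass
class DoseModificationRule:
    toxicity_type: str       # e.g., "hematologic" or "neurologic"
    grade_threshold: int     # grade at or above which the rule applies
    dose_reduction: float    # fraction of the dose withheld, e.g., 0.25

@dataclass
class PhaseIIDesign:
    drug: str
    dose_mg_per_m2: float                 # single dose, usually at or near the MTD
    schedule: str                         # e.g., "IV once per week for 4 weeks"
    dose_modifications: List[DoseModificationRule] = field(default_factory=list)
    off_treatment_grade: int = 4          # take a patient off treatment at this toxicity grade
    sample_size: int = 50                 # number of patients accrued
    min_responses: int = 5                # responses needed to call the drug promising

# A design roughly matching the students' initial paper plan for Pittamycin
design = PhaseIIDesign(
    drug="Pittamycin",
    dose_mg_per_m2=60.0,
    schedule="IV once per week for 4 weeks",
    dose_modifications=[
        DoseModificationRule("hematologic", grade_threshold=3, dose_reduction=0.25),
        DoseModificationRule("neurologic", grade_threshold=3, dose_reduction=0.25),
    ],
)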

Methodology

Participants

This study was conducted at the University of Pittsburgh School of Medicine. A group of medical experts and six groups of fourth-year medical students participated in the study. For this paper, the verbal protocol of one student group was selected and analyzed qualitatively. Elsewhere, we have demonstrated that, overall, students learned about trial design from this task (Hmelo et al., 1998). The student group had previously received a lecture on clinical trial design and had also designed a clinical trial on paper prior to working with the computer simulation.

Materials

The participants' task was to design a successful Phase II clinical trial. These trials were designed using OncoTCAP, a computer-based modeling laboratory. It models the heterogeneous populations of cells that comprise tumors. OncoTCAP can model important concepts in cancer biology such as cell growth, death, and repair mechanisms; mutational processes; treatment characteristics, resistance, and schedules; and genetic characteristics (Day et al., 1998). OncoTCAP models these processes by specifying the different properties of cancer cells, such as their genetic make-up, location, and resistance (Ramakrishnan et al., 1998).
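As a rough illustration of the kind of population model this implies (a toy sketch under simplifying assumptions, not OncoTCAP's actual implementation), the snippet below steps a heterogeneous tumor, made up of subpopulations with different growth rates and drug sensitivities, through growth, treatment effects, and mutation to resistance.

# Toy sketch of a heterogeneous tumor; not OncoTCAP's model.
# Each subpopulation is (cell_count, weekly_growth_factor, fraction_killed_per_dose).

def step_tumor(subpops, drug_given, mutation_rate=1e-6):
    """Advance every subpopulation by one week; optionally apply one drug dose."""
    next_subpops = []
    for cells, growth, kill in subpops:
        cells *= growth                       # net growth over the week
        if drug_given:
            cells *= (1.0 - kill)             # treatment effect on sensitive cells
        mutants = cells * mutation_rate       # a tiny fraction becomes resistant
        next_subpops.append((cells - mutants, growth, kill))
        if mutants > 0:
            next_subpops.append((mutants, growth, 0.0))   # resistant subclone: no kill
    return next_subpops

# Start with one sensitive clone at roughly the clinically observable size (about 1e9 cells)
tumor = [(1e9, 1.05, 0.9)]
for week in range(8):
    tumor = step_tumor(tumor, drug_given=(week < 4))   # treat weekly for the first 4 weeks
total_cells = sum(cells for cells, _, _ in tumor)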

Figure 1: Step 1 of the Clinical Trial Design Wizard: Defining the dose and schedule

Figure 2: Step 2 of the Clinical Trial Design Wizard: Modifying the dose due to occurrence of toxicity

Figure 3: Step 3 of the Clinical Trial Design Wizard: Deciding when individual patients will be taken off-treatment

Figure 4: Step 4 of the Clinical Trial Design Wizard: Setting the statistical parameters

To make this tool usable for novices, a special-purpose interface was developed. The Phase II clinical trial wizard assists novices by dividing the task into four steps that lead the user through the experimental design process (Hmelo et al., 1998). In these screens, the user enters various design parameters for the trial: the Schedule (Figure 1), Dose Modifications (Figure 2), Off-treatment criteria (Figure 3), and Statistical criteria (Figure 4). After completing these steps, the participants run the simulation in the Multiple Patient Simulator (Figure 5). Users can then return to the wizard to make further modifications to the design and rerun the trial.

Figure 5: Modified Multiple Patient Simulator (MPS) summarizes the results of the simulation and allows users to examine individual patient event histories
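A minimal sketch of this design-run-revise cycle, using the hypothetical PhaseIIDesign sketch above, might look like the following. Here simulate_patient is an assumed stand-in for OncoTCAP's patient simulator; it is taken to return, for one simulated patient, whether the patient responded and the worst toxicity grade observed.

# Hypothetical sketch of running a Phase II design over a sample of simulated patients.
def run_phase_ii_trial(design, simulate_patient):
    responses = 0
    severe_toxicities = 0
    for _ in range(design.sample_size):
        responded, worst_grade = simulate_patient(design)
        responses += int(responded)
        severe_toxicities += int(worst_grade >= design.off_treatment_grade)
    return {
        "responses": responses,
        "severe_toxicities": severe_toxicities,
        "promising": responses >= design.min_responses,   # statistical criterion
    }

# The students' workflow: run the trial, inspect the summary and patient histories,
# return to the wizard to revise the design (e.g., lengthen the schedule), and rerun.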

Procedure

Participants were told that they were meeting to use a computer simulation to help them design a Phase II clinical trial for a new drug called Pittamycin. They were given the preclinical information and the results of a Phase I trial, including the maximum tolerated dose and the kinds of toxicity observed in the Phase I trial. In particular, they were told that there were side effects related to the blood and bone marrow (hematologic, or "hem" as the students referred to it) and to the brain and nervous system (neurologic or "neuro"). The participants worked in groups of four and designed Phase II trials. The second author facilitated the session by 1) helping participants with interface problems, 2) asking them to justify their changes, and 3) encouraging them to reflect on what they learned from the experience. One student typed at the keyboard and used the mouse based on the group's consensus. The computer screen was projected onto a large whiteboard at the front of the room. Student discussions were audio- and videotaped and subsequently transcribed.

Coding and analyses

For this paper, the transcript of one student group was analyzed qualitatively to examine how the software facilitated the students' learning and understanding. Discussion protocols have been selected to exemplify these issues. The initial representation of the problem was compared to the software-supported problem representation. In addition, the construction of joint understanding among the students was identified. Finally, we examined the role that the multiple representations provided by the software played in the students' understanding.

Results and Discussion

This student group worked for three hours. They designed and ran a total of 14 experiments using OncoTCAP. The qualitative analyses revealed how the students' understanding of the problem space changed as they used the wizard. The wizard not only enabled them to better understand the nature of the task but also served as a context for interpretation of results.

Initial problem representation

The initial problem representation was captured in the students' summary of the trial they had designed on paper prior to the computer-based clinical trial design task. Their initial plan included patient selection criteria, dosage levels, and schedule. They initially proposed conducting the trial with two groups of patients, each receiving a dose of 60 mg/m² of the drug per week but for different time periods. While this suggests a general knowledge of experimental design, they neglected domain-specific issues in designing clinical trials such as dose-modification rules. Moreover, they did not have a complete representation of the goal structure for this task. The students' initial problem representation was reflected in their plans and goals.

J: Well, what we are going to actually do is... Let's see…we were going to try to recruit fifty patients, which were age ranges…I think we chose, 35, well 40 to 70. Age ranges 40 through 70 all of them being, all of them being, you know, Stage 3-B or 4 who have had ah, were either only candidates for palliative therapy or who had failed previous, previous therapy.
J: And ah, we are going to start them on ah, pittamycin…, we are going to split them into two groups.
J: The first group ah, we are going to, start them at the 60 mg per meter squared dose ah, which we found on the Stage I trial, as the MTD. And we are going to do that at a dosing schedule of 60 mg per meter squared IV once per week.
J: And continue that for 4 weeks. That's for the 1st group. The 2nd group is pretty much identical to that except for we are going to extend the uhm treatment period to 8 weeks.
F (Facilitator): Why did you decide to have two arms here?
C: We want to, I think we wanted to see…we are interested in the ah, if we could have a difference in therapeutic effect if we had an extended, extended, treatment period.

So here, the students were clearly representing the problem as one of maximizing the therapeutic effect or patient response to the drug. They did not initially consider the second goal of avoiding potential toxic side effects. What the students were clearly lacking was knowledge of a definite structure of the task.

Structure of the task

The wizard screens helped to structure the task for the students and promoted understanding of the "minimize toxicity" goal. Since there were several variables involved in the task, the wizard was designed to break down the components into screens with smaller semantically related chunks of information. Breaking up the task into semantic chunks made it easier for the students to understand the variables involved and to focus on specific design components (Lajoie, 1993; Reusser, 1993). The students had no difficulty entering values for dosage and schedule as they had included these variables in their initial design. However, when asked to input dose-modification rules, the students realized that they had not considered toxicity issues at all. The next segment exemplifies this.

F: O.K, you can go onto the next screen… this is dose modification due to toxicity. So you can reduce the dosage of the patients that experience different kinds of toxicity. That might be something that you may wanna...you didn't write anything about that in your designs, so.
C: No, we didn't.
J: So far we didn't. Oh O.K.
F: That might be something that you may wanna, come back to afterwards. So if you have particular kinds of toxicities at particular grades you can specify whether you want to reduce the doses or how long.
C: We had it vaguely in our program...if their toxicity treatment were different.

In the students' initial plan, there were vague references to monitoring patients but no mention of actions to be taken if adverse effects were observed. On the next screen, students were expected to specify the circumstances under which treatment would be discontinued. What is intriguing is that, although the students did not mention toxicity criteria in their initial plan, when faced with the appropriate screen on the wizard they engaged in a long conversation to determine how the variables were defined and what they meant. This suggests that the students did not always know which knowledge to access in complex problem-solving situations but that they could do so when the task was appropriately structured. In this case, the simulation input fields served as prompts. In the next example, students used information provided by the wizard as a cue to look up the effects of hematologic and neurologic toxicity so they could make a decision about when they would reduce the drug dosage.

S: O.K, so this is heme neuro stuff so we had to in order to. It says neutropenia thrombocytopenia. So we have to look ah at the first box, white blood cells for instance and platelets are going to be both important.

All participants were looking at the dose modification screen when S mentioned the "first box" and proceeded to understand the distinction between different toxicity grades. They continued their discussion to understand effects of toxicity types and grades by examining the toxicity criteria and the slots to be filled on the dose modification screen. They realized that the decisions were not easy to make without adequate prior knowledge as they struggled to understand the distinction between neurologic toxicity grades 2 and 3.

S: Oh man. You've got to know about this. So if we get a neuro…
C: For pittamycin we're worrying about neuro and heme right?
J: Yeah.
S: Yeah.
C: O.K. I mean, with, with, three sounds good. I mean if it looks…
S: Now look at, look at neuro visions. Three is symptomatic subtotal loss of vision where it means it… …
S: …no they don't have a one and two for neuro visions. Woo! It's like either three is the first thing that you see is bad.
C: Right.
S: And then four, total blindness is worse but there's nothing for two so maybe three is the right cutoff. I don't know, it seems to make sense. …
S: The difference between two and three is big.

The students were now in the process of constructing a joint problem space, understanding the factors involved in the trial design process, and getting a clearer idea of the task goals while making a decision about when to reduce the drug dose. Finally, the students were directed to specify statistical criteria for selecting or rejecting the drug. Though this was not specified in their initial design, they made use of analogies to other trials they had learned about:

S: Or you can use the numbers from the Vitamin E trial to process it. Use historically the numbers.

Shared problem representation

Part of constructing a Joint Problem Space involves creating a shared problem representation. The software supported this in two ways: by making the experimental variables explicit and by making the dual goal structure explicit. While the structure of the screens in the software served to simplify the task for the students, it also exposed them to all of the factors in the experiment that required consideration. In spite of this, the students were not in a position to construct a joint problem space or a shared problem representation until they saw the results from the first trial. The results of the simulation provided information about responses, toxicity, the number of treatments each patient received, recurrence of the tumor, and so on. The students looked through the patient histories to figure out what the effects of the drug were on individual patients as well as the overall efficacy of the drug. In the following discussion, the students jointly interpreted the "Results" screen showing an individual patient history in which the cancer spread to the liver and the patient then developed toxicity.

J: See this is this, this is patient one O.K. So, this is what happened. Treatment on,...
S: Oh I can see it with the ...this, this person has a liver met. That's what it says.
F: Right.
C: Liver...
S: And then they have a heme...hematologic toxicity Grade 1...Grade 0.

These histories provided a context for students to better understand the implications of the MPS summary statistics; thus, the different representations supported overall understanding.

Multiple representations of feedback and their interpretation

Students received dynamic feedback through multiple displays. They could view survival plots for the patient sample as well as graphic representations of individual patients with the single patient simulator. The single patient simulator depicted phases when treatments were given and their effects on the tumor cell count. By observing these graphs and the tables provided, the students were able to jointly construct knowledge about the task.

S: Oh man, … We're not giving them enough... . Maybe they need more treatments.
J: If we give them more. If we give them...
S: So we do the next study twice as long? …
J: We could even give them more treatments per week or we can give them a longer, longer trial. …
S: Let's see if … we can learn anything more about toxicity before we press on.
F: Uhm, the K-M plot is a Kaplan-Meier Survival Plot.
C: Kaplan-Meier. Yeah.

After perusing the graphs and gaining a basic understanding, one student took it upon himself to review what they had learned so far. This is a good summary of the understanding constructed by the group, in terms of understanding the trial design factors, interpreting the graphs, and evaluating toxicity issues.

S: We were reviewing everything and another part of the trial which we had to review ah...We had to determine what kind of toxicities we're going to be worried about...basically. And what grade... here is like a list of each, like, kind of toxicity and grade and remember hem and neuro toxicity were important also. O.K. so that's like platelets, white blood cells... This person was at Grade 2 hematologic toxicity, see that? So that means they could have had platelets of 75, or no, 50. We decided if they get a Grade 3 or greater of any kind of toxicity then they don't need anymore drugs. But nobody got that bad toxicity because we only gave the drug four times.

The student explained how toxicity criteria were established. The student inferred that nobody died from toxicity due to low dosage of the drug. The student then suggested looking at the cell count in another patient to understand the drug's effects.

S: … Let's keep going and see what kind of toxicities we can find in other patients. So this, that person had a neurologic toxicity, see there. Neurologic 1 is when they got toxicity and neurologic 0 is when it resolved. This is them over time in months. The tumors start growing at 0 month so that at ninety-nine point seven eight (99.78) months is when they die. Now show them the ah, the ah, the little growth of the tumor thing again…You're gonna plot the ah, plot the tumor growth ending in that curve...so if you're...
S: So here. See the blue is the tumor...the primary.
P: Uh-huh.
S: When it goes down to nothing is when it's time a surgery occurs, right?

The students used the cell count display to collaboratively construct an understanding of a complete response.

S: When they take that little dip that's when we gave the drug. You can see the tiny, tiny effect of less than one log effect of total tumor cells that we're burning that our treatment gave and then... At 10^12 the patient dies and at 10^9 that's the clinically observable realm, understand, what I mean?
C: O.K. so what if that red line would have dropped below 10^9 or at 10^9 that would have been perceived as a complete response.
S: At 10^9 is a diagnosis.
P: It's a complete response.
C: Yeah. O.K.

This segment provides evidence that the group had jointly constructed an understanding of the task and had exerted considerable effort in representation and interpretation. The students were clear about the goals related to toxicity as well as about how dose modifications could affect patient responses. Moreover, they developed an enhanced understanding of the effect of treatment and of what a complete response meant in terms of cell count. Equipped with an understanding of the task and the software, the students used this knowledge to design subsequent trials. The first two trials took them as much time as all of the remaining 12.
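The cell-count thresholds the students were reasoning about can be made explicit with a small worked example (thresholds as discussed in the transcript above; the code is a hedged illustration, not part of OncoTCAP): roughly 10^12 cells is treated as a lethal tumor burden, 10^9 as the limit of clinical detectability, and a complete response means the tumor falls below that detectable limit.

import math

LETHAL_BURDEN = 1e12        # cell count at which the simulated patient dies
DETECTION_LIMIT = 1e9       # clinically observable threshold

def classify(cell_count):
    if cell_count >= LETHAL_BURDEN:
        return "death"
    if cell_count < DETECTION_LIMIT:
        return "complete response (below detectable disease)"
    return "measurable disease"

# A "less than one log" kill, as the students observed, barely moves the count:
before = 1e11                       # 100 billion cells
after = before * 10 ** -0.5         # half a log of cell kill
print(math.log10(before), math.log10(after), classify(after))
# prints: 11.0 10.5 measurable disease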

Conclusion

The students' initial representation of the clinical trial design task was incomplete and fragmented. Their understanding of the task goals was limited, as they were only trying to maximize clinical responses to the drug. The OncoTCAP wizard reminded the students of the important factors they needed to consider in designing clinical trials by presenting the information in semantically structured chunks, making it possible for them to construct a shared understanding of the task.

Although the wizard screens made explicit the various factors to be included in the design, it was not until the trial was run on the simulator that the students realized the role and effects of these factors. The software presented feedback in multiple representations: a multiple patient simulator that gave a summary of the trial and event histories for individual patients, a graphical representation of the effect of treatments on cancer cells, and the survival plot for each trial.

The software helped the students reorganize their thinking, moving from an initially sparse representation to a greater conceptual understanding. Despite this sparse initial representation, the students arrived at an adequate solution by the end of the 14th trial. They did not have an extensive knowledge base and therefore spent a great deal of time learning about the nature of the Phase II trial design process as well as about the characteristics of the drug. The software clearly provided an engaging context for the collaborative learning process. At the end of the session, each student acknowledged having learned a great deal from this task, especially with regard to the variables involved, toxicity issues, and statistical criteria. The software provided a shared context for students to reflect on what they had learned from the simulation. For example:

J: Well, we got to look at...look at each patient and how they progressed through the therapy. You know I didn't expect to be you know looking at when they had their toxicities in relation to the treatments and.... Ah, that was probably...you know...that one complicated variable and probably others that I didn't expect to have to deal with...you know I was just picking...you know... Did they have a reduction of tumor and that's it?

By distributing their learning among the technology and the collaborative group, these students were able to manage an extremely complex task while gaining a greater conceptual understanding of scientific inquiry in medicine.

Bibliography

Baker, L.M., & Dunbar, K. (1996). Constraints of the experimental design process in real-world science. In G. Cottrell (Ed.), Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society (pp. 154-159). Mahwah NJ: Erlbaum.

Day, R., Shirey, W., Ramakrishnan, S., & Huang, Q. (1998). Tumor biology modeling workbench for prospectively evaluating cancer treatments. Presented at the 2nd IMACS International Multiconference: CESA'98, Tunisia.

de Jong, T., & van Joolingen, W. R. (1998). Scientific discovery learning with a computer: Simulations of conceptual domains. Review of Educational Research, 68(2), 179-201.

Dunbar, K. (1995). How scientists really reason: Scientific reasoning in real-life laboratories. In R.J. Sternberg & J. Davidson (Eds.), Mechanisms of insight. Cambridge MA: MIT Press.

Hmelo, C.E., Ramakrishnan, S., Day, R., Shirey, W., Huang, Q., & Baar, J. (1998). Developing inquiry skills through scaffolded use of a simulation. In A. Bruckman, M. Guzdial, & J.L. Kolodner (Eds.), Proceedings of ICLS 98 (pp. 145-151). Charlottesville, VA: AACE.

Hmelo, C.E., & Guzdial, M. (1996). Of black boxes and glass boxes: Scaffolding for doing and learning. In D. Edelson & E. Domeshek (Eds.), Proceedings of ICLS 96 (pp. 128-134). Charlottesville, VA: AACE.

Kolodner, J. L. (1993). Case-Based reasoning. San Mateo CA: Morgan Kaufmann.

Koschmann, T., Kelson, A.C., Feltovich, P.J., & Barrows, H.S. (1996). Computer-supported problem-based learning: A principled approach to the use of computers in collaborative learning. In T. Koschmann (Ed.), CSCL: Theory and practice of an emerging paradigm (pp. 83-124). Mahwah, NJ: Erlbaum.

Kozma, R.B. (1991). Learning with media. Review of Educational Research, 61(2), 179-211.

Lajoie, S. (1993). Computer environments as cognitive tools for enhancing learning. In Lajoie, S.P., & Derry, S.J. (Eds.), Computers as cognitive tools (pp. 261-288). Mahwah, NJ: Erlbaum.

Pea, R. (1993). Practices of distributed intelligence and designs for education. In G. Salomon (Ed.), Distributed cognitions. New York: Cambridge University Press.

Ramakrishnan, S., Hmelo, C., Day, R., Shirey, W., & Huang, Q. (1998). The integration of a novice user interface into a professional modeling tool. In Proceedings of the 1998 AMIA conference.

Reusser, K. (1993). Tutoring systems and pedagogical theory: Representational tools for understanding, planning, and reflection in problem solving. In Lajoie, S.P., & Derry, S.J. (Eds.), Computers as cognitive tools (pp. 143-177). Mahwah, NJ: Erlbaum.

Salomon, G. (1993). On the nature of pedagogic computer tools: The case of the Writing Partner. In Lajoie, S.P., & Derry, S.J. (Eds.), Computers as cognitive tools (pp. 179-196). Mahwah, NJ: Erlbaum.

Simon, R. (1993). Design and conduct of clinical trials. In V. DeVita, S. Hellman, & S. Rosenberg (Eds.), Cancer: Principles and Practice of Oncology (pp. 418-439).

Teasley, S.D., & Roschelle, J. (1993). Constructing a joint problem space: The computer as a tool for sharing knowledge. In Lajoie, S.P., & Derry, S.J. (Eds.), Computers as cognitive tools (pp. 229-260). Mahwah, NJ: Erlbaum.

Authors' addresses

Anandi Nagarajan (annagara@eden.rutgers.edu)
Cindy E Hmelo (chmelo@rci.rutgers.edu)
Department of Educational Psychology; Graduate School of Education; 10 Seminary Place; New Brunswick, NJ 08901. Tel. (732) 940-8674. Fax. (732)932-1157.
Roger S. Day (day@zydeco.pci.upmc.edu)
Pittsburgh Cancer Institute, University of Pittsburgh Medical Center; 3600 Forbes Avenue; Pittsburgh, PA 15213. Tel. (412) 647-8008.
