Proposal to the McDonnell Foundation

"Allowing Learners to be Articulate:

Incorporating Automated Text Evaluation into Collaborative Software Environments"

a joint project of the Institute for Cognitive Science and the Center for LifeLong Learning and Design at the University of Colorado

 

1. Abstract

We have been developing software environments that allow teachers or students to build educational simulations to foster collaborative learning. In particular, our WebQuest adventure games motivate players to explore subject matter topics on the World Wide Web as part of classroom research projects. Our goal is to explore how "edutainment" software (like Where in the World is Carmen Sandiego?) can support the construction of personal knowledge and the articulate self-expression of learners. In this effort, we have confronted a problem that is quite pervasive in educational software: the challenges posed to game players by games like WebQuest are currently restricted to questions having well-defined factual answers that can be checked by the software. In order to promote and evaluate the construction of deeper knowledge, the software needs to be able to make computational judgments about the content of unrestricted essays that the students write.

A new mathematical technique being developed as part of a cognitive theory of text comprehension, latent semantic analysis (LSA), promises to provide the necessary computational ability. LSA computes the semantic relations within a corpus of literature on a given subject matter and then uses this information to judge the semantic similarities among submitted written responses. Although LSA has been found to be almost as reliable as human readers in several laboratory tests, it has yet to be applied in classroom settings. The proposed project will incorporate LSA in a variety of ways within our educational software in order to explore a range of theoretical issues related to how computer-based media can help students learn.

We will develop several of our current software prototypes (WebQuest, Remote Explorium, Teacher’s Curriculum Assistant) further by extending them with LSA mechanisms and by working with teachers and students in the classroom. Development will be guided by cognitive theory concerning text comprehension, research techniques for educational software, and evaluation of various applications of our software in educational practice. The software will be extended to allow students to design and create their own games for fellow students to play. Both questions and answers will be in text format, evaluated automatically by the software using LSA. Classes can select themes, create multiple games incorporating summaries of group knowledge, critique the games, and share the games with other schools over the Internet. Ultimately, LSA can be used to match the most appropriate versions of games or information sources on Web sites to different classrooms or to individual students by evaluating the students’ written products and comparing them to alternative sources of background information.

The project goal is to explore computer-based tools for supporting the collaborative construction of knowledge in classrooms and the articulate self-expression of individual learners without over-burdening teachers. Automated text evaluation mechanisms will be investigated to allow fact-centered questions to be replaced with open-ended, question-answer interactions, without requiring continuous teacher intervention. More generally, the project will address how software environments can help students to learn in an information-intensive, technologically mediated world by matching individual competencies to appropriate resources.

2. Instructional Problem

The Center for LifeLong Learning and Design at the University of Colorado has been working with classrooms and teachers in the Boulder Valley School District to conduct research in educational software. Specifically, our WebQuest software presents students with an adventure game that teaches students research skills involving the Internet. (See Figure 1.) Each time a student confronts an obstacle in the game, the student must answer questions using information found on the World Wide Web (WWW or Web).

Students are enthusiastic about playing the game and surfing the Web. Although WebQuest was just recognized as the "best innovative application of the WWW for education" at the international WWW5 conference in Paris, we think we can make it into a much more effective classroom tool. Judged against constructivist theories of learning, we recognize several major pedagogical weaknesses in our current approach. These weaknesses are endemic to the computer game approach to education, in which one tries to embed learning opportunities within a motivational game context:

· The questions posed require multiple-choice or keyword answers, not the articulation of deeper reflection.

· The investigation of information is guided by an externally imposed game framework, rather than being student-centered.

· The acquired knowledge is not tuned to the background knowledge and capabilities of the student.

· The learning process is not social and interactional.

As a first step in overcoming these weaknesses, we have begun to experiment with having students actually author (i.e., design and program) adventure games for their classmates to play. This makes for a much more intense learning-by-teaching experience; it opens up exciting new possibilities for interactions in the classroom. However, the bottleneck of multiple-choice or keyword answers remains. A student authoring a game must reduce any knowledge about a topic to a few atomic facts, which students playing the game must match literally. We want to allow learners to be more articulate than this.

Multiple-choice questions and keyword answers have long been the pragmatic fallback in education. Teachers simply do not have the time to read and understand answers to open-ended questions for every test and quiz. Multiple-choice questions have been used for standardized tests because of technical limitations of machine processing of answer sheets. We know how stultifying this restriction to keyword answers has been. It forms a major barrier to moving classroom emphases from the memorization of atomic facts and isolated terms to the construction of deeper understanding and fuller self-expression.

The constructivist alternative to multiple-choice questions has proven untenable to date because of the burden it places on teachers. Within the context of an NSF-funded research project focused on learning on demand (student-centered and task-centered), we found that self-directed, authentic learning activities require substantially more teacher resources than are normally available in K-12 or university classrooms. Teachers must evaluate written reports and portfolios on topics that may be relatively new to the teachers themselves. To be most effective, feedback in response to student attempts at articulating their growing knowledge must be timely. In the context of educational games, the situation is even more extreme: evaluation of answers must be immediate to avoid interrupting the motivational game context.

If educational software could adequately process unrestricted text, then it could provide a medium for students to construct and communicate higher-order understandings of subject matter without placing an impossible burden on teachers. For instance, if WebQuest could automatically evaluate unrestricted text, then authors of new games could define obstacle problems using short essays, and students playing the game could enter brief texts that would be compared with the problem essay. In this way, everyone could express their own understanding in their own terms.

The ramifications of evaluating unrestricted text by educational software are far-reaching. Ultimately, this capability would allow textual presentations of topics to be selected based upon students' background knowledge. For instance, an individual student or a classroom of students could be evaluated by software that analyzes their sample writings. When the software then presents WWW sites for the student to explore, it could select sites whose text is at an appropriate reading level. As the use of such software becomes prevalent, WWW sites, WebQuest games, and other educational resources could be structured to provide versions of texts at different reading levels. In the "articulate classroom" that we envision, students would express their ideas in writing, producing portfolios of text that the software could evaluate to form a model of the students’ levels of understanding. This would provide a valuable tool for the teacher to use in guiding students.

The fact that software like WebQuest is currently restricted to multiple-choice questions illustrates a significant and widespread problem in education: how to evaluate, score, classify, and otherwise process unrestricted text automatically, without laborious efforts by highly qualified but over-burdened professionals, such as teachers. Adding a free-text capability to WebQuest could increase the educational value of the software, in that information of greater complexity could be searched for, and the students’ answers would not have to be as narrowly constrained.

We believe that full natural language understanding by computers is not necessary to remove the bottleneck. Certain computationally feasible analyses of text may be sufficient to meet the needs of software like WebQuest for processing essay answers. The proposed research would permit us to explore this possibility, further developing a promising text analysis technique and extending our educational software to overcome its current weaknesses. Moreover, the project would allow us to test and refine our laboratory-based theories of text comprehension within the context of classroom practice.

Specifically, we propose to investigate a new technique of text evaluation known as latent semantic analysis (LSA). We anticipate that LSA can provide a fully automatic computer technique for assessing the content of a text by comparing it with other texts, such as books, articles, essays written by students, single sentences or phrases, even single words. The technique has its limitations and is still being developed. Furthermore, we have only begun to explore its implications, both for psychological theories of meaning and for educational applications. Nevertheless, our work has progressed enough to show that further research along these lines is worthwhile and, indeed, highly promising.

The general cognitive issue that we want to focus on with the proposed project is the question of what it means to acquire subject matter knowledge using tomorrow’s technologies of large information bases and efficient search methods. The ability of students to benefit from external information sources both relies upon a level of internally assimilated background knowledge and simultaneously transforms the motivations for acquiring and internalizing such knowledge. What content do students have to know for successful searching? Will they learn if they know that they can always easily find answers by searching? How do these factors combine to produce intellectual competence and motivation?

We will explore these issues through a series of five educational interventions in K-12 classrooms:

1. Having students play WebQuest games that have been authored by us or by the teachers.

2. Having students author their own WebQuest games for their peers to play.

3. Enhancing the use of factual questions and keyword answers in WebQuest with open-ended questions and essay answers, evaluated using automated LSA mechanisms.

4. Allowing students to share WebQuest games and game components across the Internet using Remote Explorium software that we have developed.

5. Supporting the creation and sharing of theme-centered sets of WebQuest games and related curricular resources using Teacher’s Curriculum Assistant software that we have prototyped.

These interventions and the evaluation of their effects will be described in Sections 4 and 5, following a discussion of the potential of LSA.

3. Cognitive Research

The proposed work is based on constructivist and collaborative theories of learning, broadly defined. The psychological background most relevant to the proposal is the construction-integration theory of text comprehension (Kintsch, 1994) and the latent semantic analysis theory of knowledge acquisition and knowledge representation (Landauer & Dumais, in press). This theoretical framework is complementary to the cognitive theories guiding our design of computer support for learning: breakdown / repair (Fischer, 1994) and situated interpretation (Stahl, 1993). For the most part this background is relatively widely known; since the proposed project centers on the application of a technique that is less well known, we will focus on explaining latent semantic analysis in this section.

What is LSA?

Latent semantic analysis (LSA) is a mathematical and statistical technique for extracting and representing the similarity of meaning of words and passages by analysis of large bodies of text. LSA uses singular value decomposition, a form of factor analysis, to condense a very large matrix of word-by-context data into a much smaller, though still large, representation of typically 100-350 dimensions (Berry, Dumais & O’Brien, 1995; Deerwester, Dumais, Furnas, Landauer & Harshman, 1990). The right number of dimensions has been found to be crucial; with the best values, which can easily be optimized for a domain, LSA simulates human meaning judgments up to four times as accurately as ordinary co-occurrence measures.
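
To make the computation concrete, the sketch below (our own illustration in Python with NumPy; the toy corpus, variable names, and choice of k are not part of the proposal) builds a small word-by-passage count matrix, reduces it with a truncated singular value decomposition, and compares passages by the cosine of their reduced vectors. A real application would typically weight the counts and use thousands of passages with 100-350 dimensions.

```python
# A minimal illustration of latent semantic analysis (LSA):
# count words per passage, apply a truncated SVD, and compare
# passages by cosine similarity in the reduced space.
import numpy as np

passages = [
    "the moons of jupiter orbit the giant planet",
    "jupiter has a strong gravitational field",
    "knights in the middle ages lived in castles",
]

# Build the word-by-passage count matrix.
vocab = sorted({w for p in passages for w in p.split()})
word_index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(passages)))
for j, p in enumerate(passages):
    for w in p.split():
        counts[word_index[w], j] += 1

# Truncated SVD: keep only k dimensions (real corpora use 100-350).
k = 2
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
passage_vectors = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dimensional vector per passage

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two Jupiter passages should be closer to each other than to the knights passage.
print(cosine(passage_vectors[0], passage_vectors[1]))
print(cosine(passage_vectors[0], passage_vectors[2]))
```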

The promise of LSA

Several sources of evidence show that LSA validly reflects human knowledge of word meaning and human interpretations of terms in text passages:

· After training on about 2,000 pages of English text, LSA scored as well as average test-takers on the synonym portion of TOEFL (the ETS Test of English as a Foreign Language).

· After training on an introductory psychology textbook, LSA equaled students' scores on a multiple-choice hour exam.

· LSA significantly improves automatic information retrieval in general by allowing user requests to find relevant text on a desired topic even when the text contains none of the words used in the query.

· The semantic similarity of successive sentences as measured by LSA mirrored manipulated variations in coherence in expository texts and accurately predicted their comprehensibility (Foltz, Kintsch and Landauer, 1994).

· Simple averages (centroids) of the vectors for the words contained in sentences significantly predicted the semantic priming, by those sentences, of words judged to be related to the sentences’ overall meaning (Landauer and Dumais, in press).

· Pilot studies have found promising results of using LSA (a) to predict which of a set of brief texts an individual student will learn most from depending on the student’s prior knowledge as expressed in a short essay (research in progress), and (b) to evaluate the content of essays based on their LSA resemblance to text studied by the student or to pre-scored essays written by other students (Foltz, 1996, and research in progress).

Potential uses of LSA for learning and teaching

In what follows, we mention a number of examples of potential educational applications that appear worth pursuing eventually. Since we are proposing to develop a tool, it is important to form some idea about the possible range of uses for this tool. Of course, we can investigate only certain of these uses in the present project, as we shall describe in Section 4.

We believe that LSA can eventually provide the basis for a spectrum of effective new tools for facilitating and enhancing exploratory, project-based and collaborative learning, and we mostly describe such potential applications. However, we believe that most of the methods could also be applied in conjunction with other educational styles and methods, including computer-based tutoring, independent study, and traditional classroom instruction. In all cases, the goal of the new tools is not to supplant other methods, but to augment and amplify their benefits to learners and to help educators produce more and better learning with the same educator effort.

1. Finding optimal text for learning. Since actually finding relevant sources in a large information base such as a library or the WWW is very difficult, the teacher traditionally provides a closed set of resources containing the necessary information. Furthermore, the problem is not merely to find resources relevant to a topic, but to find ones comprehensible to a learner with particular background knowledge. LSA may be able to enhance a teacher’s ability to automatically match educational resources to individual students.

LSA is not only capable of selecting topic-relevant materials; it is also able to match individual levels of prior knowledge and terminological sophistication. A research project directed by Landauer & Kintsch and funded by DARPA has shown that LSA can be used to choose, from a set of texts on a particular topic, the one text from which an individual student will learn the most. The underlying principle is a notion adapted from Vygotsky (1968), "zones of proximal learning" (Kintsch, 1994). A student learns most from text that, on the basis of prior knowledge, is understood with moderate effort and contains just the right amount of new information. Students are first asked to write short essays on a topic; the LSA centroids of their essays are then compared with those of texts on the same topic but at varying levels of sophistication. Wolfe, Schreiner, Rehder, Landauer & Kintsch (in preparation), using texts about heart function, have shown that LSA-based choice of a text for an individual can result in about 50% more learning than random choice of text.
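
A minimal sketch of such a selection rule follows, assuming each essay and candidate text has already been mapped to a vector in a common LSA space (as in the earlier sketch). The target similarity of 0.6 is purely illustrative and is not a value taken from the studies cited; in practice the criterion would be calibrated against learning-outcome data.

```python
# Hypothetical selection rule in the spirit of "zones of proximal learning":
# prefer the candidate text whose LSA similarity to the student's essay is
# moderate -- related enough to be comprehensible, distant enough to be new.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def choose_text(student_vector, candidate_vectors, target=0.6):
    """Return the index of the candidate text whose similarity to the
    student's essay is closest to the (illustrative) target value."""
    sims = [cosine(student_vector, c) for c in candidate_vectors]
    best = int(np.argmin([abs(s - target) for s in sims]))
    return best, sims
```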

2. Coherence and comprehensibility measurement. LSA can be used to automatically measure text coherence and comprehensibility, important aspects of thinking and its written expression. Automatic evaluations could be incorporated as a component of a computer tutor in most subjects, or could directly serve as an aid for independent learners, much like current spelling, grammar, or style checkers.
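
A minimal sketch of such a coherence measure, assuming each sentence of the text has already been represented as a vector in an LSA space (the function name and details are our own illustration):

```python
import numpy as np

def coherence(sentence_vectors):
    """Mean cosine similarity between each sentence and the next one.

    sentence_vectors: list of LSA vectors, one per sentence, in text order.
    Higher values indicate a more smoothly connected (coherent) text.
    """
    sims = []
    for a, b in zip(sentence_vectors, sentence_vectors[1:]):
        sims.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
    return sum(sims) / len(sims) if sims else 0.0
```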

3. Connecting students with each other and with relevant experts. LSA could also be used to match a particular student more effectively with other people who have similar interests, for conversation, collaboration, or consultation. Students could either post messages on the Internet or leave statements of interest with characters in a WebQuest game. A computer-based agent would collect them, make LSA comparisons to match areas of interest or levels of knowledge, and then pass on recommendations of people to get together with or automatically initiate interactions.

Potential uses of LSA for educational assessment

LSA can provide automatic ways to objectively evaluate written products and to generate content-customized, objective test items. It appears that it may be feasible to automatically measure, at least approximately, the following:

· The quality and relevance of individual written contributions to group activities.

· How much a student has learned from the materials that only she or he has read.

· How an individual’s contributions relate to the ongoing process and to the final (textual) product of some kinds of collaborative group activities.

The point of such methods would not be to supplant the professional assessment skills and judgment of teachers. Rather, in the face of the virtual impossibility of a teacher devising and grading equivalent tests for each student when each has studied a different, unanticipated subject matter, the intent would be to supplement and contribute to teacher judgment of overall achievement. To this end, LSA would be used by the teacher to produce and score a battery of brief assessment instruments individually targeted (a) to the idiosyncratic knowledge being acquired by each group of project participants; and (b) to the different knowledge sources encountered and activities engaged in by each individual.

1. Automatic writing assessment. Evidence that LSA can assess the quantity and quality of learned knowledge contained in a student’s writing has come from several kinds of studies. The most direct have been explorations of using LSA to automatically assign grades to answers on essay exams. Predictions of instructor-assigned grades were quite good (r = .67), about the same as the correlation of .68 between two human graders.

Concretely, the application of LSA to assessing student knowledge and expression in exploratory and project-based learning might proceed as follows. At the beginning of a project, someone (the teacher, a publisher, a curriculum specialist, an independent student) would collect a large and broad training corpus of text relevant to the overall topic, by assembling electronic text either from textbooks and articles or by an Internet search, followed by some culling and editing, and submit it to LSA. Texts found by students would also be included as the project went along. To evaluate a student’s knowledge and project contributions, an LSA-based program would be invoked by the teacher or student. It could be asked to perform one or more possible actions. For example, it might produce an estimate of the relevance to the overall topic of each text by a particular student (as always, with unusual pieces flagged for teacher attention). It might be asked to determine the similarity of a student’s computer-composed summary of research findings (or other communications and contributions to the group effort) to individual or group source material or to the group’s final report. It might be asked to score an answer to an essay question posed by the teacher, who might have devised the question after searching on a special subtopic among one or all students’ discovered sources. Note also that LSA could be used to detect instances of too much overlap with particular source materials, suggesting over-reliance on a select-and-paste strategy in writing.
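
One simple scoring scheme consistent with this description, though not necessarily the exact procedure used in the studies cited, is to estimate a grade from the pre-scored essays most similar to the new essay in the LSA space. The sketch below assumes all essays have already been mapped to LSA vectors; the choice of k and the weighting are illustrative.

```python
import numpy as np

def predict_grade(essay_vector, scored_vectors, scores, k=5):
    """Estimate a grade as the similarity-weighted average of the grades of
    the k pre-scored essays most similar to the new essay in the LSA space."""
    sims = np.array([
        float(essay_vector @ v / (np.linalg.norm(essay_vector) * np.linalg.norm(v)))
        for v in scored_vectors
    ])
    top = np.argsort(sims)[-k:]            # indices of the k most similar essays
    weights = np.clip(sims[top], 0.0, None)
    if weights.sum() == 0:
        return float(np.mean(np.array(scores)[top]))
    return float(weights @ np.array(scores)[top] / weights.sum())
```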

2. Choosing or constructing a summarizing sentence or paragraph. LSA may be able to order, and possibly quantify predictively, the quality of every sentence in a text, and thus score a test item consisting of text on a desired topic from which test-takers are asked to construct a summary sentence.

3. Choosing or producing related concepts. LSA could be used to find related sets of words or phrases from a collection of texts on a topic and to estimate their similarity for concept matching or relating tests.

4. Portfolio assessment. It is conceivable that LSA could be applied usefully as a partial or component scoring technique for text-based portfolio evaluation. One idea would be to use LSA to measure the coherence of student-generated text by measuring the semantic relatedness of successive sentences, as in the experiments mentioned earlier. Another idea would be to use LSA to measure the degree to which the text produced by students reflects the range of content available in the textual resources with which they have been provided or have selected on their own. Coherence in writing, together with topic relevance in comparison with source texts, not only reflects text-based understanding, but can also be taken as an indication of ability to successfully relate ideas, to reason and to transform knowledge.

We have mentioned here a broad range of conceivable applications of LSA to technology-enhanced education only to suggest the potential richness of the approach. In the next sections, we describe our specific goals for the present project.

4. Educational Intervention

We currently plan the following stages of intervention using our software with LSA in the classroom. Because of our commitment to user-centered design and student-centered activities, we will be responsive to what we observe in the classroom and to the suggestions and interests of students and teachers. Thus, the following plan will serve as a guide to help us focus on our research interests rather than as a rigid recipe for our work over the duration of the project:

Stage 1. Students play WebQuest games. We have been exploring this stage in several K-12 classrooms during the past school year. We will continue to work with teachers and students to design new types of games and to use them differently in a variety of classrooms. Games we design can then serve as prototypical models to inspire students to construct their own games. Building different kinds of games also gives us insight into the usability of our software and ideas for new functionality.

Stage 2. Students author their own games for their peers to play. We have just begun to explore this approach and have already found that it makes a great deal of difference. Students not only construct their own knowledge of a topic in order to teach it to peers, they become engaged in a design process to structure the knowledge effectively. We believe that design skills provide important learning capabilities for the information-intensive future. Much of our computer science research has centered on developing computational media to support design, and our educational software takes design as a metaphor for constructivist learning. Thus, we have developed a series of software environments that support learning related to a task at hand (Fischer, Nakakoji, Ostwald, Stahl, Sumner, 1993). When students design games for other students, they engage in authentic, self-motivated tasks, reflect on their own or their peers’ learning processes, participate in important social interactions, and interpret domain concepts from different perspectives (Stahl, 1993). Classrooms in which students play each other’s games become involved in joint construction of knowledge.

Stage 3. Use of open-ended questions. This is where LSA is needed to create "the articulate classroom". At this stage, game authors define answers to scroll questions by writing brief essays. Players then answer the questions with their own brief written responses. LSA mechanisms compare the two texts and judge whether they are sufficiently similar in content. This allows students to express their understandings in their own words. LSA is particularly effective in matching up different ways of saying the same thing using different vocabulary. Questions that required rote recitation of facts like the names of Jupiter’s moons can now be replaced with thought-provoking questions like: What would be the effect of Jupiter's gravity on a space ship that wanted to land on Jupiter? Writing paragraphs on such questions promotes high-level learning processes and develops scholarly communication skills.
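
A minimal sketch of how such a comparison might be wired into the game, assuming the author's model essay and the player's response have already been mapped into the same LSA space; the acceptance threshold and function names are our own illustration, and in practice the cutoff would be tuned against teachers' judgments of acceptable answers.

```python
import numpy as np

ANSWER_THRESHOLD = 0.7   # illustrative cutoff, not an empirically validated value

def check_answer(author_essay_vector, player_answer_vector,
                 threshold=ANSWER_THRESHOLD):
    """Accept the player's written answer if it is semantically close enough
    to the game author's model essay in the LSA space."""
    sim = float(author_essay_vector @ player_answer_vector
                / (np.linalg.norm(author_essay_vector)
                   * np.linalg.norm(player_answer_vector)))
    return sim >= threshold, sim
```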

Stage 4. Students share games across the WWW. Incorporation of supplementary software we are developing (the Remote Explorium and the Teacher’s Curriculum Assistant, described on the following pages) opens up the knowledge-building community to the world. The articulate classroom becomes a global classroom. A student who has developed a game on an esoteric topic can find other students interested in the same topic by distributing the game on the Internet. In this way, WebQuest games will provide yet another communication medium for students on the WWW. The distribution of games also creates a wealth of educational resources for teachers and students to choose from for their group and individual activities. This stage stresses the potential of the Internet to be an active two-way communication medium, rather than just a static repository of information. Students learn to become actors in the scientific community, not merely consumers of external knowledge.

Stage 5. Theme-centered games incorporating written reports. The original WebQuest theme involves knights from the Middle Ages, deriving from the popular Dungeons and Dragons games. But WebQuest is built on a very general simulation construction substrate, so the visual appearance and the definitions of agents can be readily changed. To build a game on a new theme would be a major undertaking for an individual student. Although some students might want to do this for a theme they have already begun to explore, it makes more sense for a classroom to work together on this. The process might be as follows:

· The class selects a theme like the solar system. They begin researching the topic on the WWW to collect interesting WWW sites and stimulating questions.

· Students divide up the tasks of constructing background icons, character depictions, agent interactions. For instance, a WebQuest related to the solar system might include icons of the planets, spaceships, astronauts, cosmic ray dangers, space walk challenges, etc. Ambitious students could even build simulations into their games, including, for instance, graphic demonstrations of the effects of different gravitational forces.

· Individual students or small groups design games incorporating the components of the themes.

· Students play each other’s games and increase their knowledge of the subject matter.

· Students critique each other’s game designs and revise their own games.

· The class gets together to reflect on the experience, to discuss what they learned about the topic and to compile reports on the theme.

· The class shares what they have learned by distributing some of their games on the Remote Explorium. They might construct their own WWW site on the theme, including statements of their ideas and pointers to other sites they discovered.

Stage 6. Versions of questions and information sources for people with different levels of background knowledge. People construct new knowledge by going beyond their previous understanding and then integrating the new insights into their background knowledge (Kintsch, 1994; Fischer, 1994; Stahl, 1993). Therefore, educational information is most effective for an individual when it falls within the person’s zone of proximal learning. LSA allows us to personalize information sources for students by finding texts that most closely match (or slightly exceed) the student’s own writings. Rich digital libraries can provide selections of alternative presentations on any given topic. For instance, the Remote Explorium could eventually contain many versions of solar system games. These versions could be rated using LSA and indexed in the database of the Teacher’s Curriculum Assistant so that teachers and students could select the most appropriate versions. In addition to selecting entire games, people could find game components such as background features, character agents and question narratives. The Visual AgenTalk programming language used to define character behaviors is user-extensible, and students could exchange little subroutines in this language that accomplish interesting interactions. LSA databases could also be exchanged across the Internet. That is, one classroom could collect and author texts on a particular subject, then submit them to LSA to create a database of interrelated terms. These databases could be used by other classrooms to evaluate student essays on the given subject, resulting in ratings of the students’ background knowledge and readiness to learn from resources at different levels.
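
As a sketch of what exchanging an LSA database might involve (the file format, function names, and fold-in rule shown here are our own illustration, not a specification): the classroom that built the semantic space publishes its vocabulary and reduced basis, and a receiving classroom projects its students' essays onto that shared basis so they can be compared with vectors computed in the original space.

```python
import numpy as np

def fold_in(new_counts, U_k):
    """Project a new essay's word-count vector (over the shared vocabulary)
    onto the reduced basis U_k of an existing LSA space.  The result lives in
    the same coordinate system as passage vectors computed from that space."""
    return U_k.T @ new_counts

# Illustrative exchange format: the publishing classroom saves the space,
# and a receiving classroom loads it to evaluate its own students' essays.
def save_space(path, vocab, U_k, s_k):        # path should end in ".npz"
    np.savez(path, vocab=np.array(vocab), U_k=U_k, s_k=s_k)

def load_space(path):
    data = np.load(path)
    return list(data["vocab"]), data["U_k"], data["s_k"]
```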

By the sixth stage, which we plan to explore in the second half of the proposed project, teachers and students will have a wealth of resources organized into coherent curricula on interesting themes. The resources will be available in alternative versions suited to people's levels of knowledge, or as constituent components that students can combine in their own constructions. In addition, there will be software environments to support the collaborative construction of knowledge using these resources, including mechanisms for evaluating text and matching it to individual learners automatically. These tools will help teachers in their new roles, freeing them from some of the tedious evaluation of rote tests. They will still have to oversee the progress of students and make sure that LSA ratings stay on track, using this information to judge what kinds of high-level guidance and support to provide. Our research will look at how to make the most effective use of both teachers and software in the classroom.

The software we are planning to test and refine in classroom use in order to support "the articulate, global classroom" consists of the following three component systems currently being developed in our labs:

· WebQuest, a software environment for the design of educational games.

· Remote Explorium, a WWW site with WebQuest games and other educational simulations that can be downloaded by users around the world.

· Teacher’s Curriculum Assistant, a software environment for teachers to locate, evaluate, adapt and share educational resources over the Internet.

WebQuest (Perrone, Clark, Repenning, 1996) is an adventure game development environment we developed to research educational software like Carmen Sandiego. It allows a game author to lay out a graphical scene with fields, paths, lakes, islands, etc. The scene can then be populated with active agents, such as heroes, princesses, dragons, locked doors and buried treasures. (See Figure 2.) Scrolls are defined and associated with game obstacles. The scrolls pose questions that a player must correctly answer to get past a dragon or enter a door. The scrolls may suggest WWW sites to explore to find hints and answers to scroll questions. When a player clicks on a suggested site, the software opens a WWW browser displaying that site. Players can browse the WWW or perform WWW searches using standard search engines.

In a typical WebQuest game, a player might adopt a medieval knight character and be confronted by an anachronistic question like: What are the names of the four largest moons of Jupiter? The student would read a WWW page about the solar system, answer the question and then pursue the dragon. The question might also be one that requires more understanding and research, like: What was the right ascension of Mercury during the signing of the Declaration of Independence? (See Figure 1.)

The WebQuest software has the capability to let students construct original educational games for their fellow students. This creative process allows students to explore information on the WWW in self-directed ways and to embed ideas and facts they discover into game boards that they design. Students learn new information while situated within a context of having to incorporate the new information into the conceptual framework of an educational game they are constructing for their peers. Within a particular classroom, students exchange and play games, learning subject matter that has been organized by their peers and providing feedback to the game creators. Both game players and authors develop research skills using the WWW; they also both reflect on the organization of knowledge and the strategic design of the game artifact.

The authoring capability of WebQuest takes advantage of Agentsheets (Repenning, 1994), the programming substrate that WebQuest is built upon. Agentsheets is a substrate we developed for building educational simulation applications. It allows authors to design the appearance and behavior of their own active agents, as well as to create their own backgrounds with which the agents interact. Agentsheets is programmed by game authors entirely through visual manipulations and requires no traditional programming knowledge. It includes an end-user programming language, Visual AgenTalk (Repenning, 1995), which allows students to define the behaviors of their agents. We have begun testing the Agentsheets and Visual AgenTalk authoring capabilities in the classroom with very positive responses. Students are enthusiastic about tools that empower them to construct their own software environments. The proposed project will allow us to pursue this research and to enhance it with the capability to analyze the process of knowledge building as evidenced by students' question and answer formulation.

The Remote Explorium (Ambach, Perrone, Repenning, 1995; Stahl, Sumner, Repenning, 1995) allows game authors to share their artifacts with students elsewhere across the WWW. Teachers and students can download and adapt entire games or their constituent components from the Explorium. This software was originally developed by us to facilitate the distribution of educational applications written in Agentsheets. As part of the proposed project, we will extend the Remote Explorium to allow students in different classrooms and different schools to share their WebQuest games over the Internet. Currently, researchers at the University of Colorado can put Agentsheets applications on a WWW page for students elsewhere to download easily. To implement our vision of WebQuest as a collaborative learning project, we will have to extend the Remote Explorium to allow students to post their games to the Web, so that the sharing is bi-directional. We also envision people trading components of games, such as graphical depictions of characters, programmed agent behaviors, or collections of narrative questions and answers related to given themes. This allows students to design their own games while taking advantage of components created by other students. However, experience with Remote Explorium to date demonstrates that teachers and students need additional support to take advantage of the distributed resources. We have designed another program to provide just such support.

 

The Teacher’s Curriculum Assistant (Stahl, Sumner, Owen, 1995) retrieves summary information about games in the Explorium and elsewhere on the Internet. It uses this information to help teachers or students locate the game and curriculum examples on the WWW that best match their pedagogical needs. In addition, it provides curriculum ideas and resources to guide the classroom use of the games.

The problems that teachers have using the Remote Explorium are typical of the plight of people trying to obtain educational resources from the Internet generally:

· There are no effective methods for locating relevant curriculum sites, such as WWW pages containing WebQuest games on specific themes.

· It is difficult to search for items of interest; search engines are too generic and indexes to education sites are too idiosyncratic and anecdotal.

· There is no choice of versions for different ability levels, or, if there is, it is not systematically organized.

· There are no software support tools for adapting resources to one’s particular needs.

· There are no aids for organizing selected resources into coherent curriculum plans.

· There are no simple mechanisms for teachers and students to share their experiences by posting comments or new games back to the Internet.

We have prototyped a curriculum development design environment to respond to these problems. The Teacher’s Curriculum Assistant maintains a database of information about on-line educational resources. It uses information in the database through six user-interface components: Profiler, Explorer, Versions, Editor, Planner and Networker. The Profiler defines the user’s needs in order to query the database for relevant resources. The Explorer allows a user to browse among related resources and curriculum ideas. The Versions component explains the differences between different versions of the same resource so that the most appropriate one can be chosen. (See Figure 3.) The Editor is used for adapting resources, e.g., editing a text document. The Planner helps a teacher to arrange resources into a lesson plan and to make adjustments to the plan. Finally, the Networker simplifies Internet access, facilitating the posting of comments and new games as well as handling the downloading of selected resources and the updating of the database. (See Figure 4.)

The Teacher’s Curriculum Assistant was designed based on our philosophy of adapting curriculum and resources to the particular pedagogical needs, learning styles and personal interests of the students and teachers in a classroom. The proposed project will allow us to explore the use of LSA in matching textual materials to the background knowledge of individual students, taking full advantage of the built-in support for multiple versions of resources.
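
As one concrete illustration of how this matching might work (the record fields, names, and ranking rule below are hypothetical, not features of the current Teacher's Curriculum Assistant prototype), each version of a resource could carry an LSA vector for its text, and the Versions component could rank candidate versions by how well that vector fits a sample of the student's own writing.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ResourceVersion:
    """Hypothetical record for one version of an on-line educational resource."""
    title: str
    grade_band: str          # e.g. "6-8"
    url: str
    lsa_vector: np.ndarray   # LSA vector of the version's text

def rank_versions(student_vector, versions):
    """Order candidate versions by LSA similarity to the student's own writing,
    so the version closest to the student's current level appears first."""
    def sim(v):
        return float(student_vector @ v.lsa_vector /
                     (np.linalg.norm(student_vector) * np.linalg.norm(v.lsa_vector)))
    return sorted(versions, key=sim, reverse=True)
```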

5. Experimental Design

We are developing a suite of educational tools in continuous interaction with classroom experience with the tools. We are not designing a finished product to hand to a teacher and then evaluating how it works. Rather, we start with prototypes that have some of the features we think we will eventually want, obtain feedback about their performance, and gradually modify and elaborate our designs. As we have pointed out above, it is not so much the software tools themselves that concern us, but how they can be employed effectively in the classroom.

The current version of WebQuest has been used in Boulder middle schools. We intend to expand this use locally, extending it into high schools. We have close working relationships with a number of teachers in different schools who have used our software in their classrooms and who are eager to try WebQuest. In addition, we have contacted several non-local groups and made plans for possible future cooperation. We would let those groups use our tool in the way they prefer, but we would obtain data on their projects from them and also collect our own data by sending a project member to visit and observe the out-of-town sites at regular intervals.

The kinds of data we plan to collect are both observational and experimental. Observational data will come from teachers, students, and project members who observe classroom use, along with records collected automatically by the computer systems themselves. A project diary will be used to help us organize and preserve these observations and permit their use at later points in time. While much of the observational data will necessarily be informal and opportunistic, we also plan to develop organized observation protocols to ensure comprehensibility and facilitate comparison. The construction of such a protocol would be one of the research goals for the first project year.

We do not plan any large scale classroom evaluation experiments, which would be premature as well as exceeding the resources of the project. Instead, mini-experiments directed at specific questions that arise in the course of this project will be used. At this point, we can sketch only a few obvious first experiments, but these should make it clear how future experimental and evaluation research in this project could proceed:

· Is WebQuest effective as a tool for learning how to search the WWW? Groups of students with varying amounts of experience using the WWW with WebQuest will be compared with equivalent students using the WWW with traditional "how to" instructions. Their success at specified search tasks, as well as their browsing behavior and browsing strategies, will be evaluated. Follow-up questionnaires can be used to assess long-term effects.

· How well does LSA evaluate written student responses? We are hopeful that LSA can make fine enough distinctions to identify plagiarism, a strong temptation when students can easily cut and paste from WWW pages into their essays; a minimal sketch of such a check appears after this list. We will have to determine empirically whether scores of very high similarity between an essay and a resource text indicate the likelihood of literal copying. Throughout the project, specified samples of student responses will be scored both by LSA and by human graders to assess how well LSA evaluates written student responses.

· How effective is LSA in helping students to formulate questions? The precise experiment cannot be outlined at this point, because it depends on just what we come up with in this regard and how the WebQuest components evolve. But eventually formal experimental comparisons can be made not only between our support system and no support, but also between a teacher-led group discussion and the LSA support system. Such studies would be important not so much because they might tell us that LSA is 50% or 70% as effective as a good teacher, but because they might pinpoint differences between the ways a teacher helps and the ways our system can be used. Data like these could be more reliable than informal and fortuitous observations and would help direct the evolution of our system.

· During the use of LSA tools, we can easily and automatically make them available or not to a particular student working on a particular topic or question. By randomizing these assignments and using LSA-based custom evaluation tools, we will be able to do almost continuous objective measurement of the effect of the tools on problem solving and knowledge acquisition. Statistical analysis of the classical randomized within-subject and between-subject differences (both present in this design) will be straightforward.

· How can software environments best be used in the classroom? Constructivist approaches like game creation typically require longer time commitments and more individualized work than traditional school schedules can easily accommodate. Solutions to this problem will be investigated by working with teachers and trying different ways to integrate the use of the software into classroom processes. We will try small group projects, independent student efforts, after-school arrangements, etc., in order to allow motivated students to develop exceptional but time-consuming games. We will explore different ways of sharing work: among groups, between classes, and by having classes build on previous years’ accomplishments. WebQuest will be introduced into a range of schools, from more traditional to more experimental, to see how different solutions can be found in different organizational contexts.

· How can coherence of knowledge be promoted? Without guidance, students authoring WebQuest games will tend to build an unstructured sequence of questions and answers. One of the goals of evaluation will be to examine this possibility and to determine what kinds of constraints can be built into the system so that students construct coherent bodies of knowledge. Although resolving arbitrary relationships between a game situation and questions posed may be an impetus to students' creativity, deeper learning will result if questions build on each other, and motivation may be better sustained if the questions are related in a meaningful way to events in the game. For example, the discovery of an underlying relationship in pieces of topic knowledge encountered might become a goal for progressing through the game. LSA may also be useful here in helping students construct an interrelated network of concepts and ideas from the information collected from multiple sources.

· How can software environments best support learning? In addition to providing challenges and sources of information, software can provide guidance. For instance, the LSA mechanisms can be used to guide students to the most appropriate versions of materials. When software mechanisms determine that a student response is inadequate, they can suggest further sources of information to be consulted. We have used computational critics in many of our other software environments to alert users to relevant information (Fischer, Nakakoji, Ostwald, Stahl, Sumner, 1993), and will try to combine critics with LSA tools in this project. We will observe how effective these techniques are within classroom practice.

· How can the Internet be used as a medium for the collaborative construction of knowledge? In the later years of the project we will investigate the effectiveness of tools like the Remote Explorium and the Teacher’s Curriculum Assistant in turning the WWW into a bi-directional medium in which students contribute knowledge as well as consume it. It is premature to determine how specific functionality of this software will be evaluated.

· Where is the best boundary between what must be known and what can be found when needed? In general, we want to further our understanding of an important conceptual problem that must be dealt with if technology such as that proposed here is to be used effectively in education. The problem is the relation between external memory and internal memory. Without some internalized knowledge, external information sources cannot be used effectively. Over-dependence on external memory may discourage the construction of internal knowledge. Clearly, one cannot teach all the knowledge a person might at some point need. Is there something one must know in order to be able to understand what one looks up, and if so, what is that essential knowledge or skill that we need to teach? To what extent does this involve general knowledge? To what extent is it tied to specific domains? Our Web-based information retrieval capabilities will be excellent in a few years; we need to make sure that our understanding of the conceptual issues concerning the knowing / finding tradeoff keeps up with our technological capabilities.
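
The copying check mentioned in the second experiment above could begin with something as simple as the following sketch; the threshold and names are illustrative, and the actual cutoff would have to be determined empirically, as noted.

```python
import numpy as np

COPY_THRESHOLD = 0.95   # illustrative; an empirical cutoff would be set from data

def flag_possible_copying(essay_vectors, source_vectors, threshold=COPY_THRESHOLD):
    """Return (essay_index, source_index, similarity) triples for passages of a
    student essay that are nearly identical, in LSA terms, to a source passage."""
    flags = []
    for i, e in enumerate(essay_vectors):
        for j, s in enumerate(source_vectors):
            sim = float(e @ s / (np.linalg.norm(e) * np.linalg.norm(s)))
            if sim >= threshold:
                flags.append((i, j, sim))
    return flags
```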

The primary responsibility for evaluation methods will rest with Drs. Tom Landauer and Walter Kintsch, two experimental psychologists with a great deal of experience in research like this. They will also be the LSA experts of the project. The development of the WebQuest system and its integration with Remote Explorium and Teacher’s Curriculum Assistant will be under the direction of Dr. Gerhard Fischer, a computer scientist experienced in software research, in cooperation with Dr. Gerry Stahl and Corrina Perrone. The task of integrating WebQuest into classroom activities will be directed by Dr. Eileen Kintsch in cooperation with David Clark, a Boulder teacher now using WebQuest who is an authority on student use of the Internet (Clark, 1995).

 

6. Budget

                                      Year 1      Year 2      Year 3      Year 4
PIs:
  G. Fischer, 5% AY                   $5,000      $5,200      $5,408      $5,624
  T. Landauer, 10% AY                $10,000     $10,400     $10,816     $11,249
  W. Kintsch, 5% AY                   $5,000      $5,200      $5,408      $5,624
Research Staff:
  E. Kintsch, .5                     $20,000     $20,800     $21,632     $22,497
  G. Stahl, A. Repenning,
    C. Perrone                       $27,500     $28,600     $29,744     $30,934
  GRA, Comp. Sci., .5                $15,000     $15,600     $16,224     $16,873
  GRA, Psychology, .5                $13,500     $14,040     $14,602     $15,186
  Secretary                           $5,000      $5,200      $5,408      $5,624
Total salary & wages                $101,000    $105,040    $109,242    $113,611
Fringe benefits                      $14,853     $15,447     $16,065     $16,708
Total personnel                     $115,853    $120,487    $125,307    $130,319
Equipment:
  SPARCstation upgrade                $5,000          $0          $0          $0
  PCs for schools                     $5,000      $5,000      $5,000          $0
Teacher release time                 $15,000     $15,000     $15,000     $15,000
Travel                                $8,000      $8,000      $8,000      $8,000
Communication                         $5,000      $5,000      $5,000      $5,000
Supplies                              $3,000      $3,000      $3,000      $3,000
Contract key and scan                 $2,000      $2,000      $2,000      $2,000
Tuition reimbursement                $12,000     $12,480     $12,979     $13,498
Computer costs                        $4,500      $4,500      $4,500      $4,500
Consultants                           $5,000      $5,000      $5,000      $5,000
Total other                          $64,500     $59,980     $60,479     $55,998
Total direct                        $180,353    $180,467    $185,786    $186,317
Overhead (10% of personnel)          $11,585     $12,049     $12,531     $13,032
Total                               $191,938    $192,516    $198,316    $199,349

 

7. Bibliography

Ambach, J., Perrone, C., Repenning, A. (1995). Remote Exploratoriums: Combining networking and design environments. Computers and Education. Special Issue on Education and the Internet. 24 (3), 163-176.

Berry, M. W., Dumais, S. T. and O'Brien, G. W. (1995). The computational complexity of alternative updating approaches for an SVD-encoded indexing scheme. In Proceedings of the Seventh SIAM Conference on Parallel Processing for Scientific Computing.

Clark, D. (1995). Student’s Guide to the Internet. Alpha Books.

Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science. 41 (6), 391-407.

Eden, H., Eisenberg, M., Fischer G., Repenning, A. (1996). Domain-oriented design environments: Making learning a part of life. Communications of the ACM. 39, (4), 40-43.

Fischer, G. (1995). Distributed cognition, learning webs and domain-oriented design environments. In Proceedings of the Conference on Computer Supported For Collaborative Learning (CSCL’95). 125-129.

Fischer, G. (1995). Conceptual frameworks and computational environments in support of learning on demand. In DiSessa, A., Hoyles, C., Noss, R. (Eds). (1995). The Design of Computational Media to Support Explanatory Learning. 463-480

Fischer, G. (1994). Turning breakdowns into opportunities for creativity. Knowledge-Based Systems. Special Issue on Creativity and Cognition. 7, 221-232.

Fischer, G. (1991). Supporting learning on demand with design environments. In The International Conference on The Learning Sciences. 165-172

Fischer, G., Lemke, A. (1991). The role of critiquing in cooperative problem solving. ACM Transactions on Information Systems. 123-151.

Fischer, G., Lindstaedt, S., Ostwald, J., Schneider, K., Smith, J. (1996). Informing system design through organizational learning. In Proceedings of the Second International Conference On The Learning Sciences.

Fischer, G., Nakakoji, K., Ostwald, J., Stahl, G., Sumner, T. (1993). Embedding critics in design environments. The Knowledge Engineering Review Journal, Special Issue on Expert Critiquing. 8, (4), 285-307

Fischer, G., Nakakoji, K., Ostwald, J., Stahl, G., Sumner, T. (1993). Embedding computer-based critics in the contexts of design. In Proceedings of InterCHI ’93. Conference on Human Factors in Computing Systems.

Foltz, P. W., Kintsch, W., & Landauer, T. K. (1993, January). An analysis of textual coherence using Latent Semantic Indexing. Paper presented at the meeting of the Winter Text Conference. Jackson, WY.

Kintsch, W. (1994). Text comprehension, memory, and learning. American Psychologist, 49(4), 294-303.

Koschmann, T., Myers, A. C., et al. (1994). Using technology to assist in realizing effective learning and instruction. The Journal of the Learning Sciences. 3 (3), 227-264.

Landauer, T. K., & Dumais, S. T. (in press). A solution to Plato's problem: The Latent Semantic Analysis theory of acquisition, induction and representation of knowledge. Psychological Review.

Perrone, C., Clark, D., Repenning, A. (1996). WebQuest: Substantiating education in edutainment through interactive learning games. In Proceedings of WWW5.

Repenning, A. (1995). Designing domain-oriented visual end user programming environments. Journal of Interactive Learning Environments.

Repenning, A. (1994). Programming substrates to create interactive learning environments. Journal of Interactive Learning Environments. 4 (1) 45-74.

Stahl, G. (1993). Supporting situated cognition. In Proceedings of the Cognitive Science Society: A Multidisciplinary Conference on Cognition. 965-970.

Stahl, G., Sumner, T., Owen, R. (1995). Share globally, adapt locally: Software to create and distribute student-centered curriculum. Computers and Education. Special Issue on Education and the Internet. 24 (3), 237-246.

Stahl, G., Sumner, T., Repenning, A. (1995). Internet repositories for collaborative learning: Supporting both students and teachers. In Proceedings of Computer Support for Collaborative Learning (CSCL’95).

Vygotsky, L. S. (1968). Thought and Language. MIT Press. (Original work published 1934).
