Proposal Information

 Proposal Number

                0106950

 Proposal Title

                ROLE: The Role of Computational Cognitive Artifacts in Collaborative Learning

 Received on

                12/01/00

 Principal Investigator

                Gerry Stahl

 CO-PI(s)

                Robert Craig

                Curtis LeBaron

 Institution

                U of Colorado Boulder

 Program Information

 NSF Division

                DIV OF RESEARCH, EVALUATION AND COMMUNICATION

 Program Name

                RESEARCH ON LEARNING & EDUCATION

Proposal Status

Status as of Today's Date: 04/26/01

This proposal has been declined by NSF. The Principal Investigator has been sent information about the review process and anonymous verbatim copies of reviewers' comments.

==========================================

Panel Summary #1

This proposal suggests a qualitative methodology for making students' thinking explicit and for studying computer-supported collaborative learning, and describes how this methodology would be applied to science education. The research proposes to synthesize and recast several socio-cultural views of learning into a new framework of "computational cognitive artifacts" and to use this methodology to study three different learning environments, each of which has a computational artifact. A positive aspect of this proposal is that CSCL 2002 will be held at the researchers' institution, providing more opportunities for further discussion and insight.

 

There are, however, several concerns with this proposal.

 

While the careful, detailed analysis of learning around the computer using videotape/digital video is an appropriate methodology, the panelists recognize that micro-ethnography is not a novel practice for studying discourse and mediated learning.

 

The researchers are not explicit about how computational artifacts differ from non-computational artifacts, or how this relates to the CCA framework. They do not carefully distinguish among different kinds of computational artifacts, their representations, their manipulable/dynamic forms, and their affordances for supporting science learning.

 

Three distinct learning environments are selected for study, with little rationale for why these environments were chosen. There is no compelling hypothesis being tested. The researchers need to specify the features of the tasks students will be asked to perform in the study and what specifically will be measured in the pre-post test.

 

If design principles are to be derived from the methodology to impact the design of computational cognitive artifacts, then the researchers should be applying design-experiment approaches. Even if principles were derived from this work, the panel was not convinced that they would generalize to other contexts.

 

This work is not linked to the cooperative learning literature or to technology design-experiment methods (Collins, Hawkins, diSessa, Bielaczyc). It seems to draw on traditions from CSCW, in which environments are studied with and without computers, but, like much CSCW research, it fails to address science learning.

 

From a technology standpoint, there is nothing novel about the particular technologies selected, nor is it clear how they would become innovative through this work.

 

The team is interdisciplinary and has a good record of publication at L3D. However, such a large team could also hinder the focused qualitative study proposed. The proposal is missing scientists, science education researchers, or science educators. It is not clear how this work would have a broad impact on the SMET learning and/or research community.

=========================================

Review #1

What is the intellectual merit of the proposed activity?

The combination of expertise among the investigators is impressive, although no content experts are described. The team includes individuals from various branches of computer science, cognitive science, and communications. Funds for consultants are included in the budget and a letter from a biologist is attached. However, it is not clear how scientists or science educators will be involved. It is also not clear how the generalizability of the results will be affected by the fact that the different kinds of interactions are associated with different instructional programs and different student populations. Ideally, for a study of the role of computer software in learning, the same content would be taught in different ways to students from the same population.

What are the broader impacts of the proposed activity?

The dissemination plans described are thorough. The investigators will be organizing a conference on computer-supported collaborative learning in 2002. This is a good example of the integration of research and education, as the investigators will be able to apply their findings in the design of their own instructional tools as well as make them available to others. However, there is a risk that the study will be too closely tied to specific software programs.

Summary Statement

This proposal describes a research study of three existing software programs previously developed by project team members. It describes an in-depth qualitative study of the interactions of students with the three software programs. The study will be designed to investigate computer-supported collaborative learning and the role of computational cognitive artifacts in the learning of science. These are important goals and high ROLE priorities.

=================================

Review #2

What is the intellectual merit of the proposed activity?

The project proposes to analyze the collaborative efforts of small groups of students in order to determine whether students can understand the use of educational artifacts and how student learning can be scaffolded. Groups of students will be videotaped as they interact with software, and transcripts from these sessions will be analyzed in terms of how students learn, how they use the artifacts, and what problems arise that would drive redesign of the software.

 

The proposed work would provide a very detailed analysis of how particular groups work with a number of software tools. The work is situated within a more socio-cultural view of learning, in which artifacts are a central component of knowledge building. The work proposed can add to our knowledge of how to make students' thinking visible and what can be learned from this. The proposed outcomes of the project are to revise the software and revise the theory. The model of collaboration that underlies this work, however, is not well specified. Features of the task requirements (e.g., whether one is working for credit or during one's free time) and of the actual software (e.g., availability of video) will influence the kinds of interactions students have, but there is little recognition of the potential for varied responses to different types of software. Little in the way of a model of learning is specified. Basically, there will be an in-depth analysis of groups, but without much in the way of a theory to guide the analysis. There is a substantial body of work related to group learning, and it is not used here to influence the research question or design. The analysis of learning gains by students uses a weak pre-post design. Presumably time on task with the Virtual Labs will result in some gains. These data would not provide strong evidence of project effectiveness.

What are the broader impacts of the proposed activity?

This will depend on how the specific artifacts used here are considered within some family of artifacts. If the software is broadly used, there will be a broad impact, but it is not clear how widespread the use of the particular software is or will be. It is unclear that generalizable principles can be derived from this work. However, new methods for analyzing complex interactions may be developed as a result of this work.

Summary Statement

The project proposes to analyze the collaborative efforts of small groups of students in order to determine whether students can understand the use of educational artifacts and how student learning can be scaffolded. The work is exploratory and it is unclear how generalizable the results would be. The analysis of learning gains by students uses a weak pre-post design. Presumably time on task with the Virtual Labs will result in some gains. These data would not provide strong evidence of project effectiveness.

===========================================

Review #3

What is the intellectual merit of the proposed activity?

Computer-supported collaborative learning is a relatively new area of research with only a handful of theoretical frameworks guiding its study and design. This research proposes to better understand the role of computational artifacts in mediating learning, as well as to develop a theoretical account of collaborative learning when the artifacts are educational software environments. It is useful to have a theoretical account of CSCL.

 

The researchers have chosen three distinct learning environments to study: SimRocket, Virtual BiologyLab, and WebGuide. The learning environments differ in the roles of the students and teachers, the instructional goals, the social contexts of use, and the particular technology. Without a conscious purpose for selecting such environments to study, it is not clear that the work will yield a new theory of learning and a methodology of design. The rationale for selecting these environments to study is that they exist, rather than that they support a particular aspect of collaborative learning.

 

I am concerned about the team's lack of subject-matter expertise in the SMET area. The proposal lists a large number of members (9+) from the Center for Lifelong Learning and Design, but if the proposed work needs focused study, a large, diverse team will detract from this. Also, the researchers are strong in studying external communicative acts, but they need expertise in HCI, cognitive design, and SMET learning in context.

 

The researchers seem to treat the artifacts as one and the same. They do not carefully distinguish among different kinds of computational artifacts, their representations, and their forms and affordances for supporting particular kinds of learning.

What are the broader impacts of the proposed activity?

They claim the research is novel because of its particular methodology of micro-ethnography, but this seems identical to video-interaction analysis, which is over 7 years old (see the work of Charlotte Linde, Jeremy Roschelle, and Jim Greeno). This is useful as an assessment methodology, but the researchers want to use it as a way to develop design principles for guiding the cognitive design of interactive learning environments. This is a big leap of faith. The researchers do not seem familiar with design-experiment practices or the practices of instructional design, which would seem to be a more powerful approach to making decisions about iterative (progressively refined) computational environments.

 

The particular approach of detailed video-based discourse analysis is useful for deeper insights into "making learning visible" and would be useful to other researchers, but it would be an expensive solution if it were to scale as a method to "overcome the traditional problem of educational assessment."

 

The methodology could be useful for the CSCL community to study collaborative learning settings. The approach, however, does resemble video-interaction analysis, which is over 7 years old.

 

The theoretical account of CCA would be useful as an alternative to activity theory, which is the current popular theory among CSCL researchers.

 

This work does not directly impact SMET learning and teaching.

This work might generate some design principles for creating technology-based collaborative learning environments, but these principles might be tied to particular contexts too.

Summary Statement

The theoretical account of CCA (computational cognitive artifacts) would be useful as an alternative to activity theory (a popular theory among CSCL researchers). This work, however, does not directly impact the ROLE objectives of SMET learning, teaching, policy, or practice.

 

It is good to see researchers taking CSCL research seriously, with an eye towards developing theoretical accounts of the role of artifacts, but this work seems to take a step backwards from situated learning, which takes into consideration not just artifacts but also people, activities, processes, and social contexts of use. The proposal does not directly address the ROLE objectives.

=================================

Review #4

What is the intellectual merit of the proposed activity?

The importance of the proposed activity is potentially high: lots of researchers analyze videos of educational interactions and might possibly benefit from exemplars of the proposed micro-ethnography analysis, as well as an improved methodology for designing and analyzing educational interactions. That said, I'm doubtful that this proposal will produce such a useful methodology -- it seems instead that it is starting with a methodology (a process of analysis) and proposing to apply it to three different systems, and then it will be able to conclude whatever one can afterward about things in those systems that were good and bad. The goals are lofty, but the example given in the proposal is relatively mundane and does not seem to provide any particular new leaps beyond current methodologies. What I'd like to see more clearly articulated is an example where the proposed methodology has led to new insight, and thereby where further development of it might produce additional new insights.

 

Team qualifications: The PIs are the originators of the software and have lots of experience in designing and analyzing interactions with educational technology. There are a huge number of consultants (9), which seems a bit excessive; although perhaps most of the learning produced by this proposal will then take place during the meetings when this team is gathered.

 

The creativity and originality of the work is minor -- it is essentially the standard method already: design an interaction, videotape and analyze, iterate the design, and test if students now know more. The only thing different is that the authors give the impression that this micro-ethnography technique hasn't been applied to educational interactions before, or if so, only in a limited way (medical problem-based learning) which can be expanded upon here. It would be valuable to this reviewer to be told what they learned in that medical application -- what did the proposed technique give that prior techniques didn't?

 

The discussion about computational artifacts is not a new way of looking at learning, and is so broadly applied here that it doesn't seem to buy anything -- it just seems obvious. And, as the authors acknowledge, it only applies to some kinds of learning. How has appeal to this viewpoint made a difference in the actual design of your artifacts for learning?

 

The conception and organization of the proposed activity seems limited, and I'm troubled by some of the claims -- for example, "Our approach will make visible the learning that is taking place....thereby indicating what is needed to help students learn how to take advantage..." I find this statement to be a bit too strong and unjustified. Will it really make what they learned visible? Will it really indicate what is *needed*?

 

What I'd rather see is this methodology compared to others -- on the same data -- with a team of experts taking the results of the different methodologies and judging how helpful each set of results is. Thus we might learn about the advantages of using this methodology over others.

 

Access to resources appears to be fine.

What are the broader impacts of the proposed activity?

The activity potentially enhances a researcher's discovery and understanding of a student's interaction with technology, but it's not clear if it does so any better than current techniques.

 

It does not appear to be a priority of this activity to promote teaching or training, or to include underrepresented groups.

The proposed activity does not appear to enhance infrastructure for research and education (facilities, instrumentation, networks, partnerships...), nor does it appear to have direct benefits to society.

Summary Statement

The application of the micro-ethnography technique to this kind of analysis may be really great; alas, it is not argued here in any compelling way, although it may be so in the references.

====================================

Review #5

What is the intellectual merit of the proposed activity?

This project would perform observations of students interacting with software artifacts in order to refine a methodology called micro-ethnography for assessing the effects of the artifacts on learning.

 

Strengths of the proposal are the following:

1. The suggested protocol for making and analyzing observations of students is a good one.  Yet the originality of the sequence is not apparent.

2. Interdisciplinary team with good record of publication.

3. Strong institutional resources, such as well-respected colleagues with related interests.

4. Nice writing and philosophical perspective on learning.

 

Weaknesses of the project are the following:

1. The notion of computational cognitive artifact, promoted as central to the project, adds nothing new in terms of insights, while it confounds distinctions among software categories such as tool, communication medium, and document. Very contrived.

2. No compelling hypothesis for the research is given. In fact, no hypotheses are explicitly stated. There are perhaps some implied ones, such as that students may learn better if they get to design their own virtual rocket than if they only get to select one from a list. But such implied hypotheses are not developed in the proposal.

3. No compelling software development is proposed.  A new learning environment?  An innovative system for analyzing and archiving digital video in terms of evidence of learning?

4. The software artifacts to be studied (SimRocket, etc.), while possibly typical of educational software used today, do not seem to be cutting-edge examples.

5. Little is explicitly stated about what will be done, other than to describe the observation and analysis protocol.

6. No means of transferring the refined methodology is given (except, presumably, publication). Couldn't analysis tools be developed and distributed?

Other comments:

T. Koschmann is described as a co-PI in the text but only as a consultant in the formal arrangements, so there is some confusion as to his level of responsibility for the research.

 

The notion of a computational cognitive artifact is contrived and does not seem to clarify thinking about learning with tools, documents, and computer-mediated communication. In their definition on page 2 of the proposal, artifacts are claimed to be "computational" if they change the display when the user interacts with them. This admits hyper-linked web pages as potential CCAs. Next, they are said to be "cognitive to the extent that they can become part of the user's thinking." This is so all-inclusive as to be useless. The proposal then adds that artifacts may also be cognitive in the "sense that they can be internalized in the user's mind so that he or she can make use of them as a mental metaphor or representation..." Since this is true of any phenomenon or mechanism, it also is useless in providing any constraint on the artifact being defined. The final sense of the CCA, that it serve as a kind of tool for computation or workspace for manipulating representations, is satisfied by such traditional devices as calculators and sheets of paper. Perhaps the notion of a CCA would be more compelling if a CCA had to have all, or at least a certain number (greater than one), of the capabilities mentioned. While there may not be great harm in mixing up all kinds of software for learning under a new term, by making this notion central to the concept of the project, the proposal leads the reviewer to look for the emperor's new clothes.

 

The claim on page 2 that "This project is unique in bringing together educational software developers and specialists in the micro-analysis of interaction to develop..." seems somewhat too bold. There are many groups that develop educational software and do careful analysis of how students use it. The methodology proposed appears promising, but the proposal didn't thoroughly convince me that it is as firmly grounded as claimed.

What are the broader impacts of the proposed activity?

The broader impacts are not clear. Were the methodology to be successfully refined, it would then have to be accepted by others in order to have a big impact. It is not clear that it would be sufficiently different from existing software development and test methodologies to necessarily catch on.