Submitted to : Journal of Architecture and Planning Research

For the special issue on: Computational Representations of Knowledge

Edited by: Mark Gross

For publication in: June, 1993


Theorists of design methodology have described facets of design and problem solving that call for computer support. However, their assessments conflict in fundamental ways with the techniques of artificial intelligence, the discipline within computer science concerned with these issues. For instance, pivotal writings about design have argued as follows:

* While computers should be used to help manage the complexity of today's problems, they should not replace or restrict the role of human intuition. (Alexander)

* The conceptualization of design problems should be allowed to evolve through public deliberation; computers can support communication and critique as long as they do not impose closed frameworks. (Rittel)

* Designers construct the design situation, including its patterns and materials; computers can aid in this if they do not restrict things to pre-established ways of viewing. (Schoen)

These statements express a troubled tension between (human) interpretation and (computer) representation.

Artificial intelligence traditionally plays upon the computational power of formal representations. A recent shifting from autonomous expert systems to critiquing systems may help to re-establish the role of people in problem solving, but still retains too heavy a reliance on rigid, objective representations. Many of the fields most in need of computer support are exploratory domains which cannot be reduced to systems of formal rules and manipulations of primitive symbols. Lunar habitat design is one example of this.

Hermes is a computer system to support interpretation in the design of lunar habitats. This research prototype features a special language for defining terms, conditions, critics, and queries that display design information from the human user's interpretive perspective. Hierarchies of interpretive contexts facilitate the sharing of these perspectives. All textual, graphical, and other information is integrated and inter-related by a form of hypermedia incorporating these language and context mechanisms. This provides a computationally active medium for expressing, storing, communicating, and critiquing design interpretations.
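The mechanism of hierarchical interpretive contexts can be made concrete with a small sketch. The following Python fragment is an illustrative assumption, not the actual Hermes language or API: it shows how a child context might redefine a term such as "private" while inheriting everything else from a shared team context, and how critics defined in any ancestor context remain active.

```python
# A minimal sketch (not Hermes itself) of hierarchies of interpretive
# contexts, in which different perspectives can redefine the same term.
# All names here are hypothetical illustrations, not the Hermes API.

class InterpretiveContext:
    """A named perspective that can redefine terms and add critics,
    inheriting everything else from a parent context."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.definitions = {}   # term -> predicate over a design element
        self.critics = []       # (message, predicate) pairs

    def define(self, term, predicate):
        self.definitions[term] = predicate

    def lookup(self, term):
        # Walk up the hierarchy: a child's definition shadows its parent's.
        ctx = self
        while ctx is not None:
            if term in ctx.definitions:
                return ctx.definitions[term]
            ctx = ctx.parent
        raise KeyError(term)

    def critique(self, element):
        # Fire every applicable critic, including inherited ones.
        messages, ctx = [], self
        while ctx is not None:
            for message, predicate in ctx.critics:
                if predicate(element):
                    messages.append(message)
            ctx = ctx.parent
        return messages


# A shared team context defines "private" as visually enclosed ...
team = InterpretiveContext("team")
team.define("private", lambda room: room["enclosed"])

# ... while a psychologist's context reinterprets the same term.
psych = InterpretiveContext("psychologist", parent=team)
psych.define("private",
             lambda room: room["enclosed"] and room["sound_isolated"])
psych.critics.append(("Crew quarters should block neighbor noise",
                      lambda room: not room["sound_isolated"]))

quarters = {"enclosed": True, "sound_isolated": False}
print(team.lookup("private")(quarters))    # True under the team's reading
print(psych.lookup("private")(quarters))   # False under the psychologist's
print(psych.critique(quarters))
```

The design point is that the same design element receives different evaluations under different contexts, without any context's definitions being privileged as objective.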

The message of Hermes is that computers can support human creativity in design rather than automating or rigidifying the design process. To do this, a new approach to software is needed that heeds the deeper principles of design methodology and the nature of human interpretation.

Design Methodology from the Perspective of Computer Support

Alexander: balancing human intuition with computation

Deliberation on the question of whether and how computers should be used to support the work of designers has raged for several decades now. The issues go to the heart of what design is and should be. In his now classic Notes on the Synthesis of Form, Christopher Alexander reviewed the history and even the prehistory of design in order to argue that the field has reached a second watershed in the mid-twentieth century. The profession of design had originally emerged when society started to produce new needs and innovative perspectives too rapidly to allow forms to be developed through "unselfconscious" activities of slowly evolving traditions. Now, the momentum of change has reached a second qualitatively new stage:

Today more and more design problems are reaching insoluble levels of complexity. This is true not only of moon bases, factories, and radio receivers, whose complexity is internal, but even of villages and teakettles. In spite of their superficial simplicity, even these problems have a background of needs and activities which is becoming too complex to grasp intuitively. (Alexander, 1964, p.3)

The management of complexity must become a primary concern of the field of design. The level of complexity that Alexander had in mind is characterized by the fact that it exceeds the ability of the unaided individual human mind (intuition) to handle it effectively. Various methodologies can help, and this is where the abstract logical structures, diagrams or patterns that Alexander proposed come in. He saw a major advantage of the systematic use of such logical structures in what he referred to as a "loss of innocence".

When design first became a profession with rules that could be stated in language and taught, there was, according to Alexander's account, a first such loss of innocence. More recently, when the Bauhaus designers recognized that one could design for mechanized production, another accommodation was made with changing times. The use of systematic methodologies to help manage complexity would, Alexander claimed, entail an analogous acceptance of the limitations of the individual designer's intuitive powers. This would bring with it a significant opportunity for progress of the profession. When the design process is formulated in terms of logical structures it becomes much more readily subject to public criticism than when it is concealed in the mysteries of the lonesome genius' artistry, just as the earlier formulation of previously unselfconscious design into explicit plans, articulated processes and stated justifications laid the basis for a science of design which could be refined through on-going debate. Loss of innocence entails the removal of an outmoded barrier to the kind of critical reflection required for a profession.

But Alexander did not see the issue one-sidedly. He did not propose that design methods substitute for the practice of design or for the designer's practical intuitions. Rather, he recognized that intuition was necessary, and argued for a proper balance: "Enormous resistance to the idea of systematic processes of design is coming from people who recognize correctly the importance of intuition, but then make a fetish of it which excludes the possibility of asking reasonable questions." (ibid, p.9) Alexander felt that the fetishism of intuition as some kind of inalienable artistic freedom of the designer functioned as a flimsy screen to hide the individual designer's incapacity to deal with the complexity of contemporary design problems. As a consequence of the designer ignoring these limitations, the unresolved issues of complexity get passed down to engineers who have been trained to work out details rather than to grasp complex organization synthetically; the product that results tends to be a monument to the personal idiom of the creator rather than an artifact with a good fit to its function.

The questions posed by Alexander three decades ago for design methodology generally still confront the particular task of figuring out how best to use computers for supporting the work of designers. Consider his first example above, that of designing a moon base. Clearly, this is an overwhelmingly complex task. One needs to take into account technical information about supporting humans in outer space, including issues that may not have previously been thought of and investigated (such as the practicality of using lunar rocks as building materials). One must also consider the mission goals of the base, both stated and implicit. Then there are social and psychological issues concerning the interactions among groups of people who are confined in an alien environment for a prolonged period of time. All of these factors interact with the more common issues of designing a habitat for working, eating, socializing, and sleeping -- resulting in a design problem of considerable complexity.

This paper will focus for its example on the specific project of developing a computer system for lunar habitat design. A primary concern will be to fashion the system so that it supports the intuitive powers of human designers. That is, the system will not be intended to replace the human designer, as has been the goal in traditional artificial intelligence and expert systems. At the same time, it should not simply provide computational power that is tangential to the process of design. Rather, the computer system should simultaneously aid the designer to manage the complexity of the project and to articulate his or her intuitions about the emerging design. In keeping with the balance called for by Alexander, the computer should give free rein to the human designer's intuitive powers even while it helps to document the central issues and decisions of the design process so they are rendered publicly available.

Rittel: tackling wicked problems through argumentation

When Horst Rittel declared in his Dilemmas in a General Theory of Planning that "planning problems are inherently wicked," (Rittel, 1972, p.10) he thereby spelled out that characteristic of planning and design tasks that has subsequently become the central source of perplexity in trying to imagine a computer system that can effectively support the challenging aspects of design. For, computer programs have traditionally been devised in accordance with the classical paradigm of "tame" science and engineering problems -- precisely the paradigm that Rittel argued is not applicable to the problems of open societal systems with which planners and designers are generally concerned. This paradigm assumes that a problem can first of all be formulated in a clear, unambiguous and exhaustive manner. Then, based on such a problem statement, all possible solutions can be evaluated to see which are optimal solutions to the problem. Computer programs based on this paradigm try to represent in advance the space of problems and solutions for a well-defined type of design problem in an explicit and exhaustive manner. Their contribution to solving a problem is to take a complete statement of the problem as input and to compute the optimal solution to it by means of a search through the set of all possible solutions.

Rittel claimed that the wicked problems of planning could not be thoroughly understood in the first place unless one already had ideas for solving them. Suppose, for instance, that you are asked to plan a mission to the moon for four astronauts for a period of 45 days. According to NASA, the purpose of the mission is to explore long-term stays for crews of international backgrounds and mixed gender; there is to be some scientific research and some site work to prepare for future moon bases. In thinking about the design of the lunar habitat for this mission, you might begin to discuss the importance of privacy issues with other people on your design team. You might feel that not only was some physical privacy needed for cultural reasons, but psychologically there would be a need to structure a careful mix of public and private spaces and opportunities. These privacy issues might become paramount to your design even though they had not been included in the original problem statement. In this way, the set of issues to be investigated and concerns to be balanced would emerge and evolve as the planning process took place.

In opposition to the then dominant methods of operations research which tried to compute optimal solutions from static and well-defined problem statements, Rittel called for a model of planning as "an argumentative process in the course of which an image of the problem and of the solution emerges gradually among the participants, as a product of incessant judgment, subjected to critical argument." (ibid, p.13) This is a very different model of the profession. In the operations research approach, it was assumed that problems could be formulated up front and that the dimensions of possible solutions could, at least in principle, also be enumerated once and for all. Solving a planning or design problem then consisted in making that combination of choices among the given options that maximized some objective quantitative measures of the criteria specified in the problem statement. In no sense should the value of a solution be determined by the process of its discovery or by the individuals involved in formulating it. This was a model made for computerization. In fact, one might suspect that this model was influenced by the computer model in the first place: only those elements were accepted as scientific and objective which could be easily reduced to algorithmic processes.

By contrast, Rittel rejected the notion that even the underlying concepts in terms of which a problem or its solution could be formulated were objective in this sense. The language used in real, significant planning processes is itself the result of discussion and debate among various parties, each of whom uses subjective judgments to criticize hidden assumptions and to reconstrue implicit meanings of terms. No one view has a necessary priority; every view must be capable of standing up to critique by opposing views. Solutions arise through this process of critique, in which new issues and possibilities can arise at any moment and new criteria can be introduced or old concerns reinterpreted. Rather than worshipping some theoretical notion of objectivity, Rittel's approach recognizes that people's perspectives on problems are based in subjective conditions such as their individual value systems and political commitments or their personal roles vis-à-vis the proposed solutions:

For wicked planning problems, there are no true or false answers. Normally, many parties are equally equipped, interested, and/or entitled to judge the solutions, although none has the power to set formal decision rules to determine correctness. Their judgments are likely to differ widely to accord with their group or personal interests, their special value-sets, and their ideological predilections. (ibid, p.15)

Consider again the concept of privacy in the lunar habitat. A design team might start from the idea of visual privacy. Through discussion of the implications of life in this confined space, they might want to include protection from the noise of flushing toilets and snoring neighbors. But then the team member concerned with medical contingencies might introduce a notion of privacy for an injured astronaut who needs to recuperate. A psychologist on the team might insist that crew members have an opportunity to communicate in private via radio with family members back on Earth, or that there be ways for pairs of astronauts to confide in each other without being monitored by ground control. Because the crew will be international, a sociologist would bring up culturally diverse definitions of privacy that must be taken into account as well. Different members of a design team come to the common task with different perspectives; their constructive criticisms of each other are part of what makes a team more insightful than the sum of its parts. Given a methodology which builds on the strengths of design as an argumentative process, these differences can contribute to a robust solution that takes into account a variety of competing and interacting insights, not all of which could have been anticipated in advance.

Computer support for planning and design processes as Rittel conceived of them must allow team members to articulate their individual views and judgments, to communicate these to each other, and to forge shared perspectives. It must support deliberation or argumentation. Rittel himself made some initial attempts to define computerized issue-based information systems, leading to recent systems like gIBIS (Conklin, et al, 1988) and Phidias (McCall, et al, 1991). Somehow, the dimensions of the design problem must be allowed to emerge and change as different perspectives are brought to bear, as initial approaches are subjected to critique, and as solutions gradually emerge. Computer systems may be useful for storing, organizing and communicating complex networks of argumentation -- as long as they do not stifle innovation by imposing fixed representations of the ideas they capture.
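The kind of argumentation network such systems store can be sketched in a few lines. The node and link vocabulary below (issues, positions, supporting and objecting arguments) follows Rittel's IBIS terminology, but the data structure itself is an illustrative assumption, not the actual data model of gIBIS or Phidias.

```python
# A minimal sketch of an IBIS-style argumentation network in the spirit
# of gIBIS and Phidias. The structure is an illustrative assumption,
# not either system's actual data model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    text: str
    children: List["Link"] = field(default_factory=list)

@dataclass
class Link:
    relation: str   # e.g. "responds-to", "supports", "objects-to"
    node: Node

def attach(parent, relation, text):
    """Add a new node to the network, linked to its parent."""
    node = Node(text)
    parent.children.append(Link(relation, node))
    return node

def outline(node, depth=0):
    """Render the deliberation as an indented outline."""
    lines = [("  " * depth) + node.text]
    for link in node.children:
        lines.append(("  " * (depth + 1)) + f"[{link.relation}]")
        lines.extend(outline(link.node, depth + 2))
    return lines

issue = Node("Issue: How much privacy do crew quarters need?")
pos = attach(issue, "responds-to",
             "Position: Full visual and acoustic enclosure")
attach(pos, "supports",
       "Argument: Cultural norms of some crew members require it")
attach(pos, "objects-to",
       "Argument: Enclosure costs scarce habitat volume")
print("\n".join(outline(issue)))
```

Because the network grows by attaching new issues, positions, and arguments at any point, the dimensions of the problem can emerge during deliberation rather than being fixed in advance.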

Schoen: dialogs of discovery

Alexander and Rittel have suggested the importance of the individual designer's intuitions and of public processes of deliberation for the development of good design. This is at least implicitly a rejection of the model of technical rationality based on the methodology of the natural sciences. Donald Schoen made this rejection even more explicit in his influential study of the design profession, The Reflective Practitioner (1983). Here he argued that much design knowledge is tacit, rather than being rule-based. He viewed the design process as a dialog-like interaction between the designer and the design situation, in which the designer makes moves and then perceives the consequences of these design decisions in the design situation (e.g., in a sketch). The designer manages the complexity that would be overwhelming if all the constraints and possibilities were formulated as explicit symbolic rules by using professionally-trained skills of visual perception, graphical sketching and vicarious simulation.

Schoen recently took up the question of computer support for design in a paper with the descriptive title, Designing as reflective conversation with the materials of a design situation. In this article he argued for a necessarily limited role for computers in design because one of the most important things that designers do is to create the design situation itself. Not only is this something that computers cannot do by themselves, but it also precludes computer programmers from pre-defining the design situation for the computer.

Before trying to discuss potential computer roles, Schoen takes time to review several experiments supporting his thesis that designers construct the design situation. In one experiment, several experienced architects are shown a 14-sided, dimensioned polygon with door locations indicated, and asked to design a library with that shape as its footprint. One architect saw the figure in terms of simple end entrances and complex middle entrances; another saw it as three pods surrounding a middle; a third saw two Ls back to back. Clara, another subject, discovered a five-foot displacement in the layout which complicated the spatial relationships considerably for her. Schoen concludes from these and other studies that designers construct the problem by seeing the situation as defined in a certain way:

In one sense, the 5 ft displacement that Clara noticed is there to be discovered. However, not everyone who tried the library exercise discovered it. Clara did. She noticed it, named it, and made a thing that became critically important for her further designing. In this sense, her treatment of the library exercise shows her not only discovering but constructing the reality of a design situation. For designers share with all human beings an ability to construct, via perception, appreciation, language and active manipulation, the worlds in which they function. . . . Every procedure, and every problem formulation, depends on such an ontology: a construction of the totality of things and relationships that the designer takes as the reality of the world in which he or she designs. (Schoen, 1992, p.9)

Other experiments showed that designers also construct the materials, site, and relationships (or prototypes) in a similar way to how Clara constructed the crucial patterns of the project. In this sense, then, there is no given design problem which is explicitly and exhaustively defined before the designer comes to it. Correspondingly, there can be no well-defined problem space for the designer (or for some automated version of the designer) to search through methodically. Rather, the designer's subjective, personal or intuitive appreciations shape the problem by constructing its patterns, materials and relationships. The design project is solved by the designer experimenting with tentative moves within the constructed design situation and discovering the consequences of those moves.

Clearly, a computer program cannot on its own construct a design situation the way an architect does, picking out, naming, and focusing upon critical patterns, materials and relationships. To the extent that the role of a designer includes applying intuitive, perceptual and linguistic skills to see the situation creatively and to converse with it reflectively, a computer cannot do what a human designer does. Assuming that Schoen is correct that these skills are necessary for real design, a computer can also not accomplish the design task using alternative methods to those used by humans, because programs as we know them are based on predefined representations of fixed and strictly delimited ontologies. Computer programs for design are therefore limited to solving problems in well-defined microworlds or else working with human designers to support their human skills.

The hermeneutics of design

Adrian Snodgrass and Richard Coyne of the Faculty of Architecture in Sydney have begun to articulate a philosophical basis for artificial intelligence in design by arguing that design is hermeneutical. "Hermeneutics" is the study of interpretation, and today refers primarily to the philosophy of Martin Heidegger and its explication by Hans-Georg Gadamer. Snodgrass and Coyne argue that design is a human science in contrast to a natural science, and therefore must be founded on human understanding rather than on objective method. This has profound implications for the attempt to provide computer support for design, as well as for the more general attempt to comprehend the design process. The ideas of hermeneutic philosophy provide a conceptual framework for further explicating Alexander's ideas about intuition and public critique, Rittel's views on wicked problems and the need for argumentative processes involving personal interests, and Schoen's analysis of tacit knowledge and the designer's dialogue with the constructed design situation.

As a human science, design is based in human understanding gained through processes of interpretation, rather than being based in knowledge, that is, in propositions and explicit rules. In fields like design, claims are not proven by appeal to objective facts and rigorous methods, but by reference to further interpretations (Rittel's argumentative process). A given claim reflects a certain interpretation of the design situation, a certain way of seeing it or constructing it, as Schoen would say. It is always legitimate to question a design move and to demand some justification. But the justification will always be from the perspective of an interpretation, which can be questioned further. There are no axiomatic starting or stopping positions, such as those sought by the rationalist tradition. No claims form absolute starting positions for arguments which cannot themselves be questioned; the chain of justifications based on interpretations ends only when one concedes that the argument is plausible or convincing from the perspective that one has been persuaded to adopt.

The model is indeed one of persuasion, not of hypothesis testing. One is always already in an interpretive context. From within this context, one then understands new arguments, claims and interpretations. Being in an interpretive context is not like tentatively accepting a propositional hypothesis that one may later flatly reject as false based on some discovered objective facts. It is more like having a framework through which one can first of all understand arguments and facts, and thereby modify one's own framework. In Heidegger's terminology, we are always already thrown (Geworfen) into a certain way of being in the world, and from this position we project (Entwerfen) new interpretations of the world. We project a future based on our past history. Interestingly, the German term for projecting is also used in its noun form for a project, design or sketch: a design is a projection of a possible future artifact.

According to Heidegger (1927), the projecting of interpretation takes place based on three dimensions of preliminary understanding:

* Pre-judice: we already have a wealth of tacit, culturally acquired skills and practices that we bring with us as historical (thrown) beings.

* Fore-sight: we see our situation through a conceptual framework and a language in terms of which things can be disclosed to us.

* Pre-conception: we have a tentative expectation of what it is that we are about to interpret.

Most of the time we form interpretations without being aware of this three-fold background of assumptions. That is why the interpretive process of design seems so mysterious and intuitive. As we are forced to justify or reflect critically upon the assumptions of our interpretive stance, we gradually make more and more of the underlying background explicit. We can be prompted to do this by what Schoen calls "breakdowns" in the design process. For instance, we make a move in our design sketch and then we see that a problem occurs as a result. Perhaps seeing the problem brings to our attention a certain need or constraint of the project which has been violated and that we were not formerly aware of. Breakdowns of relationships in our situation are a common way in which our circumstances are explicitly disclosed to us. Dialog with other team members is another way in which tacit assumptions are brought to light, in explaining and arguing for one's own views in order to bridge the gap to someone else's perspective. Critical self-reflection while engaged in a design task is yet another way:

The process of design is thus a disclosure, in two senses. Firstly, it is a disclosing of the artifact that is being designed; and secondly, and simultaneously, it is an unfolding of self-understanding, since it reveals one's preunderstandings. It uncovers the preconceptions that are constitutive of the design outcome, and at the same time brings to light the prejudices that are constitutive of what we are. (Snodgrass & Coyne, 1990, p.15)

This conception of design as a dialog which discloses involves a very different notion of language than that of the natural sciences. "On the one hand there is the model of formalized language, the language of primary units that are combined according to the rules of logic to form meaningful structures; and on the other hand there is the metaphor of the language of conversation, which is the language of interpretation." (ibid, p.16) This presents a serious problem for any attempt to provide computer support for design. Computers speak the formalized language, while designing requires the language of conversation. Computer programs consist quite precisely of algorithms encoded in a formal language, data structured as primary units, and operations performed in accordance with the rules of logic. Even software environments like Janus (Fischer, et al, 1990) which try to end-run this problem by communicating with designers via graphical images which represent objects in the design domain provide only a fixed palette of primary units whose semantics are not open to debate.

The Approach of Artificial Intelligence

Simon: searching through the solution space

Most work in the history of artificial intelligence (AI) can be characterized as an attempt to create computer programs to solve problems by using formalized language, primary units and the rules of logic. Herbert Simon, a major proponent of this tradition, tried in a well-known article on The Structure of Ill-structured Problems to finesse his way around Rittel's argument. Rittel had claimed that most interesting problems are wicked problems which are not susceptible to solution by methodical search through some purported solution space, partly because the definition of the problem shifts as the solution develops. Simon's strategy revolves around the example of a chess playing program. He argues that the problem for this program shifts from move to move as the features of the board (attacks, opportunities, strengths) change. So even chess is a wicked problem. Yet, a computer can play chess using traditional AI techniques. Therefore, wicked problems can be solved by these techniques. QED.

Of course, this is mere sleight-of-hand. The point is that chess is a well-defined domain with explicit, unambiguous rules. In no sense does a chess program reinterpret the rules as the game proceeds. The representation of game states, and therefore the universe of possible chess moves, is fixed for all games. When Simon finally does consider domains in which the situation must be interpreted, he goes off on spurious tangents to discuss problems of information retrieval, natural language processing and perceptual pattern recognition. With typical AI bravado, he was as reassuring that these still-open computerization problems would be solved as he was that ill-structured problems in general could be handled by mechanisms not qualitatively different from those already being used in AI schemes.
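The tame-problem paradigm that Simon defends can be stated in a few lines of code: given a fixed representation of states and moves, search the tree exhaustively for the best outcome. The toy minimax below, over a hand-built two-ply game, is purely illustrative; the point to notice is that the state representation and the evaluation function are specified in advance and never change during the search.

```python
# A toy illustration of the classical search paradigm: a fixed state
# representation, a fixed move generator, and a fixed value function,
# searched exhaustively by minimax. Illustrative only.

def minimax(state, maximizing, moves, value):
    """Exhaustively search the game tree defined by `moves` and `value`."""
    options = moves(state)
    if not options:          # leaf: evaluate with the fixed value function
        return value(state)
    scores = [minimax(s, not maximizing, moves, value) for s in options]
    return max(scores) if maximizing else min(scores)

# A trivial "game": states are tuples of choices so far, two plies deep.
def moves(state):
    return [state + (c,) for c in ("a", "b")] if len(state) < 2 else []

def value(state):
    # A pre-specified evaluation of every leaf -- the hallmark of a
    # tame problem: the solution space is enumerated once and for all.
    return {"aa": 1, "ab": 4, "ba": 3, "bb": 2}["".join(state)]

print(minimax((), True, moves, value))
```

Nothing in this loop can notice a new issue, redefine a term, or restructure the space of moves; whatever the search encounters has already been molded to the given representation.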

For a brief moment at the end of the article, Simon allows a glimpse of the real issue. If a program needs to acquire external information about the problem situation, then it must force that information into its fixed representational framework. Simon admits that this is a weakness, but concludes that it is really for the best:

[The process of acquiring external information] is an aid [to the process of understanding that information] because it fits the new information to formats and structures that are already available, and adapts it to processes for manipulating those structures. It is a limitation, because it tends to mold all new information to the paradigms that are already available. The problem-solver never perceives the Ding an sich, but only the external stimulus filtered through its own preconceptions. . . . The world perceived is better structured than the raw world outside. (Simon, 1973, p.163)

The whole point of Rittel's analysis of wicked problems was that there is no adequate set of formats and structures already available before one acquires the information about a situation. Rather, an argumentative process is needed to respond to the flow of information in ways which transform the paradigms that were already available. Schoen's reflective conversation with the materials of a design situation makes no sense if the materials have been fit to a mute format. Although Heidegger would agree that the world is perceived through existing preconceptions, he would not agree that this is a "better" structure if the tentative original expectations are not allowed to respond and be transformed by the raw world.

Perhaps Simon realized that planners and designers need to take approaches that are qualitatively different from the methods of traditional AI, but he could not imagine how to extend computer technology to support those activities. More recently, in a lecture on Social Planning, he recited a series of anecdotes that illustrated how complex planning processes hinge in large part on not assuming a fixed representation of the problem, but letting it evolve with the solution. For instance, in establishing the Marshall Plan after World War II the people involved in setting it up proposed six different and largely contradictory conceptions for its role. Simon underscores the observation that different conceptualizations of the problem would imply various ways of organizing the agency, and consequently quite different programs emphasizing different results. He concludes that "what was needed was not so much a 'correct' conceptualization as one that could be understood by all the participants and that would facilitate action rather than paralyze it." (Simon, 1981, p. 166) What was needed, in other words, was an argumentative process among the participants to reach a common understanding, not some formally rigorous representation framework. Although Simon manages to propose a series of methodological approaches to issues of social planning, these are strikingly less formal than the tools he had proposed for well-structured domains. Significantly, he did not discuss the possible implementation of AI programs that might be able to support these methods of social planning.

From expert systems to critiquing

It seems clear that planning and design problems cannot be solved by means of automated methods without the active involvement of humans. Whether one thinks of Alexander's references to intuition, Rittel's insistence on the role of personal interests, Schoen's emphasis on tacit knowledge, or Snodgrass and Coyne's focus on interpretation, one finds the essence of designing in skills that are distinctively human. These skills are to be strictly contrasted with the modus operandi of computer programs. During the past decade, AI research has begun to explore ways of supporting human expertise with computer systems that preserve a central role for people. This can be seen in the shift from autonomous expert systems programs to "expert critiquing systems."

In his survey of expert critiquing systems, Barry Silverman defines the term "critic" as a computer program that critiques human-generated solutions. Thus, rather than the program coming up on its own with a solution by following a set of rules that have been gleaned from domain experts, a critic program responds to a solution proposed by a human user of the program. Consider, for instance, an expert system for playing chess of the kind Simon discussed. It would operate by accepting as input a board position and responding with an optimal move. A chess critic, by comparison, would allow a human user to make a move in response to the board position and would then critique that move. The critic might say that the proposed move violated the rules of chess, or that it put the player in some danger, or that it missed the opportunity for some better move. Most often, the critic would probably be silent and let the human continue to play uninterrupted. The idea of using critics is to allow human intuition to guide the solution process -- recognizing the appropriate role of the human -- while at the same time bringing to bear the computer's ability to recall facts, rules and constraints which the person might easily have forgotten.

As Silverman's presentation makes clear, critics are a straightforward modification of expert systems. They require the computer to have the same ability to solve the problem, but merely delay the announcement of the computer's solution until the user has had a chance to try:

The conversion from an expert system to a critiquing process primarily involved adding a differential analyzer that would: suppress the expert system's diagnosis until after the user had also input his or her own diagnosis (the machine would request that input), compare its diagnosis to that of the human user, and determine if the human deviated significantly enough from the machine's ("optimal") diagnosis and plan, to warrant interrupting the human to explain the problem it had uncovered. (Silverman, 1992, p.111)
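The differential analyzer Silverman describes can be sketched in a few lines of Python (a minimal illustration with an invented toy scoring scheme -- not the architecture of any actual critiquing system): the machine's diagnosis is computed but suppressed, the user commits to an answer first, and the critic speaks only when the deviation appears significant.

```python
class ToyExpertSystem:
    """A stand-in for an expert system: scores candidate answers, higher is better."""
    def __init__(self, scores):
        self.scores = scores                      # e.g. {"flu": 0.9, "cold": 0.4}

    def solve(self):
        # The diagnosis the expert system would announce on its own.
        return max(self.scores, key=self.scores.get)

    def deviation(self, user_answer):
        # How far the user's answer falls below the machine's best answer.
        return self.scores[self.solve()] - self.scores.get(user_answer, 0.0)

def critique(system, user_answer, threshold=0.3):
    """Return a critique string, or None to stay silent."""
    machine_answer = system.solve()               # computed but suppressed
    if system.deviation(user_answer) > threshold: # significant deviation?
        return f"Consider '{machine_answer}' rather than '{user_answer}'."
    return None                                   # silence: let the user continue
```

Applied to the chess example above, such a wrapper would stay silent for reasonable moves and interrupt only when the user's choice deviated markedly from the machine's own evaluation.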

This approach can be effective in simple, well-defined domains which can be captured in a number of explicit rules or look-up tables. Spelling checkers can be viewed as a particularly successful example, with grammar checkers being more interesting, though less useful, tools. Perhaps the best application is intelligent tutoring programs, where the user is not likely to know even those rules of the domain which can be formulated in expert system rule bases. AI systems are really only "intelligent" compared to novices who are learning the basic rules, not to domain experts whose skills far exceed the realm of rules.

As the name suggests, critics can represent a first step in a paradigm shift toward the model of critiquing as a dialog process. In fact, Silverman claims critiquing should be a two-way, interactive, communicating, view-sharing process. Unfortunately, when one looks at the implementation details he proposes, this dialog reduces at best to a limited user model, in terms of which the program's explanatory output is adjusted to the represented skill level of the user. In other words, the program somehow classifies the user (perhaps by asking the user to select a skill level) and then prints out the text that had been programmed as an explanation of the current "user deviation" for a user at that level. This is scarcely an argumentative process in Rittel's sense or a dialog in the hermeneutic sense.

In fact, the work Silverman reviews is still very much in the rationalist tradition. Most critic systems require that the domain be well-defined in terms of the following criteria: explicit rules can be specified for each type of wrong answer; the rules for assessing user solutions are objective; only one or two possible correct solutions exist for each task; and subtasks can be critiqued independently of each other. Silverman's own contribution to the theory of the critic approach is to emphasize the importance of clarity (a watchword of rationalism since Descartes). The first thing that critics should do in his opinion is to eliminate ambiguity. "Ambiguous statements which have more than one meaning cannot be clearly confirmed logically," he warns, "nor can they be completely disproven empirically. They may be true according to some interpretations." (ibid, p.107) Although Silverman's critics have introduced people back into the problem solving loop, they have not opened the loop wide enough to permit true dialog among competing and ambiguous interpretations.

Lunar Habitat Design: An Exploratory Domain

The research being reported in this paper is an attempt to go beyond expert systems and expert critiquing systems to develop an approach to computer software design which can support the design process as described above, including the ambiguity of competing interpretations. This work was initiated at the request of a design firm which, among other things, contracts with NASA to do lunar habitat design. As it turns out, the domain of lunar habitat layout is a particularly rich one to investigate from the perspective of providing computer support, and findings in this very specific domain promise to have broad generality for design, particularly for high-tech architecture.

The need for computer support of lunar habitat design was originally suggested by the sheer volume (complexity) of knowledge required -- far more than people could maintain in their heads or even locate easily in manuals. In fact, the manuals themselves seem to suffer from a lack of computer-supported maintenance, raising serious questions of how to interpret official regulations consistently with each other. There are voluminous sets of NASA regulations for all Man-In-Space designs, ergonomic standards, and specific project contractual obligations to which designs must adhere. Furthermore, there is a concept of traceability, meaning that there must be documentation tracing how the regulations are incorporated in the design.

But the complexity of lunar habitat design is not just a matter of the quantity of information. Requirements, components and rationale all have to be reinterpreted within the Gestalt of the evolving design. This is an application realm in which, for instance, most physical components require some amount of customization. One cannot simply select a stock sink or bed from a catalog, because of gravitational or volumetric considerations. Even pumps and fans have to be re-thought. So the idea of representing standard parts with schematic icons or fixed items from a palette is inadequate. One wants to start from existing components, but one then needs to be able to modify them freely to account for differences in the lunar setting. Furthermore, there are many design interactions among components that are placed close together -- partly because space is at a premium and partly because things must work together to form a coherent environment for habitation. This means that design of a given part is very much situated in its context, in terms of neighboring components (e.g., buffers for sounds), design concerns (privacy), and projected usage issues (traffic flow). The computer representation of the design must function as the unique world in which situated design can take place effectively. The notion of a programmer defining in advance a formal language of terms and graphic primitives representing design concerns and physical components is out of the question.

Elements of lunar habitats should be similar to familiar products to facilitate manufacture and to give astronauts a sense of being at home, but they must also be different to meet the severe constraints of their context. This means that models and rules of thumb must be searched for in many other domains (houses, submarines, Antarctic labs) and then applied to the lunar setting. Such application is not a mechanical process; it must be done by the creative and synthetic minds of humans, with computer systems merely presenting the relevant elements. Even the determination of what might be relevant must involve the human designer, for this is also very much a matter of interpretation based on a deep understanding of the semantics involved. To support the subtlety of the communication between the computer system and its users, the users must be able to develop a language which operationalizes their evolving interpretations in ways which can be used by the software.

At the same time, the development of such a language can provide a basis for shared understanding among groups of designers, whether or not they are working together in the same place or at the same time. For instance, a designer who is considering an old design for adaptation into a new project can learn about the old design through the language which was developed with it -- including the formulations of critics specific to that design. Aspects of this approach related to supporting collaborative work among groups are particularly critical in this domain because each successful design must undergo the scrutiny of many teams. Generally, the only communication between these teams is the design document itself. Thus, it is important that the design include effective documentation of the rationale and interpretive stance behind it.

A high-tech design goes through many stages of development, involving different design teams. Architects, designers, a variety of engineers, and administrators all work on the designs from their own viewpoints. Successful designs are sent to other contractors around the country for detailing, mock-up, testing and construction. At each stage, the design is modified, based on people's understanding of the design and its rationale. If a creative design concept is to survive this argumentative process, with tight cost, weight and volume constraints at every stage, strong rationale must be communicated; a schematic or a pretty picture will not suffice. In fact, a typical product of lunar habitat design consists of a small booklet dominated by textual explanations of rationale, rather than simply detailed drawings.

Because designers do not have personal experience with life in lunar habitats, knowledge stored in previous related designs (including Skylab, the Shuttle, previous trips to the moon) is invaluable. Old designs are re-used extensively. To the extent that design rationale of the old designs has been captured, it is vitally important. Consequently, it is likely that design rationale will increasingly become an integral part of design. This should add tremendous power to practitioners who take it seriously and those who use computer tools that support rationale capture. Such a development represents a significant break with the tradition of CAD programs, which are purely graphical and embody very little semantics. However, it has impressive precedents in other fields like science, mathematics and philosophy, where written theories, proofs and arguments were refined through processes of public critique and grew into extensive bases of shared knowledge impossible in non-literate cultures.

Lunar habitat design is not a field in which one could expect to interview an expert and come up with a set of formal rules and elements to define a comprehensive system of knowledge. Workers in this field are attempting to explore a new domain and to begin to map out the potential problem space. A goal of researchers is to sketch in parametric curves that would indicate how designs have to change depending on such parameters as number of astronauts, mission duration, or payload delivery capacity. (Cf., e.g., Design Edge, 1990; Moore et al., 1991; Kazmierski and Spangler, 1992) But even the most important parameters remain undefined and open to interpretation and debate. For instance, no NASA guidelines cover privacy issues, but this is an increasing concern of thoughtful designers and a topic for vigorous political debate and even power struggles within NASA. (Compton and Benson, 1983)

In the lunar habitat design sessions studied for the current research, privacy issues were in fact the first real concerns to surface. They structured how the designers constructed their task. Related questions of social interaction dominated questions of physical layout, indicating that social planning was necessarily a significant aspect of the designing. When the geopolitics (or solar system politics) of NASA's goals are reflected in the deliberations, the result is truly a wicked problem in Rittel's full sense.

In relatively unexplored domains such as lunar habitat design, the purpose of design attempts is not to find optimal solutions within a known problem space, but to begin to create a solution space in the first place. The most important role of computer support for such domains may be to capture the ideas that are being generated. Terms and critics which are formulated on the spot during this design exploration process are expressions of what a designer may want to pay attention to. So, for instance, the important criterion for the critics is not the rigor of their computations in the sense of some rationalist engineering ideal, but their ability to capture the designer's interpretive intent. The computer system as a whole should not primarily be an autonomous equation solver, but a powerful medium of external memory to empower people's creativity. An appropriate software environment for this domain would be one designed to capture new and evolving knowledge, rather than one which simply incorporates predefined knowledge representations and systems of production rules.

Hermeneutic Software Design

A system for interpretation in design

The computer software for lunar habitat design is part of an effort to define an alternative to knowledge-based expert systems. The new approach is called "hermeneutic" software design because it is interpretation-based. It proposes a model of the computer as a medium within which designers can construct, interpret, converse with and communicate about design artifacts. The system does not claim to incorporate extensive knowledge of the domain in the sense of an expert system's elaborate set of universally-valid production rules or an expert critiquing system's battery of objective critics. Rather, the system provides an environment in which people can view evolving designs from perspectives which are important to them.

Interpretation-based systems are still domain-specific like expert systems in certain ways. First of all, the structure, implementation and interface of the designing environment is crafted in response to the nature of the domain. For instance, the system for lunar habitat design, which is named Hermes, adopts a different approach to providing a palette of building components than a corresponding system for kitchen design would, due to a difference in the domains. Kitchen appliances are stock items which are installed as they come out of the box, whereas lunar habitat components must generally be modified or even redesigned to work properly in the habitat.

Another domain-specific aspect of an interpretation-based environment is that it is always already seeded with a considerable amount of information about the domain, primarily in the form of examples of interpreted designs. There are also a variety of useful terms, critics, and queries that have been defined in advance. From a theoretical viewpoint, this "seed" embodies a form of history: we always interpret from the background of past experiences and interpretive traditions, which we initially accept uncritically. In practical terms, it is much easier to design and create new perspectives by starting from and then modifying existing ideas and expressions.

A knowledge-based system would typically be seeded with information that purports to capture an objective understanding of the domain. For instance, it might contain an issue base which contains the primary issues of design in the domain along with the standard options for resolving the issues, a palette of the basic primitive components, and a catalog of prototypical solutions. By contrast, an interpretation-based seed would provide tools for building interpretive perspectives of domain artifacts; it would include issues, palette items and artifacts that have been constructed under different interpretations in past design projects. The Hermes seed, for example, consists of such information from a series of lunar habitat design sessions that were captured on videotape during preliminary research on the domain, and then modeled in the Hermes system. Additional examples were added from published designs of lunar habitats. Then, an issue-base was constructed to provide a structure to the complex of inter-related rationale issues.

Because lunar habitat design is an exploratory domain, there is no such thing as a comprehensive or objective view of the field. Case studies from particular interpretive perspectives provide the only base on which new design efforts can be built. Although it is often possible to systematize the information in a seed or in a system that has accumulated more designs through use, the result of this kind of "re-seeding" can make no claim to objectivity or comprehensiveness. The act of reorganizing itself proceeds under a certain interpretation or mixture of interpretations of the field, and interesting future designs will focus on new approaches and concerns that were not previously thought of.

Hermes includes a language in which designers can express their concerns. There is also a system of interpretive contexts that can be used for grouping together a set of definitions in the language. Together, these features support the creation and sharing of interpretive perspectives on design artifacts in the computer system.

A language for disclosure and computation

Language is the ultimate medium for interpretation. Our ability to use language is what allows us to disclose things as certain kinds of things, and thereby to comprehend them. This is what Gadamer has in mind when he claims that "being, which can be understood, is language" (Gadamer, 1960). As noted above, it is in the reflective conversation with the materials of the design situation that the artifact and the designer are both disclosed as what or who they are. This happens in the process of explication in which what was tacitly anticipated becomes expressed in language.

A primary stage of language use is naming. Accordingly, the Hermes system allows all objects in the design environment to be named by the system user. Graphical objects in a drawing, textual statements in the rationale, critics, etc. can all be named. This gives the user the ability to refer to them in other statements, such as critics and queries, and to access annotations attached to them.

Perhaps the next most basic use of language is for categorization. For instance, statements in the Hermes issue base (or any other objects in the system) can be categorized by the user when they are created. The links connecting them are also given a type. Thus, an "answer" might be related to an "argument" via a "justification" relationship. Then one can request a display of all the "argument" statements that are related by "justification" links to "answer" statements of a given "issue" statement. Queries like this are fundamental to the ability of Hermes to support interpretation. Of course, the types themselves are created by users, as are all of the language's terms and constructs (predicates, conditional phrases, filtering clauses, interpretation expressions, critics, queries, etc.).
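The kind of query just described -- all "argument" statements related by "justification" links to "answer" statements of a given "issue" -- can be sketched over a toy node-and-link store. The data model and the "resolution" link type here are invented for illustration; the paper does not specify Hermes' internal representation.

```python
# A toy hypermedia store: typed nodes and typed links between them.
nodes = {
    1: ("issue", "Where should the bunks go?"),
    2: ("answer", "Place the bunks at loft level."),
    3: ("argument", "This frees floor area for the corridor."),
}
links = [
    (1, 2, "resolution"),      # issue -> answer (link type invented)
    (2, 3, "justification"),   # answer -> argument
]

def related(node_id, link_type):
    """All nodes linked from node_id by links of the given type."""
    return [dst for src, dst, ltype in links
            if src == node_id and ltype == link_type]

def justifying_arguments(issue_id):
    """Arguments justifying any answer of the given issue."""
    return [nodes[arg][1]
            for answer in related(issue_id, "resolution")
            for arg in related(answer, "justification")
            if nodes[arg][0] == "argument"]
```

Because both node types and link types are user-defined, the same traversal pattern serves for whatever relationships a designer has introduced.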

Because Hermes needs to display information in accordance with interpretations that are not pre-defined but are defined by the user, all displays must be computed dynamically. This is done with queries as opposed to the page-based approach of many hypertext systems. In a system like HyperCard, a presentation of design rationale might contain a page full of issues. Embedded with an issue would be a button for its justification. Clicking on that button would bring up another page of text presenting the justification. In Hermes, however, the justification must be recomputed based on the current interpretation. This is done by executing a query based on the information desired (e.g., justifications of an answer to a certain issue) and based on the definition of the current interpretation. The results of the query are then displayed, in place of a pre-formatted page.

The Hermes Disclosure Language defines all displays of information in the system. In a sense, it is a query language which searches through the database of design drawings and textual rationale to select and format data for displays. The language is specifically defined to correspond to the representation of information in the system. Furthermore, it is designed to be as English-like as possible, to make it easy for users to interpret. At the same time, it must be structured for the computer software to operate upon it, and to do so in an efficient manner. So the language is itself a computer representation of user intentions.

The disclosure language is integral to Hermes. Computations can be defined by users in the language. For instance, a calculation of total private space in a lunar habitat could be expressed using a predicate for privacy, some measurements of graphical objects in the drawing, and arithmetic operations. Critics are also defined in the language, as are display queries. The language allows critics and queries to be built up modularly from component definitions of predicates, filtering clauses, etc. So, the calculation of total private space could be named and referred to in a critic which checked that the result of that computation was at least a certain amount per astronaut. A query could request that all private spaces in the drawing be displayed, or highlighted, or shown in red. (Cf. Stahl, 1991, for a detailed discussion of the language.) These definitions can be modified in different interpretive contexts, changing the effects of calculations, queries and critics in those contexts.
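A calculation of the kind described -- total private space, with a critic testing it against a per-astronaut minimum -- might be sketched as follows. All names, areas and thresholds here are illustrative; the actual disclosure language is English-like rather than Python, and its "private" predicate would itself be user-defined.

```python
# A toy drawing: graphical objects with measurements and a user-applied
# "private" predicate (all values invented for illustration).
habitat = [
    {"name": "bunk 1", "area": 2.5, "private": True},
    {"name": "bunk 2", "area": 2.5, "private": True},
    {"name": "galley", "area": 6.0, "private": False},
]

def total_private_space(drawing):
    """A named computation: total area of the spaces the user deems private."""
    return sum(space["area"] for space in drawing if space["private"])

def private_space_critic(drawing, crew_size, minimum_per_astronaut=2.0):
    """A critic that refers to the named computation and tests a threshold."""
    if total_private_space(drawing) < crew_size * minimum_per_astronaut:
        return "Insufficient private space per astronaut."
    return "No problems were found for this design."
```

Redefining the "private" predicate in a different interpretive context would change the result of the computation, and hence the verdict of the critic, without touching the critic's own definition.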

Interpretive contexts for shared perspectives

Hermes allows its users to define contexts and switch between them. These contexts provide a system for establishing, organizing and sharing interpretation. A given context might contain an inter-related set of language constructs (types, predicates, expressions, critics, queries) that articulate an individual's or group's perspective on design.

Figure 1.

A context simply consists of a name for the context and a list of old contexts that are inherited by the new one. This creates a hierarchy of contexts. In Figure 1, for instance, a new context is being created which will inherit from the context named "Gerhard Fischer". Since Fischer's context inherits from Eisenberg's, which inherits from Lewis' and others', the new context will automatically have access to any information in these other perspectives.

Suppose that Lewis defined a clause in the language for entering or displaying "deliberation" as a tree of issues with their answers with their argumentation. Then this definition would be active in the new context as well. However, Eisenberg or Fischer could have redefined this term in their own contexts. That would not change the definition for Lewis, but it would affect the definition inherited by the new context. Of course, the new context could redefine the term again. In this way, contexts can share definitions. They can also make modifications which do not affect the original definitions that they share. However, if the definition that is inherited is at some point changed, for instance by Lewis, then that change is carried through to the new context.
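The inheritance and shadowing behavior described here can be sketched as a simple lookup chain (an illustrative reconstruction, not Hermes' implementation): a term is resolved first in the active context, then along its inherited contexts, so a redefinition in a child shadows the inherited definition without altering it.

```python
class Context:
    """An interpretive context: local definitions plus inherited contexts."""
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)   # inherited contexts, in priority order
        self.definitions = {}          # local definitions and redefinitions

    def define(self, term, value):
        self.definitions[term] = value

    def lookup(self, term):
        if term in self.definitions:   # a local definition shadows inheritance
            return self.definitions[term]
        for parent in self.parents:    # otherwise search the inherited chain
            try:
                return parent.lookup(term)
            except KeyError:
                continue
        raise KeyError(term)

# The hierarchy from the example: Lewis -> Eisenberg -> Fischer -> new context.
lewis = Context("Lewis")
lewis.define("deliberation", "issues > answers > arguments")
eisenberg = Context("Eisenberg", [lewis])
fischer = Context("Gerhard Fischer", [eisenberg])
new_context = Context("New", [fischer])
```

With this structure, a redefinition by Eisenberg changes what the new context sees without touching Lewis' definition, while a later change made by Lewis propagates to every context that has not overridden the term. Because the parent contexts are referenced rather than copied, inheriting requires no copying of data.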

Figure 2.

Here is an example of contexts being used for different interpretations. In Schoen's experiment with architects designing a library from a given footprint, each of the subjects saw the building space differently and therefore defined the design task differently. Figure 2 shows how the space might be graphically represented in Hermes in the four cases reported by Schoen. If the four architects had defined their own contexts and inherited a context that contained the original sketch of the building footprint, then they could have modified the sketch in their own context. If one then asked the system to disclose the sketch of the library, it would appear as appropriate for whichever context one was in. The different graphic representations would form the basis for the development of different terms for discussing the design and for formulating different rationale.

Eisenberg modified the answers and arguments in his context. By selecting a different context, we disclose a different display in response to the same query. From the rationale given, we might infer that Lewis is designing from an interpretive perspective primarily concerned with traffic flow and only secondarily with privacy issues. Eisenberg is particularly concerned with establishing a separation of public and private areas, and also with minimizing the considerable amount of space taken up by the bunks.

The graphic representation of the lunar habitat has also changed from Figure 3 to Figure 4. The bunks have been rotated and rearranged. An additional area of stowage has been added. (It is not clear from the floorplans shown, but the bunks and stowage are at loft level, above the normal work area and corridor.) Although they look like two different designs, they are really just two views of one object, the lunar habitat design that is being worked on jointly and viewed in two different contexts.

Figure 3.

Figure 4.

These figures also illustrate how interpretive critics can work. Assume that Lewis has finished his design. Then Eisenberg comes along with his concern for the separation of public and private spaces. He defines a critic that tests for the separation of these spaces and displays the message that appears in the Critique window of Figure 3. At that point, Eisenberg decides to create his own context and to inherit in it the work of Lewis' context. He then makes the changes to the drawing and rationale and tests his new version with the critic. Now the critic responds with, "No problems were found for this design", as in Figure 4. Eisenberg now has a new critic which he can use to test other drawings that may be in the database to see how they hold up under his interpretive perspective.

Inheritance of contexts is a powerful mechanism for two reasons. First, a new context can easily and instantly acquire definitions of terms in the disclosure language, textual contents in the issue base, and graphical figures of designs from as many other contexts as it wants. This is done using techniques of virtual copying that require no overhead of time or computer memory. Second, procedures of the disclosure language and data in displays are computed dynamically. That means that queries which display information always use the definitions of the procedures and data which correspond to the currently active context. So, if Fischer is interested in viewing design rationale with a different interpretation of deliberation, he can modify the definition, and then all displays whose query uses that term at some level will be changed to correspond to the new interpretation.

Hypermedia for representation and integration

Modern scientific knowledge has been made practical by the medium of written language. (Donald, 1991; Norman, 1993) Writing provided an external memory for people, overcoming the limitations of human memory, especially short-term working memory. It let them put down their ideas where they and other people could view them, criticize them, and refine them. It facilitated the communication of ideas and the evolution of shared perspectives. The Hermes design environment -- named after the wing-footed Greek messenger god credited with discovering both spoken and written language -- aims to extend the medium of external memory from static paper to a highly computational medium. The idea is to represent design ideas, graphical concepts, rationale and interpretive perspectives in a system which can dynamically make use of these representations to produce displays which disclose new views of the design situation for people to react to.

Traditional AI always sought clever representation schemes which allowed an automated system to solve the problems of well-known and narrowly-defined domains. The perspective on design methodology presented in the first part of this paper argues for a more flexible representation style which empowers the intuitive skills of humans rather than trying to replace them with algorithmic computations. An English-like language in which all terms are defined by the users is one way to do this. A system of personal and shared interpretive contexts in which people can collect drawings, argumentation and language expressions which correspond to their unique interpretive concerns is a second way. In Hermes, the disclosure language and the interpretive context mechanisms are used to define an extended form of hypermedia, in which data of any medium and their associated procedural methods can be unified in one representation system.

In a hypermedia system, various kinds of media like text, line drawings, pictures, numbers, conditional propositions, sounds, video clips, and animations can be stored as nodes within one database. Each medium has methods associated with it. For instance, each medium would have its own display method, so that text would be displayed as lines of characters in a certain font and size, which wraps to the next line when it reaches the right margin; numbers would be displayed in a certain decimal format; drawings would be displayed graphically. In Hermes, the types of nodes (e.g., "issue") and the components of the language (e.g., "deliberation") are also stored in nodes.
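The per-medium display methods just described can be sketched as classes sharing a common display protocol. The class names and formats here are invented for illustration; Hermes' actual media and display conventions are not detailed in this paper.

```python
import textwrap

class TextNode:
    """Text medium: displays as lines of characters wrapped at a margin."""
    def __init__(self, text, width=40):
        self.text, self.width = text, width
    def display(self):
        return "\n".join(textwrap.wrap(self.text, self.width))

class NumberNode:
    """Number medium: displays in a fixed decimal format."""
    def __init__(self, value):
        self.value = value
    def display(self):
        return f"{self.value:.2f}"

class DrawingNode:
    """Drawing medium: would render graphically; summarized textually here."""
    def __init__(self, shapes):
        self.shapes = shapes
    def display(self):
        return f"<drawing: {len(self.shapes)} shapes>"
```

A display query need only ask each node to display itself; the node's medium determines the format, so heterogeneous media can coexist in one database.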

Hypermedia consists of these nodes and links between pairs of nodes. The links can have types just like the nodes. By means of the links, nodes can be attached to each other. Thus, in Hermes, any node can have textual annotation attached to it. It could also be linked to the name of its creator and the date it was last modified. Various procedures are available to the user of hypermedia to navigate through it. In Hermes, viewing of information in the system is controlled primarily by means of the disclosure language, and always takes into account the active context. Through the language and context mechanisms, Hermes gives its users extensive access to all the information and control over its presentation.

Hermes' extended form of hypermedia provides an integrated representation for the various kinds of data and procedures which are needed to support design. This representation unifies all the information in a single medium which facilitates complex inter-relationships, is computationally active, supports shared interpretive contexts, and promotes control by human users. Thereby, Hermes can support creativity in design, rather than trying to automate or rigidify the design process. With its disclosure language and its interpretive contexts, Hermes illustrates the approach of hermeneutic software design based on principles of design methodology and on the nature of human interpretation.


The perspective on design methodology and the approach to computer support for design presented here grew out of ideas of Ray McCall of the School of Environmental Design, Gerhard Fischer of the Department of Computer Science, and other members of the Human-Computer Communication research group at the University of Colorado at Boulder. The hermeneutic approach stems from Hans-Georg Gadamer's classes at the University of Heidelberg in 1967/68.

The research in providing computer support for the domain of lunar habitat design was supported in part by a grant to Ray McCall from the Colorado Advanced Software Institute (CASI) for 1991-92 in collaboration with Johnson Engineering, Inc. of Boulder. CASI is sponsored in part by the Colorado Advanced Technology Institute (CATI), an agency of the State of Colorado. CATI promotes advanced technology education and research at universities in Colorado for the purpose of economic development.

This research was also supported by the National Science Foundation under grant No. IRI-9015441.


Alexander C (1964) Notes on the Synthesis of Form. Cambridge: Harvard University Press.

Compton WD, Benson CD (1983) Living and Working in Space: A History of Skylab. Washington, DC: NASA.

Conklin J, Begeman M (1988) gIBIS: A Hypertext Tool for Exploratory Policy Discussion. Proceedings of the Conference on Computer Supported Cooperative Work. New York: ACM. 140.

Design Edge (1990) Initial Lunar Habitat Construction Shack. Design control specification. Houston, TX.

Donald M (1991) Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition. Cambridge: Harvard University Press.

Fischer G, Lemke A (1988) Construction Kits and Design Environments: Steps Toward Human Problem-Domain Communication. Human-Computer Interaction, 3, 3, 179.

Gadamer H-G (1960) Wahrheit und Methode [Truth and Method]. Tuebingen: Mohr.

Heidegger M (1927) Sein und Zeit [Being and Time]. Tuebingen: Niemeyer.

Kazmierski M, Spangler D (1992) Lunatechs II: A Kit of Parts for Lunar Habitat Design. Unpublished project report, College of Environmental Design, University of Colorado at Boulder.

McCall R, Bennett P, d'Oronzio P, Ostwald J, Shipman F, Wallace N (1990) Phidias: A PHI-based Design Environment Integrating CAD Graphics into Dynamic Hypertext. Proceedings of the European Conference on Hypertext (ECHT '90).

Moore GT, Fieber JP, Moths JH, Paruleski KL (1991) Genesis Advanced Lunar Outpost II: A Progress Report. In Blackledge RC, Redfield CL, Seida SB (Eds.), Space -- A Call for Action: Proceedings of the Tenth Annual International Space Development Conference. San Diego, CA: Univelt, 55.

Norman D (1993) Things That Make Us Smart. Reading, MA: Addison-Wesley.

Rittel H, Webber M (1972) Dilemmas in a General Theory of Planning. Working Paper No. 194. University of California at Berkeley.

Schoen D (1983) The Reflective Practitioner. New York: Basic Books.

Schoen D (1992) Designing as Reflective Conversation with the Materials of a Design Situation. Knowledge-Based Systems, 5, 3.

Silverman B (1992) Survey of Expert Critiquing Systems: Practical and Theoretical Frontiers. Communications of the ACM, 35, 4, 106.

Simon H (1973) The Structure of Ill-structured Problems. Artificial Intelligence, 4, 181.

Simon H (1981) The Sciences of the Artificial. Cambridge: MIT Press.

Snodgrass A, Coyne R (1990) Is Designing Hermeneutical? Working paper. Faculty of Architecture, University of Sydney.

Stahl G (1991) A Hypermedia Inference Language as an Alternative to Rule-Based Expert Systems. Technical Report CU-CS-557-91. Computer Science Department, University of Colorado at Boulder.
