The central goal of this chapter is to describe six principles that have proven to be especially useful to help researchers (or teachers, curriculum designers, or assessment specialists) create thought revealing activities that we refer to as model-eliciting activities. The first section describes general characteristics and purposes of model-eliciting activities. The second section gives examples of several model-eliciting activities, and it also describes typical solutions that students generate to such problems. The third section states several disclaimers about model-eliciting activities, which tend to cause confusion if they are not addressed early. The fourth section describes difficulties that model-eliciting activities were designed to address in research; and, the fifth section describes principles for designing productive model-eliciting activities.
Finally, the last sections describe several common misconceptions or questions that often are raised about how model-eliciting activities can be used in teaching, research, or assessment.
This chapter is especially concerned with thought revealing activities that focus on the development of constructs (models or conceptual systems that are embedded in a variety of representational systems) that provide the conceptual foundations for deeper and higher order understandings of many of the most powerful ideas in precollege mathematics and science curricula.1 Therefore, the activities that are emphasized herein are not only thought revealing, but also model-eliciting. That is, the descriptions, explanations, and constructions that students generate while working on them directly reveal how they are interpreting the mathematical situations that they encounter by disclosing how these situations are being mathematized (e.g., quantified, organized, coordinatized, dimensionalized) or interpreted.
The kind of thought revealing activities discussed in this chapter are useful for instruction and assessment as well as for research; and, they also are useful for investigating (or promoting) the development of teachers as well as students. How can they serve such diverse functions? First, to learn more about the nature of students' (or teachers') developing knowledge, it is productive to focus on tasks in which the products that are generated reveal significant information about the ways of thinking that produced them.2 Second, if intermediate solution steps are externalized in forms that can be examined (by researchers, teachers, or students themselves), then the by-products of these learning or problem solving activities often generate trails of documentation that go beyond providing information about final results; they also reveal important information about the processes that contributed to these results.3 Third, when a series of trial ways of thinking is externalized, tested, and refined or extended repeatedly, such thought revealing activities often go beyond providing documentation about completed learning or problem solving experiences; they also support the productivity of ongoing learning or problem solving experiences. Therefore, thought revealing activities of this type not only help to document development; they also promote development. Furthermore, because one of the most effective ways to help teachers improve their teaching practices is to help them become more familiar with their students' ways of thinking, thought revealing activities for students often provide ideal contexts for analogous types of thought revealing activities for teachers.
Of course, if our only goals were to investigate students' understandings of behavioral objectives of instruction, then model-eliciting activities would not be needed because proficiency with low level facts and skills tends to be directly observable using the kind of problems that are emphasized in traditional textbooks and tests. But, even when these latter types of traditional problems are modified by asking students to explain the reasoning they used to arrive at their solutions, this chapter explains why such problems are seldom effective for revealing students' understandings of the process objectives of instruction. Moreover, they tend to be completely inadequate for investigating the kinds of cognitive objectives of instruction that are the central concerns of this chapter.
For most problems or exercises that occur in mathematics textbooks and tests, the problem solver's goal is merely to produce a brief answer to a question that was formulated by others (within a situation that was described by others, getting from givens to goals that are specified by others, and using strings of facts and rules that are restricted artificially by others). But, for model-eliciting activities, the heart of the problem is for students themselves to develop an explicit mathematical interpretation of situations. That is, students must mathematize situations.
Mathematizing involves making symbolic descriptions of meaningful situations, whereas the kinds of word problems that are in traditional textbooks and tests tend to emphasize almost exactly the opposite kind of processes. In other words, beyond difficulties associated with relevant computations, the problematic aspects of most textbook problems usually involve making meaning out of symbolic descriptions.
WHO FORMULATED OUR CURRENT CONCEPTION OF MODEL-ELICITING ACTIVITIES?
Because the goal of this chapter is to describe six principles that have proved to be especially useful in creating thought revealing activities for research, assessment, and instruction, it is important to emphasize that the principles described in this chapter were not invented by researchers sitting in laboratories. Instead, they were proposed, tested, and refined by hundreds of teachers, parents, and community leaders who worked with researchers in a series of 15-week, multi-tiered teaching experiments of the type recounted in the chapter on multi-tiered teaching experiments (Lesh & Kelly, chap. 9, this volume). The goal of these projects was to develop activities for instruction and assessment with the following general characteristics:
Among the tasks that participants in the authors' studies ranked highest on the aforementioned criteria, most were similar to case studies used in professional schools in fields ranging from business to engineering and medicine. These tasks were not merely surrogates for activities that are meaningful and important in real life situations. They directly involved samples of work taken from a representative collection of complex tasks that are intrinsically worthwhile, rather than only providing preparation for something else. Highly rated activities also tended to require no less than a full class period for students to complete, and realistic tools and resources generally were available (including calculators, computers, consultants, colleagues, and "how to" manuals). Also, teams of people with diverse expertise often worked together to generate appropriate responses. Consequently, planning, communication, monitoring, and assessing were important processes in addition to the computational and factual recall abilities traditionally considered to be the (sole) components of mathematical thinking.
EXAMPLES OF MODEL-ELICITING ACTIVITIES
An example of a model-eliciting activity is given in Appendix A; it is called The Sears Catalogue Problem (Lesh & Zawojewski, 1987). As with other model-eliciting activities that the authors have emphasized, The Sears Catalogue Problem was designed to require at least one full class period for three-person teams of average-ability middle school students to complete. To set the stage for the problem, the data that students are given includes two newspapers (which include "back to school" mini-catalogues), one published recently and one published 10 years earlier. The 10-year-old newspaper includes an editorial article that describes the typical allowance that students in a given community received at the time that the old newspaper was published. The goal of the problem is for students to write a new newspaper article describing to readers what a comparable allowance should be expected to be 10 years in the future. The new newspaper article should be persuasive to parents who might not believe that students' allowances should increase, like adults' salaries, to reflect increasing prices that are revealed in the newspapers and mini-catalogues.
Other examples of model-eliciting activities are given in the chapter on operational definitions (Lesh & Clarke, chap. 6, this volume). Also, in chapter 23, this volume, the complete transcript is given of a solution that was generated by one group of average-ability seventh-graders who worked on a problem that we call "The Summer Jobs Problem" (Katims, Lesh, Hole, & Hoover, 1994b). The goal of The Summer Jobs Problem might be described (by a scientist) as focusing on the formulation of an operational definition that describes the problem solvers' notion of how to measure the productivity of workers who are applying for a particular kind of summer job. Or, from the perspective of a business person, the students' goal might be described as producing a rule that is useful for making decisions about who to hire for a particular job. In either case, an important characteristic of such problems is that the descriptions, explanations, and justifications that are needed to respond to the problem are not only accompaniments to useful responses; they are the heart of useful responses. Also, because the problem is to be addressed by three-person teams of students, solution processes usually make heavy demands on metacognitive abilities involving planning, monitoring, and assessing as well as on communication capabilities and representational fluency for purposes such as: (a) justifying and explaining suggested actions and predicting their consequences; (b) monitoring and assessing progress; and (c) integrating and communicating results in a form that is useful to others. Therefore, solutions to the problem tend to emphasize a much broader notion of mathematical competencies than those stressed in traditional textbooks and tests.
Another important characteristic of model-eliciting activities is that students generate useful solutions (descriptions, explanations, and constructions) by repeatedly revealing, testing, and refining or extending their ways of thinking. Consequently, they tend to go through sequences of interpretation-development cycles in which the givens, goals, and relevant solution processes are thought about in different ways. For example, a typical series of these cycles is described in the transcript that accompanies the chapter about iterative videotape analyses (Lesh & Lehrer, chap. 23, this volume), where a sequence of 14 distinct interpretation cycles can be identified. In general, the transition from initial to final interpretations often can be described in the following manner.
Characteristics of Students' Early Interpretations
As the transcript of The Summer Jobs Problem illustrates, students' early interpretations of model-eliciting activities frequently consist of a hodgepodge of several disorganized and inconsistent ways of thinking about givens, goals, and possible solution steps. For example, when three people are working in a group, they often fail to recognize that each individual is: (a) thinking about the situation in somewhat different ways; (b) focusing on different information and relationships; (c) aiming at somewhat different goals; and (d) envisioning different procedures for achieving these goals. In fact, even a single individual often fails to maintain a consistent interpretation and switches unconsciously from one way of thinking to another without noticing the change.
Because problems similar to The Summer Jobs Problem frequently involve both too much and not enough information, students' first representations and ways of thinking typically focus on only a subset of the information available; and, they often seem to be excessively preoccupied with finding ways to simplify or aggregate (sum, average) all of the information. For example, in their early interpretations of The Summer Jobs Problem, students often concentrate on earnings while ignoring work schedules; or, they become engrossed with fast sales periods while ignoring slow and steady periods; or, they center on only 1 month and ignore the other 2 months. Yet, at the same time that they are preoccupied with finding ways to simplify or reduce the information, they also may express concerns about not having additional information that they believe is relevant. For example, for The Summer Jobs Problem, such information may include facts about the needs, flexibility, or friendliness of potential employees, or their willingness to work. Although this information is not available, Maya, the client in the problem, would not be helped if students refused to respond to her request merely because some potentially relevant information was unavailable. Even at the early stages of thinking about the problem, students generally recognize the need to develop a simplified description (or model) that focuses on significant relationships, patterns, and trends and that simplifies the information in a useful form, while avoiding or taking into account difficulties related to surface-level details or gaps in the data.
Characteristics of Intermediate Interpretations
Later, more sophisticated ways of thinking tend to go beyond organizing and processing isolated pieces of data toward focusing on relationships, patterns, or trends in the data. For example, in The Summer Jobs Problem, students often place increasing emphasis on ratios or rates involving time and money (e.g., dollars per hour). In their earliest interpretations, however, they may have failed to differentiate between (a) an average of several rates and (b) a single rate that is based on average earnings and times. In the first case, they begin by calculating rates within each category (slow, steady, fast; June, July, August) and then find ways to aggregate these rates. In the second case, they begin by calculating sums or averages across categories (slow, steady, fast; June, July, August) and then calculate rates based on these aggregates.
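The difference between these two aggregation strategies is easy to overlook but numerically real. The following sketch uses invented hours and earnings for a single hypothetical worker (not the actual data from The Summer Jobs Problem) to show that the two strategies can yield different dollars-per-hour figures for the same data:

```python
# Hypothetical earnings data for one worker: (period label, hours, dollars).
# All numbers are invented for illustration.
periods = [
    ("slow",   20.0,  90.0),
    ("steady", 40.0, 300.0),
    ("fast",   10.0, 150.0),
]

# Strategy (a): compute a rate within each period, then average the rates.
rates = [dollars / hours for _, hours, dollars in periods]
average_of_rates = sum(rates) / len(rates)

# Strategy (b): total the earnings and hours first, then compute one rate.
total_hours = sum(hours for _, hours, _ in periods)
total_dollars = sum(dollars for _, _, dollars in periods)
rate_of_totals = total_dollars / total_hours

print(f"average of rates: {average_of_rates:.2f} dollars/hour")  # 9.00
print(f"rate of totals:   {rate_of_totals:.2f} dollars/hour")    # 7.71
```

Strategy (a) weights every period equally regardless of how long it lasted, whereas strategy (b) implicitly weights each period by its hours; neither is "the" right answer, which is precisely why students must decide which interpretation serves the client's purpose.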
Characteristics of Final Interpretations
Students' final responses often contain conditional statements, including various ways to take into account additional information that wasn't provided. For instance, final solutions of "The Summer Jobs Problem" may enable Maya to assign different "weights" to reflect her views about the relative importance of information from different months or different periods of work. They may enable her to adjust suggested weights to suit her preferences. They may use supplementary procedures, such as interviews, to take into account additional information; or, they may consider new hiring possibilities that were not suggested (such as hiring more or fewer full-time or part-time employees). Also, rather than applying a single rule uniformly across all of the employees, students may use a series of telescoping procedures. For example, as the transcript in Appendix to chapter 23 illustrates, one solution began by selecting employees for a "must hire" category; then, a different procedure was used to select employees among the remaining possibilities. Also, instead of relying on sums or averages to simplify the information, graphs like the ones shown in the Appendix may be used to focus on trends.
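The adjustable "weights" idea can be made concrete with a small sketch. The workers, per-month rates, and weight profiles below are all invented for illustration; the point is only that changing the weights can change which applicant ranks highest, which is exactly the kind of adjustability that a client like Maya might want:

```python
# Invented per-month earning rates (dollars/hour) for two hypothetical workers.
workers = {
    "Kim":   {"June": 6.0,  "July": 8.0, "August": 9.0},
    "Robin": {"June": 12.0, "July": 7.0, "August": 5.0},
}

def weighted_score(rates, weights):
    """Combine per-month rates using the client's importance weights."""
    total_weight = sum(weights.values())
    return sum(rates[month] * w for month, w in weights.items()) / total_weight

# Two weight profiles the client could choose between (or adjust further).
recent_matters_most = {"June": 1, "July": 2, "August": 3}
all_months_equal    = {"June": 1, "July": 1, "August": 1}

for name, rates in workers.items():
    print(name,
          round(weighted_score(rates, recent_matters_most), 2),
          round(weighted_score(rates, all_months_equal), 2))
```

With these invented numbers, Kim ranks first when recent months count most, but Robin ranks first when all months count equally, so the hiring recommendation genuinely depends on a judgment that only the client can make.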
In general, for model-eliciting activities, solution processes tend to conform to the following pattern. Initial interpretations organize and simplify the situation (so that additional information can be noticed or so that attention can be directed toward underlying patterns and regularities, which, in turn, may force changes in the interpretation that was used). That is, the new information that is noticed creates the need for more refined or more elaborate descriptions or interpretations. Then, in turn, the new interpretation that emerges often creates the need for another round of noticing additional information. Thus, interpretations can be unstable and evolving; and, the general cycle of development repeats until students judge the match between the model and the modeled to be sufficiently close and sufficiently powerful to produce the desired results without further adaptations.
Before describing principles for designing effective model-eliciting activities, this section makes four brief disclaimers which tend to cause confusion if they are not addressed early.
Although this chapter emphasizes model-eliciting activities and problems that involve simulations of situations that might occur in students' everyday lives, the authors are not claiming that these are the only kind of thought revealing activities that can play useful roles in research, instruction, or assessment. Rather, they represent an especially informative and productive class of problems that emphasize productive perspectives about the nature of mathematics, the nature of real life situations in which mathematics is useful, and the nature of the abilities that contribute to success in situations encountered in everyday life.
Teachers who helped to develop our six principles discovered that, when they examined traditional textbooks and tests, nearly all of the problems violated all of the principles described in this chapter. In fact, even in current curricular materials being developed under such banners as authentic assessment, performance assessment, or applications-oriented instruction, few problems could be found that satisfy the principles that they believed to be critical for the purposes emphasized in this chapter.
Although some people might be concerned that attention to pure mathematics might suffer if a small amount of attention is paid to problem solving situations that might occur in the present or future lives of students, the authors of this chapter believe that there is little danger that school mathematics will swing too far in the direction of emphasizing mathematics or science understandings and abilities that are genuinely useful outside school. Furthermore, most of the design principles that apply to model-eliciting activities apply equally well to other types of activities, such as skill-building activities or pure mathematics activities in which mathematical constructs are explored or compared, rather than being elicited or applied.
When the authors advocate problems that might reasonably occur in everyday lives outside of schools, they are not attempting to define real life formally or to argue that any particular view of reality is superior. Instead, the main point emphasized is that if a goal of research is to investigate the nature of students' developing knowledge, then it behooves researchers to be willing to enter the reality of students occasionally instead of always expecting them to come to the world of the researcher and educator. Nonetheless, throughout this chapter, it is stressed that naive notions about what is "real" for a given student often lead to actions that are antithetical to the design principles underlying model-eliciting activities. Certainly, most middle school children's realities are different from those that many adults attribute to them.
Another relevant observation about real life activities is that, in modern jobs beyond the entry level, and especially in jobs that make heavy use of sophisticated conceptual technologies, a surprising number of the mathematical activities address "what if" questions that stretch the boundaries of reality. For example, the conceptual systems that humans create to make sense of their experiences also mold and shape the world in which these experiences occur by creating new types of complex, dynamic, interacting systems ranging in size from large-scale communication and economic systems to small-scale systems for scheduling, organizing, and accounting in everyday activities. Thus, reality itself tends to be filled with human constructs, and the line between applied and pure mathematics activities blurs. Similarly, even in the case of pure mathematics activities of creating, exploring, or transforming patterns that do not resemble existing systems (e.g., communication systems, management systems, accounting systems, economic systems, organizational systems), the products that are produced often turn into applications when the world is transformed by their existence.
The authors are not identifying model-eliciting activities with current curricula reform movements focusing on performance assessment. Yet, after experiencing the kinds of activities that are described in this chapter, many participants did think of performance assessment as a primary function that these model-eliciting activities would serve. That is, the activities were designed explicitly to be useful for instruction and assessment as well as for research, and they were explicitly designed so that students would be able to learn and document what they are learning at the same time. Nevertheless, there are several reasons to be cautious about associating model-eliciting activities with performance assessment. First, focusing on performance is often very different from focusing on powerful constructs. Beyond this, the term assessment often is treated as if it refers to nothing more than testing, which suggests a number of unfortunate connotations that the authors do not wish to associate with model-eliciting activities. For example, tests usually come at the end of instruction, whereas model-eliciting activities often are most useful at the start, where the emphasis is on identifying students' conceptual strengths and weaknesses and on optimizing progress rather than making value judgments about work that has been completed or skills that have been mastered already. The goal of testing tends to be to place a "good-bad" label on students or their work; however, the goal of model-eliciting activities is to reveal the nature of students' thinking. Tests are designed to measure what is, not to change it; on the other hand, during model-eliciting activities, students routinely develop constructs that are new to them, which is another way of saying that they are learning.
The authors are not identifying model-eliciting activities with traditional conceptions of applied problem solving. In mathematics and science education, problem solving often is defined as "activities in which students must get from givens to goals when the path is not immediately obvious." Similarly, productive heuristics traditionally are thought of as providing answers to the question, "What can you do when you are stuck?" But, when attention focuses on model development in situations that can occur in everyday life, then the essence of many problem solving situations is finding ways to interpret them mathematically. In such situations, it is generally more important for students to find ways to adapt, modify, and refine ideas that they do have, rather than to try to find ways to be more effective when they are stuck (i.e., when they have no relevant ideas or when no substantive constructs appear relevant, as often happens in puzzles and games).
Table 21.1 describes a shift in emphasis from the traditional view of problem solving to an alternative that is based on model-eliciting activities. This perspective highlights a paradigm shift with potentially deep roots and extensive implications. For a brief account of other key elements of this paradigm shift, see chapter 2 in this volume (Kelly & Lesh).
WHAT CREATED THE NEED FOR MODEL-ELICITING ACTIVITIES IN OUR OWN PAST RESEARCH PROJECTS?
What specific difficulties have model-eliciting activities been designed to address? When ideas for this chapter were discussed in the project on innovative research designs in mathematics and science education (chapters 1 & 2, this volume, give the background of this project), the topic of "problem characteristics" came up most directly in discussions about clinical interviews. For example, in some of the authors' research (Lesh et al., 1989), one of the original reasons why the notion of a model-eliciting activity was developed was to address important problems and opportunities that occurred during classroom-based clinical interviews.
One of the most important factors determining the success of most clinical interviews is the quality of the underlying tasks; and, this is especially true when mathematics and science education research shifts beyond surveying performance on a large number of small problems and moves toward probing students' thinking on a smaller number of larger problems. For example, in productive clinical interviews, it generally is desirable to reduce the number of interventions that researchers are required to make; and, in general, this is possible only when extremely well designed tasks are used.
When some of the authors conducted one-to-one interviews in their research, some of the most significant difficulties arose because such approaches tend to be costly and time consuming for both students and interviewers as well as for the people of many levels and types who analyze the data. Therefore, time constraints made it difficult to include enough sessions, enough subjects, and enough tasks to thoroughly investigate the many phenomena that it was desirable to emphasize. On the other hand, opportunities also arose when teachers and colleagues saw how much useful information often came out of these labor-intensive interviews. As a result, the researchers wondered whether there might be some way to scale up the interviews so that they could be used with larger numbers of students or by busy teachers who did not have long hours to spend interviewing individual students or watching videotapes.
The following three observations suggested productive strategies to respond to the challenges and opportunities described above.
One reason the authors believed that one-to-one interviews were needed was based on the assumption that sophisticated branching sequences of probing questions were required in order to follow the thinking of individual students and to document a variety of levels and types of responses. However, it was observed that the most expert interviewers varied a great deal in the frequency and duration of their interventions. For some interviewers, the ratio of "researcher talk" to "student talk" was nearly one-to-one, and the time lag between interventions was short. But, for other experts who were equally adept at producing convincing documentation about the nature of students' thinking, the interventions tended to be both brief and infrequent. Consequently, the model-eliciting activities described in this chapter were developed, in part, by studying the interviewing techniques of the "quieter" types of clinical interviewers. A goal was to design problems that keep students thinking productively but depend on only a minimum number of interventions and that encourage them to reveal explicitly a great deal of information about their evolving ways of thinking.
To accomplish this goal, it was important to design problems that would be: (a) self-adapting, in the sense that students would be able to interpret them meaningfully using different levels of mathematical knowledge and ability, as well as using a variety of different types of mathematical descriptions or explanations; (b) self-documenting, in the sense that the responses that students produce would reveal explicitly how they are thinking about the problem situation; and (c) self-monitoring, in the sense that students themselves would have a basis for monitoring their own thinking and would continue thinking in productive ways without continually needing to depend on adjustments by interviewers.
When tasks with these three characteristics are emphasized, it was found that interviewers usually do not need to make as many adjustments in the problem solving situation because students themselves are able to adapt the problem to fit their own ways of thinking. Ideally, after posing such self-adapting and self-documenting problems, interviewers can be free to roam about a classroom full of students, looking over shoulders, taking notes, and making only those few interventions that are absolutely necessary at appropriate times. In other words, in much the same way that an expert chess master may be able to carry on complex interactions with a roomful of individual chess players, we have found that model-eliciting problems often provide powerful yet practical ways for busy teachers or researchers to carry on simultaneous "interviews" with groups of students.
Another reason one-to-one interviews tend to be expensive is that students' ways of thinking often seem to be observable only by examining the processes that they use to arrive at their responses; and, the only way to observe these processes seems to be through the use of time-consuming analyses of videotapes and transcripts. Therefore, the model-eliciting activities described herein were developed, in part, by studying ways to pose problems so that the final products that students generate would reveal as much as possible about the ways of thinking that created them.
One way to achieve this objective is to state the purpose of the problem in such a way that the solution process becomes an essential part of the product itself. A potential route to do this is to establish problem goals (such as developing constructions, descriptions, explanations, or justifications) that require students to reveal explicitly how they interpreted the situation by imparting what types of mathematical quantities, relationships, operations, and patterns they took into account. For example, in the transcript that is given in the Appendix to the chapter on iterative videotape analyses (Lesh & Lehrer, chap. 23, this volume), the students solve the problem by constructing tables and graphs that serve as "smart tools" to enable them to describe the productivity of workers so that decisions can be made about who to rehire. Consequently, the goal of the problem is to develop a tool for decision making, not only to make a decision. That is, these smart tools are not only parts of the process of producing responses to the problem, they are important parts of the responses themselves. Furthermore, the product that results can reveal a great deal of information about how students were thinking about the problem solving situation.
If model-eliciting activities are thought of as one-to-many interviews, then it is clear that some information may be lost that might have been available otherwise during one-to-one interviews followed by detailed videotape analyses. For instance, even for the best designed model-eliciting activities, students' final results seldom reveal information about rejected ideas that nonetheless influenced the final interpretation that was used and roles that various students played during the solution process. On the other hand, if researchers or teachers are freed from other time-consuming interactions, they often are able to record a great deal of this type of information while students are working. Furthermore, although some information inevitably will be lost when one-to-many interview techniques are used, other information is likely to become available that would not have been apparent using only one-to-one interviews. For example, when model-eliciting activities are used in classroom settings, many more students can be "interviewed," many more problems can be addressed, and the interviewer's time and expertise can be used efficiently when it is most productive, such as during data gathering and data analysis.
When one-to-one clinical interviews are conducted, one goal may be to determine the state of knowledge of a given learner or problem solver, but another goal may be to investigate how this state of knowledge develops over time in response to various types of conceptual challenges. Also, the learner or problem solver who is of interest is not always an individual isolated student. For example, teams of students solve problems; and, they also develop shared knowledge. Of course, if the learner or problem solver is a group, then questions may arise about the state of knowledge of individual students in the group. But, for many research questions that are of interest, the goal is not to form conclusions about either individuals or groups. For instance: What does it mean for students to develop a particular type of deeper or higher order understanding of a given mathematical construct? What is the nature of a typical primitive understanding of the preceding construct? What mechanisms enable learners or problem solvers to develop such a construct from situated to decontextualized knowledge? What factors tend to create the need for learners or problem solvers to develop a particular construct?
None of the answers to these questions involves statements about particular learners or problem solvers, regardless of whether the "individual" is a single person or a group. Instead, they involve statements about the nature of productive learning environments, or about what it means to achieve certain levels or types of understanding for particular mathematical constructs, or about the nature of various dimensions of development for the constructs. Nevertheless, even for researchers or teachers who are interested in making statements about individual students, it still may be useful to investigate model-eliciting activities involving teams of students. One reason is that many of the mathematical understandings and abilities that are desirable for students to develop are meaningful only within social contexts. For example, to the extent that mathematics is about communication, justification, or argumentation, the development of social norms tends to be highly relevant, and student-to-student or student-to-teacher interactions are as relevant as student-to-problem interactions. Furthermore, if Vygotskian perspectives on learning are adopted (Vygotsky, 1978), then one of the most important dimensions of conceptual development involves the gradual internalization of external processes and functions. In group problem solving sessions, it is natural for students to externalize ways of thinking that might remain internal otherwise. Also, the goals of learning or problem solving situations may involve developing shared knowledge, rather than only personal knowledge.
PRINCIPLES FOR DESIGNING PRODUCTIVE MODEL-ELICITING ACTIVITIES
To judge the productivity of model-eliciting activities, the most important criterion to keep in mind is that, when students work on them, they should reveal explicitly the development of constructs (conceptual models) that are significant from a mathematical point of view and powerful from a practical point of view. If this single criterion is satisfied, then the activity generally will be useful for purposes such as providing information that (a) helps teachers to plan effective instruction; (b) helps researchers to investigate the nature of students' developing mathematical or scientific constructs; and (c) helps assessment specialists to recognize and reward a broad range of mathematical capabilities that contribute to success in a technology-based age of information. This section describes six principles for designing activities that meet this criterion. In other words, it describes six principles for creating productive model-eliciting activities.
The Model Construction Principle
Above all, model-eliciting activities are intended to be thought revealing activities. The ways of thinking that need to be highlighted are the conceptual systems that students use to construct or interpret (describe, explain) structurally interesting systems. Therefore, to develop model-eliciting activities that are thought revealing, the first principle to emphasize is that it is desirable for the goal of the activity to include the development of an explicit construction, description, explanation, or justified prediction.
Whereas the problematic aspects of traditional textbook problems tend to involve trying to make meaning out of symbolically stated questions (in addition to computational difficulties associated with the correct execution of relevant skills), model-eliciting activities emphasize almost exactly the opposite kinds of processes. They involve trying to make symbolic descriptions of meaningful situations; that is, they involve mathematizing.
If the solution to a problem involves mathematizing (e.g., quantifying, coordinating, expressing something spatially), then one of the most important products that students create is a model in which a variety of concrete, graphic, symbolic, or language-based representational systems may be needed in order to describe the relationships, operations, and patterns that the underlying model is intended to illustrate. For this reason, to satisfy the model construction principle, a primary question that needs to be asked is: Does the task put students in a situation where they recognize the need to develop a model for interpreting the givens, goals, and possible solution processes in a complex, problem solving situation? Or, does it ask them to produce only an answer to a question that was formulated by others?
What is a model? A model is a system that consists of (a) elements; (b) relationships among elements; (c) operations that describe how the elements interact; and (d) patterns or rules, such as symmetry, commutativity, or transitivity, that apply to the relationships and operations. However, not all systems function as models. To be a model, a system must be used to describe another system, or to think about it, or to make sense of it, or to explain it, or to make predictions about it. Also, to be a mathematically significant model, it must focus on the underlying structural characteristics of the system being described. Therefore, if an activity satisfies the model construction principle, the developers of the activity should be able to name the kind of system that the students are being challenged to construct, and the system should focus on a mathematically significant construct.
How can activities create the need for students to develop, revise, refine, and extend a mathematically significant model? To answer this question, task developers often find it useful to ask themselves: What kinds of situations require anyone (including myself and other adults) to create models? Answers to this question tend to be similar regardless of whether one is working in mathematics, the sciences, everyday life, business, engineering, or another profession in which mathematics is useful. To illustrate:
In traditional mathematics textbooks, tests, and teaching, problems tend to be classified according to the kinds of numbers and number operations that they involve: Do they involve whole numbers, fractions, decimals, ratios, or percents? Do they involve addition, subtraction, multiplication, division, or exponentiation? But, when only such questions are asked, unit labels tend to be treated as if they were mathematically uninteresting (e.g., 30 miles per hour x 20 minutes = [?]); and, even less attention tends to be given to the situations that these numbers are used to describe. Such omissions are important because mathematics is about quantities and quantitative relationships at least as much as it is about numbers and number operations, and because the quantities that are involved include much more than simple counts and measures. For example, they may include any of the following:
Model-eliciting activities emphasize the fact that mathematics is about seeing at least as much as it is about doing. Similarly, in science, it is obvious that some of the most important goals of instruction involve helping students to develop powerful models for making sense of their experiences involving light, gravity, electricity, magnetism, and other phenomena; and, it also is obvious that young students invent models of their own for making sense of these phenomena and that changing their ways of thinking must involve challenging and testing these models. But, because mathematics textbooks and tests usually have been preoccupied with computation situations in which interpretation is not problematic, students seldom develop more than extremely impoverished descriptive systems for making sense of situations involving the aforementioned kinds of quantities. In fact, it often is assumed that students do not (and cannot) develop metaphors, diagrams, language, and symbol systems for describing anything more than simple systems involving counts or measures.
Research on model-eliciting activities suggests a very different picture of what is possible (Lesh et al., 1993). If you build it, they will come! If students clearly recognize the need for a mathematical construct, then, in a remarkable number of instances, we've found that they will invent it. For example, when the authors first began to gather information about students' solutions to The Sears Catalogue Problem, which is given in Appendix A of this chapter, the main model (or reasoning pattern) that students were expected to construct was proportional reasoning of the form A/B = C/D; and, in fact, most of the students observed did end up thinking about the problem using a type of proportional reasoning. However, a high percentage of students went far beyond an interpretation of the problem based on simple ratios, proportions, or linear equations (Lesh & Akerstrom, 1982). For example, to find a useful way to think about the situation, students often invented surprisingly creative ways to deal with issues involving weighted averages, trends, interpolation, extrapolation, data sampling, margins of error, or other ideas that their teachers thought were "too sophisticated" for them to learn. Furthermore, students who proved adept at such problems often were not the same ones who excelled at rule-following exercises or symbol-string manipulations (Lesh, 1983).
What created the need for students to develop descriptions and explanations in this situation? On the one hand, problems like The Sears Catalogue Problem contain an overwhelming amount of relevant information. So, students needed to think about issues such as: How many, and which, items should be considered? Which should be ignored? What should be done about unusual cases (such as the fact that the cost of pocket calculators decreased, whereas the cost of most other items increased)? How should the data be classified or organized? What kinds of patterns and relationships (e.g., additive, multiplicative, exponential) should be hypothesized? On the other hand, such problems often contain insufficient information because students frequently identified relevant issues for which no facts are available (e.g., how comparable are old and new radios?). Therefore, students must filter and weigh information. Also, they must make assumptions to compensate for information that is missing. Consequently, issues related to sampling, variability, and averages tend to arise, as well as information about ratios and positive or negative changes in costs. Also, in order to aggregate information about many different types of items, several methods were possible. For example, some students computed changes in individual prices and then aggregated the changes for many items, whereas other students aggregated prices and then calculated one single price increase.
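The contrast between these two aggregation strategies can be sketched with a small invented data set (the prices below are hypothetical illustrations, not figures from the actual catalogue):

```python
# Hypothetical catalogue data: (old_price, new_price) pairs for a few items.
# These numbers are invented for illustration only.
items = [(10.0, 15.0),    # an inexpensive item whose price rose sharply
         (200.0, 220.0),  # an expensive item whose price rose modestly
         (40.0, 30.0)]    # an item whose price dropped (like the calculators)

# Method 1: compute each item's percent change, then average those changes.
per_item_changes = [(new - old) / old for old, new in items]
avg_of_changes = sum(per_item_changes) / len(per_item_changes)

# Method 2: aggregate the prices first, then compute one overall change.
total_old = sum(old for old, _ in items)
total_new = sum(new for _, new in items)
change_of_aggregate = (total_new - total_old) / total_old

print(f"average of per-item changes: {avg_of_changes:.1%}")   # 11.7%
print(f"change of aggregated prices: {change_of_aggregate:.1%}")  # 6.0%
```

The two methods give noticeably different answers: Method 1 weights every item equally regardless of its price, whereas Method 2 amounts to a weighted average dominated by the expensive items. This is precisely the kind of weighted-average issue that students confronted when they chose how to aggregate the catalogue data.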
After reviewing the answers that hundreds of teams produced in response to this problem, one conclusion that the authors reached is that many students who have been labeled below average in ability routinely develop more sophisticated mathematical constructs than anybody tried to teach them (Lesh et al., 1993). Also, the interpretation cycles that students go through during 60-minute problem solving sessions often correspond to compact versions of the developmental sequences that psychologists and educators have observed over periods of several years concerning the "natural" evolution of children's concepts of rational numbers and proportional reasoning (Lesh & Kaput, 1988). For example, Table 21.2 describes stages in the development of proportional reasoning and in modeling cycles during a multicycle, problem solving sequence such as that entailed in The Sears Catalogue Problem in Appendix A.
Because of striking similarities between the two columns of the preceding chart, the authors sometimes refer to such problem solving episodes as local conceptual development (LCD) sessions.4 This LCD characteristic has important consequences for implementing other principles that have proved to be important for developing model-eliciting activities.
The Reality Principle
In many respects, the reality principle could be called the meaningfulness principle. This is because, in order to produce the impressive kinds of results described in the preceding section, it is important for students to try to make sense of the situation based on extensions of their own personal knowledge and experiences.
One way for curriculum developers to test whether the reality principle is satisfied is to ask, "Could this really happen in a real life situation?" Nonetheless, the key to satisfying the reality principle is not for the problem to be "real" in an absolute sense. The key to success is to recognize that students often have "school mathematics abilities" (and disabilities) that function almost completely independently from their "real life sense-making abilities" and to acknowledge that superficially real problems are often precisely the ones that do the most damage in terms of discouraging students from basing their responses on extensions of their real life knowledge and experiences. So, if questions about reality are asked, it is important to keep in mind that an adult's reality can be quite different from a child's reality, and that one child's reality is not necessarily the same as another's.
It is not the purpose of this section to define what is meant by real life problems. Neglected characteristics of such problems have been described in a number of publications (e.g., Lesh, 1981; Lesh & Akerstrom, 1982; Lesh & Lamon, 1992; Lesh & Zawojewski, 1987) where numerous examples are given. Instead, the goal here is to give examples to show how the reality principle can be used to develop effective model-eliciting activities where the effectiveness of an activity is measured by its success in eliciting informative and significant work from students.
In textbooks and tests, it is easy to find problems that refer to real objects and events. Boats go up and down streams; trains pass one another going in the same or opposite directions; swimming pools fill with water; ladders slide down walls; and students are asked questions about how long, how fast, when, or where. But, very few of these questions are likely to be posed in the everyday lives of students, their friends, or families. Furthermore, the answers that the authors consider "correct" often would not be sensible in real situations. For example, consider the multiple-choice exercise in Table 21.3, which was extracted directly from a famous test produced by a famous test maker. Clearly, answer choice b is the one that the authors considered correct; but, in a real situation, none of the answer choices is very reasonable. Choices c, d, and e imply that the whole board is shorter than the sum of its parts. Yet, choice b (the intended "correct" answer) is impossible too! This is because saw blades are not infinitely thin, so some material must be lost during sawing, and the amount of loss depends on such factors as the type of wood being cut (hardwood vs. soft pine), the type of cut (ripping vs. crosscutting), and the type and width of the saw blade. Answer a is the only feasible solution; however, it implies a huge loss in sawing (perhaps by a very wide and dull saw blade, followed by vigorous sanding). The result is that real woodworkers who encounter such problems could give the "correct" answer only by turning off their "real heads" and engaging their "school math" reasoning (where such foolishness often is rewarded!).
In Table 21.3, the exercise on the right is an early attempt by a group of teachers to improve the multiple-choice question that is given on the left side. The first suggested improvement was based mainly on the current widespread aversion to multiple-choice (preanswered) items. That is, the original, multiple-choice format was replaced by the more politically correct, "constructed-response" format. But, in terms of failing to elicit interesting work from students and of having negative effects on the students interviewed by the authors, the second version of the problem proved to be even worse than the first. To see why, imagine what would happen if we asked a professional woodworker the revised question. What kind of response would we expect? Is it possible that 7 feet 9 inches (a rip cut) might be an acceptable answer to the new question? Should the cuts preserve the thickness and width, but divide the length; should they preserve the width and length, but divide the thickness; or should they preserve the length and thickness, but divide the width? How quickly is the answer needed? Is overestimating preferable to underestimating? How important are accuracy, precision, and lack of waste? What assumptions should be made about the effects of sanding and finishing the parts? And so on. To answer such questions, a real woodworker would need to know the purpose of cutting the wood. If an explanation is needed, the woodworker would need to know who needs the explanation and why. Without being given such information, there is no basis for deciding (for example) whether a 30-second or a 30-minute explanation is appropriate. Therefore, the only way to respond to such questions in the absence of relevant information is for students to turn off their real life, sense-making abilities.
Appendix B at the end of this chapter gives two more examples to illustrate how several groups of expert teachers used the reality principle to select and improve performance assessment activities that have been published by the National Council of Teachers of Mathematics (NCTM) and other relevant professional or governmental organizations. The first example, called The Softball Problem, was found in a popular NCTM Addenda Series: Grades 5-8. Developing Number Sense in the Middle Grades (1991a). The second example, called Exploring the Size of a Million Dollars (see Table B1 in Appendix B), was found in the NCTM's Curriculum and Evaluation Standards for School Mathematics (NCTM, 1989). In both cases, the problems obviously refer to realistic events. But, in both cases, interviews with students revealed that the questions that were asked encouraged significant numbers of them to turn off their real life sense-making abilities and to give only the "school answers" that they thought were expected. The teachers' critiques illustrate why a "real" problem is not merely one that refers to a real situation. The question that is asked also needs to make sense in terms of students' real life knowledge and experiences.
To develop problems that encourage students to base their solutions on extensions of their personal knowledge, the topics that work best tend to be those that fit the current local interests and experiences of specifically targeted groups of students. But, in order to demonstrate their proficiency with a given mathematical or scientific construct, it often is not necessary for all of the students in a given classroom to work on exactly the same problem. Generally, clones, or structurally isomorphic problems, can be used so that students with different experiences and interests can demonstrate their competencies in different kinds of contexts.
To encourage students to make sense of problems using extensions of their real life knowledge and experiences, one device that the authors have found useful is to base problem solving situations on articles that describe real life situations in a mathematically rich newspaper (Katims et al., 1994). When such mathematically rich newspapers are used, researchers or teachers often pass them out the day before a given problem solving episode is planned. Students also are given a set of preparatory questions that focus on reading the relevant newspaper article "with a mathematical eye." Then, following brief discussions of such preparatory questions, students usually spend less time floundering when they begin to work on the project. Also, for practical purposes, the newspaper articles help parents to recognize the significance of the work that students are doing.
The Self-Assessment Principle
If problems are meaningful and if students recognize the need for a given construction, description, or explanation, then an explosion of ideas is likely to occur in a group of students. But, for these conceptual systems to evolve, selection and refinement and elaboration also are needed. So, the self-assessment principle asks: Does the problem statement strongly suggest appropriate criteria for assessing the usefulness of alternative solutions? Is the purpose clear (what, when, why, where, and for whom)? Are students able to judge for themselves when their responses need to be improved, or when they need to be refined or extended for a given purpose? Will students know when they have finished? Or will they continually need to ask their teachers, "Is this good enough?"
Good business managers know that getting good work from employees depends, to a large extent, on stating assignments in such a way that workers know what is to be produced, when it is to be produced, why it is to be produced, and for whom it is to be produced. Otherwise, employees may have no way of knowing when the results are good enough, and no way of making judgments about such issues as whether speed is more important than precision, details, and accuracy. If workers cannot judge the quality of their work, then the quality of their work usually suffers. Therefore, it should be no surprise that a similar principle also applies to school work. In fact, this principle is especially important in the case of model-eliciting activities because acceptable solutions generally require several modeling cycles and because students work in teams where disparate ways of thinking generally must be sorted out, reformed, or integrated. At the outset of such problem solving episodes, different team members usually start with different ideas. Therefore, to make progress, groups need to: (a) detect deficiencies in their current ways of thinking; (b) compare alternative ideas and select those that are most and least useful; (c) integrate strengths and minimize weaknesses from alternative ways of thinking; (d) extend or refine the interpretations that are most promising; and (e) assess the adaptations that are made.
If students are unable to detect deficiencies in their primitive ways of thinking, then they are not likely to make significant efforts to develop beyond their primitive interpretations. If interpretation-reality mismatches are not detected, then students are not likely to proceed from their (n-1)st to their nth interpretation. In addition, if there are no criteria available for assessing mismatches among alternative interpretations, then these potential conflicts are not likely to lead to productive adaptations. Therefore, effective model-eliciting activities should be stated in such a way that students themselves can assess their progress and the usefulness of the preliminary results that they produce. During each modeling cycle, students must be able to judge whether current solutions need to be revised, in which directions they should move, or which of several alternative solutions is most useful for a given purpose. In particular, the problem statement should make clear the criteria for answering the question, "Are we done yet?"
In the PACKETS Project (Katims et al., 1994), from which several examples cited in this chapter were taken, one device that is used to encourage students' self-assessment is to employ the notion of a client: a person or group of people in everyday roles who request that students construct a product for a specific purpose. As a result of specifying such a client and purpose, the evaluation of students' work can be grounded in how well each product meets the client's stated purpose. For example, in Appendix B of chapter 6, this volume, the quality assessment guide that is given provides guidelines for assessing the usefulness of both preliminary and final products, or for assessing the relative quality of alternative results. These guidelines can be used by anyone, including students, for real life, problem solving situations where it is clear what, when, why, and for whom work is being done.
In projects that focus on performance assessment, a quality assessment guide of the type just mentioned sometimes is referred to as a "scoring rubric" because, from a client's perspective and in the client's voice, five levels of quality are identified that can be used to sort products from "noteworthy" to "needs redirection." But, for most of the purposes emphasized in research, instruction, and assessment, labeling a product along a simplistic good-bad continuum is only one trivial purpose for such a guide. Far more important purposes involve helping students (as well as teachers or parents) to identify strengths and weaknesses in the work that is produced so that improvements can be made and additional learning can occur.
As the example problems in the previous section (and Appendix B to this chapter) suggest, traditional mathematics textbooks and tests include many superficially precise questions that fail to give any clues about the matters that answers to such questions would be intended to inform in real life situations. Consequently, the scoring of answers to such questions usually must involve factors that are quite different from those that would influence the quality of responses in real situations. For example, even in many performance assessment materials that are intended to be authentic (in the sense of being similar to those that occur in real life situations), scoring rubrics often focus on criteria that were not stated in the original problem and that would not be likely to apply to the real life situations that the problems are intended to simulate. They often favor "one rule solutions" using standard algorithms or equations over nonstandard procedures or informal methods that might be more reasonable in a practical sense. They may favor lengthy prose communication over elegant graphical representations. But, if quality scores are based on something besides how well the solution meets the stated goals of the problem, then the real "problem" that students need to address is not the problem that was stated; instead, it involves addressing unstated school expectations that often are applied inequitably across students. For example, two groups may produce equally useful solutions to a given problem, but one group may be penalized because it did not use a favored strategy involving graphs or particular types of equations; on another problem, the same two groups again may produce equally useful solutions, but the same group may be penalized again because of some new unstated criteria that are imposed after the fact.
Such experiences give the appearance of changing the rules for some students but not for others, and following such experiences, many students quickly learn to turn off their real life knowledge and experiences. Getting the "right" answers to school problems often means focusing more attention on interpreting and complying with nonmathematical norms imposed by teachers, textbooks, or tests, and less on interpreting significant, powerful, interesting, and useful mathematical constructs.
The Construct Documentation Principle
For the purposes discussed in this chapter, the problem solving situations that are needed must be more than merely thought eliciting; they also must be thought revealing. Therefore, the construct documentation principle poses the question: Will responding to the question require students to reveal explicitly how they are thinking about the situation by revealing the givens, goals, and possible solution paths that they took into account? In particular, will it provide an "audit trail" that can be examined to determine what kinds of systems (objects, relations, operations, patterns, and regularities) the students were thinking with and about?
One reason the construct-documentation principle is important is that both researchers and teachers often are interested in activities that do more than stimulate and facilitate the development of important mathematical constructs. Sometimes, students also need to be able to document the constructs that they develop. Therefore, the construct-documentation principle is aimed at activities that are intended to contribute simultaneously to both learning and the documentation of learning, while at the same time facilitating self-assessment and thinking about thinking.
The construct-documentation principle is essential not only for the purposes of the researcher (or teacher), but also for the student. In fact, fostering self-reflection is perhaps the most significant reason the construct-documentation principle is important. This is because, in general, it is not easy for students (or teachers, or researchers) to go beyond thinking to also think about thinking. Therefore, to facilitate reflection, effective thought revealing activities should encourage students to externalize their thought processes as much as possible. One way to make it natural for students to externalize their ways of thinking is to have them work in groups where such processes as planning, monitoring, and assessing must be carried out explicitly. However, an even more effective method is to focus on activities in which the products that students create require them to disclose automatically what kinds of mathematical objects, relations, operations, and patterns they are thinking about. For example, when an activity satisfies the construct documentation principle, students' solutions should reveal, as explicitly as possible, how they are thinking about the givens, goals, and solution processes. That is, the descriptions and explanations that they produce also should reveal answers to the following kinds of questions:
For model-eliciting activities, the explanations and justifications and descriptions that are given should be integral parts of the answers themselves. Therefore, the processes of reasoning that students use to generate "answers" should be embedded in their final product. In this way, assessing the quality of the final result automatically involves assessing the quality of the mathematical reasoning used to produce it. Nonetheless, even if the products that students generate require them to reveal significant aspects of their ways of thinking, this does not guarantee that the preceding information will be apparent to students, teachers, or researchers. Therefore, to help participants analyze students' ways of thinking, the authors have found it useful to work with teams of teachers (students or researchers) to develop "ways of thinking" sheets for a variety of model-eliciting activities. Simplified versions of these "ways of thinking" sheets are included in the teachers' guides that accompany the materials known as PACKETS (Katims & Lesh, 1994). But, in classrooms where the authors have worked, these "ways of thinking" sheets often look like large posters that display snippets taken from the products that students create for a given problem. Then, these snippets are organized to suggest alternative types of products that students can be expected to create. For example:
1. In the Case of "The Summer Jobs Problem" (in the Appendix to chapter 23, this volume):
2. In the Case of "The Sears Catalogue Problem" (in Appendix A to this chapter):
The Construct Shareability and Reusability Principle
The construct shareability and reusability principle poses the question: Is the model that is developed useful only to the person who developed it and applicable only to the particular situation presented in the problem, or does it provide a way of thinking that is shareable, transportable, easily modifiable, and reusable?
As conceptual tools, mathematical models and the procedures derived from them vary greatly in their generalizability. Some are highly restricted to the peculiarities of particular problem situations, but others are taken out of their initial setting and applied to a wide variety of structurally similar situations. For example, among the smart tools that people develop using spreadsheets, graphs, or graphing-calculator programs, it is easy to distinguish among:
Yet, in mathematics classes, students are seldom challenged to develop reusable, modifiable, shareable smart tools. In fact, textbook and test problems rarely ask students to produce any models at all. Most often, traditional problem sets provide the relevant models or tools, then direct students to apply them to produce single-number answers. Even applied problems, which are intended to involve real life situations, usually require no more than specific answers to particularistic questions. Consequently, it is generally possible to produce perfectly acceptable solutions to such problems that use very little real mathematics.
When the answer to a problem is 12 feet, one might ask, "Where is the mathematics?" Certainly, this answer reveals little about the ways of thinking that were used to produce it. Therefore, to assess the quality of students' mathematical reasoning in the context of such problems, the temptation is to focus on processes rather than on products. Consequently, if students' work is assessed based on processes, rather than the products that they thought they were being asked to produce, they inevitably conclude (correctly) that what problems ask them to do is not what they really are supposed to do - that, regardless of what they produce, someone (e.g., the teacher) is going to keep changing the rules so that their work will not be valued.
Problems that satisfy the construct shareability and reusability principle confront students with the need to go beyond developing personal tools to developing general ways of thinking. Therefore, by asking for descriptions, explanations, or prediction procedures that can be used by others beyond the immediate situation, such problems tend to emphasize much more powerful uses of mathematics than problems that fail to call for any form of generalization. Nevertheless, to say that students have produced a general model or a transportable tool is different from (and easier than) concluding that those who created this tool can transfer the relevant knowledge to other contexts and situations. Students' products can be observed and assessed directly, whereas inferences about transferability or generalizability are much more difficult to establish.
The Effective Prototype Principle
Does the solution provide a useful prototype, or metaphor, for interpreting other situations? Long after the problem has been solved, will students think back on it when they encounter other structurally similar situations? If so, the solution usually needs to be as simple as possible, while still creating the need for a significant construct.
Effective model-eliciting activities tend to operate like case studies in professional schools in the sense that they are most effective when they provide rich and memorable contexts for learning and for discussing important mathematical ideas. During the school year, such problem solving episodes should become an important part of the culture and history of individual classrooms. For example, in classrooms where the authors have worked, it is common to hear mathematical discourse in which students and teachers refer to the problems-"Remember when we used ratios in The Summer Jobs Problem" or "That's a lot like The Million Dollar Getaway" -because the problems serve as effective vehicles for discussing many topics and for making many different mathematical connections.
Einstein is credited with saying, "A theory should be as simple as possible, but no simpler." The same is true of model-eliciting activities. Effective thought revealing activities must be structurally significant, but it often is not necessary for them to be procedurally complex. For example, if the goal is to focus attention on underlying quantitative relationships, then it may be desirable to minimize computational complexity. That is, computational complexity should be limited to that which is needed for focusing attention on the targeted underlying conceptual relationships that are essential for dealing with the intended structure of the task. If students cannot see the conceptual "forest" because of too many procedural "trees," then they are not likely to refer to, and draw strength from, the experience when confronting similar situations in the future.
Thought revealing activities are needed for teaching and assessment as well as research. Furthermore, in chapter 9 of this book, Lesh & Kelly describe how thought revealing activities for students often provide productive contexts to use as the basis of thought revealing activities for teachers. Therefore, to close this chapter, it is useful to describe some common misconceptions that have occurred frequently when model-eliciting activities have been used for the purpose of instruction, rather than research.
When teachers and researchers first see examples of the kind of model-eliciting activities discussed in this chapter, they often describe them using such terms as cute, open-ended, and difficult. But, in an important sense, effective model-eliciting activities should be none of these; and certainly, they are not intended for the enrichment of gifted students only. In fact, the six principles in this chapter are meant to help teachers select (or create) activities that are especially appropriate for meeting the needs of students who have been labeled average or below average.
Why "Fun" Is Not a Primary Characteristic of Model-Eliciting Activities
One commonly recognized reason why many people try to use real life problems is for motivational purposes; and, one of the first things that people often like about model-eliciting activities is that they seem "fun" or "cute". But, in general, the authors have not found motivational factors to be among those that are the most important - if the primary goals are to elicit important information about the nature of students' mathematical knowledge and to recognize and reward a broader range of mathematical understandings and abilities than those emphasized in traditional textbooks and tests. This is not to say that motivation is not important; but, if we try to make problems "fun" for students, there tends to be a point of diminishing returns. This is because the very problems that stimulate the interest of some students often prove to be of no interest to others. Sports, rock and roll, dinosaurs, pizza, and other topics that engage some students are equally uninteresting to other students. Today's "hot" topics are out of style tomorrow. Furthermore, if attempts are made to emphasize the usefulness of problems in terms of their payoff for getting future jobs, the problems that work least well often are those that students perceive to be relevant only in low-level or low-paying jobs for adults.
Although it is wise to start with problem settings that are likely to appeal to a wide range of students, experience suggests that good teachers often are masters at stimulating students to become interested in topics that expand their interests, rather than merely catering to them. Fortunately, model-eliciting activities do not require universally appealing topics. The topics that work best are those that fit current local interests. Consequently, the most useful problem solving situations are those that teachers can modify easily to fit the interests and experiences of specific students. The goal is to provide every student with as many "low-pressure, high-interest" opportunities as possible to demonstrate their abilities and achievements within contexts that are familiar and comfortable.
We have found that what teachers need most is not cute problems, but problems whose solutions involve the construction of conceptual tools that empower students to achieve goals that they themselves consider important. The best way to create such model-eliciting activities is not to search for what is "fun", but to ensure that the solutions that students construct can be based on extensions of their real life knowledge and experiences. Conversely, if students' legitimate ideas are not taken seriously, or if they are dismissed even though their points would be valid in real situations, then the students can be expected to become unresponsive, even if the topic interested them initially.
Why Is "Open-Ended" an Inaccurate Characterization of Model-Eliciting Activities?
The term open-ended often conjures up the notion of never knowing when you have finished, or of being value-free in the sense that nearly any answer is acceptable (frequently because the quality or usefulness of alternative responses cannot be compared). But, effective model-eliciting activities exhibit neither of these characteristics. They are open in the sense that "right" answers have not been predetermined, and students must select the mathematical ideas that they will use to build their solutions, but the activities are highly structured in order to encourage students to build important mathematical models and to know when they have finished their task.
Model-eliciting activities should not be unstructured. In fact, extensive research and trial testing usually make them some of the most highly structured activities that most students have encountered in their mathematics instruction. However, the structure tends to be implicit, rather than explicit; and, it unfolds naturally as students work, rather than needing to be imposed by teachers at the beginning or during the problem solving process.
Why Is "Difficult" an Inaccurate Characterization of Model-Eliciting Activities?
Teachers who have little experience with model-eliciting activities often think that such problems are too difficult for their children. This may be because such teachers are accustomed to these rules of thumb:
Experience with model-eliciting activities has shown that students who have excelled at "one-rule, quick-answer" word problems do not necessarily excel at activities where "one-rule, quick-answer" responses are not appropriate. Furthermore, nearly all students are successful at some level because there are no tricks and because the most straightforward approach is usually a productive one. On the other hand, many teachers or other adults often perform no better than average middle school students on model-eliciting activities, especially if they waste too much time trying to remember and implement formal one-rule solutions instead of using common sense and a few basic principles.
SUMMARY: THREE COMMON QUESTIONS ABOUT MODEL-ELICITING ACTIVITIES
This final section addresses the following questions that are often raised by researchers and teachers who are not familiar with model-eliciting activities.
What Role(s) Should Teachers Play When Students Are Working on Project-Size, Model-Eliciting Activities?
Answers to this question depend on what purposes teachers have in mind when they choose to use model-eliciting activities. For simplicity, we focus on the case of using these activities at the beginning of units of instruction when their main purpose is to identify students' conceptual strengths and weaknesses.
In ideal instructional settings with one-to-one teacher-to-student ratios, teachers could begin each session with interviews designed to follow each student's thinking, identifying conceptual strengths and weaknesses, and strengthening relevant concrete/intuitive/informal conceptual foundations before attempting to formalize these ideas using abstract, symbolic, and formal definitions, rules, and procedures. Although individualized interviews might be time consuming, time might be saved in the long run because instructional activities could build on conceptual strengths, address (or avoid) conceptual weaknesses, and avoid rehashing issues that are understood clearly already. Also, new ideas might be remembered better because they could be embedded within familiar and meaningful contexts.
In most classroom settings, teachers do not have time to engage in activities such as conducting one-to-one interviews with each student. Nonetheless, they often realize that, if they knew more about the strengths and weaknesses of their students, their teaching could be much more effective. Therefore, model-eliciting activities were designed explicitly to help provide details about students' ways of thinking; and, to do this in topic areas that are most important, model-eliciting activities should focus explicitly on the most important "big ideas" in any given course or grade level. The goal is to enable teachers to observe their students' thinking and to identify their conceptual strengths and weaknesses, while at the same time helping students to strengthen their relevant concrete, intuitive, and informal conceptual foundations. In other words, during model-eliciting activities, the teacher's main roles consist of being an observer, facilitator, mentor, and learner.
How Can Teachers Spend So Much Time on Project-Size Activities?
Model-eliciting activities focus in depth on a small number of major ideas rather than trying to cover superficially a large number of idea fragments or isolated skills. But, this teaching strategy pays off only if:
How Can Average-Ability Students Be Expected to Invent Significant Mathematical Ideas?
The answer to this question stems from the following observations: (a) students do not begin from a state of having no knowledge about the relevant processes and understandings; and (b) model-eliciting activities are fashioned to facilitate certain types of inquiry and development without telling students what to do and how to think.
In the middle school curriculum, most of the ideas that are taught have been "covered" during earlier grades, and most of them will be "covered" again in following courses. Few ideas are introduced completely fresh for the first time, and few are introduced for the last time. Furthermore, before students begin to work on a given topic, they usually work on topics that are considered to be prerequisites. So, the next ideas that they are expected to learn are seldom more than extensions or refinements of ideas that have been introduced already. In particular, they usually possess the elements of a language and powerful graphic and symbolic notation systems that were designed especially to express the "new" ideas that they are expected to learn.
One way that mathematical knowledge and abilities grow is by a mechanistic process in which small ideas, facts, and skills are assembled gradually, like tinker toys, into larger and more complex ideas, facts, and skills. But, mathematical ideas and abilities also develop along a number of other dimensions such as: concrete-abstract, intuitive-symbolic, global/undifferentiated-refined, informal-formal, or specific-general. So, one of the most effective strategies for maximizing students' progress is to begin instruction with activities in which students reveal and test their concrete/intuitive/informal understandings (and misunderstandings) while extending, refining, or integrating these ideas to develop new levels of more abstract, or formal, understandings. Students seldom begin to work on topics with no understanding. Most students already have developed some concrete, intuitive, informal foundations on which the intended formalizations can be based.
In the history of science, once the need for an idea has become clear, once the right questions have been asked in the right ways, and once adequate conceptual and technical tools are available, even some breathtaking achievements often have been accomplished in short order - and nearly simultaneously by several people who were not collaborators. So, if teachers have the power to place students in contexts that make the need for targeted ideas clear, if they pose appropriate tasks or questions, and if they provide appropriate tools and incentives, then they should not be astonished if average-ability youngsters often become able to "put it all together" to create some surprisingly sophisticated constructs.
At conferences these days, it is not uncommon to hear researchers presenting data showing high-achieving students at selective colleges who produce nonsense when responding to seemingly straightforward questions about proportions (Clement, 1982a), averages (Konold, 1991), or other basic ideas from mathematics or science. Yet, by engaging children in the kind of model-eliciting activities that have been described in this chapter, the authors have been more impressed that low-achieving middle school children often invent - or significantly refine, modify, or extend - more powerful mathematical and scientific constructs than those referred to in the research on high-achieving college students, and also more powerful than those that had characterized their failure experiences in situations involving traditional textbooks, teaching, and tests. What explains this anomaly of low-achieving middle school youngsters apparently outperforming college students? The answer involves all six of the principles in this chapter. That is, students should not be expected to invent powerful constructs unless:
The Sears Catalogue Problem
On the day before the class began to work on The Sears Catalogue Problem, each student was given:
The goals of the problem are for students to determine the buying power of their allowances 10 years ago and today, and to write a new newspaper article describing what a comparable allowance should be now. The new article should be persuasive to parents who might not believe that children's allowances, like adults' salaries, should be increased to reflect increasing prices.
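The kind of reasoning the problem invites can be sketched as follows. This is only an illustration of one possible solution path: the prices, the item basket, and the allowance amount are hypothetical, not taken from the actual catalogue or web-page data.

```python
# Illustrative sketch: scale an old allowance by the average price
# ratio of a basket of items. All numbers below are hypothetical.

def adjusted_allowance(old_allowance, old_prices, new_prices):
    """Scale an allowance by the average price ratio across a basket of items."""
    ratios = [new / old for old, new in zip(old_prices, new_prices)]
    average_ratio = sum(ratios) / len(ratios)
    return old_allowance * average_ratio

# Hypothetical basket: jeans, a music album, a movie ticket.
old_prices = [15.00, 8.00, 4.00]   # prices 10 years ago
new_prices = [30.00, 14.00, 7.50]  # comparable prices today

# A $5.00 allowance from 10 years ago, scaled to today's prices.
print(adjusted_allowance(5.00, old_prices, new_prices))
```

Of course, students typically debate which items belong in the basket and whether an average of ratios is the right summary; that debate is precisely the mathematizing the activity is designed to elicit.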
Companion web pages, sample screens of which are shown in FIGs. A 1 through A4, contain similar information about the cost of items that teenagers might have wanted to purchase with their allowances 10 years ago and today.
This model-eliciting problem involves data analysis and the mathematics of change. A web page (FIG. A1) gives the context for the problem: the familiar subject of teenagers and their allowances.
FIG. A1. The context for the problem.
A second web page (FIG. A2) presents the model-eliciting problem.
FIG. A2. The problem statement.
Other web pages provide price information for things that teenagers might buy with their allowance money. This information is organized into two collections of advertisements: one collection of items and prices from 10 years ago and the other showing current prices. Problem solvers may browse the collections to select data to use in their solutions. They also may gather additional data on their own.
Example web pages from the price collections are shown in FIGs. A3a, A3b, A4a, and A4b.
FIG. A3a. One page in the collection of advertisements from 10 years ago.
FIG. A3b. Corresponding data about prices of similar items today.
FIG. A4a. 10-year-old advertisement for athletic wear.
FIG. A4b. Advertisement for similar clothes today.
Using the Reality Principle to Improve Performance Assessment Activities
The critiques that follow Tables B1 and B2 were generated in a series of semester-long multi-tiered teaching experiments (see chap. 9 by Lesh & Kelly, this volume) in which groups of expert teachers worked together to develop the principles that are described in this chapter for creating effective, construct-eliciting activities. As the following analyses make clear, a "real" problem is not merely one that refers to a real situation. The question that is asked also should make sense in terms of students' real life knowledge and experiences.
AN ATTEMPT TO IMPROVE "THE SOFTBALL PROBLEM"
A Teacher's Analysis
As a third iteration of how to improve this problem, one group suggested the following. The suggestion was based on the observation that, in the second version, when students were asked to explain their reasoning, it was unclear who was supposed to be given the explanation or for what purpose. So, it became virtually impossible to judge whether one explanation was better than another. By contrast, notice that, in the third iteration-refinement of the softball problem, the explanation is not only about the process that was used to arrive at a decision; it also is the product itself.
A Third Iteration of "The Softball Problem"
You have a friend who is the manager of a softball team. It is the bottom of the ninth inning, two outs gone, and no one is on base. Your team is one run behind. The manager (your friend) has decided to send in a pinch hitter in the hopes of scoring the tying run. The possibilities are Joan, Mary, and Bob. Their batting records are given in the table below. The manager decided to send in Mary, but Mary struck out and the team lost.
In the local newspaper the next day, there were a number of nasty letters to the sports editor demanding that your friend be fired.
Write a letter to the editor describing why your friend made a wise choice, although the results did not turn out as everybody had hoped.
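One line of statistical reasoning a letter-writer might use is that the manager chose the batter whose record was best in the situation actually faced, even though another batter looked better overall. The batting records below are hypothetical; the table referenced in the problem is not reproduced here.

```python
# Sketch of the "overall record vs. situational record" argument.
# All batting records are hypothetical, for illustration only.

def batting_average(hits, at_bats):
    return hits / at_bats

# Hypothetical season records: (hits, at_bats) overall, and in
# late-inning pressure situations like the one the manager faced.
records = {
    "Joan": {"overall": (45, 150), "pressure": (10, 50)},
    "Mary": {"overall": (40, 150), "pressure": (20, 50)},
    "Bob":  {"overall": (50, 150), "pressure": (12, 50)},
}

best_overall = max(records, key=lambda p: batting_average(*records[p]["overall"]))
best_pressure = max(records, key=lambda p: batting_average(*records[p]["pressure"]))
print(best_overall, best_pressure)
```

With these numbers, Bob has the best overall average, but Mary hits best under pressure; so a letter could argue that the manager's choice of Mary was statistically sound even though she happened to strike out that day.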
AN ATTEMPT TO IMPROVE "THE MILLION DOLLAR PROBLEM"
One Teacher's Critique of "Exploring the Size of a Million Dollars"
A similar problem, called "The Million Dollar Getaway" (see Table B3), was published in a program called The PACKETS Program (Katims et al., 1994) that developed out of the kind of multi-tiered teaching experiments that are described in chapter 9 by Lesh & Kelly, this volume. Like the NCTM problem, "The Million Dollar Getaway" also presents a hypothetical situation. However, this problem addresses issues raised in the preceding critique, and the result is a significant improvement in the chances that students will engage in sense-making that is an extension of their real life knowledge and experiences.
At first glance, the PACKETS activity appears less structured than the NCTM problem. It does not specify the denominations of the stolen currency, and it does not tell students to explore specific mathematical topics, such as weight and volume. In spite of this, the activity is not less structured; it is less teacher directed.
When students are told the specific concern - "this sounds like more money than one person could carry" - they need not resort to guessing what the task is or to straying into nonmathematical explorations or explanations; instead, they can move directly to modeling the situation with mathematics. When students know who is asking the question and why, they can sense what kinds of issues to explore and what kinds of answers might be appropriate. For example, one way in which students assess their work in progress is to play the role of Channel 1 reporters covering the story and to ask themselves, "How helpful would this result be in preparing me for the evening news broadcast?"
Because students do not know the exact denominations of the stolen bills, they must go beyond the worst-case scenario of all $1 bills to construct alternative collections of bills that are equivalent to $1,000,000 in singles. For example, some students typically consider various linear combinations of $1, $5, $10, $20, and $100 bills amounting to $1,000,000; others invent the idea of best-case and worst-case scenarios, in order to establish boundaries for the different possibilities.
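The best-case/worst-case reasoning can be sketched as a few lines of arithmetic. The figure of roughly one gram per US bill is an approximation, and the single-denomination cases are only the boundary scenarios; students' mixed-denomination collections fall between them.

```python
# Sketch of best-case / worst-case boundaries for carrying $1,000,000.
# Assumes roughly 1 gram per bill (an approximation).

GRAMS_PER_BILL = 1.0
TOTAL = 1_000_000

def bill_count(denomination, amount=TOTAL):
    """Number of bills of one denomination needed to make the amount."""
    return amount // denomination

def weight_kg(denomination, amount=TOTAL):
    """Approximate weight in kilograms of that many bills."""
    return bill_count(denomination, amount) * GRAMS_PER_BILL / 1000

# Worst case: all $1 bills (1,000,000 bills, about a metric ton).
# Best case: all $100 bills (10,000 bills, about 10 kg).
for d in (1, 5, 10, 20, 100):
    print(f"${d} bills: {bill_count(d):>9,} bills, ~{weight_kg(d):,.0f} kg")
```

The contrast between the boundary cases (about 1,000 kg of singles versus about 10 kg of hundreds) is exactly the kind of result that helps students answer the reporters' question of whether one person could carry the money.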
1 It is especially appropriate to use the term construct in this chapter because, as constructivist philosophers surely would want us to emphasize, the term suggests that these conceptual systems cannot simply be delivered to students solely through teachers' presentations. To a great extent, they must be developed by students themselves. On the other hand, the term construct is unnecessarily vague and misleading for at least three reasons:
First, the term construction often is interpreted as if it were synonymous with terms such as assembly; and many conceptual entities that can be assembled are not constructs in the sense intended in this chapter. For instance, assembly may involve nothing more than systems of low-level facts and skills that have little to do with conceptual systems for making sense of experience. Thus, construction processes do not necessarily lead to constructs of the type that we want to emphasize in this chapter.
Second, among the processes that do contribute greatly to the development of the kind of constructs emphasized in this chapter, many do not fit the term construction. For example, development usually involves sorting out, differentiating, reorganizing, and refining unstable conceptual systems at least as much as it involves assembling or linking stable conceptual systems.
Third, in this chapter, we want to focus on the noun construct more than the verb construct, so it is helpful to use terminology that highlights this distinction and preference. Just as past curriculum reform failures occurred when "discovery" was treated as if it were an end in itself, regardless of the quality of what was discovered, curriculum reformers today often treat "construction" as an end in itself, regardless of what gets constructed. In both cases, the means to an end is treated as the end in itself, whereas more important ends receive too little attention.
For the preceding reasons, and for the other reasons given in this chapter, the philosophical perspective adopted in this chapter might be called a "constructivist" theory, to distinguish it from "assemblyist" interpretations of constructivism and to shift the emphasis beyond construction as a process toward constructs as products of learning and problem solving. But, even if these distinctions are stressed, it still is important to emphasize that many of the most important processes that contribute to the development of constructs do not fit well with common meanings of the term construction.
2 For details, see chapter 6 in this book.
3 For an example, see chapter 23 in this book.
4 The idea of interpreting multiple-modelling-cycle applied problem solving as "local conceptual development" has many practical and theoretical implications because: Mechanisms that developmental psychologists have shown contribute to general conceptual development can be used now to help clarify the kinds of problem solving processes (or heuristics, or strategies) that should facilitate students' abilities to use (and even create) substantive mathematical ideas in everyday situations; and mechanisms that appear important in "local conceptual development sessions" can be used to help explain general conceptual development in such areas as proportional reasoning. In other words, productive techniques that are well known among applied mathematicians may help developmental psychologists to create more prescriptive models to describe the mechanisms that are driving forces behind general conceptual development in mathematics and the sciences.