

The Impact of a Simulation-Based Learning Design Project on Student Learning

Gregory K. W. K. Chung, Member, IEEE, Thomas C. Harmon, and Eva L. Baker

Abstract—This study examined the effects of a sophisticated simulation-based task on students’ learning of course-related content, ability to deal with complex, open-ended problems, and attitudes toward the Interactive Site Investigation Software (ISIS) and the course. Students were required to conduct a hazardous waste site investigation using the simulation software designed specifically for the course. ISIS simulated physical processes as well as real-world engineering processes. Assessment of student outcomes was based on the use of constructed-response knowledge maps to measure content understanding, and surveys to measure student attitudes and use of cognitive processes. Students reported very positive attitudes toward the ISIS experience, appeared to have gained substantial knowledge over the course of ISIS use, and perceived the ISIS activity as being generally effective in improving their skills in dealing with complex projects, linking theory to real-world applications, and improving their problem-solving performance. Our assessment of student outcomes was a successful first attempt, but more work is needed to validate our measures with advanced students, particularly when the task is complex and requires interdisciplinary knowledge.

Index Terms—Assessment, capstone, concept mapping, evaluation, hazardous waste, knowledge mapping, problem-based learning, simulation.

Manuscript received March 22, 2001; revised July 31, 2001. This work was supported under the Civil and Environmental Engineering Department at the University of California, Los Angeles, PR/Award EEC-9700753, as administered by the National Science Foundation. The findings and opinions expressed in this report do not reflect the positions or policies of the Civil and Environmental Engineering Department at the University of California, Los Angeles, or the National Science Foundation.

G. K. W. K. Chung and E. L. Baker are with the Center for Research on Evaluation, Standards, and Student Testing, University of California, UCLA CSE/CRESST, 301 GSE&IS, Los Angeles, CA 90095-1522 USA (e-mail: [email protected]).

T. C. Harmon is with the Department of Civil and Environmental Engineering, University of California, Los Angeles, CA 90095 USA.

Publisher Item Identifier S 0018-9359(01)09873-9.

I. INTRODUCTION

MODERN engineering education is undergoing significant changes, notably in the way engineering schools are adopting problem-based instruction to meet the changing demands of engineering practice [1]–[7]. Mastery of technical content is no longer sufficient. Increasingly, engineering programs are requiring students to work on team projects that are open-ended with loosely specified requirements, produce professional-quality reports and presentations, consider ethics and the impact of their field on society, and develop lifelong learning practices. An implicit goal of this shift in curricula is to produce graduates who will be ready to assume engineering tasks upon graduation—that is, with the skills to develop solutions to problems under competing constraints of functionality, cost, reliability, maintainability, and safety [8].

The move from focusing solely on technical knowledge to viewing engineering education much more broadly has been driven by the recognition that engineering schools have failed to keep pace with the changing practices of today’s engineers. Various factors—global competition; the shift in spending priorities from national security to economic competitiveness; and the use of new information technologies, materials, and processes—create an increasingly complex environment for engineers to work in [4], [6], [8], and [9]. Today’s graduates are apprenticed for one to two years before they engage in meaningful engineering tasks. The half-life of engineering knowledge ranges from two to seven years, yet the engineered systems are becoming more complex and multidisciplinary [9]. Today’s engineers must adapt to changing environments; engage in lifelong training to maintain their technical skills and knowledge; work effectively in a team; and have knowledge of the business, societal, and environmental impact of their work.

To encourage engineering programs to respond to these changes, the Accreditation Board for Engineering and Technology (ABET) has specified 11 criteria for graduating students in their Engineering Criteria 2000 (EC2000). As an example of five of these criteria, EC2000 specifies that graduating students should have 1) an ability to apply knowledge of mathematics, science, and engineering; 2) an ability to design and conduct experiments, as well as to analyze and interpret data; 3) an ability to function in multidisciplinary teams; 4) an ability to identify, formulate, and solve engineering problems; and 5) an ability to communicate effectively [1], [10].

One of the ways engineering programs have responded to these changes—both the changing demands of engineering and the EC2000 specifications—is to implement capstone courses [8], [11]–[14]. In [8, p. 162] we find five attributes of a capstone course: Students should 1) have a significant and insightful team design project; 2) be required to focus and use much of the knowledge acquired in the curriculum; 3) solve problems representative of real-life engineering; 4) acquire an understanding of the professional aspects and culture of engineering; and 5) learn and practice project proposing, planning, and control. The problem scope of capstone courses varies by implementation; however, in general, the tasks are complex and cannot be solved by one person. The problems also tend to be ill-defined and open-ended in the following sense: Broad requirements are laid out in the form of final deliverables and task-specific constraints, and students are expected to satisfy those requirements. In the process, students engage in activities that exercise a range of skills, including design, judgment, decision-making, problem solving, and teamwork. For examples of capstone implementations, see [12] for electrical engineering, [15] for computer science and engineering, and [8], [14], [16]–[19] for general approaches to designing a capstone course.

A. Current Study

In this study the authors examined the implementation of a capstone course in civil engineering on the topic of hazardous materials. Student teams assumed the role of consultants contracted to carry out a hazardous waste site investigation of an abandoned airfield. The course in this study differed from typical capstone courses in two important ways. First, simulation software was developed to support the site investigation. The software, designated Interactive Site Investigation Software (ISIS), was custom developed in Java for creating an environment where students could engage in a simulated site investigation. The software modeled the area under investigation, incorporating soil and contaminant transport properties in three dimensions. Thus, a realistic contaminant scenario could be created and could be varied in complexity. In addition, a traditional problem in this course was that the course duration and voluminous material precluded spending sufficient time on establishing an understanding of the links between theory and real-world situations. It was the instructor’s intent that the students learn these links by completing the ISIS design project.

In addition to simulating physical processes, ISIS also simulated the real-world engineering processes involved in a site investigation. For example, when students requested drilling at particular locations, the results of the drill would not be returned immediately. Rather, students were required to follow typical procedures—wait for the drilling to be completed, send the bore sample to the laboratory for analyses, wait for the analyses to be done, and then pay for the analyses from a fixed budget. The laboratory report was realistic in its presentation (i.e., only data were returned). Students needed to use the appropriate data in computations that would help them decide the next step in the investigation. Students engaged in a complex open-ended task with written and oral reports as products, which required students to use the same kinds of processes they would experience in real-world settings.
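To make this request-wait-pay cycle concrete, the following minimal sketch models the kind of workflow described above. It is illustrative only: ISIS itself was written in Java, and all names here (SiteInvestigation, request_drill, order_analysis) are hypothetical, not taken from the actual software.

    from dataclasses import dataclass, field

    @dataclass
    class SiteInvestigation:
        budget: float                                # fixed project budget
        pending: list = field(default_factory=list)  # bore samples awaiting analysis

        def request_drill(self, x: float, y: float, depth: float) -> str:
            """Queue a bore hole; no data are returned at request time."""
            sample_id = f"bore-{len(self.pending) + 1}"
            self.pending.append((sample_id, (x, y, depth)))
            return sample_id  # students must wait for drilling and lab work

        def order_analysis(self, sample_id: str, cost: float) -> dict:
            """Pay for a laboratory analysis out of the fixed budget."""
            if cost > self.budget:
                raise RuntimeError("insufficient budget for this analysis")
            self.budget -= cost
            # A realistic lab report returns data only; interpreting the numbers
            # (e.g., estimating transport parameters) is left to the student team.
            return {"sample": sample_id, "raw_concentrations": []}

    site = SiteInvestigation(budget=50_000.0)
    sample = site.request_drill(x=10.0, y=25.0, depth=15.0)
    report = site.order_analysis(sample, cost=1_200.0)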

The second significant feature of this course was its focus on attempting to measure the impact of ISIS on student learning and attitudes. While there exist numerous implementations of capstone courses purporting to improve the student experience, in general, there is less evidence supporting such assertions. Further, the evidence reported generally suffers from the following shortcomings: students’ self-reported perceptions of learning (versus a direct measure of learning), instructors’ anecdotal reports of impact (versus an experimental manipulation of treatment), use of untested (versus validated) measures, and small sample sizes.

In this study the authors attempted to address the methodological issues by basing the evaluation of ISIS impact around student outcomes. Modern assessment techniques were used to evaluate changes in student knowledge and attitudes toward ISIS. The use of modern assessment techniques is consistent with recent calls to use methods beyond course evaluations and student attitude surveys [1]–[3], [20]. This study is among the few to use a knowledge-mapping performance assessment to measure student learning. However, because much of the work with knowledge mapping has been with K–12 students, we wanted to validate our measures with advanced students. Other examples of use of assessment in engineering education are seen in [21]–[24]. Assessment issues related to teamwork are reported in [25]–[30]. General assessment issues in problem-based contexts are reported in [14], [31], and [32].

Thus, evaluation focused on 1) assessment of student learning and 2) assessment of student attitudes toward ISIS. The main research question was: How does students’ use of ISIS impact their overall learning of course-related content; their ability to handle complex, open-ended problems; and their attitudes toward ISIS and the course? A second research question addressed differential impact of ISIS on students: Whom does use of ISIS benefit most? Does ISIS impact different kinds of students differently (e.g., by students’ gender, prior experience with complex projects, academic standing)? And finally, we explored different ways of measuring student learning with knowledge maps in an attempt to understand how to measure learning in students who were already knowledgeable in a particular domain.

II. PILOT STUDY

A. Method

A pilot study was conducted to test measures, tasks, and administration procedures. The pilot study occurred during the first implementation of ISIS in winter 1999. Our focus was to gather information on the deployment of ISIS and to develop and test the utility of knowledge mapping to measure content understanding. Knowledge mapping was unfamiliar to the instructor, and CRESST had never used knowledge mapping with engineering students.

1) Participants: Twenty-eight students from one civil and environmental engineering capstone design course participated in this study. In general, the sample was mostly male, mostly White and Asian American, and mostly upper-division undergraduates (M age = 23 years; SD = 2.56 years; range = 20–30 years).

2) Instructional Setting: ISIS was implemented for the first time in a senior/master’s-level environmental engineering course. In this course, students were taught the principles of contaminant hydrogeology (physical chemistry, soil physics, and groundwater hydrology) through conventional lectures and homework problems. About halfway into the ten-week course, the students’ outside assignments became (virtual) data collection and interpretation using ISIS. Student teams completed a design project in the form of a remedial investigation-feasibility study (RI/FS) for a virtual hazardous waste site. ISIS tasks included drilling, collecting samples, analyzing samples, and performing field tests, and were designed to link theory outlined in the lectures to real-world situations. The lecture portion of the course continued to address only the theoretical aspects of contaminant hydrogeology with two exceptions: 1) approximately 2 h were dedicated to describing field equipment and techniques and 2) approximately 2 h were dedicated to describing real case studies exemplifying projects similar to the one the students were facing.

3) Tasks: The CRESST knowledge-mapping system was used to measure students’ content understanding because of 1) its sensitivity to instruction (i.e., as students learn more, their knowledge-mapping scores increase); 2) its demonstrated relationship with other measures of content understanding; 3) the ease with which student knowledge maps could be scored against an expert criterion map; and 4) its availability via the Internet [33]–[38].

Knowledge mapping is a kind of performance assessment that requires students to demonstrate their understanding of a content area by creating a network diagram, where nodes represent concepts and labeled arcs describe how concepts are related. Individual concept-relation-concept tuples can be thought of as propositions [39]–[41].

Operating the mapper required only a mouse. Concepts could be added to the map via menu selections. Links were created by connecting two concepts and then selecting the desired relationship from a pop-up menu. Students were allowed to use a concept only once, but they could create any number of links among concepts. Previous research has shown that participants as young as fifth grade could learn to use the knowledge mapper in approximately 10 min of training.

The instructor used the on-line knowledge-mapping system to build a criterion map, and while doing so refined the set of concepts and relationships. The final knowledge map used in the pilot study contained 25 concepts and five relationships. Students had to work with this predefined set of concepts and links; students could not create their own concepts or links. The knowledge-mapping tasks were assigned as homework.

4) Measures:

a) Knowledge map performance: The instructor’s knowledge map was used as the expert criterion map for scoring purposes. The scoring algorithm was based on the method developed by [33]. Briefly, the number of propositions in the student map (i.e., concept-relationship-concept) that also existed in the expert map was considered the map score. Because the student and expert maps were computer-based, the scoring was carried out automatically. (A minimal sketch of this scoring scheme follows this list.)

b) Student survey: A 51-item student survey was used to gather the following kinds of information: 1) demographics; 2) amount of experience with complex projects; 3) amount of time spent on the project; 4) perceived effectiveness of ISIS in helping students link theory to real-world applications, manage large and complex projects, improve teamwork skills, and improve communication skills; 5) frequency of students using general problem-solving processes; 6) teamwork; 7) availability of ISIS-related resources; and 8) perception of the knowledge-mapping activity.
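As a concrete illustration of the scoring scheme in item a), the sketch below treats a knowledge map as a set of concept-relationship-concept propositions and computes the content score as the overlap with the expert map. This is a minimal reimplementation of the idea, not CRESST’s actual scoring code; the example propositions are drawn from the expert map shown later in Fig. 1.

    # A knowledge map represented as a set of
    # (concept, relationship, concept) propositions.
    Proposition = tuple[str, str, str]

    def content_score(student_map: set, expert_map: set) -> int:
        """Count of student propositions that also exist in the expert map."""
        return len(student_map & expert_map)

    expert_map = {
        ("piezometer", "measures", "hydraulic head"),
        ("slug test", "measures", "hydraulic conductivity"),
        ("drawdown test", "measures", "hydraulic conductivity"),
    }
    student_map = {
        ("piezometer", "measures", "hydraulic head"),  # also in expert map
        ("sorption", "affects", "hydraulic head"),     # not in expert map
    }
    print(content_score(student_map, expert_map))  # -> 1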

B. Results

The pilot study resulted in two changes to the evaluation. First, the knowledge-mapping task was revised. Students did not understand the meaning of the relationships, and the use of knowledge maps as homework assignments did not work. Thus, for the main study the knowledge-mapping task was simplified in terms of administration and the specification of concepts and relationships. Second, the correlations observed among the survey scales, particularly the relationships between the reported use of problem-solving processes and final grade, and between attitude and perceived effectiveness of the project, were interesting. These findings suggested that a measure of students’ self-regulation behavior (i.e., planning and self-checking) should be included to help explain students’ performance.

III. MAIN STUDY

A. Method

1) Participants and Setting: Eighteen students in a civil engineering capstone design course participated in this study. In general, the sample was mostly White and distributed evenly by academic standing (M age = 25 years; SD = 3.2 years; range = 21–31 years), and the sample was balanced by gender. There were 11 undergraduates and six graduates (one student did not fill out the demographic survey). The instructional setting was similar to the pilot study.

2) Design: A one-group, pretest/post-test design was used in this study. Originally the plan was to conduct an experimental two-group, randomized pretest/post-test design, but the enrollment in the class was unexpectedly low; thus, all students participated in teams that used ISIS and there was no control group. We also included a measure for teamwork, but these data yielded no interesting results; thus, the teamwork analyses are not reported.

3) Measures:

1) Knowledge mapping: Knowledge maps were used to measure students’ content understanding. All knowledge maps received two scores, based on two independent scoring methods: 1) an expert content score and 2) a proposition quality score. Scoring student maps against the expert map provided information on students’ understanding overall, while scoring individual propositions provided information about the quality of particular propositions. The difference between the two methods is in the information available for instructional purposes. The content score yields a single number for the entire map, and the proposition quality score yields a single score for each proposition in the map.

The development of the knowledge map task for the main study followed the same process as in the pilot study and, in addition, included input from the course teaching assistant. The final expert knowledge map (see Fig. 1) contained 17 concepts and four relationships.

2) Knowledge map proposition quality: The second scoring approach used to measure students’ understanding was to rate the quality of each proposition. Proposition scores ranged from 0 to 3: 0 = Proposition does not make sense in any circumstance; 1 = Proposition appropriate and correct in an everyday, pragmatic sense (explanatory power is limited to an everyday event); 2 = Proposition appropriate (reflects scientific understanding, but has limited explanatory power); 3 = Proposition is abstract and explanatory (reflects most highly principled, scientific understanding). The scoring rubric was based on the work of [36].

Fig. 1. User interface of the on-line knowledge-mapping system. The map shown is the expert criterion map used for scoring student maps, with 17 concepts (contaminant transport, Darcy’s law, diffusion, dispersion, drawdown test, film transfer, groundwater flow, hydraulic conductivity, hydraulic head, isotherm test, NAPL dissolution, permeability, piezometer, retardation, slug test, sorption, tracer test) and four relationships (affects, is a model parameter, measures, models).

The instructor (expert) scored each unique proposition culled from all student maps. Students generated 192 unique propositions across the pretest and post-test knowledge maps, out of a possible 1088 propositions [c(c − 1)l = 17 × 16 × 4 = 1088, where c = the number of concepts and l = the number of links]. Interestingly, students used about 18% of the possible propositions (192/1088), which is similar percentage-wise to the [36] study (720 [15%] of the propositions were used out of 5880).

The scoring activity resulted in the instructor reporting that the expert knowledge map was incomplete. Some students created propositions that were appropriate and of high quality but excluded from the expert map. Thus, the expert map was modified to include appropriate propositions, and the student maps were rescored.

3) New propositions: A subset of students’ post-test knowledge maps, defined as “new propositions,” was derived from the post-test knowledge map. This measure removed all propositions from the post-test map that also existed in the pretest map. This new-proposition map was then scored using the expert criterion (content) method and the proposition quality rating method to yield two new-proposition scores. The new-proposition scores would be an indicator of knowledge acquisition over the course of the ISIS activity. This procedure is numerically similar to computing gain scores, but also allows for the examination of the type of proposition introduced at the post-test. (A brief sketch of this derivation follows this list.)

Thus, each student had pretest and post-test content scores, pretest and post-test scores on each type of proposition (nonsense, pragmatic, scientific, principled), a post-test new-proposition content score, and post-test scores for new propositions on each type of proposition.

4) Student survey: A 52-item student survey was used to gather the following kinds of information: 1) demographics; 2) amount of experience with complex projects; 3) amount of time spent on the project; 4) perceived effectiveness of ISIS in helping students link theory to real-world applications and manage large and complex projects; 5) frequency of students using general problem-solving processes; 6) availability of ISIS-related resources; and 7) perception of the knowledge-mapping activity.

5) Self-regulation survey: A 32-item student survey was used to gather information on students’ self-reported use of self-regulation skills. Two scales from an existing survey were adapted from [35]. The scales were planning (e.g., I determine how to solve a task before I begin) and self-checking (e.g., I check how well I am doing when I solve a task).


TABLE I. DESCRIPTIVE STATISTICS AND INTERCORRELATIONS (SPEARMAN) FOR THE STUDENT SURVEY SCALES (N = 18)

6) Course outcome: The final course grade (0–12 point scale) was the main course outcome measure.
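A brief sketch of the new-propositions derivation from item 3) above, under the same set-of-propositions representation used earlier; the function name is hypothetical.

    def new_propositions(pretest_map: set, posttest_map: set) -> set:
        """Post-test propositions that did not appear in the pretest map."""
        return posttest_map - pretest_map

    # The resulting set is then rescored with both methods, e.g.,
    # content_score(new_propositions(pre, post), expert_map), giving a
    # gain-like indicator of knowledge acquired during the ISIS activity.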

4) Procedure: Data were collected before and after the ISIS project. The pretest consisted of a knowledge-mapping pretest and self-regulation survey; the post-test consisted of a knowledge-mapping post-test.

1) Pretest: An individual mapping task was administered to all students prior to the start of ISIS (Week 6 of 10). For the knowledge-mapping task, a blank map was presented to students, and the researcher demonstrated how to use the knowledge mapper. During the pretest, the class teaching assistant assisted in the data collection. He helped explain the idea of knowledge maps in terms familiar to the students. Students were given up to 20 min to complete their knowledge maps. Following the knowledge-mapping task, students were given the self-regulation survey. The entire pretest activity took less than 1 h to complete.

During the period between the pretest and post-test, students conducted their site investigation with ISIS and attended the class lectures. Each group was scheduled to use the computer lab twice a week for 2 h to work on ISIS. No class time was provided to students to work on the ISIS project.

2) Post-test: The post-test occurred during Week 10 of the course. Week 10 was the last instructional week, and Week 11 was when the oral and written group reports were due. The 20-min knowledge-mapping task was administered first, followed by the student survey. The entire post-test activity took less than 1 h to complete.

IV. ASSESSMENT OF STUDENT LEARNING

A. Survey Results

1) Student Survey Scales: Six scales were formed from the selected items of the student survey: 1) effectiveness of ISIS in helping link theory with real-world applications; 2) effectiveness of ISIS in helping develop skills to handle complex projects; 3) attitudes toward ISIS; 4) attitudes toward course; 5) problem-solving processes used during ISIS; and 6) effectiveness of ISIS in helping improve problem-solving processes. The reliability for the attitudes toward ISIS scale was very low; therefore, the attitudes scale was dropped from the analyses. Of the remaining scales, alphas ranged from 0.63 to 0.77.

Table I shows the means, standard deviations, and the correlations among the student survey scales. Perceived effectiveness of ISIS with developing skills to manage complex projects was related significantly with perceived effectiveness of ISIS in helping link theory to real-world applications and with positive attitudes toward ISIS.

B. Knowledge-Mapping Results

Because of the small sample size and the low number of nonsense and principled propositions, the propositions were collapsed to form two new measures. Scores for nonsense and pragmatic propositions were combined, and scores for scientific and principled propositions were combined. These new groupings were defined as shallow and deep propositions, respectively. Table II presents descriptive statistics and intercorrelations among the knowledge map measures and final course grade.
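The collapsing step just described amounts to a simple recoding of the four proposition-quality categories; a minimal sketch with hypothetical names:

    def collapse(counts: dict) -> dict:
        """Combine four quality categories into shallow/deep composites."""
        return {
            "shallow": counts["nonsense"] + counts["pragmatic"],
            "deep": counts["scientific"] + counts["principled"],
        }

    collapse({"nonsense": 0, "pragmatic": 3, "scientific": 4, "principled": 1})
    # -> {"shallow": 3, "deep": 5}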

To assess whether ISIS had an impact on student learning, the authors focused on the relationship between knowledge map performance and final grade, the extent to which students’ content understanding improved from the pretest to the post-test, and the quality of students’ understanding.

1) Knowledge Map Validity Check: Prior to using the knowledge map measures in these analyses, the authors inspected the relationships among the different knowledge map scoring measures (see Table II). Based on prior work [36], high correlations were expected between the content score and deep proposition scores in general. Significant correlations were found between the content score and the deep proposition score on the pretest, post-test, and new-propositions measures. These correlations were similar in direction, magnitude, and significance to those in prior work [36], and they provide evidence that the knowledge-mapping measures were working as expected.


TABLE II. DESCRIPTIVE STATISTICS AND INTERCORRELATIONS (SPEARMAN) OF KNOWLEDGE-MAPPING MEASURES

TABLE III. CORRELATIONS (SPEARMAN) BETWEEN STUDENT SURVEY SCALES AND COURSE AND CONTENT KNOWLEDGE MEASURES (N = 18)


A second check on the knowledge map measures was provided by the set of correlations in Table III between the self-regulation processes of planning and self-checking and the shallow propositions. The more students reported that they engaged in self-regulation cognitive processes, the lower the number of shallow propositions they had in their post-test knowledge map. However, we did not observe significant correlations between the self-regulation measures and the deep proposition measures, as might be expected.

2) ISIS Impact on Learning Content: To test whether student learning occurred over the period of ISIS activity, paired t tests were conducted on the pretest and post-test scores of the content measure, shallow proposition measure, and deep proposition measure. Significant differences were found between the pretest and post-test on the content scores and on the deep knowledge measure. Students had significantly higher content scores on the post-test than pretest, and had more deep propositions in their post-test knowledge map compared to their pretest knowledge map. To test for differences in the type of new propositions students introduced in their post-test knowledge map, a t test was conducted on the number of new shallow and deep propositions. While students introduced more deep propositions than shallow ones in their post-test, this difference was not significant at the 0.05 level.
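For reference, a paired comparison of this kind can be sketched as follows; the paper does not report what software was used, so SciPy is an assumption here, and pre and post stand for per-student pretest and post-test score arrays.

    from scipy import stats

    def paired_comparison(pre, post, alpha=0.05):
        """Paired t test on pretest vs. post-test scores for the same students."""
        result = stats.ttest_rel(pre, post)
        return result.statistic, result.pvalue, result.pvalue < alpha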

Overall, these results suggest that students did acquire knowledge over the course of the ISIS activity. Students had higher content scores on the post-test than pretest and introduced more deep propositions at the post-test than pretest, and of the new propositions introduced, most were deep propositions. About four propositions on average were added by students on the post-test knowledge map. While the ISIS activity appears to have provided opportunities for students to learn additional content, the strength of this interpretation is weakened by the lack of a control condition.

3) Quality of Student Learning: What Was Learned?: An inspection of the three most frequently used propositions in students’ maps provided additional information on the specific propositions used. Two of the three propositions were clearly new (drawdown test measures hydraulic conductivity, new in nine of ten maps; slug test measures hydraulic conductivity, new in ten of ten maps). In contrast, the most frequently used proposition (piezometer measures hydraulic head, new in three of 15 maps) was used by most students prior to the start of the ISIS activity. Overall, students’ new propositions were deep propositions, suggesting that students learned substantive content. Also interesting is the extent to which deep propositions existed in students’ pretest maps, which suggests that some students had considerable knowledge of the content prior to the ISIS activity. This finding may reflect students’ retention of material covered in earlier courses. A capstone course typically requires synthesis and application of ideas covered in earlier core courses, as was the case in this study.

Finally, students’ use of model-focused relationships was examined (is a model parameter and models). The presumption is that these relationships would be used more often as a result of the ISIS experience. Successful performance on the ISIS project required the use of theoretical formulas and derivation of parameters from the simulation data (i.e., linking theory to [simulated] real-world conditions).

For model-focused relations, the proportion of new shallow propositions to all post-test shallow propositions was 0.83. For deep propositions, this proportion was 0.72. For the other relations (affects and measures), the proportion was 0.77 for shallow propositions and 0.57 for deep propositions. The result that deep propositions were mainly model-focused provides additional evidence that students profited from the ISIS experience with respect to linking theory with real-life applications. Interestingly, most of the shallow propositions were new as well, indicating a concomitant increase in conceptual and nonconceptual knowledge, a finding that is also consistent with prior work [36].

4) Who Profited From the ISIS Experience?: The next set of analyses examined different groupings of students by different factors: 1) amount of experience with complex projects; 2) gender; 3) role in the team (member or project manager); and 4) academic standing (undergraduate, graduate). Tests for group differences by final grade, survey scales, self-regulation scales, teamwork scales, and knowledge map performance (content score, shallow propositions, and deep propositions) were conducted using the Mann–Whitney nonparametric procedure. Because this study was exploratory, it was not adjusted for multiple comparisons. The α level was set to 0.05.
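As a sketch of these group comparisons (again assuming SciPy, which the paper does not specify), each grouping factor splits the 18 students into two groups whose scores are compared nonparametrically:

    from scipy.stats import mannwhitneyu

    def group_difference(group_a, group_b, alpha=0.05):
        """Two-sided Mann-Whitney U test for a difference between two groups."""
        result = mannwhitneyu(group_a, group_b, alternative="two-sided")
        return result.statistic, result.pvalue, result.pvalue < alpha

    # e.g., graduate vs. undergraduate final course grades (0-12 scale):
    # u, p, significant = group_difference(grad_grades, ugrad_grades)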

A significant difference was found between experienced and less experienced students on students’ perception of the effectiveness of ISIS in helping them deal with complex projects. Experienced students reported having had projects of comparable complexity to ISIS three to five times or more in college (versus less experienced students, who reported having had such projects one to two times). The experienced students (M = 4.14, SD = 0.50) perceived ISIS as being more effective than did the less experienced students (M = 3.30, SD = 0.67). When graduate students were compared to undergraduate students, graduate students had significantly higher final course grades (M = 10.15, SD = 1.21 versus M = 7.60, SD = 1.34) and reported engaging in self-checking more often (M = 3.28, SD = 0.30 versus M = 2.68, SD = 0.59). When gender was examined, males reported using more problem-solving strategies (M = 3.50, SD = 0.38 versus M = 3.10, SD = 0.39) and more of the teamwork process of leadership (M = 3.47, SD = 0.51 versus M = 2.95, SD = 0.44). No other differences were found for any of the groupings on any of the other measures.

V. DISCUSSION

This study examined an exemplar implementation of a capstone course that embedded a sophisticated simulation-based task within instruction. The simulation was comprehensive, approximating physical systems and the broader engineering context. The effects of ISIS on students’ learning and attitudes toward ISIS were evaluated. Our main research question focused on how ISIS impacted students’ learning of course-related content; their ability to deal with complex, open-ended problems; and their attitudes toward ISIS and the course. Secondary research questions were who profited from ISIS and to what extent knowledge maps can measure content understanding of students with advanced knowledge.

A. Limitations of This Study

The most serious limitation in this study is the lack of a control condition. Because enrollment was lower than expected, there were not enough participants to form non-ISIS control groups; thus, the extent to which the findings are related solely to ISIS is unclear, particularly the observed gains in learning. A second limitation is the confounding between classroom instruction and the ISIS design experience.

The design of this study cannot disentangle the source of the learning effects due to classroom instruction and the design experience with ISIS, nor tease out the learning effects due solely to ISIS (relative to a non-ISIS condition); however, one finding is clear: learning occurred, and our measurement of learning was sensitive to this change. While this finding may seem trivial—one expects learning to occur—how one measures complex learning is less obvious. Unlike the situation in the measurement of physical phenomena, where metrologists routinely calibrate instruments traceable to a higher standard, there exists little validation, much less standardization, in the measurement of complex cognitive phenomena (e.g., critical thinking, problem solving, understanding). Thus, while the authors acknowledge the limitations of this study, this study adds to a small but growing base of research on the measurement of complex learning.


B. Implications for ISIS

The findings of this study point to several provocative implications, particularly in instruction and assessment. With respect to instruction, the use of ISIS was clearly positive. Students reported very positive attitudes toward the ISIS experience, both in the pilot study and the main study. This finding is remarkable considering students reported an unexpected amount of time and effort required by the project, limited access to the system, an unstable system (during the pilot study), and the added burden of participating in this study. This finding suggests that the unique approach taken by this capstone design course was successful and worthwhile from the students’ perspective and warrants continued use, development, and refinement.

In terms of learning, students appeared to have profited from the ISIS project. The data from the main study are consistent with the idea that students gained deep content knowledge between the pretest and post-test. There were more deep propositions than shallow propositions in the knowledge maps, and there was more use of propositions with theoretical relations. Finally, students reported that they considered the ISIS activity to be generally effective in improving their skills to handle complex projects, linking theory to real-world applications, improving their problem-solving performance, and developing positive attitudes toward ISIS.

With respect to the effectiveness of ISIS for different kinds of students, the data from the pilot study and from the main study suggest experience with complex projects like ISIS was important. Experience in terms of academic standing (graduate versus undergraduate) also appeared to be a factor. Graduate students had higher course grades, reported using more self-checking processes, and had higher content knowledge map scores. Differences were also found by gender. Males reported using more problem-solving processes. These differences should be viewed as exploratory, because the significance testing was not controlled for multiple comparisons.

Finally, the finding in both the pilot and main studies that experienced students perceived ISIS as more effective in helping them handle complex projects is interesting. This difference was not found when the question was examined by any other factor, in either the pilot or main study. The authors speculate that this outcome may be the case of experienced students having a retrospective appreciation for the experience of the project by virtue of their having experienced like projects. An implication of this finding is that less experienced students will profit from the ISIS experience, although they may not appreciate the ISIS experience until their next complex project.

C. Implications for Assessment in Engineering Education

The attempt to use knowledge mapping as a performance measure of content understanding was generally successful. Differences in the students’ performance were detected over the course of ISIS, a result that points to the instructional sensitivity of the measure. However, more work is needed to validate the use of knowledge mapping to measure students’ understanding of complex subject matter, particularly when students are advanced (e.g., upper-division students). As occurred in this study, students generated propositions that were absent from the expert map yet were clearly appropriate. This omission also points to the need for multiple expert maps to capture a representative sample of high-level knowledge.

A clear next step is to develop a broader range of assessments to measure cognitive outcomes that address more of the EC2000 criteria. Assessments of problem solving, teamwork, design skills, and communication are needed and can be embedded within the simulation environment. Future work will embed within ISIS the capability to track and assess students’ on-line behaviors—the on-line equivalent of a trained observer assessing the field performance of the team over time. The utility of embedded assessments lies in the timeliness of the reporting of assessment information. Instructors would have real-time access to information about their students’ ongoing performance on the task and, just as important, be able to provide timely feedback to the students. Because many of these activities are on-line, the authors envision ISIS as the genesis of a performance-oriented instructional and assessment platform that can roll up student-level data into larger departmental information systems.

As engineering schools move toward ABET/EC2000 compliance, the authors anticipate calls to situate student outcomes—individual student learning, teamwork, and attitudes—into a larger system of indicators. They anticipate a movement toward gathering a variety of evidence (versus a single grade or survey), increasingly from on-line performance-oriented tasks, to better uncover what students are learning, the depth of their learning, and the process they are using to learn. The goal of such assessments is to provide instructors and administrators with high-quality information about students’ learning, which will provide the basis for making sound decisions about instruction and curriculum.

REFERENCES

[1] “Criteria for accrediting engineering programs,” Accreditation Board Eng. Technol., Baltimore, MD, 2000.
[2] “Assessment white paper: A framework for the assessment of engineering education,” Amer. Soc. Eng. Educ., 1996.
[3] “How do you measure success? Designing effective processes for assessing engineering education,” Amer. Soc. Eng. Educ., 1998.
[4] Board Eng. Educ., Engineering Education: Designing an Adaptive System. Washington, DC: National Academy Press, 1995.
[5] H. R. Coward, C. P. Ailes, and R. Bardon, “Progress of the engineering education coalitions,” SRI Int., Arlington, VA, Final Rep., Eng. Educ. Centers Division, NSF, 2000.
[6] E. Dowell, E. Baum, and J. McTague, Engineering Education for a Changing World. Washington, DC: Amer. Soc. Eng. Educ., 1994.
[7] C. W. Meyers, “Restructuring engineering education: A focus on change,” Nat. Sci. Foundation, Arlington, VA, Rep. NSF Workshop Eng. Educ., 1995.
[8] E. W. Banios, “Teaching engineering practices,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1991, pp. 161–168.
[9] W. A. Wulf, “How shall we satisfy the long-term educational needs of engineers?,” Proc. IEEE, pp. 593–596, 2000.
[10] M. Besterfield-Sacre, L. J. Shuman, H. Wolfe, C. J. Atman, J. McGourty, R. L. Miller, B. M. Olds, and G. M. Rogers, “Defining the outcomes: A framework for EC-2000,” IEEE Trans. Educ., vol. 43, pp. 100–110, 2000.
[11] B. Bond, “The difficult part of capstone design courses,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1995, pp. 2c3.1–2c3.4.
[12] R. L. Mertz, “A capstone design course,” IEEE Trans. Educ., vol. 40, pp. 41–45, 1997.
[13] J. L. Newcomer, “Design: The future of engineering and engineering technology education,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1999, pp. 12b9–12b14.
[14] M. J. Safoutin, C. J. Atman, R. Adams, T. Rutar, J. C. Kramlich, and J. L. Fridley, “A design attribute framework for course planning and learning assessment,” IEEE Trans. Educ., vol. 43, pp. 188–199, 2000.
[15] W. T. Neumann and M. C. Woodfill, “A comparison of alternative approaches to the capstone experience: Case studies versus collaborative projects,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1998, pp. 470–474.
[16] M. Cline and G. J. Powers, “Problem based learning via open ended projects in Carnegie Mellon University’s chemical engineering undergraduate laboratory,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1997, pp. 350–354.
[17] A. Ellis, L. Carswell, A. Bernat, D. Deveaux, P. Frison, V. Meisalo, J. Meyer, U. Nulden, J. Rugelj, and J. Tarhio, “Resources, tools, and techniques for problem based learning in computing,” SIGCSE Bulletin, vol. 30, no. 4, pp. 45b–60b, 1998.
[18] R. G. S. Matthew and D. C. Hughes, “Getting at deep learning: A problem-based approach,” Eng. Sci. Educ. J., vol. 3, pp. 234–240, 1994.
[19] M. McCracken and R. Waters, “WHY? When an otherwise successful intervention fails,” in Proc. SIGCSE/SIGCUE Conf. Innovation Technol. Comput. Sci. Educ., 1999, pp. 9–12.
[20] R. Waters and M. McCracken, “Assessment and evaluation in problem-based learning,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1997, pp. 689–693.
[21] W. E. Dillon, G. V. Kondraske, L. J. Everett, and R. A. Volz, “Performance theory based outcome measurement in engineering education and training,” IEEE Trans. Educ., vol. 43, pp. 153–158, 2000.
[22] B. Helland and B. G. Summers, “Assessment techniques for a learner-centered curriculum: Evaluation design for adventures in supercomputing,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1996, pp. 301–305.
[23] F. McMartin, A. McKenna, and K. Youssefi, “Scenario assignments as assessment tools for undergraduate engineering education,” IEEE Trans. Educ., vol. 43, pp. 111–119, 2000.
[24] W. C. Newstetter and S. Khan, “A developmental approach to assessing design skills and knowledge,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1997, pp. 676–680.
[25] B. M. Aller, “‘Just like they do in industry’: Concerns about teamwork practices in engineering design courses,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1993, pp. 489–492.
[26] G. K. W. K. Chung, H. F. O’Neil, Jr., and H. E. Herl, “The use of computer-based collaborative knowledge mapping to measure team processes and team outcomes,” Comput. Human Behavior, vol. 15, pp. 463–494, 1999.
[27] K. L. Gentili, J. F. McCauley, R. K. Christianson, D. C. Davis, M. S. Trevisan, D. E. Calkins, and M. D. Cook, “Assessing students’ design capabilities in an introductory design class,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1999, pp. 13b1-8–13b1-13.
[28] D. Jacobson, J. Davis, and B. Licklider, “Ten myths of cooperative learning in engineering education,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1998, pp. 790–794.
[29] A. McKenna, L. Mongia, and A. Agogino, “Capturing students’ teamwork and open-ended design performance in an undergraduate multimedia engineering design course,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1998, pp. 264–269.
[30] J. E. Seat, W. A. Poppen, K. Boone, and J. R. Parsons, “Making design teams work,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1996, pp. 272–275.
[31] E. V. Duzer and F. McMartin, “Methods to improve the validity and sensitivity of a self/peer assessment instrument,” IEEE Trans. Educ., vol. 43, pp. 153–158, 2000.
[32] J. Heywood, “Problems in the design of assessment led curricula,” in Proc. Annu. Frontiers Eng. Educ. Conf., 1999, pp. 12a9-1–12a9-4.
[33] H. E. Herl, E. L. Baker, and D. Niemi, “Construct validation of an approach to modeling cognitive structure of U.S. history knowledge,” J. Educ. Res., vol. 89, pp. 206–218, 1996.
[34] H. E. Herl, H. F. O’Neil, Jr., R. A. Dennis, G. K. W. K. Chung, D. C. D. Klein, J. Lee, J. Schacter, and E. L. Baker, “Tech. Rep. year 1 CAETI findings” (Rep. ISX/DODEA), Nat. Center Res. Evaluation, Standards, Student Testing, Los Angeles, CA, 1996.
[35] H. E. Herl, H. F. O’Neil, Jr., G. K. W. K. Chung, C. Bianchi, S.-L. Wang, R. E. Mayer, C.-Y. Lee, A. Choi, T. Suen, and A. Tu, “Final Rep. validation of problem solving measures,” Nat. Center Res. Evaluation, Standards, Student Testing, Los Angeles, CA, Tech. Rep. 501, 1999.
[36] E. Osmundson, G. K. W. K. Chung, H. E. Herl, and D. C. D. Klein, “Concept mapping in the classroom: A tool for examining the development of students’ conceptual understanding,” Nat. Center Res. Evaluation, Standards, Student Testing, Los Angeles, CA, Tech. Rep. 507, 1999.
[37] M. A. Ruiz-Primo and R. J. Shavelson, “Concept maps as potential alternative assessments in science,” presented at the Annu. Meet. Amer. Educ. Res. Assoc., San Francisco, CA, 1995.
[38] M. A. Ruiz-Primo, S. E. Schultz, and R. J. Shavelson, “Concept map-based assessment in science: Two exploratory studies,” Nat. Center Res. Evaluation, Standards, Student Testing, Los Angeles, CA, Tech. Rep. 436, 1997.
[39] D. H. Jonassen, K. Beissner, and M. Yacci, Structural Knowledge: Techniques for Representing, Conveying, and Acquiring Structural Knowledge. Hillsdale, NJ: Lawrence Erlbaum, 1993.
[40] J. D. Novak and D. B. Gowin, Learning How to Learn. New York: Cambridge Univ. Press, 1984.
[41] J. Turns, C. J. Atman, and R. Adams, “Concept maps for engineering education: A cognitively motivated tool supporting varied assessment functions,” IEEE Trans. Educ., vol. 43, pp. 164–173, 2000.

Gregory K. W. K. Chung (M’01) received the B.S. degree in electrical engineering from the University of Hawaii, Manoa, the M.S. degree in educational technology from Pepperdine University, Los Angeles, CA, and the Ph.D. degree in educational psychology from the University of California, Los Angeles.

He is a Senior Research Associate at the National Center for Research on Evaluation, Standards, and Student Testing. His current work at CRESST involves developing problem-solving assessments for computer-based assessments, evaluating technology-based learning environments, conducting research on the measurement of team processes using networked simulation, and developing Internet-based assessment tools for diagnostic and embedded assessment purposes.

Thomas C. Harmon received the B.S. degree in civil engineering from the Johns Hopkins University, Baltimore, MD, and the M.S. and Ph.D. degrees in civil engineering from Stanford University, Stanford, CA.

He is an Associate Professor in the Department of Civil and Environmental Engineering at the University of California, Los Angeles (UCLA). His research and teaching center around the fate and transport of contaminants in the environment. He is the instructor for C&EE 164 Hazardous Waste Site Investigation and Remediation, and the lead content provider for the ISIS program.

Eva L. Baker received the B.A. degree in English and the M.A. and Ph.D. degrees in education from the University of California, Los Angeles (UCLA).

She is Co-Director of the National Center for Research on Evaluation, Standards, and Student Testing and Director of the UCLA Center for the Study of Evaluation. Her research is focused on the design and validation of technology-based learning and assessment systems and on new models to measure complex human performance in large-scale assessments. A Professor in the Psychological Studies in Education Division at the UCLA Graduate School of Education and Information Studies, she also is Co-Chair of the Joint Committee on the Revision of the Standards for Educational and Psychological Testing. She is involved in international, national, and state policy deliberations on assessment. She was recently selected to deliver the distinguished 1998 William Angoff Memorial Lecture at Educational Testing Service. Her publications include more than 450 chapters, books, and articles.