

ARTICLES

Articles should deal with topics applicable to the broad field of program evaluation. Implications for practicing evaluators should be clearly identified. Examples of contributions include but are not limited to reviews of new developments in evaluation and descriptions of a current evaluation effort, research problem, or technique. Manuscripts should include appropriate references and not exceed 10 double-spaced typewritten pages in length.

Hypertext: What It Is and How to Use It to Analyze Data

VALORIE BEER AND ANNE MARIE S. JENSEN

Evaluators of large or complex programs frequently face a data morass that threatens any meaningful analysis. The problem is acute for evaluations that follow naturalistic/ qualitative paradigms, or for meta-evaluations for which the original data are not uniformly represented by the same conventions in each data set. In either case, the evaluator must contend with field notes, observation forms, interview transcripts, and original sources that are not amenable to straightforward computational analysis, but must be sifted and sorted from various perspectives before meaning (or meanings) emerge.

This iterative dance between real or anticipated data and its analysis begins early in the evaluation. Preconceived assumptions regarding "important" or "trivial" characteristics in the environment, and their classifications and relationships, are anathema to several evaluation paradigms; however, from the very beginning, almost every evaluator "is making interpretations and must have some conceptual scheme to do this" (Bogdan and Biklen, 1982, p. 32). The scheme can be devised in two ways:

• By analyzing the evaluation data according to the interpretations and categories that are indigenous to each evaluation site. This "emic" approach requires the evaluator to categorize the data as a "native" would (Patton, 1980). Although this approach to framework-building appears to elicit "natural" representations, it overlooks the probability that the program's "natives" may not be aware of, or may not have labels and categories for, some objects and behaviors in their environment (Pelto, 1970).

Valorie Beer • Manager of Evaluation, Apple Computer, Inc., 20525 Mariani Avenue MS 72Z, Cupertino, CA 95014; Anne Marie S. Jensen • 423 Pennsylvania #1, San Francisco, CA 94107.

Evaluation Practice, Vol. 12, No. 3, 1991, pp. 193-203. Copyright © 1991 by JAI Press, Inc. ISSN: 0191-8036. All rights of reproduction in any form reserved.


program's "natives" may not be aware of, or may not have labels and categories for, some objects and behaviors in their environment (Pelto, 1970).

• By analyzing the data according to constructs based on evaluation theory and extant analysis strategies. These "etic" representations give a ready-made framework for interpreting new data; however, they risk becoming inappropriate pigeonholes.

Whether the data analysis begins with native representations or theoretical constructs, the evaluator must be prepared with structures flexible enough to accommodate the data through its analytical metamorphosis. At the same time, the evaluator may want or need to:

• make the structures available to collaborators at different program sites

• maintain an "audit trail" of structural changes in the program (and the rationale for them)

• do "what-if" manipulations of the data (i.e., change the variables or categories so that rival interpretations might arise)

• replicate the representations for another evaluation (without having to remove or rekey the data in the original structure).

Significant conceptual guidance already exists to assist the evaluator with these tasks (see, for example, Guba, 1978; Lincoln and Guba, 1985; Miles and Huberman, 1984; Patton, 1980, 1981); however, the mechanics of analysis are so cumbersome as to discourage much creativity, risk-taking, and conceptual play with the data. Naturalistic/qualitative data collection and analysis have been dependent on paper, index cards, and text processing and database management programs. A few software packages specifically address qualitative data analysis; however, none provides "accelerators" that adequately alleviate the time and cognitive constraints that prevent evaluators from experimenting with the information in any more than one or two of its fascinating (and, perhaps, meaningful and enlightening) manifestations. "Unfortunately, most [database management systems] are not well-equipped to handle the kinds of multi-media, ill-structured, multiple representation information" that evaluators face (Russell, Burton, Jordan, Jensen, Rogers, and Cohen, 1989, p. 20). The result can be hasty, over-simplified conclusions for which the trail of reductions (and the rationale for them) is lost (Miles and Huberman, 1984).

However, naturalistic/qualitative evaluations and meta-evaluations are, by nature, hypertextual, that is, characterized by non-linear examinations, searches, and linkages of intricately related material (Jordan, Burton, Jensen, and Russell, 1987; Smith, 1988). It therefore seems appropriate to explore both hypertextual ways of thinking about evaluation and hypertextual computer programs: the former for their potential contribution to the conceptualization of evaluation, and the latter for their assistance in maintaining, manipulating, and analyzing non-numerical evaluation data.


EVALUATION AS A HYPERTEXTUAL THINKING TASK: THE CASE IN EDUCATIONAL EVALUATION

Evaluators of qualitative data perform their analyses in multiple "thought spaces" (conceptual categories) using subtle and intensively-interacting methods of data organization and retrieval. The evaluator "is noting regularities, patterns, explanations, possible configurations, causal flows and propositions" (Miles and Huberman, 1984, p. 22).

Educational settings typify environments that require this kind of analysis. The data processing and negotiation required during front-end and formative evaluations (and the integration of the results into instructional design) depend on the evaluator's ability to organize and link data; in a sense, to think hypertextually. For example, during front-end evaluations (e.g., job, task and needs analyses), the evaluator analyzes tasks and outputs, and links those to each other and to the corresponding competencies (knowledge, skills, abilities) required for performance. Using needs assessment data, the evaluator then establishes which competencies the target population needs to learn, and suggests knowledge structuring schemes that the instructional designer will subsequently reify in the curriculum. The evaluator follows the output-task-competency "links" to ensure that all elements necessary for the conduct of a task (as determined by the job evaluation) become part of the curriculum. The difficulty lies in keeping track of the elements so that none are lost between this front-end analysis and the actual curriculum design.

When the curriculum is ready for formative evaluation, the evaluator's task is to identify elements (e.g., objectives, content, student activities, teaching strategies) that need revision. The evaluator inspects for consistency by examining all data pertaining to one element (such as instructional strategy), and for integration by examining all elements that are (or should be) related (such as objectives and test items). A particularly difficult task for the formative evaluator is to remember the intricacies of the curriculum well enough to assess consistency and integration, and then to predict the impact of curriculum revisions upon them.

In both front-end and formative evaluations, the problem directly relates to the nature of the information that the evaluator has amassed: "small collection[s] of data, usually organized around a single topic" (Seyer, 1989, p. 22). The same may be said of meta-evaluation data, except that the "collections" tend to be larger. These data sets may be thought of as "nodes" in a network of evaluation information.

Given this property of qualitative evaluation data, it seems reasonable to make the leap from mental to electronic hypertext.

"The advantage of using hypertext [software].. . is the enormous flexibility offered the user in representing structures. The basic node-and-link construct of hypertext is powerful enough to incorporate structures ranging from the formal (e.g., relational tables) to semi-formal (semantic networks) to informal (unstructured data), in a single environment" (Jordan, Russell, Jensen, and Rogers, 1989, p. 1).


EVALUATION AS A HYPERTEXTUAL COMPUTING TASK: AN EXAMPLE FROM EDUCATIONAL EVALUATION

Hypertext application development for the design of educational programs is already well under way (Driver, 1989; Gustafson, 1989; Merrill, 1987; Merrill, Li and Jones, 1989; Seyer, 1989; Stevens, 1989). Most programs developed thus far are of the "expert system" genre; that is, they are intended to give the neophyte instructional designer-evaluator customized advice during various phases of the design process. These systems typically are more complex than rule-based or decision-tree programs; several use frames, semantic nets or direct representations to "strive to represent and manipulate the rules of thumb (heuristics) and 'fuzzy logic' . . . that human experts apply to incomplete and uncertain data when solving problems and making decisions" (Kirrane and Kirrane, 1989, p. 38).

The evaluator will find that these hypertext instructional design programs generally are well-developed for the front-end evaluation phase of curriculum design. The output-task-competency arrangement common to job evaluation and needs analysis seems to translate easily to a hypertextual representation. The suggestion has also been made that hypertext may be useful in the later stages of evaluation, particularly for test item banking and questionnaire development (Kearsley, 1989).

However, for the professional evaluator, most of these programs have two major drawbacks:

1. They are expert systems, which the expert may not need. The experienced evaluator (or instructional developer) needs a design aid, a program that is more heuristic than algorithmic, that "is not tied to a particular theory and may be tailored to accommodate design within any theoretical stance" (Russell, et al., 1989, p. 4). This is especially important for the naturalistic/qualitative evaluator who needs the flexibility of having the analysis design emerge during the process. (According to one argument [Gayeski, 1988], the design aid approach is better in any case, since algorithms for instructional design and evaluation are not well known.)

2. They do not address formative or meta-evaluation directly. There are no representations for these types of evaluation in extant hypertext programs.

Yet, especially at the points where evaluation intersects the instructional design process, there is tremendous opportunity for dynamic analysis of the evaluation data and for synergistic interplay between the evaluator and instructional designer.

What would a hypertext design aid that more explicitly addressed this interplay look like to the evaluator of, say, a skill-based training program? It would begin with front-end evaluation, a complex process in which the evaluator collects new data and meta-analyzes existing materials (such as job descriptions). The hypertext stack¹ built by our hypothetical evaluator for this job evaluation might begin with a "job" card (Figure 1).



Job Description: >> <<

Related outputs: [Output]

Related tasks: [Tasks]

Source(s) of the information on this card: >> <<

Definition of terms shown in bold on this card: [Current Definitions]

Figure 1. Job card

This card would contain a brief description of the job and embedded "links" (indicated by the brackets []) to other types of cards that describe outputs (Figure 2) and tasks (Figure 3) related to the job.

The evaluator might include in these cards a reference to the source(s) of the job information and perhaps a glossary of concepts and terms ("Current Definitions") associated with the particular job evaluation model being used (so that others using the hypertext stacks could follow the evaluator's rationale).

This card structure (job + output + task) would enable the evaluator to maintain the data in manageable chunks, while retaining interconnections (integration). The evaluator could also view all instances of one chunk (such as outputs) to check for consistency and completeness within the set. If the evaluator were meta-analyzing existing sources of job information, the cards would provide a way of standardizing the representation of data from disparate sources. The evaluator would then be able to proceed with the job evaluation unencumbered by the dissimilar formats of the original material (e.g., job descriptions, performance appraisals).
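A hypothetical sketch of this standardizing effect follows (again in Python, for illustration only; the field lists paraphrase Figures 1-3, and the dictionary representation is an assumption rather than a description of any actual hypertext program). Each card type enforces a uniform set of fields, so that material drawn from dissimilar documents ends up in the same shape, with unfilled fields left visible.

# A sketch of job, output, and task card templates that standardize data
# drawn from dissimilar source documents. Field names paraphrase Figures 1-3.

JOB_FIELDS = ["job_description", "related_outputs", "related_tasks", "sources"]
OUTPUT_FIELDS = ["description", "standards", "competencies", "related_tasks", "sources"]
TASK_FIELDS = ["statement", "competencies", "conditions", "standards",
               "taxonomy_level", "related_outputs", "sources"]

TEMPLATES = {"job": JOB_FIELDS, "output": OUTPUT_FIELDS, "task": TASK_FIELDS}


def new_card(card_type: str) -> dict:
    """Create an empty card: every template field present but unfilled."""
    return {"type": card_type, **{f: None for f in TEMPLATES[card_type]}}


def fill_card(card_type: str, source_name: str, **values) -> dict:
    """Standardize one chunk of source material into a card.

    Fields not supplied stay None, so gaps in the original material
    remain visible rather than silently disappearing.
    """
    card = new_card(card_type)
    for name, value in values.items():
        if name not in TEMPLATES[card_type]:
            raise KeyError(f"{name!r} is not a field on a {card_type} card")
        card[name] = value
    card["sources"] = source_name
    return card


# The same task drawn from two dissimilar documents ends up in one uniform shape.
t1 = fill_card("task", "job description", statement="Log incoming calls")
t2 = fill_card("task", "performance appraisal", statement="Record each call",
               standards="All calls logged within one minute")
print(t1["statement"])    # -> Log incoming calls
print(t2["standards"])    # -> All calls logged within one minute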

Description of output: >> <<

Standards/criteria for acceptable output: >> <<

Knowledge, skills, abilities required to produce output: [Knowledge] [Skill] [Ability]

Related tasks: [Task]

Source(s) of the information on this card: >> <<

Definition of terms shown in bold on this card: [Current Definitions]

Figure 2. Output card


Task Statement: >> <<

Knowledge, skills, abilities required for performance: [Knowledge] [Skill] [Ability]

Conditions for performance: >> <<

Standards/criteria for acceptable performance: >> <<

Taxonomy level: >> <<

Related output(s): [Output]

Source(s) of the information on this card: >> <<

Definition of terms shown in bold on this card: [Current Definitions]

Figure 3. Task card

In anticipation that the front-end analysis may indicate the need for a training program, the evaluator might include in the task and output cards information that would presage the creation of behavioral objectives and instructional content. The evaluator might describe the criteria for successful completion of a task or output, list the conditions for task performance, and estimate the taxonomy level at which a task would fall were it converted to a behavioral objective (using, for example, the taxonomy formulated by Bloom, Engelhart, Furst, Hill and Krathwohl, 1956).

When the job evaluation was completed (i.e., all of the cards and fields filled in), the evaluator might then have the hypertext program construct an "aerial view" (browser) of all cards in the stack (Figure 4) to assess the interconnections among tasks and outputs. Holes in the analysis (e.g., tasks without outputs) would show as missing cards or as cards without links.
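A rough sketch of such a completeness check appears below. The representation of cards and links is assumed for illustration; a hypertext program's browser would perform the equivalent inspection graphically.

# A sketch of the "aerial view" check: given cards and links (plain dicts and
# pairs here), report cards that have no links at all and tasks that are not
# connected to any output card.

def find_holes(cards: dict[str, str], links: list[tuple[str, str]]) -> dict:
    """cards maps card id -> card type; links are (source id, target id) pairs."""
    linked = {c for pair in links for c in pair}
    unlinked = [cid for cid in cards if cid not in linked]

    # Tasks should be connected to at least one output card.
    tasks_without_outputs = []
    for cid, ctype in cards.items():
        if ctype != "task":
            continue
        partners = [b for a, b in links if a == cid] + [a for a, b in links if b == cid]
        if not any(cards.get(p) == "output" for p in partners):
            tasks_without_outputs.append(cid)

    return {"unlinked_cards": unlinked, "tasks_without_outputs": tasks_without_outputs}


cards = {"job-1": "job", "task-1": "task", "task-2": "task", "output-1": "output"}
links = [("job-1", "task-1"), ("task-1", "output-1")]
print(find_holes(cards, links))
# -> {'unlinked_cards': ['task-2'], 'tasks_without_outputs': ['task-2']}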

Having completed the front-end analysis, the evaluator might hand off the training-related card stacks to an instructional designer.

[Generic task analysis browser diagram]

Figure 4. Job Evaluation Browser



Behavioral statement ("The student will..."): >> <<

Conditions: >> <<

Standards/criteria for performance: >> <<

Enabling objective(s): [Evaluation Level] [Synthesis Level]

Content: >> <<

Definition of terms shown in bold on this card: [Current Definitions]

Instructional design rationale for the fields on this card: [Rationale]

Figure 5. Objective card


Taking the evaluator's task and output cards, the designer would create the training objectives (behavioral statements), and specify the conditions and standards for successful completion of the objective (Figure 5).

From the knowledge, skill and ability cards, the designer would determine the enabling objectives (each on its own card), and begin to outline the content for them. So that others could follow or audit the instructional design rationale, the designer might explain it in a separate card.

When the curriculum was ready for formative evaluation, the evaluator would examine the cards, links and browsers for consistency and integration among the curriculum elements. The evaluator would follow "linkages between representations to find portions of the course that need modification," and would test suggested changes to assess their implications for related (linked) elements throughout the curriculum (Russell, et al., 1989, p. 11). Finally, the evaluator could re-assess the revised training, checking for consistency and integration among curriculum elements and for the program's adherence to the original job evaluation. (Ensuring that the necessary links were in place would help to alleviate the "dropped-in" appearance that frequently characterizes curriculum revisions and updates.)
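The following sketch illustrates, under an assumed link representation, how the implications of one proposed change might be enumerated by walking the links outward from the revised card; everything reached is a candidate for re-inspection.

# A sketch of testing a proposed revision: starting from one changed card,
# follow links transitively to list every curriculum element the change could
# touch. Card names and the link structure here are illustrative only.

from collections import deque


def affected_by(change: str, links: list[tuple[str, str]]) -> set[str]:
    """Breadth-first walk over links (treated as undirected) from the changed card."""
    neighbors: dict[str, set[str]] = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)

    seen, queue = {change}, deque([change])
    while queue:
        card = queue.popleft()
        for nxt in neighbors.get(card, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {change}


links = [("objective-3", "test-item-7"), ("objective-3", "content-outline-2"),
         ("content-outline-2", "student-activity-5")]
print(sorted(affected_by("objective-3", links)))
# -> ['content-outline-2', 'student-activity-5', 'test-item-7']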

Aside from these direct applications to educational evaluation, an evaluator may find it useful to plan evaluations of any kind in a hypertext system. The evaluator could create and link cards for evaluation questions, data, sources and methods. The browser would show the interconnections among these elements of evaluation planning (Figure 6). Holes in the strategy would show as unlinked or omitted cards.
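Applied to planning, the same hole-finding idea might look like the brief (and, again, purely illustrative) sketch below: any question card left unlinked to data, sources, or methods surfaces immediately.

# A sketch, using plain dictionaries, of an evaluation-planning stack:
# question, data, source, and method cards joined by links, with unlinked
# cards reported as holes in the strategy. Names are illustrative only.

plan_cards = {
    "Q1": "question", "Q2": "question",
    "D1": "data", "S1": "source", "M1": "method",
}
plan_links = [("Q1", "D1"), ("D1", "S1"), ("Q1", "M1")]   # Q2 is left unplanned

linked = {c for pair in plan_links for c in pair}
holes = [c for c in plan_cards if c not in linked]
print(holes)   # -> ['Q2']: a question with no data, source, or method attached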

THE WHYS (AND WHY NOTS) OF HYPERTEXTUAL EVALUATION

Using hypertextual programs to support hypertextual thinking has several advantages for the evaluator:


[Figure 6: browser view of evaluation planning cards and links]


1. The data can be represented uniformly, yet mutably, in the card fields (or "templates"). The evaluator can change the master template and have all subsequent representations reflect the change. This allows the evaluator to do "what if" manipulations of the representations, without damaging the content, a crucial capability for the naturalistic/qualitative evaluator who is searching for alternate interpretations of the data set.

2. The hypertext structures are re-usable. The templates, including their links but minus their content, could be copied and the entire structure used for another evaluation (a sketch of such a structure copy follows this list). In the case of job evaluation, re-using the existing job modeling structure could save hours in the evaluation of new jobs or the updating of existing job analyses. (The training outline structure also could be re-used as a template model for the design of other programs based on the same job.)

3. The hypertext environment offers numerous opportunities for collaboration. Evaluators working in naturalistic/qualitative paradigms continually search for "intersubjective consensus" (Miles and Huberman, 1984, p. 22), a process that could be enhanced by the ability to share work spaces (e.g., templates, cards, links) and to make evaluation structures explicit in the hypertext environment.

4. Hypertext facilitates the creation of a "rationale trace." By maintaining a library of structures, a separate stack of "comment" cards, or some other record of interactions with the system, the evaluator can maintain a "trail" that allows others to see the evolution of and follow the rationale for the analysis. This is useful for retracing (and then validating or modifying) the analysis path.
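A sketch of the structure-copying idea in advantage 2 follows; the stack layout shown is an assumption made only for illustration.

# A sketch of re-use: copy an existing stack's cards and links while clearing
# the field contents, leaving an empty structure ready for a new evaluation.

import copy


def clone_structure(stack: dict) -> dict:
    """Return a new stack with the same cards and links but empty fields.

    `stack` is assumed to look like:
        {"cards": {card_id: {"type": ..., "fields": {...}}}, "links": [...]}
    """
    new_stack = copy.deepcopy(stack)
    for card in new_stack["cards"].values():
        # Keep the field names (the structure); drop the content.
        card["fields"] = {name: None for name in card["fields"]}
    return new_stack


old = {"cards": {"job-1": {"type": "job",
                           "fields": {"description": "Customer support agent"}}},
       "links": [("job-1", "task-1")]}
template = clone_structure(old)
print(template["cards"]["job-1"]["fields"])   # -> {'description': None}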

These major advantages notwithstanding, using hypertext representations in evaluation has its drawbacks. The ability to represent, inspect, and modify complex structures provides a powerful tool, but only if the evaluation structure or model is well-developed; hypertext will not compensate for immature conceptualizations. (However, "well-developed" does not necessarily mean "preordained." Hypertext can accommodate the changes in representations so important to the non-quantitative evaluation paradigms.) Learning to "think hypertextually" may be a new experience for the evaluator accustomed (because of time constraints) to finding one or two representations that will suffice as the evaluation's outcome. A related problem, being "lost in hyperspace," awaits the evaluator who, having created a large hypertext stack, now faces a miasma of cards, links, and structures.

Hypertext's greatest strength (and yet, perhaps, its greatest source of apprehension for the evaluator) is its effect on the thought process itself. Suddenly, alternative representations and interpretations of the evaluation data are not only possible, but feasible, to explore. Hypertext programs incite the evaluator to creative interaction with the data and its myriad representations and interpretations, and all without having to worry about dropping or damaging the pieces of the evaluation puzzle.


NOTE

1. The examples shown in this article were created in Instructional Design Environment (IDE), a software system developed at Xerox Corporation's Palo Alto Research Center. IDE provides tools to create an on-line record of data representations and knowledge structures, explicitly for instructional design and evaluation. Although IDE is a research prototype currently unavailable for purchase or general use, its characteristics stimulate thinking about dynamic representations of evaluation data and the ways in which other hypertext programs may be created or modified to support evaluation data collection and analysis. For complete descriptions of IDE, see Jensen, Jordan, and Russell, 1987; Jordan, et al., 1987; 1989; Russell, et al., 1989. Further information about IDE may be obtained from the Institute for Research on Learning, 2550 Hanover Street, Palo Alto, CA 94304.

REFERENCES

Bloom, B.S., Engelhart, N.D., Furst, E.J., Hill, W.H., and Krathwohl, D.R. (1956). Taxonomy of Educational Objectives: Handbook I: Cognitive Domain. New York: McKay.

Bogdan, R.C., and Biklen, S.K. (1982). Qualitative Research for Education: An Introduction to Theory and Methods. Boston: Allyn and Bacon.

Driver, J. (1989). A rule based curriculum development system. Presentation at the Instructional Technology Institute: Computer Based Tools for Instructional Design, Logan, Utah (July).

Gayeski, D.M. (1988). Can (and should) instructional design be automated? Performance and Instruction, 27 (10), 1-5.

Guba, E.G. (1978). Toward a Methodology of Naturalistic Inquiry in Educational Evaluation. (Monograph Series No. 8). Los Angeles: University of California, Center for the Study of Evaluation.

Gustafson, K. (1989, July). A hypercard course development system. Presentation at the Instructional Technology Institute: Computer Based Tools for Instructional Design, Logan, Utah.

Jensen, A.S., Jordan, D.S., and Russell, D.M. (1987). The IDE system for creating instruction. Paper presented at the Applications of Artificial Intelligence and CD-ROM in Education and Training Conference, Arlington, Virginia (October).

Jordan, D.S., Burton, R.R., Jensen, A.S., and Russell, D.M. (1987). A hypertext environment to support the task of instructional design. Paper presented at Hypertext '87, University of North Carolina, Chapel Hill, North Carolina.

Jordan, D.S., Russell, D.M., Jensen, A.S., and Rogers, R.A. (1989). Facilitating the development of representations in hypertext with IDE. Hypertext '89 Proceedings.

Kearsley, G.P. (1986). Automated instructional development using personal computers: Research Issues. Journal of Instructional Development, 9 (1), 9-15.

Kirrane, P. R. and Kirrane, D.E. (1989). What artificial intelligence is doing for training. Training, 26 (7), 37-43.

Lincoln, Y.S. and Guba, E.G. (1985). Naturalistic Inquiry. Beverly Hills, CA: Sage.

Merrill, M.D. (1987). An expert system for instructional design. IEEE Expert, 2(2), 25-37.

Merrill, M.D., Li, Z., and Jones, M. (1989). An instructional design expert system. Presentation at the Instructional Technology Institute: Computer Based Tools for Instructional Design, Logan, Utah (July).

Miles, M.B. and Huberman, A.M. (1984). Qualitative Data Analysis: A Sourcebook of New Methods. Beverly Hills, CA: Sage.

Patton, M.Q. (1980). Qualitative Evaluation Methods. Beverly Hills, CA: Sage.

Patton, M.Q. (1981). Creative Evaluation. Beverly Hills, CA: Sage.

Pelto, P.J. (1970). Anthropological Research. New York: Harper and Row.

Russell, D.M., Burton, R.R., Jordan, D.S., Jensen, A.S., Rogers, R.A., and Cohen, J. (1989). Creating instruction with IDE: Tools for instructional designers. Technical Report PS-00076. System Sciences Laboratory, Xerox Palo Alto Research Center.


Seyer, P. (1989). Performance improvement with hypertext. Performance and Instruction, 28(2), 22-28.

Smith, K.E. (1988). Hypertext--linking to the future. Online (March 1988), 32-40.

Stevens, G.S. (1989). Applying hypermedia for performance improvement. Performance and Instruction, 28(6), 42-50.