Program Evaluation & Faculty Participation

By Group 4: Charlotte Featherston, Sara Martin, Christina (Gaupp) Stacy, and Elsy Thomas

Historical Perspective & Background

• Early approaches to educational program evaluation

• Ralph Tyler’s behavioral objective model

A Classic Model: The Tyler Model

• Often referred to as the “objective model”

• Emphasis on consistency among objectives, learning experiences, and outcomes

• Curriculum objectives indicate both behavior to be developed and area of content to be applied (Keating, 2006)

Tyler’s Four Principles of Teaching

• Principle 1: Defining appropriate learning objectives

• Principle 2: Establishing useful learning experiences

• Principle 3: Organizing learning experiences to have a maximum cumulative effect

• Principle 4: Evaluating the curriculum and revising those aspects that did not prove to be effective (Keating, 2006)

Primary Strengths of Tyler’s Model

• Clearly stated objectives are a good place to begin

• Involves the active participation of the learner (Prideaux, 2003)

• Simple linear approach to development of behavioral objectives (Billings & Halstead, 2009)

Progression of Program Evaluation

• 1980s: outcome assessments, state legislatures, and the National League for Nursing (NLN)

• 1990s: Commission on Collegiate Nursing Education (CCNE)

Program Evaluation: 2000

• Sauter (2000) surveyed all baccalaureate nursing programs in the United States to determine how they develop, implement, and revise their program evaluation plans.

• In 2006, Suhayda and Miller reported on the use of Stufflebeam’s CIPP model in providing a framework for comprehensive program evaluation that would serve undergraduate and graduate nursing programs.

Relevance & Justification

• Program evaluation: set expectations, collect data, use data

• Importance of evaluation

• Relevance

The ABCs of Evaluation

A. Needs assessment

B. Educational objectives: (1) cognitive, (2) psychomotor, (3) attitude

C. Plan and design the educational program: (1) content, (2) method, (3) material, (4) evaluation methodology, (5) environment

D. Implement the educational program

E. Evaluate the educational program

F. Feedback to: (1) learner, (2) teacher, (3) organization

G. Improve the educational program

Program Impact Evaluation

Focus of Impact Evaluation

• Participants’ perceptions and satisfaction

• Participants’ beliefs about teaching and learning

• Participants’ teaching performance

• Students’ perceptions of staff teaching performance

• Students’ learning

• Effects on the culture of the institution

Resources to Conduct Impact Evaluation

• Reliable and valid instruments

• Trained data collectors

• Personnel with research and statistical expertise

• Equipment for data collection

• Equipment for data analysis

When to Do Impact Evaluation

1. A new program added to the curriculum

2. Pilot programs due to be markedly scaled up

3. Ongoing programs

Evaluation of the Curriculum

• Curriculum design

• Discipline of knowledge

• Characteristics of the discipline

Conclusion

• Program evaluation is collaborative, comprehensive, and complex.

• By understanding the history of program evaluation, we can better understand the theory behind it.


Conclusion

• Evaluation should focus on a specific purpose with the goal of long-term improvement.

• Evaluators must consider program values along with societal expectations.

Conclusion

“Development and implementation of a carefully designed theory-driven program evaluation plan will support continuous quality improvement for nursing education programs” (Billings & Halstead, 2009, p. 507).

Assess → Plan → Improve

References

• Bastable, S. B. (2013). Nurse as educator (4th ed.). Sudbury, MA: Jones and Bartlett.

• Billings, D. M., & Halstead, J. A. (2009). Teaching in nursing: A guide for faculty (3rd ed.). St. Louis, MO: Elsevier Saunders.

• Denham, T. J. (2002). Comparison of two curriculum/instructional design models: Ralph W. Tyler and Siena College accounting class, ACCT205. Retrieved from ERIC database. (ED 471734)

• Educational Development Programs, 6(2), 96-108. Retrieved from http://www.tandfonline.com/doi/abs/10.1080/13601440110090749

• Keating, S. (2006). Curriculum development and evaluation in nursing. Philadelphia, PA: Lippincott Williams & Wilkins.

• Klein, C. (2006). Linking competency-based assessment to successful clinical practice. Journal of Nursing Education, 45(9), 379-383.

• McDonald, M. C. (2014). Guide to assessing learning outcomes (3rd ed.). Sudbury, MA: Jones and Bartlett.

• Northeastern Illinois University. (n.d.). Classical model: Ralph Tyler, 1949, book summary. Retrieved from www.neiu.edu/~aserafin/New%20Folder/TYLER.html

• Oermann, M., & Gaberson, K. (2006). Evaluation and testing in nursing education (2nd ed.). New York, NY: Springer Publishing Company.

• Outline of principles of impact evaluation. (n.d.). Retrieved from http://www.oecd.org/dac/evaluation/dcdndep/37671602.pdf

• Prideaux, D. (2003). Curriculum design: ABC of learning and teaching in medicine. British Medical Journal, 326(7383), 268-270. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1125124/?tool=pubmed

• Ross, A. (2010). Survey data collection for impact evaluation. Retrieved from http://siteresources.worldbank.org/EXTHDOFFICE/Resources/5485726-1256762343506/6518748-1292879124539/25.Collecting-Quality-Data-for-Impact-Evaluation_Adam

• University of South Florida College of Education. (n.d.). Ralph Tyler’s little book. Retrieved from www.coedu.usf.edu/agents/dlewis/publications/tyler.htm