
September 29, 2004

M E M O R A N D U M

To: Participants of the Adding Value to the MSP Evaluations Conference

From: Norman Webb, Rob Meyer, and Paula White of the Adding Value Team

Subject: Summary of the Fourth Adding Value Conference

The fourth meeting of the Adding Value to the Mathematics and Science Partnership Evaluations Conference was held on September 16-17, 2004 at the Wisconsin Center for Education Research, University of Wisconsin-Madison. MSP evaluators, participants, and presenters were Terry Ackerman, Ruth Anderson, Joanne Bogart, Frank Davis, Jim Dorward, MaryAnn Gaines, Arlen Gullickson, Susan Millar, Judith Monsaas, Penelope Nolte, Beth Rodgers, Ben Sayler, Suzanne Sublette, and Valerie Williams. Persons in attendance from the Wisconsin Center for Education Research Adding Value Project were Janet Kane, Rob Meyer, Norman Webb, and Paula White. This memo summarizes progress made at the meeting.

Introductions and Review of Agenda for Conference

Norman Webb, Principal Investigator at the Wisconsin Center for Education Research, opened the conference by identifying the goals of the conference and the goals, principles, and activities of the Adding Value Project.

Conference Goals:
- Further community among MSP evaluators
- Address common issues
- Provide assistance in analyses

Adding Value Project Goals:
- Increase the knowledge of MSP evaluators about the design, indicators, and conditions needed to successfully measure change in student learning over time
- Develop useful tools and designs for evaluators to attribute outcomes to MSP activities
- Apply techniques for analyzing the relationship between student achievement and MSP project activities to evaluate the success of MSP projects

Project Principles:
- Build on what has been learned about evaluating large-scale systemic reform
- Develop a learning community among the MSP evaluators


Project Activities:
- Provide technical assistance to MSP evaluators regarding MSP evaluation challenges
- Two-day semiannual meetings each spring and fall
- Teleconference meetings to identify evaluation needs
- Site visits
- Prototypes for value-added data analyses
- Value-added and alignment tools

Site Round Robin

Norman Webb asked the evaluator or representative from each MSP or RETA present at the conference to provide a brief summary of the evaluation activities and issues associated with their project.

Black Hills Special Services Cooperative, Promoting Reflective Inquiry in Mathematics (PRIME): We have an MSP targeted grant focusing on mathematics. Our goal is for math teachers to do 100 hours of professional development on content and inquiry-based instruction and then for teachers to implement inquiry-based approaches in the classroom. We have a large Native American population and a huge achievement gap; few Native Americans make it beyond the eighth grade. We’re working on keeping the students in high school and trying to close the achievement gap. The state standardized test is not well aligned with classroom instruction. We’re using performance assessments at grades 4, 8, and 11, and we also have standardized assessments at those grades. Inverness is serving as the external evaluator.

El Paso Collaborative for Academic Excellence: We have a comprehensive grant with 12 school districts participating. The resources are directed towards staff developers who spend a lot of time in the classroom to help teachers implement the curriculum frameworks designed jointly with University of Texas-El Paso faculty. We lay out the topics to be covered by grade and the levels of cognitive demand, linked to the state standards. The department chairs are very involved in ensuring the success of a math-science partnership, which is the key to maintaining sustainability. We also have enhanced partnerships with business leaders, a parent involvement component, research projects for teachers, and a new-teacher induction program.

Mathematics and Science Partnership of Southwest Pennsylvania: We have a comprehensive grant with 40 school districts involved. We administered the Survey of Enacted Curriculum online. We had to modify the survey to only include items related to the MSP initiatives. We had a 66 percent response rate. We’re also going to administer a principal survey at the K-8 level.

Texas Engineering Experimental Station, Alliance for Improvement of Mathematics Skills PreK-16 (AIMS): We have a targeted MSP in our second year. The project revolves around four goals: 1) to enhance professional learning of preK-16 administrators and teachers, 2) to provide challenging curricula, 3) to enhance applications of technology, and 4) to conduct research on the effectiveness of the interventions. We’re using an instructional content instrument. This year we added a higher education survey.

University of North Carolina, North Carolina Partnership for Improving Mathematics and Science (NC-PIMS): We have a comprehensive MSP working with 17 counties in North Carolina. They’re primarily the poorest counties and quite diverse. We’re trying to close the gap through lateral entry teachers from businesses who have content knowledge but lack pedagogical knowledge. We’re just starting our third year; it’s a cascade model. We are collecting benchmark data. We’re administering the CCSSO Survey of Enacted Curriculum this year. We’re also using a diagnostic model to provide a profile of information on skills students have mastered or not mastered.

University System of Georgia, Partnership for Reform in Science and Mathematics (PRISM): The focus is on P-12 and university collaboration around three goals: 1) to raise standards and expectations for students and to improve student achievement, 2) to improve the quality of teaching in mathematics and science through professional development for in-service teachers, and 3) to increase higher education involvement with P-12 schools. The evaluation challenges are determining what is and isn’t PRISM and setting up a tracking record to link the various variables. One strategy is learning communities, both for higher education and P-12, and tracking the nature of those. Another strategy is to change the faculty reward system within all the universities in Georgia; this is interesting to evaluate.

University of Wisconsin-Madison, System-Wide Change for All Learners and Educators (SCALE): Involves four school districts (Los Angeles, Madison, Providence, and Denver) and two institutions of higher education, UW-Madison and the University of Pittsburgh, plus the Institute for Learning (IFL). SCALE is organized around five goals: 1) the instructional system, 2) development of immersion units, 3) development of pre-service capacity in institutions of higher education, 4) equity and closing-the-gap issues, and 5) the research and evaluation team’s work, including the indicator line of work, targeted studies, case studies, and building partnerships.

Utah State University, Building Evaluation Capacity of STEM Projects: As a RETA project, we provide up to ten days a year of consulting to MSP projects. We are now assisting sites in responding to site visit reports as a result of the feedback they received. We’re helping individual projects provide evidence in their evaluations to show that they’re changing. We’re developing an online logic tool and an online “evaluation 101” course so you can steer your administrator, parent, and teacher groups to make sense of evaluation efforts and findings. See www.usu.edu/cbec.

Vermont Institute for Science and Mathematics, Vermont Mathematics Partnership: We are a targeted project working on math. I’ve worked primarily with qualitative research on the evaluation, but there has also been a great deal of quantitative data collected. The project grew out of a masters program to produce teacher leaders in math. We’ve done observations with high and low involvement. We’ve developed a Math as a Second Language course, and we’re looking at the norm-referenced test score data to see what we can say about the interventions. It’s been an evolving partnership with the university; we’ve doubled the number of participants in the project.

Western Washington University, North Cascades and Olympic Science Partnership: We are a comprehensive project with 28 school districts participating, covering a huge region of the state. The focus is on pre-service and in-service science indicators and a curriculum adoption program. We have a training institute for teachers focused on content immersion. We’re looking for partnership tools and ways of measuring connections and sustainability.

Assessing Partnerships Part I: Partnership Issues and Approaches

Norman Webb asked participants to have a “structured conversation” to think deeply about the issue of partnerships. We’ve structured some questions that will help us get at the notions of partnerships. Our focus is not only on K-12, but on K-20.

What constitutes a partnership within the MSP solicitation? Who are the partners? What are the interactions and processes among the partners?

NSF’s concept of partnerships changes. We are subject to the new vision of partnerships as NSF and the projects become more definitive about what partnerships are and how they work.

Different conceptualizations of sustainability – see Jeanne Rose Century’s work on this: http://cse.edc.org/work/research/rsr/default.asp.

Partners and partnerships as defined in the proposals have both formal and informal dimensions and different forms of inclusion/exclusion. For example, STEM faculty and K-12 administrators make sense of the world differently and are not on the same page about the assumptions they make. Multiple cultures rarely have key players who are able to put on the table ideas requiring cross-cultural negotiations.

Different stakes for partners - something must be at stake for actors to see the value in partnerships – need something at the table for everybody. There needs to be some sort of common experience to bind people.

Partners must both contribute and gain some advantage. Science and mathematics faculty may feel as though they are not always getting something out of the partnership, and maybe even that it’s a hindrance.

How do you make people into “stakeholders” who see the value in the partnership?

Staff turnover - some parts of the country have an 80 percent teacher turnover rate. What’s the nature of that partnership, then? How can we get the initiative and professional learning in place “long enough” to sustain a partnership?

The partnership has to consider the individuals as well as the roles to be included in the collective.

The partnership involves the “NSF way” specified in the RFP; the other way involves how the work “really happens,” as well as the expectations that different people have (a tenure-track faculty member versus a principal or a teacher trying to meet standards).

The partnership needs to start with consensus, but also work long enough to operationalize and sustain it.

Concept of common vision or “minimal consensual cohesion.”

Idea of competing partnerships: “We’re not the only game in town.” Our partners have other partnerships that may have higher priority than our relationships.

Institutionalization of elements that already exist serves to help build sustainability (if the program ends, then there are remnants left over).

Actors/partners need a common vision as well as the idea that benefits accrue to partners as a result.

Notion of mandatory partners versus volunteer partnerships.

NSF is concerned with core math/science faculty supporting teachers; a large emphasis was placed on this during the site visit.

Partners are long-standing institutions that are officially defined, as the partners do not really turn over.

NSF’s understanding is about how higher education can support K-12, but the focus groups highlight the other players and de-emphasize how higher education can do this. The leadership structure allows higher education to play a strong role.

Interaction needs to be two-way. There are different definitions of who the partners are: K-12 districts, higher education (universities, community colleges, levels of universities, Research I science centers), state agencies, the Chamber of Commerce, and professional organizations (as institutions).

The external evaluator is considered a partner, as is the professional development agency (“change agency”)—a third entity that helps support change in the school districts and universities.

Who within a partnership is subject to evaluation? (Goals, expectations, processes, etc.)

Evaluate the nature of the partnership to determine if it’s contributing to the changes in teaching and learning. We’re not looking at the nature of the partnership per se, but rather the characteristics of the partnership.

We have institutions that are already actors and then there’s the idea of partnerships that says people can work together. How do people come together to solve a problem? The partnership gets at a new dimension addressing how to solve the problem—“what do people do to enable change?”

What else do we want to accomplish? NSF wants a change in the culture of institutions.

Difficult to measure the “in between,” perhaps look at “ecosystem models” that would include formal/informal relationships as well as systems of reward.

We might need to build partnership profiles. Paying attention to simple things like the number of interactions and by whom might be worthwhile. There is also the idea of examining the character of interactions of people who come from different backgrounds—the result is “new knowledge” that is a product of the partners coming together (Sharon Derry, at WCER in Educational Psychology, has a model of this).
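As a rough illustration of the interaction-counting idea (a sketch, not something presented at the conference; the event log and partner names below are hypothetical), a partnership profile could start from a simple tally of who interacts with whom:

    from collections import Counter
    from itertools import combinations

    # Hypothetical interaction log: each entry is the set of partner groups
    # involved in one logged event (meeting, email thread, site visit, etc.).
    events = [
        {"K-12 district", "STEM faculty"},
        {"K-12 district", "STEM faculty", "external evaluator"},
        {"state agency", "K-12 district"},
    ]

    # Count how often each pair of partners appears together in an event.
    pair_counts = Counter()
    for event in events:
        for pair in combinations(sorted(event), 2):
            pair_counts[pair] += 1

    # A crude partnership profile: who interacts with whom, and how often.
    for (a, b), n in pair_counts.most_common():
        print(f"{a} <-> {b}: {n} interaction(s)")

Counts like these say nothing about real power relationships, a limitation the group also noted about maps.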


Evaluate the people involved because individuals have “currency in their communities” and status, “credibility indexes” - social networks. Are these people the “top dogs”?

Site visits are frustrating: NSF seems to want quantitative studies, but then the second visit occurred and one of the professors wanted a qualitative case study. It would be helpful if NSF had more explicit guidelines; we all define it our own way, and then different MSP program officers have different expectations about the methodology to use. We’re getting mixed messages.

If NSF is considered a partner, how are we going to get stable information?

The key thing to remember is that we’re operating under the heading of a cooperative agreement; it might not be in line with our site visit team, but we have to maintain some direction while remaining flexible. Need to focus on evaluating what is changing and what is particularly important in the individual partnership. We won’t all focus on all the same change variables.

The Inspector General evaluated NSF; there is a need to develop a set of guidelines for projects. At the programmatic level, NSF can be assumed to be a partner, but a project partnership has lower-level partnerships as well (e.g., teacher teams).

Need to have the state agency involved; hierarchical model.

Working together versus building a partnership: looking at whether or not people are actually forming strong linkages.

If NSF holds the purse strings, how can it be a fair and equitable partner?

What needs to be incorporated into an evaluation because the program is a partnership as compared to a standard education program operated by one institution? How does the evaluation change?

Pay attention to the degree of embeddedness and whether that’s changed. Degree of cohesion between partners.

How this evaluation differs from a regular program evaluation—look at the relationship between institutions and individuals.

It would be useful to get measures of cultural expectations, patterns, and processes from each actor to get an understanding of the assumptions that the partners do not share—one can measure a shift towards stronger agreement.

Assessing Partnerships Part II: Techniques and Tools for Evaluating Partnerships

What constitutes an effective partnership within the MSP context?

Key partners have equal input into decision-making; leadership issues; shared decision-making; agreed-upon goals or shared goals; benchmarks.

Shared working principles (larger than just goals)—things that operationalize terms (emphasis on the “what” as well as the “how”).


Effective communication among partners.

Trust is a key element; community where it is safe to express real conflict so the real issues can be dealt with. Respect for good argumentation; respect for one another; respect for an evidence-based culture.

Goals can change as the partners get together and things evolve. Can you be successful/effective if you don’t reach those goals by the end of the partnership?

Question of whether the goals were realistic and whether the goals can be fully accomplished within the timeframe.

It takes time for the partnership to develop because the learning process requires time.

Intended goals versus emerging goals.

The partnership should be a learning community that can self-evaluate – a “reflective learning community.”

Resources should be allocated that will support the partnership.

Having a partnership agreement or “memorandum of agreement” between partners.

Partnerships need to report findings and results—this hits on accountability. Shared accountability: getting back to the goals and focusing on trying to make sure the outcomes are achieved.

“Remember that we are all in it together”- don’t blame school districts or other players/partners.

What indicators, measures, and variables should be included in the evaluation of an MSP partnership?

The perceptions of the partners.

The extent to which the infrastructure for the partnership is supported.

The “critical situations”: the impetus that created a particular change, structural changes, and sociological changes.

Don’t hypothesize about what you will find; instead, just observe.

Who do the agents in the partnership actually represent? Who is really at the table in terms of the partnership? Who has the power?

What are some of the unintended, environmental factors that affect partnerships? For example, increased enrollment and stresses on faculty from the institution make it more difficult for faculty to get involved.

What roles do the partners think they play in the partnership?

What is the distribution of expertise?

Map out internal and external effects (lines, concentric circles, etc.): what are the various understandings of the partners? Mapping would be one approach, used as a process to help people see where they are.

Susan Millar used mapping in SCALE to create a “cartography” of purpose, to see what groups would be sustained. These techniques helped identify commonly agreed-upon goals stressed by the working groups. A drawback to using maps is that they have no capacity to show real power relationships.

There are many rubrics out there, some of which are complicated.

Question of the extent to which the evaluation of the partnerships radically alters the way the partnerships unfold.

Use of logic models.


Affective indicators - perception of partnerships, comfort levels with partnerships, beliefs and expectations about the partnerships.

Cultural attributes.

Business model of efficiency (e.g., in terms of curriculum development); the time span it takes to get something done.

Indicators are often thought of as statistics.

Indicators could be things like how many people the partners can name and what they have gained from other partners. Are they using the expertise?

NSF funds a set of activities laid out in a logical order with the assumption that the people who are funded will do what is in the logic model; is this a real model or a virtual model? Parts of the projects involve trying to engage faculty without the knowledge of how to do it (it’s not in the logic model).

New strategies were developed in response to problems encountered in the original logic model (there is process, not just outcome).

Logic model can be more complex. How do mini-models fit together?

Norman Webb handed out a resource: Mitchell, J., Levine, R., and Bitter, C. (2004), University and K-12 School Partnerships: How Does One Make These Happen?

Critical Incident Technique: one of the evaluation techniques used in the joint evaluation of the NSF GK-12 Fellows Program conducted by the American Institutes for Research and the Wisconsin Center for Education Research. The Critical Incident categories ask people to identify behaviors that led to a change in behavior.

Norman Webb asked participants: Are there other kinds of approaches?

Processing email—using it as raw data and classifying it; systematic observations of important partnership events; observing to understand the decision-making process. One can see where trust builds and where partnerships break down.

Working groups give monthly working reports where they can list their challenges and progress.

Norman Webb handed out another resource: Millar, S. & Clifford, M. (2004). Mapping the Landscape: A Snapshot of SCALE at 16 Months, a draft version produced for participants in the conference. The document presents seven different critical situations and how things were both before and after.

It would be helpful to come up with common indicators that could be used across projects.

Technical Issues of Evaluation: Sampling, Control Groups, and Power Analysis (Rob Meyer)

Rob Meyer used a Milwaukee Public Schools study as an example. Qualitative research was summarized with one value per school, everything was scaled into a reform variable, and a post-test and pre-test were used at three levels. If you are looking at achievement growth to measure the productivity of a school, and schools differ in their turnover rates, what will you do? Put a variable in for each school and control for anything that’s external to the school, e.g., percent free lunch, turnover rate, SES.

Total performance – includes only individual (student-level) values.
Intrinsic performance – includes all values, including school-level variables.
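A minimal sketch of the total/intrinsic distinction, under stated assumptions: a student-level data set with hypothetical columns posttest, pretest, school_id, pct_free_lunch, and turnover_rate, with mean residual growth used as a rough stand-in for a school fixed effect. This illustrates the idea rather than reproducing Meyer's actual model.

    import pandas as pd
    import statsmodels.formula.api as smf

    def performance_measures(df: pd.DataFrame) -> pd.DataFrame:
        """df: one row per student, with hypothetical columns
        posttest, pretest, school_id, pct_free_lunch, turnover_rate."""
        # Stage 1: simple growth model; the mean residual per school is a
        # rough stand-in for the school effect ("total performance").
        stage1 = smf.ols("posttest ~ pretest", data=df).fit()
        school = df.assign(resid=stage1.resid).groupby("school_id").agg(
            total_perf=("resid", "mean"),
            pct_free_lunch=("pct_free_lunch", "mean"),
            turnover_rate=("turnover_rate", "mean"),
        )
        # Stage 2: net out factors external to the school; the residual
        # is "intrinsic performance."
        stage2 = smf.ols(
            "total_perf ~ pct_free_lunch + turnover_rate", data=school
        ).fit()
        return school.assign(intrinsic_perf=stage2.resid)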

There is concern about picking up a selectivity effect: when you pick up differences, the differences may reflect selectivity. If you evaluate schools or programs that differ in the kind of kids they have, you might get a false estimate.

Program selection bias – for example, evaluating whether professional development for a teacher is a positive experience. Professional development is perhaps the main intervention, and teachers are allowed to choose whether to get involved. Has the intervention raised productivity? If the better teachers participate, you are just picking up that they’re more productive. This is a huge selection issue; you need to establish a control group.

A couple of MSPs said they were thinking of rolling out the program in stages: phase it in and randomly assign teachers to treatment or control. The University of California-Riverside, for example, was able to do that and thereby have a natural control group. Scarcity of resources can help justify random assignment.
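A sketch of how such a phased rollout with random assignment might be set up; the roster size, seed, and phase years are hypothetical:

    import random

    # Hypothetical roster of participating teachers.
    teachers = [f"T{i:03d}" for i in range(1, 61)]

    # Fix the seed so the assignment is reproducible and auditable.
    rng = random.Random(2004)
    rng.shuffle(teachers)

    # Phase the program in over three years. Until their phase begins,
    # teachers in later cohorts serve as a natural control group for
    # the cohorts already receiving the treatment.
    cohorts = {year: teachers[i::3] for i, year in enumerate((2005, 2006, 2007))}
    for year, group in sorted(cohorts.items()):
        print(year, "cohort:", len(group), "teachers")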

Varieties of Treatment and Control Groups:
- Phased implementation
- Before-and-after comparison – evaluating effectiveness
- School-level, teacher-level, and class-level variables
- Effect estimates

Milwaukee Public Schools have a value-added school report card. It’s important to probe down to the teacher level: include variables in the model that control for differences, and track changes over time. The things that mattered at elementary school were different for middle school.

NAEP data example – we need to think very hard about how we measure student/teacher productivity:
- Looking at attainment isn’t the way to measure productivity.
- We want to think about productivity in terms of growth.
- Value added measures growth.
- No Child Left Behind works as a tracking device but doesn’t look at growth.
- NAEP was not designed to do value-added analysis.

Two-Level Model of Student Achievement: value-added analysis can tell you achievement growth for students at low, medium, and high levels, but you may look at the data and find that the lines are essentially parallel.


Look at different parameters – race, sample size, school effect, and program variables

Varieties of Treatment and Control Constraints: we can have a cross-sectional comparison; we want to know if the “before” is jumping all over the place.

Two-Level Model of Student Achievement with Selection Bias Due to Unmeasured Student and School Characteristics: the bottom line is that the school productivity estimate absorbs the bias; the model has a bias line.
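One conventional way to write such a two-level model (a generic textbook form, offered as an illustration rather than the exact specification presented):

    % Level 1 (student i in school s): growth from pre-test to post-test
    y_{is} = \beta_{0s} + \beta_1 \, y^{\mathrm{pre}}_{is} + \gamma' X_{is} + \varepsilon_{is}

    % Level 2 (school s): productivity as a function of program participation
    \beta_{0s} = \delta_0 + \delta_1 T_{s} + \lambda' Z_{s} + u_{s}

Selection bias arises when the unmeasured school term u_s (or unmeasured student characteristics in epsilon_is) is correlated with participation T_s, so the estimated delta_1 picks up the bias rather than the program effect alone.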

A Before-and-After School Model: base year (prior to program implementation) and post-program year. The difference between the post-program and base years gives us an example where there’s selection bias at the school level.

Challenges in the Evaluation of Integrated vs. Traditional Math:
- Program selectivity
- Multiple assessments
- Student mobility

Our Adding Value RETA is developing MSP archetypes, and we’ll put software on the Web for each archetype:
- Integrated vs. traditional
- Different in dose – e.g., in SCALE, one program implements much more than another
- No individual pre-test data, only post-test data
- A Web program and some tools on the Web to handle quasi-experimental programs
- SAGE model – pretest at K-3

Standards for Educational Evaluation as Applied to MSP Evaluations – facilitated by Arlen Gullickson

Book: Program Evaluation Standards - www.wmich.edu/evalctr/jc/

A functional table of contents organizes the standards with the intention of being used as a guide to utility.

Thirty program evaluation standards in four categories: utility, propriety, feasibility, and accuracy.
- Which is the most important? Utility.
- When we think about evaluation research, we should be thinking in terms of those four categories.

Different forms of evaluation research:
- Context evaluation
- Input evaluation
- Process evaluation
- Outcomes evaluation

What is evaluation? Systematic study to determine merit and worth.

How do evaluators feel about being evaluated themselves?
- Many have felt as though it was an imposition.

Turn to the Standards if there is dissent about what to do or how to handle something.
- Example: conflict of interest.

Meta-evaluation: evaluation of the evaluation.
- Gives your evaluation credibility.
- How do you distinguish between a meta-evaluator and an external evaluator? For the external evaluator, the object is the program; for the meta-evaluator, the object is the evaluation.

Back to Question 1: What constitutes a partnership? What is the object here? The program that is being evaluated. Stakeholders are mentioned in the utility section.
- Not every stakeholder would be engaged in the evaluation. Why not? Human subjects rights (propriety), political viability (feasibility), and practical procedures.

Two Other Evaluation Resources:

The Personnel Evaluation Standards - www.wmich.edu/evalctr/jc/
- The forerunners of program evaluation efforts, from the 1970s and 1980s; propriety is first here.
- Personnel matters are involved in any program; incorporate this into program evaluation.

Student Evaluation Standards - www.wmich.edu/evalctr/jc/
- 28 standards in the same 4 categories, but propriety is first here.
- The goal is for students to know how to evaluate their own progress.

Evaluation Issues Related to Data Acquisition From Districts – facilitated by Norman Webb and Frank Davis

Confidentiality issues with districts over student ID numbers: the numbers were “scrambled,” but an algorithm is needed to tell who was who.

Incomplete data from districts (omitting race or gender of students).

Problems arise when the test used by the state changes (if looking for scores from past years to compare); ask for scores from previous years.


Problems Getting Teacher-Level Data
- The databases often don’t contain this. In Los Angeles, with 800,000 students, you can’t get class lists.
- School counselors will know, and key people will be “holders of the history” in the school (FD), but often there is no systemic way.
- In one case, the homeroom teacher was listed on the transcript, but the subject-specific teachers (which is who we want) were not available.
- How can we figure out what teachers are actually teaching? We are dependent upon the quality of the data submitted by the schools.
- Accuracy of ethnicity/race data? The data is affected by what kids say on the forms—some don’t identify their race at all (or move from “Black” to “multi-racial”).

Having state-level data also lets us potentially link parents to student IDs.

We would like to get data on schools participating in other programs—looking to gauge the extent of that participation (money spent on the program, teacher participation levels, etc.).

We would like to have a principal/staff survey to get information on things like the culture of the school.

Triangulation of Data

Is this feasible? To what degree of specificity? Perhaps some sort of school survey.

Problems with Teacher Surveys
- Confidentiality issues - but we still want to match (what techniques?); perhaps the last four digits of the social security number.
- Trouble getting hold of teachers via email.
- Personnel databases sometimes list teachers incorrectly (e.g., a music teacher listed as an elementary teacher).
- Email viruses and computer issues that don’t allow the surveys to come through.
- Some teachers don’t use their school email address, just their personal addresses (sometimes training occurred over the summer).
- Money is not as critical an issue as time.
- Trouble with confidentiality documents conflicting with one another; university and federal government requirements conflict.

Problems with Student-Level Data
- Institutional Review Board problems with student-level data (medical research done at the university means that we have to be held to the same standards as study participants); the IRB is very protective of individual rights.
- Assigning a pseudo ID is a good idea, but what about checking to make sure it’s consistent? Use an algorithm and check both ways (see the sketch after this list).
- “Algebra I” in one school might not be the same as “Algebra I” in another school, or maybe even in another classroom.
- Verify the name of the course, but also look at the curriculum.


- Evaluate the curriculum—perhaps look at student work via the Web; some schools require teachers to post assignments on the Web.
- When does data gathering have too much influence on the treatment?
- Teacher buy-in is important: make the assessments meaningful. Teachers want to know how the evaluative efforts can be used to help assess student performance (they don’t; they evaluate a program).
- Reporting results on individual test items and using them as indicators of overall performance is problematic (e.g., geometry might have only three questions).
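A sketch of the pseudo-ID idea flagged above, under stated assumptions: an HMAC-based scheme with a hypothetical secret key, offered as one way to get consistent, checkable pseudonyms rather than as a prescribed method:

    import hashlib
    import hmac

    # The key must be stored separately from the data files; anyone who
    # holds both can re-identify students. (Hypothetical key.)
    SECRET_KEY = b"keep-this-key-out-of-the-data-files"

    def pseudo_id(student_id: str) -> str:
        """Deterministic pseudonym: the same student ID always maps to
        the same pseudo ID, so records link across years and files."""
        digest = hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256)
        return digest.hexdigest()[:12]

    def check_both_ways(student_ids):
        """Check consistency in both directions: every student ID maps to
        exactly one pseudonym, and no two IDs share a pseudonym."""
        forward = {sid: pseudo_id(sid) for sid in set(student_ids)}
        reverse = {}
        for sid, pid in forward.items():
            if pid in reverse:
                raise ValueError(f"Collision: {reverse[pid]} and {sid} share {pid}")
            reverse[pid] = sid
        return forward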

Achievement Gap Issues
- Are we looking at it the way we should be?
- There is a larger gap between Whites and Blacks, and between Whites and Hispanics, in some cases.
- Perhaps retention rates have something to do with this—if the program works, then maybe these kids are staying in high school and not dropping out, but they aren’t doing “well” in school.

Applying Qualitative Research Software – facilitated by Beth Rodgers

Beth Rodgers explained her background experience with qualitative software and compared the advantages and disadvantages of various qualitative research software programs. She likes N-vivo. Atlas was nicely organized when it first came out, but all the directions were translated from German, and not very well. She has been doing qualitative research since 1993, using Nu*dist. N-vivo is quite new; it is software that helps you do analyses the way you’re used to doing them. Beth gave participants a brief demonstration of N-vivo with the warning: don’t let the software direct the project; you should be in charge and let it help you.

- With N-vivo, you can create menus and electronic post-its to stick on your files, and you can jump back and forth between documents.
- Nu*dist is static once you put the text in; N-vivo is miles ahead of Nu*dist.
- Nu*dist can’t edit text; it’s less flexible.
- Atlas has a steep learning curve.
- N-vivo has no capability for analyzing video data.
- N-vivo costs around $400. Scolari distributes both Nu*dist and N-vivo, and sells Atlas as well.
- N-vivo has a model explorer – you can draw maps and do modeling.
- http://www.qsrinternational.com - the Qualitative Research Software (QSR) Web site lists trainers and has resources, including a workshop handbook and an N-vivo getting-started handbook.

Wrap-Up and Review - Specific Evaluation Challenges

Norman Webb asked what topics would have the most relevance for large audiences and what instruments or approaches can be used to deal with these challenges.


Participants identified items that would be useful to cover at the upcoming Adding Value Conference in February 2005:

Work sample analysis looking at student/teacher work—how to analyze work samples and what types of work samples to collect (inquiry-based samples that would get at reform-based curricula).

Having projects decide about the quality and rigor of the curricula.

Looking at teacher plans and teacher assessments.

Learning more about reporting standards and formats; interim reports; is there a “golden standard”?

Reporting to different audiences (superintendents, etc.) – will NSF develop guidelines that may affect MSP data reporting?

Publishing evaluation results – what, if anything, can be published, and in which journals? Can it be published if the data was collected under the heading of an evaluation?

Resources Identified/Web Sites

Mitchell, J., Levine, R., and Bitter, C. (2004). University and K-12 School Partnerships: How Does One Make These Happen? American Institutes for Research. Paper presented at the annual meeting of the American Educational Research Association, San Diego, CA, April 16, 2004.

Millar, S. & Clifford, M. (2004). Mapping the Landscape: A snapshot of SCALE at 16 months. Draft version produced for participants in the “Adding Value to the Mathematics and Science Partnership Evaluations” conference, September 16-17, 2004, University of Wisconsin-Madison.

http://www.ccsso.org/projects/Surveys_of_Enacted_Curriculum/ - Survey of Enacted Curriculum

http://cse.edc.org/work/research/rsr/default.asp - Jeanne Rose Century’s work at the Center for Science Education on sustainability

www.nottingham.ac.uk/education/MARS/services/ctb.htm - Mathematics Assessment Resource Service (MARS) performance assessments

http://www.qsrinternational.com - Qualitative Research Software (QSR) Web site lists trainers, has resources including a workshop handbook, and an N-vivo getting started handbook

www.addingvalue.org - Adding Value Web site includes references, Web links, and summaries of Adding Value conferences

www.usu.edu/cbec - Utah State University’s RETA, the Consortium for Building Evaluation Capacity


www.wmich.edu/evalctr/jc/ - Joint Committee on Standards for Educational Evaluation: Student Evaluation Standards, Personnel Evaluation Standards, and Program Evaluation Standards
