Evaluate programs

Program evaluation process

Program evaluation involves the holistic examination of a program, including its environment, client needs, procedures, and outcomes, through systematic data collection and analysis. Most evaluations also include recommendations for improving the program and strategies for ongoing evaluation and improvement.

All program evaluations involve a process of planning, gathering data, and reporting results, consisting of the following nine steps:

Planning

STEP 1 Describe the program context

Descriptions convey the mission and objectives of the instructional program and include information about its purpose, expected effects, available resources, the program’s stage of development, and instructional context.

Descriptions set the frame of reference for all subsequent planning decisions in an evaluation.

STEP 2 Identify stakeholders and their needs

Stakeholders are the individuals and organizations involved in program operations, those served or affected by the program, and the intended users of the assessment or evaluation. Stakeholder needs generally reflect the central questions the stakeholders have about the instructional activity, innovation, or program.

Determining stakeholder needs helps to focus the evaluation process so that the results are of the greatest utility.

STEP 3 Determine the evaluation purpose

Identifying a clear purpose helps determine how to conduct the evaluation.

Three general purposes for instructional evaluations are to gain insight, change practice, and measure effects.

    • Gain insight - to clarify how instructional activities should be designed to bring about expected changes.
    • Change practice - to improve the quality, effectiveness, or efficiency of instructional activities.
    • Measure effects - to examine the relationship between instructional activities and observed consequences.

STEP 4 Identify intended uses

Intended uses are the specific ways evaluation results will be applied. They are the underlying goals of the evaluation, and are linked to the central questions of the study that identify the specific aspects of the instructional program to be examined.

The purpose, uses, and central questions of an evaluation are all closely related.

STEP 5 Create an evaluation plan

The evaluation plan outlines how the evaluation will be implemented, including: the sponsor and the resources available for implementing the plan, what information is to be gathered, the research method(s) to be used, the roles and responsibilities of sponsors and evaluators, and a timeline for accomplishing tasks.

Example: Evaluation plan - FAST Tex
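The elements of an evaluation plan listed in Step 5 can be sketched as a simple data structure. This is an illustrative sketch only; the class name, field names, and example values below are assumptions, not part of any prescribed evaluation format.

```python
from dataclasses import dataclass


@dataclass
class EvaluationPlan:
    # All fields mirror the plan elements named in Step 5; the names
    # and example values are illustrative, not a required format.
    sponsor: str
    resources: list[str]
    information_to_gather: list[str]
    methods: list[str]            # e.g. surveys, interviews, document review
    roles: dict[str, str]         # person or group -> responsibility
    timeline: dict[str, str]      # task -> target date


plan = EvaluationPlan(
    sponsor="Office of Instructional Development",
    resources=["one evaluator", "survey software"],
    information_to_gather=["exam scores", "student feedback"],
    methods=["pre/post survey", "score comparison"],
    roles={
        "sponsor": "approve plan and fund data collection",
        "evaluator": "gather and analyze data, report results",
    },
    timeline={"gather data": "weeks 1-8", "report results": "week 12"},
)
print(plan.sponsor)
```

Writing the plan down in a structured form like this makes it easy to check that every element Step 5 calls for has actually been decided before data gathering begins.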

Gathering Data

STEP 6 Gather data

Data gathering focuses on collecting information that conveys a holistic picture of the instructional program and can be seen as credible by stakeholders.

Data gathering includes decisions about which indicators, data sources, and methods to use; the quality and quantity of the information; human subject protections; and the context in which the data gathering occurs.
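One way to organize these decisions is to map each central question of the evaluation to its indicators, data sources, and methods. Everything below is an invented example, not a required format.

```python
# Illustrative mapping of one central question to indicators, sources,
# and methods; all entries are invented examples.
data_gathering = {
    "Did the redesigned course improve exam performance?": {
        "indicators": ["mean exam score", "pass rate"],
        "sources": ["course gradebook", "registrar records"],
        "methods": ["pre/post score comparison"],
    },
}

for question, spec in data_gathering.items():
    print(question)
    for key, values in spec.items():
        print(f"  {key}: {', '.join(values)}")
```

Laying the choices out per question helps stakeholders see, before collection starts, whether each question has at least one credible indicator and source behind it.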

Reporting Results

STEP 7 Analyze data

Data analysis involves identifying patterns in the data, either by isolating important findings (analysis) or by combining sources of information to reach a larger understanding (synthesis). It also involves decisions about how to organize, classify, interrelate, compare, and display information. These decisions are guided by the questions being asked, the types of data available, and input from stakeholders.
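The distinction between analysis (isolating findings within one source) and synthesis (combining sources for a larger understanding) can be illustrated in a few lines of Python. The rating data below are invented for illustration.

```python
import statistics

# Invented example data: 1-5 ratings of the same course from two sources.
survey_ratings = [4, 5, 3, 4, 4, 5, 2, 4]
interview_ratings = [3, 4, 4, 5]

# Analysis: isolate important findings within a single data source.
survey_mean = statistics.mean(survey_ratings)
low_ratings = [r for r in survey_ratings if r <= 2]

# Synthesis: combine sources of information for a larger understanding.
combined_mean = statistics.mean(survey_ratings + interview_ratings)

print(f"survey mean: {survey_mean:.2f}")
print(f"low ratings needing follow-up: {len(low_ratings)}")
print(f"combined mean across sources: {combined_mean:.2f}")
```

Even in this toy form, the two moves differ: the low-rating count flags a finding inside one source, while the combined mean only exists once the sources are pooled.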

STEP 8 Make conclusions and recommendations

Conclusions are linked to the evidence gathered and judged against agreed-upon standards set by stakeholders. Recommendations are actions for consideration that are based on conclusions but go beyond simple judgments about efficacy or interpretation of the evidence gathered.

STEP 9 Report results

Factors to consider when reporting results (dissemination) include tailoring report content to a specific audience, explaining the focus of the study and its limitations, and listing both the strengths and weaknesses of the study. Reporting may also include active follow-up and interim findings; interim findings are sometimes useful to instructors or staff in making immediate instructional adjustments.

Adapted from the following sources:

Centers for Disease Control and Prevention, Evaluation Working Group. Steps in Program Evaluation. http://www.cdc.gov/eval/steps.htm Retrieved June 30, 2006.

Joint Committee on Standards for Educational Evaluation, James R. Sanders (chair). 1994. The Program Evaluation Standards: How to Assess Evaluations of Educational Programs, 2nd ed. Sage Publications: Thousand Oaks, CA.

Popham, W. James. 1993. Educational Evaluation, 3rd ed. Allyn & Bacon: Boston.

Taylor-Powell E., Rossing B., Geran J. 1998. Evaluating Collaboratives: Reaching the potential. Madison, WI: University of Wisconsin Cooperative Extension.

University of Washington, Office of Educational Assessment. Evaluation Planning Guide. http://www.washington.edu/oea/evaluatn.htm Retrieved March 26, 2004. (Note: site no longer available.)

Page last updated: Sep 21 2011
Copyright © 2007, The University of Texas at Austin