Conduct research

Single-group experiment

General description

In a single-group experiment, a pre-test and a post-test are given to one group to measure the effect of an intervention (e.g., an instructional activity, innovation, or program) without using a control group.

Before introducing the intervention, administer a pre-test to measure the variable you are studying. Next, apply the intervention. Finally, give a post-test comparable to the pre-test and analyze the pre-post differences.

Pretest ----------> Treatment ----------> Posttest


Are resources available?

Conducting a single-group experiment requires experience in experimental design and implementation, data collection, and qualitative, quantitative, or content analysis.

How will you deal with nonparticipation and attrition?

Participants who do not complete a course or who participate only partially create a problem if their attrition or level of participation follows a pattern different from the rest of the group (i.e., the pattern is non-random). For example, if under-achieving students are more likely to drop out, the intervention may appear more effective than it is. Gathering background information on all participants, such as previous achievement records or socioeconomic status, can help you estimate the bias introduced and adjust for it.
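One simple way to screen for non-random attrition, sketched below with hypothetical data, is to compare the baseline (pre-test) scores of completers against dropouts; a large gap between the two group means signals that attrition may bias the pre-post comparison:

```python
from statistics import mean

def attrition_check(baseline, completed):
    """Compare baseline scores of completers vs. dropouts.

    baseline: dict mapping participant id -> pre-test score
    completed: set of ids who finished the program
    Returns (completer_mean, dropout_mean).
    """
    stay = [s for pid, s in baseline.items() if pid in completed]
    drop = [s for pid, s in baseline.items() if pid not in completed]
    return mean(stay), mean(drop)

# hypothetical data in which lower-scoring students dropped out
baseline = {"a": 48, "b": 55, "c": 72, "d": 68, "e": 50, "f": 75}
completed = {"c", "d", "f"}
stay_mean, drop_mean = attrition_check(baseline, completed)
```

Here the completers started well above the dropouts, so a post-test average computed only from completers would overstate the intervention's effect.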

How will you elicit cooperation?

Collaborate with program staff, instructors, and students or clients to gain their support and cooperation.


While the single-group experiment offers simplicity, several problems can complicate interpretation:


History effect

Events happening between the pre- and post-test may change scores. For example, several students may enroll in a preparatory class for a standardized test. Without a control group, it is impossible to know whether the pre-post difference is a result of the event or the intervention.


Maturation effect

Participants may improve because they mature, or regress because they become fatigued. For example, an instructor who attends teacher-training sessions during an academic year and whose course instructor survey ratings improve significantly by the end of that year may be attributing to the training gains that actually stem from growing teaching experience. Measuring the outcome variable several times before, during, and after an intervention can lessen the impact of maturation.
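When the outcome is measured several times, a steady trend across the pre-intervention measurements is one sign of maturation. A minimal sketch, assuming equally spaced measurement points and hypothetical ratings:

```python
def slope(ys):
    """Least-squares slope of scores measured at equal intervals (x = 0, 1, 2, ...)."""
    n = len(ys)
    xs = range(n)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# hypothetical instructor ratings at several points before the intervention
before = [3.1, 3.2, 3.3, 3.4]   # steady upward drift suggests maturation
pre_trend = slope(before)
```

A clearly positive pre-intervention slope warns that some of the eventual pre-post gain would likely have occurred without the intervention.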

Testing effect

Completing a pre-test measure may make participants aware of a deficit that they then address. If you use identical measures, students or clients may do better the second time because of practice.

Instrumentation effect

How you measure the outcome and who measures it may change from pre- to post-test and can affect whether students or clients appear to improve. It is important that the pre- and post-test measures be equivalent.

Additional information

Bordens, K.S. and Abbott, B.B. (1996). Research Design and Methods: A Process Approach. 3rd ed. Mountain View, CA: Mayfield.

Page last updated: Sep 21 2011
Copyright © 2007, The University of Texas at Austin