Quantitative content analysis findings
Determining the findings of your content analysis involves more than simply reporting initial results. It is important to critically examine those results and check for statistical pitfalls so that you develop accurate findings on which you can base reliable conclusions.
Critically examine results
No matter what your results, ask some critical questions:
- Were the criteria you selected valid indicators of content quality? Did you omit important criteria or include unnecessary ones?
- If you implemented an intervention and are comparing content between/among groups or periods of time:
  - Were there significant differences between/among groups on the content before the intervention started?
  - Were conditions for groups roughly the same (for example, equivalent classrooms, instruction, and assistance outside of class)?
  - Did anything happen other than your instructional intervention that would have affected study results?
  - Was there any difference in motivation between/among groups before or during the study?
Check for statistical pitfalls
- While any conclusive findings should be statistically significant, statistically significant results are not necessarily important or valuable; significance only indicates that the difference you found is unlikely to be due to chance.
- If you used multiple raters/coders, is the level of inter-rater reliability acceptable (e.g., .70 or higher)? Do results indicate any type of bias on the part of one or more of the raters/coders? If you find poor reliability or suspected bias, your results are possibly unreliable and data should be regathered and/or reanalyzed.
- If you are comparing content between/among groups, could there be any errors due to sample size? If you have fewer than 25 cases per group, you may lack adequate statistical power to detect differences between groups. On the other hand, if you have very large groups, almost any difference, even a trivial one, will be statistically significant, which could lead you to unwarranted conclusions. For this reason, you should report effect sizes, which allow readers to judge how meaningful the differences between/among groups are.
- Other statistical pitfalls: consult with a statistician if you are unable to resolve statistical problems on your own.
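The inter-rater reliability check above can be sketched with Cohen's kappa, which corrects raw agreement for chance. This is a minimal illustration; the coder names and codes are hypothetical, not from any real study.

```python
# Sketch: Cohen's kappa for two coders (hypothetical codes).
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Agreement between two coders, corrected for chance agreement."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Proportion of items the coders agreed on.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Agreement expected by chance, from each coder's category frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

coder1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
coder2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.5
```

Here the coders agree on 6 of 8 items (75%), but half of that agreement is expected by chance, so kappa is only .50, below the commonly cited .70 threshold; the codes would need to be reconciled or regathered.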
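The points about significance and effect size can be illustrated with a small simulation (all scores and names here are invented for illustration): with very large groups, even a trivial true difference produces a large t statistic, while Cohen's d exposes the difference as negligible.

```python
# Sketch: statistical significance vs. effect size (simulated scores;
# all names and values are illustrative, not real study data).
import random
from math import sqrt
from statistics import mean, stdev

random.seed(1)
n = 20000  # very large groups
# True means differ by only 0.1 points on a 10-point quality scale.
group1 = [random.gauss(7.0, 1.0) for _ in range(n)]
group2 = [random.gauss(7.1, 1.0) for _ in range(n)]

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled = sqrt(((len(a) - 1) * stdev(a) ** 2 +
                   (len(b) - 1) * stdev(b) ** 2) / (len(a) + len(b) - 2))
    return (mean(b) - mean(a)) / pooled

# Welch t statistic; with groups this large, |t| > 1.96 means p < .05.
se = sqrt(stdev(group1) ** 2 / n + stdev(group2) ** 2 / n)
t = (mean(group2) - mean(group1)) / se

print(f"t = {t:.1f} (significant), d = {cohens_d(group1, group2):.2f} (trivial)")
```

The t statistic is far past the p < .05 cutoff, yet d is around 0.1, well below the conventional "small" effect of 0.2, so reporting the effect size keeps readers from overinterpreting the significant result.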
Develop findings and conclusions
- Evaluate your results based on how well they answer your research questions or confirm your hypotheses.
- Statistically significant causal, predictive, or correlational findings, as well as important qualitative findings, should form the basis of your main conclusions. Emphasize your strongest findings.
- If you are evaluating an intervention using content analysis, consider all possible explanations for the results before concluding that the intervention definitely worked or did not work.
- Verify (triangulate) findings from your content analysis against results from other data sources, such as interviews or surveys, that can provide additional insight. Finding similar results using different methods strengthens conclusions; differing results call for further analysis.