Alexandria Digital Research Library

Adding to the pool of methods for program evaluation: A comparison of latent class analysis and propensity score analysis

Taylor, Lauren Christine
Degree Supervisor:
Russell W. Rumberger
Place of Publication:
[Santa Barbara, Calif.]
University of California, Santa Barbara
Creation Date:
Issued Date:
Education, Evaluation and Education, General
Propensity Score Analysis
Randomized Design
Latent Class Analysis
Quasi-experimental Design
Program Evaluation
Dissertations, Academic and Online resources
Degree Grantor:
University of California, Santa Barbara. Education
Ph.D.--University of California, Santa Barbara, 2013

Considering the amount of funding distributed to educational research each year, leaders and policymakers have a vested interest in finding scientifically based evidence that answers causal questions about program effectiveness. The importance of program evaluation has long been recognized in many fields of research; however, the most appropriate methods for evaluating programs and estimating causal effects are less clear. In an effort to gain insight into some of the methods used in program evaluation, this dissertation highlights the advantages and disadvantages of a commonly used statistical method called propensity score analysis. Additionally, this dissertation introduces latent class analysis as a possible new method in program evaluation. The dataset chosen for this dissertation comes from a study by Shadish, Clark, and Steiner (2008), which consists of both a randomized and a quasi-experimental component.

Because the dataset has the unique feature of containing both a randomized group and a nonrandomized group, I hope to show that latent class analysis can produce outcome results similar to those of propensity score analysis and a randomized design, while also demonstrating how the unique features of latent class analysis could benefit program evaluation. The current study consisted of four main analyses. Analysis 1 was a latent class analysis that evaluated the method's ability to work with missing values. Analysis 2 consisted of both a latent class analysis and a propensity score analysis using the Shadish et al. (2008) imputed dataset and all 25 predictor variables. Analysis 3 used the theory-reduced set of variables to conduct multiple latent class analyses and propensity score analyses on four sample sizes (i.e., 100%, 75%, 50%, and 25% of the full sample).
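The dissertation's latent class analyses were run in Mplus; as a purely illustrative sketch of what latent class analysis does, the following Python code fits a two-class model (a mixture of independent Bernoulli items) to synthetic binary data with the EM algorithm. All data, class counts, and parameter values here are hypothetical and are not drawn from the Shadish et al. dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate binary indicator data from two latent classes (illustrative only).
n, n_items = 400, 5
true_class = rng.random(n) < 0.4
probs = np.where(true_class[:, None], 0.8, 0.2)   # item-response probs by class
X = (rng.random((n, n_items)) < probs).astype(float)

def lca_em(X, n_classes=2, n_iter=200, tol=1e-6, seed=1):
    """Fit a latent class model (mixture of independent Bernoullis) by EM."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)            # class proportions
    theta = rng.uniform(0.3, 0.7, size=(n_classes, m))  # item probabilities
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: posterior probability of class membership for each case
        log_lik = (X[:, None, :] * np.log(theta) +
                   (1 - X[:, None, :]) * np.log(1 - theta)).sum(axis=2)
        log_post = np.log(pi) + log_lik
        log_norm = np.logaddexp.reduce(log_post, axis=1, keepdims=True)
        resp = np.exp(log_post - log_norm)
        # M-step: update class proportions and item-response probabilities
        pi = resp.mean(axis=0)
        theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
        theta = np.clip(theta, 1e-6, 1 - 1e-6)
        ll = log_norm.sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return pi, theta, resp

pi, theta, resp = lca_em(X)
assignments = resp.argmax(axis=1)   # most likely class for each case
print("estimated class proportions:", np.round(pi, 2))
```

In a program-evaluation setting, the recovered classes play the role of the response patterns discussed in the abstract: mean outcome differences can then be compared within each class rather than only in aggregate.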

Analysis 4 conducted multiple latent class analyses and propensity score analyses on the reduced set of variables, created by applying both theory and latent class analysis standards, for the same four sample sizes as Analysis 3. All propensity score analyses were completed in Stata 12.0 using the pscore command, while all latent class analyses began in Mplus 6.0 and were then exported to SPSS 20.0. Results of the independent-samples t-test, conducted on the randomized dataset, placed the target average treatment effect for the vocabulary posttest at 8.11 and for the mathematics posttest at 4.19. Across the analyses, the closest estimate to the randomized design for the vocabulary posttest was produced by the propensity score analysis, while for the mathematics posttest the closest estimate was produced by the latent class analysis.
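The logic of using the randomized arm as a benchmark for a propensity score estimate can be sketched in Python on synthetic data (the actual analyses used Stata's pscore command and the Shadish et al. data; every number below is a made-up illustration). A covariate drives both self-selection into treatment and the outcome, so the naive group difference is biased, and stratifying on an estimated propensity score pulls the estimate back toward the true effect.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative synthetic data: covariate x drives both selection and outcome.
n = 2000
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-0.8 * x))            # selection depends on x
d = (rng.random(n) < p_treat).astype(int)
y = 2.0 * d + 1.5 * x + rng.normal(size=n)      # true treatment effect = 2.0

# Naive comparison (biased: treated units have higher x on average)
naive = y[d == 1].mean() - y[d == 0].mean()

def fit_logit(x, d, iters=25):
    """Newton-Raphson logistic regression of d on a single covariate x."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(b0 + b1 * x)))
        w = p * (1 - p)
        g0, g1 = (d - p).sum(), ((d - p) * x).sum()          # score
        h00, h01, h11 = w.sum(), (w * x).sum(), (w * x * x).sum()  # info
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

b0, b1 = fit_logit(x, d)
pscore = 1 / (1 + np.exp(-(b0 + b1 * x)))

# Stratify on propensity score quintiles; average within-stratum treated-vs-
# control differences, weighted by stratum size.
edges = np.quantile(pscore, [0, .2, .4, .6, .8, 1])
strata = np.clip(np.searchsorted(edges, pscore, side="right") - 1, 0, 4)
diffs, weights = [], []
for s in range(5):
    mask = strata == s
    if d[mask].min() == d[mask].max():
        continue  # skip strata lacking both treated and control units
    diffs.append(y[mask & (d == 1)].mean() - y[mask & (d == 0)].mean())
    weights.append(mask.sum())
stratified = np.average(diffs, weights=weights)
print(f"naive: {naive:.2f}, stratified: {stratified:.2f} (true effect 2.0)")
```

The randomized arm plays the role of the benchmark here: under randomization the naive difference in means (the t-test comparison) is itself unbiased, which is why the abstract treats the randomized t-test values as the targets the quasi-experimental estimates are judged against.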

In general, the results do not clearly indicate which method produces more accurate average treatment effects. However, the latent class analysis identified four unique response patterns in which the individual mean difference varied across classes. This could have interesting potential for many program evaluations.

Physical Description:
1 online resource (187 pages)
UCSB electronic theses and dissertations
Catalog System Number:
In Copyright
Copyright Holder:
Lauren Taylor
Access: This item is restricted to on-campus access only. Please check our FAQs or contact UCSB Library staff if you need additional assistance.