Journal of the Society for Social Work and Research
University of Chicago Press
Objective: Findings from meta-analytic studies that use standardized mean differences (SMDs) may depend heavily on the original measures used to generate the SMDs. This may be particularly true when measures have arbitrary metrics or when measures fail to meet measurement equivalence. We test the hypothesis that, in such cases, meta-analytic results may vary significantly, both statistically and practically, as a function of the measures used to derive the SMDs. Methods: We conducted five secondary random-effects meta-analyses of SMDs, each under a different measurement scenario, drawn from a published meta-analysis comparing the efficacy of cognitive–behavioral therapy with that of reminiscence therapy for depression in older adults. In each scenario, SMDs were based on scores from measures with arbitrary metrics, some of which failed to meet measurement equivalence. Results: Consistent with the hypothesis, meta-analytic results differed significantly, both statistically and practically, across the measurement scenarios under conditions of measurement nonequivalence. Conclusions: When measurement equivalence fails to hold, the results of meta-analyses involving measures with arbitrary metrics may depend on the measures from which the SMDs are derived. Inferences concerning the relative efficacy of different treatments can thus be measurement dependent.
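To make the quantities in the abstract concrete, the following is a minimal sketch of how an SMD (Cohen's d with a pooled standard deviation) and a random-effects pooled estimate might be computed. The DerSimonian–Laird between-study variance estimator and all numeric inputs are illustrative assumptions, not values or methods taken from the article itself.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference: group mean difference over pooled SD."""
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    return (mean_t - mean_c) / sp

def smd_variance(d, n_t, n_c):
    """Common large-sample approximation to the sampling variance of an SMD."""
    return (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))

def random_effects(ds, vs):
    """Random-effects pooled SMD (DerSimonian-Laird tau^2 estimator, assumed)."""
    w = [1 / v for v in vs]
    d_fixed = sum(wi * di for wi, di in zip(w, ds)) / sum(w)
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, ds))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ds) - 1)) / c)   # truncate at zero
    w_re = [1 / (v + tau2) for v in vs]
    pooled = sum(wi * di for wi, di in zip(w_re, ds)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, tau2
```

Because each study's d depends on the scale of its outcome measure, swapping in scores from a nonequivalent measure changes the ds entering `random_effects`, which is the measurement dependence the article demonstrates.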
Nugent, W., Yoon, S., & Walters, J. E. (2018). An empirical demonstration of the existence of measurement dependence in the results of a meta-analysis. Journal of the Society for Social Work and Research, 10(1), 161–187. https://doi.org/10.1086/699248