Date of Award:
5-2013
Document Type:
Dissertation
Degree Name:
Doctor of Philosophy (PhD)
Department:
Special Education and Rehabilitation
Committee Chair(s):
Timothy A. Slocum
Committee Members:
Timothy A. Slocum
Ron Gillam
Sarah Bloom
Bob Morgan
Michael Twohig
Abstract
Federal education policies, such as No Child Left Behind and the Individuals with Disabilities Education Improvement Act, mandate the use of scientifically proven or research-based curricula and interventions. Presumably, interventions supported by a large body of scientific evidence documenting their success are more likely to be effective when implemented with students in school settings.
In special education, single-subject research is the predominant methodology used to evaluate the effectiveness of interventions. In single-subject research, a target behavior is measured under baseline conditions (i.e., before the intervention of interest is implemented) and under intervention conditions. The data for each condition are graphed and analyzed visually to evaluate whether the behavior appears to have changed as a result of the intervention. The conditions are replicated, and interventions that produce reliable changes in behavior are considered effective.
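To illustrate the kind of graph a visual analyst evaluates, the short Python sketch below plots a simple AB (baseline/intervention) design. The data values, session counts, and labels are invented for illustration and are not drawn from the dissertation.

    import matplotlib.pyplot as plt

    # Hypothetical AB design data: frequency of a target behavior per session.
    baseline = [2, 3, 2, 4, 3]            # sessions 1-5, before the intervention
    intervention = [6, 7, 9, 8, 10, 9]    # sessions 6-11, during the intervention

    sessions_a = range(1, len(baseline) + 1)
    sessions_b = range(len(baseline) + 1, len(baseline) + len(intervention) + 1)

    plt.plot(sessions_a, baseline, "o-", color="black", label="Baseline")
    plt.plot(sessions_b, intervention, "s-", color="black", label="Intervention")
    # Phase-change line between conditions, as on a typical single-subject graph.
    plt.axvline(x=len(baseline) + 0.5, color="gray", linestyle="--")
    plt.xlabel("Session")
    plt.ylabel("Responses per session")
    plt.title("Hypothetical AB graph for visual analysis")
    plt.legend()
    plt.show()

A visual analyst inspecting such a graph would judge changes in level, trend, and variability between the phases to decide whether the intervention appears to have changed the behavior.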
Although visual analysis seems straightforward, previous research suggests that experts often disagree about what constitutes an intervention effect when analyzing data visually. This disagreement has important implications for the identification of research-based curricula and interventions. If two experts disagree about whether a single-subject graph depicts an effective intervention, our confidence in any conclusions we might reach about the intervention from that graph is diminished. As a result, that study cannot contribute meaningfully to the body of literature supporting the use of the intervention.
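Disagreement of this kind is often quantified with an interrater agreement index such as Cohen's kappa, which corrects raw percent agreement for chance. The sketch below uses hypothetical ratings, not data from the dissertation, to show the idea.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical judgments from two expert raters for ten graphs:
    # 1 = "shows an intervention effect", 0 = "no effect".
    rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
    rater_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 1]

    # Kappa near 1 indicates strong agreement; near 0, agreement at chance levels.
    kappa = cohen_kappa_score(rater_a, rater_b)
    print(f"Cohen's kappa: {kappa:.2f}")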
Given the importance of identifying research-based interventions and the complexity of visual analysis, it is particularly important to examine methods for training individuals to visually analyze data. Few studies to date have evaluated approaches to training visual analysis. In the present study, we developed an assessment tool to measure participants' visual analysis skills. This assessment was used to compare the effects of a lecture with the effects of an interactive, computer-based training on visual analysis. We also included a no-treatment condition in which participants received no instruction on visual analysis. One hundred twenty-three undergraduate participants were randomly assigned to one of the three groups. Results suggested that the lecture and the computer-based training both significantly improved participants' visual analysis skills compared to the no-treatment condition. These results suggest that structured approaches to teaching visual analysis improve novices' accuracy. More systematic training approaches, such as those investigated here, could also increase agreement among experts and result in more valid identification of research-based interventions.
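The abstract does not state which statistical test produced the significance result; a one-way ANOVA across the three groups is one plausible analysis for a three-group between-subjects design. The sketch below uses hypothetical assessment scores purely to illustrate that comparison; the actual data and analysis are in the dissertation itself.

    from scipy import stats

    # Hypothetical assessment scores (percent correct) for the three conditions.
    lecture        = [78, 82, 75, 88, 80, 84]
    computer_based = [81, 85, 79, 90, 83, 86]
    no_treatment   = [60, 65, 58, 62, 67, 61]

    # One-way ANOVA: does mean performance differ across training conditions?
    f_stat, p_value = stats.f_oneway(lecture, computer_based, no_treatment)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")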
Recommended Citation
Snyder, Katie, "An Evaluation of a Computer-Based Training on the Visual Analysis of Single-Subject Data" (2013). All Graduate Theses and Dissertations, Spring 1920 to Summer 2023. 1964.
https://digitalcommons.usu.edu/etd/1964
Included in
Psychology Commons, Special Education and Teaching Commons, Teacher Education and Professional Development Commons
Copyright for this work is retained by the student. If you have any questions regarding the inclusion of this work in the Digital Commons, please email us at .