Date of Award:


Document Type:


Degree Name:

Master of Science (MS)


Communicative Disorders and Deaf Education

Committee Chair(s):

Sandra Gillam


Ron Gillam


Sarah Schwartz


Language sample analysis (LSA) is an important practice for obtaining a sensitive and accurate measure of a child’s language abilities. Several research-validated progress-monitoring tools are currently available that are designed to measure language quality through narrative samples in school-age children. While these tools provide clinically important information about a child’s language abilities, they can be time-consuming to code and challenging to code reliably. In recent years there has been a surge in the use of automated essay scoring (AES) systems for scoring high-stakes written assessments, but few programs have been designed to automate clinical assessments. Narrative microstructure, an important indicator of language quality, lends itself to computer automation because its assessment is relatively objective and straightforward. The purpose of the current study was to test the feasibility of developing a series of computer-based rulesets, collectively referred to as CAMS, for automatically scoring narrative microstructure in LSA. CAMS was designed to individually score six microstructure elements related to conjunctions, meta-cognitive and linguistic verbs, and sentence elaboration. The accuracy and interrater reliability of CAMS and of hand scorers (n = 414) were compared against gold-standard expert scores (n = 50) using percent overlap and quadratic weighted kappa (QWK). Results indicated that QWK between CAMS and expert scores was higher than that of hand scorers on four of the six elements. CAMS also met the literature-based threshold for strong interrater reliability (κ > 0.60) for all six elements. CAMS shows promise as a means of automating the scoring of microstructure in LSA and narrative samples.
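The abstract reports agreement as quadratic weighted kappa (QWK), a chance-corrected statistic that penalizes rater disagreements by the square of their score distance. As a minimal illustrative sketch of the standard QWK formula (not the thesis's actual scoring code; function and variable names are invented for illustration):

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, n_classes):
    """Standard QWK between two raters whose scores are integers 0..n_classes-1."""
    n = len(rater_a)
    # Observed agreement matrix: O[i][j] counts items scored i by A and j by B.
    O = [[0.0] * n_classes for _ in range(n_classes)]
    for i, j in zip(rater_a, rater_b):
        O[i][j] += 1
    # Expected matrix under chance agreement, from each rater's marginal score counts.
    hist_a, hist_b = Counter(rater_a), Counter(rater_b)
    E = [[hist_a[i] * hist_b[j] / n for j in range(n_classes)]
         for i in range(n_classes)]
    # Quadratic disagreement weights: 0 on the diagonal, growing with score distance.
    W = [[(i - j) ** 2 / (n_classes - 1) ** 2 for j in range(n_classes)]
         for i in range(n_classes)]
    observed = sum(W[i][j] * O[i][j] for i in range(n_classes) for j in range(n_classes))
    expected = sum(W[i][j] * E[i][j] for i in range(n_classes) for j in range(n_classes))
    return 1.0 - observed / expected

# Perfect agreement yields kappa = 1.0.
print(quadratic_weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], n_classes=3))
```

Under this convention, κ = 1 indicates perfect agreement, κ = 0 chance-level agreement, and the abstract's κ > 0.60 threshold corresponds to the commonly cited cutoff for strong interrater reliability.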



Available for download on Friday, May 01, 2026