Date of Award


Degree Type


Degree Name

Master of Science (MS)


Department

Mathematics and Statistics

First Advisor

James Powell


The enzyme-linked immunosorbent assay (ELISA) is a powerful quantitative tool whose predictive ability is completely determined by the accuracy of a five-parameter curve fit to an analyte standard. Parameterizing the standard uses a weighted sum of squared errors (wSSE) on data defined by a serial dilution occupying 12-16 of the 96 available wells. Wells used for the standard are wells lost to testing samples, imposing a cost on the ELISA consumer and limiting the standard data available for fitting to two replicates at each concentration. In addition to the limited number of standard wells, parameterization of the standard is highly subject to fitting error resulting from outlying data, which is impossible to detect with only two replicates, and from spurious minima surrounding the optimal parameters. If the standard curve cannot properly approximate samples of known concentration, then inverting the standard will not accurately quantify samples of unknown concentration. Both inaccurate parameterization and lengthy standard dilution series add unnecessary expense for ELISA consumers. Our idea is to address outliers, parameterization improvements, and the design of standard dilutions from a Bayesian perspective that introduces prior information. First, using data from a variety of ELISA runs, we will build prior distributions by fitting the five-parameter curve to "good" data using wSSE. Distributions of credible responses at particular concentrations will be developed by sampling the priors and evaluating the five-parameter curve at those samples. Outliers can then be characterized in an analysis of the tails of the posterior response distribution. An appropriate credible level will be developed for a particular analyte and then tested against others. In parameterization, we will avoid spurious local error minima by maximizing the posterior likelihood for each analyte.
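The five-parameter curve, the weighted error it is fit with, and its inversion for recovering unknown concentrations can be sketched as follows. The functional form and the response-proportional weights below are illustrative assumptions (the common 5PL convention), not necessarily the exact choices made in this work:

```python
import numpy as np

# Five-parameter logistic (5PL) standard curve in the common convention:
# the response rises from a (at zero concentration) toward the asymptote d
# as the concentration x increases; c is the inflection concentration.
def five_pl(x, a, b, c, d, g):
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# Weighted sum of squared errors between observed responses y and the
# curve evaluated at concentrations x; w holds per-well weights.
def wsse(params, x, y, w):
    residuals = y - five_pl(x, *params)
    return np.sum(w * residuals ** 2)

# Inverting the fitted standard recovers an unknown concentration from a
# measured response (valid only for responses strictly between a and d).
def invert_5pl(y, a, b, c, d, g):
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)
```

Minimizing `wsse` over the five parameters gives the standard fit; `invert_5pl` is the algebraic inverse of `five_pl`, which is what quantifying an unknown sample amounts to.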
A sensitivity analysis will determine a weighting for the prior parameter distributions that maintains constant information in the face of new data. We then remodel the standard design, utilizing fewer standard sample concentrations. We also test alternate standard designs by removing dilution concentrations sequentially from the weighted likelihood and testing a coefficient of determination using all available data. The design that causes the least degradation of the coefficient of determination across all replicate series of all analytes is the proposed design. For removals of three to five dilutions, we assess the performance of the standard parameterization by the percentage of acceptably fit data and the percentage of replicate series with acceptably fit data in the low-concentration spectrum. We show assessments for parameterization both with and without prior information. Finally, we comment on further development of the priors, problems in weighting and transforming data, and predicting unknown concentrations without using a reference model.
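The sequential-removal design search can be sketched as below, assuming SciPy's `least_squares` for the curve fit and a plain (unweighted) coefficient of determination; the actual method works with a weighted likelihood, so the residual and scoring functions here are simplified stand-ins:

```python
import numpy as np
from scipy.optimize import least_squares

def five_pl(x, a, b, c, d, g):
    # Common 5PL convention: response rises from a (zero concentration)
    # toward the asymptote d as concentration x increases.
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def fit_5pl(x, y, p0):
    # Unweighted least-squares fit from a user-supplied starting point.
    return least_squares(lambda p: y - five_pl(x, *p), p0, method="lm").x

def r_squared(x, y, params):
    # Coefficient of determination of the fitted curve on (x, y).
    pred = five_pl(x, *params)
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def rank_designs(x, y, p0):
    # Remove each dilution concentration in turn, refit on the remaining
    # wells, and score the reduced design by R^2 computed on ALL wells.
    # The concentration whose removal degrades R^2 least is the best
    # candidate to drop from the standard design.
    return {conc: r_squared(x, y, fit_5pl(x[x != conc], y[x != conc], p0))
            for conc in np.unique(x)}
```

Repeating the search on the winning reduced design extends the idea from one removal to the three-to-five removals assessed above.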