Incorporating Prior Information to Enhance Quantitative ELISA

Sarah M. Reehl, Utah State University

Abstract

The enzyme-linked immunosorbent assay (ELISA) is a powerful quantitative tool whose predictive ability is determined entirely by the accuracy of a five-parameter curve fit to an analyte standard. Parameterizing the standard uses 12-16 of the 96 available wells, which are then lost to testing samples, imposing cost on the ELISA consumer and limiting the standard data available for fitting to two replicates at each concentration. Beyond the limited number of standard wells, parameterization of the standard is highly subject to fitting error arising from outlying data, which are impossible to detect with only two replicates, and from spurious minima surrounding the optimal parameters. If the standard curve cannot properly approximate samples of known concentration, then inverting the standard will not accurately quantify samples of unknown concentration. Both inaccurate parameterization and lengthy standard dilution series add unnecessary expense for ELISA consumers. Using data from a variety of ELISA runs, we will address outliers, poor parameterization, and standard design efficiency from a Bayesian perspective, introducing prior information to influence the probability of parameters in fitting the five-parameter model.
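For reference, the five-parameter curve is assumed here to be the standard five-parameter logistic (5PL) function; the abstract does not state the exact parameterization, but a common form is

y = d + (a - d) / (1 + (x/c)^b)^g,

where x is the analyte concentration, y the response intensity, a and d the asymptotic responses at zero and infinite concentration, c a location parameter, b the slope, and g an asymmetry parameter.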

The prior information we utilize is the data collected on an analyte during quality control checks of plate development. Parameterizing all of the quality control replicates allows us to develop a collection of best-fit parameters and summarize a likely parameter distribution by its mean and variance. In effect, the priors cause the curve-fitting algorithm to select parameters that are similar to past parameterization instances rather than parameters that are influenced only by the current data set and whatever problems it might have.
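As a rough illustration of this step, the sketch below fits the assumed 5PL form to each quality-control run and summarizes the resulting best-fit parameters; the function and variable names are hypothetical, not taken from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def five_pl(x, a, b, c, d, g):
    """Common five-parameter logistic (5PL) standard curve (assumed form)."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def build_prior(qc_runs, p0):
    """Fit the 5PL to each quality-control run and summarize the
    best-fit parameters by their mean and standard deviation.
    qc_runs: list of (concentration array, intensity array) pairs."""
    fits = []
    for conc, resp in qc_runs:
        params, _ = curve_fit(five_pl, conc, resp, p0=p0, maxfev=10000)
        fits.append(params)
    fits = np.array(fits)
    return fits.mean(axis=0), fits.std(axis=0)  # prior means and SDs
```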

First, we use prior information to detect outliers. We sample at random from the prior distribution and use the sampled parameters to form a collection of hypothetical standard curves. At any given concentration, these simulations give a population of response pixel intensities, from which we can exclude intensities that fall outside a given frequency boundary. Our results indicate that a boundary near 0.05 tends to remove few data points while improving measures of fit.
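A minimal sketch of this screening idea, assuming independent normal priors on each parameter and reusing the hypothetical five_pl helper and imports from the sketch above, might look like this:

```python
def flag_outliers(conc, resp, prior_mean, prior_sd, boundary=0.05,
                  n_sim=5000, rng=None):
    """Simulate standard curves from the prior and flag observed
    intensities outside the central 1 - boundary frequency region."""
    rng = rng or np.random.default_rng()
    # Assumption: independent normal priors on each 5PL parameter.
    draws = rng.normal(prior_mean, prior_sd, size=(n_sim, len(prior_mean)))
    sims = np.array([five_pl(conc, *theta) for theta in draws])
    lo = np.percentile(sims, 100 * boundary / 2, axis=0)
    hi = np.percentile(sims, 100 * (1 - boundary / 2), axis=0)
    return (resp < lo) | (resp > hi)  # boolean mask of outlying wells
```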

Next, the prior information is weighted to correct for the negative effects of poor-quality data. We add an extra parameter to control the weighting of the prior parameter probability and to penalize the selection of parameters that are unlikely according to the prior information. Using more prior information lessens the impact of the poor-quality data on the parameterization of the curve. This approach also avoids the spurious minima that occur in the curve-fitting algorithm.
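One natural way to write such a weighted prior penalty (the abstract does not give the exact objective, so this form is an assumption) is a penalized least-squares criterion,

\hat{\theta} = \arg\min_{\theta} \sum_{i} \left( y_i - f(x_i; \theta) \right)^2 + \lambda \sum_{j} \left( \frac{\theta_j - \mu_j}{\sigma_j} \right)^2,

where f is the five-parameter curve, \mu_j and \sigma_j are the prior mean and standard deviation of parameter \theta_j, and the extra parameter \lambda controls how heavily the prior is weighted; \lambda = 0 recovers an ordinary least-squares fit, while a large \lambda pulls the fit toward past parameterizations.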

Lastly, we use prior probabilities to guide parameterization with fewer standard wells in an effort to reduce cost to the consumer. We assume a seven-part serial dilution and skip those dilutions whose removal degrades the curve fit the least across all analytes tested. We are able to reduce the number of wells used by the standard, without serious detriment to the curve fit, from sixteen wells to eight in total. This standard design requires the use of prior information to guide parameter selection, but in return provides the consumer with the ability to test more samples per plate. We formulate three different designs across analytes that use ten, eight, or six wells to determine the standard, recommending the eight-point design.
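A simplified sketch of how candidate dilutions could be ranked for removal (again with hypothetical names, reusing the five_pl helper and imports from the earlier sketch, and omitting the prior penalty for brevity) is shown below.

```python
def rank_dilution_removal(conc_levels, reps, p0):
    """Refit the 5PL after dropping each dilution level in turn and
    score how much the fit to the full data degrades; levels with the
    smallest degradation are the best candidates for removal."""
    conc = np.repeat(conc_levels, [len(r) for r in reps])
    resp = np.concatenate(reps)
    full, _ = curve_fit(five_pl, conc, resp, p0=p0, maxfev=10000)
    base_sse = np.sum((resp - five_pl(conc, *full)) ** 2)
    scores = {}
    for level in conc_levels:
        keep = conc != level
        reduced, _ = curve_fit(five_pl, conc[keep], resp[keep],
                               p0=p0, maxfev=10000)
        # Judge the reduced fit against all wells, including the dropped level.
        sse = np.sum((resp - five_pl(conc, *reduced)) ** 2)
        scores[level] = sse - base_sse
    return sorted(scores.items(), key=lambda kv: kv[1])
```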

Using prior information requires relatively little effort. Standard quality control efforts produce the necessary prior information, and the only quantities needed from the priors are parameter means, standard deviations, and an a priori weighting scheme. In return, the consumer can easily detect outliers, improve parameterization results, and use fewer wells to develop a standard curve.