Date of Award:

5-2009

Document Type:

Thesis

Degree Name:

Master of Science (MS)

Department:

Mathematics and Statistics

Department name when degree awarded

Statistics

Committee Chair(s)

D. Richard Cutler

Committee

D. Richard Cutler

Committee

Adele Cutler

Committee

Brynja Kohler

Abstract

Random Forests (RF) (Breiman 2001; Breiman and Cutler 2004) is a completely nonparametric statistical learning procedure that may be used for regression and classification. A feature of RF that is drawing a lot of attention is the novel algorithm used to evaluate the relative importance of the predictor/explanatory variables. Other machine learning algorithms for regression and classification, such as support vector machines and artificial neural networks (Hastie et al. 2009), exhibit high predictive accuracy but provide little insight into the predictive power of individual variables. In contrast, the permutation algorithm of RF has already established a track record for identification of important predictors (Huang et al. 2005; Cutler et al. 2007; Archer and Kimes 2008). Recently, however, some authors (Nicodemus and Shugart 2007; Strobl et al. 2007, 2008) have shown that the presence of categorical variables with many categories (Strobl et al. 2007) or high collinearity among predictors (Strobl et al. 2008) can produce unduly large variable importance values under the standard RF permutation algorithm. This work uses simulations from multiple linear regression models with small numbers of variables to understand the issues raised by Strobl et al. (2008) regarding shortcomings of the original RF variable importance algorithm and the alternatives implemented in conditional forests (Strobl et al. 2008). In addition, this thesis examines the dependence of RF variable importance values on user-defined parameters.
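As a small illustration of the setup described above (not the thesis code, which presumably relied on the R implementations of RF and conditional forests), the following Python sketch simulates data from a multiple linear regression model with two highly correlated predictors and computes permutation-style variable importances from a random forest. The variable names, coefficients, sample size, and correlation level are illustrative assumptions.

```python
# Minimal sketch: permutation-style variable importance under collinearity.
# All numeric settings below are illustrative, not from the thesis.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500

# x1 and x2 are highly correlated; x3 is an independent predictor.
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9**2) * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

# The response depends only on x1 and x3; x2 is related to y
# solely through its correlation with x1.
y = 2.0 * x1 + 1.0 * x3 + rng.normal(scale=0.5, size=n)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Permutation importance: the drop in accuracy when each predictor
# is randomly shuffled, averaged over repeats.
result = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
for name, imp in zip(["x1", "x2", "x3"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In a simulation like this, the correlated but truly irrelevant predictor (x2 here) can receive a nonzero importance score, which is the kind of behavior the Strobl et al. (2008) critique and the conditional-forest alternative address.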
