Document Type

Article

Journal/Book Title/Conference

PLoS ONE

Volume

12

Issue

4

Editor

Dmitri Zaykin

Publication Date

4-28-2017

Abstract

In high-dimensional data analysis (such as gene expression, spatial epidemiology, or brain imaging studies), we often test thousands or more hypotheses simultaneously. As the number of tests increases, the chance of observing some statistically significant tests is very high even when all null hypotheses are true, so we could reach incorrect conclusions regarding the hypotheses. Researchers therefore use multiplicity adjustment methods to control type I error rates, primarily the family-wise error rate (FWER) or the false discovery rate (FDR), while still desiring high statistical power. In practice, the test statistics (and hence the p-values) in such studies are often dependent, yet some commonly used multiplicity adjustment methods assume independent tests. We perform a simulation study comparing several of the most common adjustment methods involved in multiple hypothesis testing, under varying degrees of block-correlated positive dependence among tests.
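The abstract does not spell out the adjustment methods it compares, but two of the most common are the Bonferroni correction (which controls the FWER) and the Benjamini-Hochberg step-up procedure (which controls the FDR). As a minimal sketch, assuming only their standard textbook definitions (the function names below are illustrative, not from the paper):

```python
def bonferroni_reject(pvals, alpha=0.05):
    """Bonferroni: reject H_i when p_i <= alpha / m, controlling the FWER."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg: sort p-values, find the largest rank k with
    p_(k) <= (k / m) * alpha, and reject the k smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True  # report decisions in the original order
    return reject

# Example: five p-values, alpha = 0.05.
pvals = [0.01, 0.5, 0.03, 0.02, 0.04]
print(sum(bonferroni_reject(pvals)))  # 1 rejection (only p <= 0.01)
print(sum(bh_reject(pvals)))          # 4 rejections (all but p = 0.5)
```

The example shows the trade-off the abstract alludes to: Bonferroni is more conservative (fewer rejections, stronger FWER guarantee), while Benjamini-Hochberg rejects more hypotheses at the cost of the weaker FDR guarantee; how each behaves under block-correlated p-values is what the paper's simulation study examines.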

Included in

Mathematics Commons
