Omnibus tests vs. focused contrasts in analyzing experimental data

September 9, 2014 – 4:57 pm

I’ve always been uncomfortable drawing insights from big, complex ANOVA models, and while searching for a couple of stats references for a paper, I found that there is a sizable literature on the topic.

Here’s a statement from a book by Robert Rosenthal and colleagues that summarizes exactly how I feel about these models:

The problem is that omnibus tests, although they provide some protection for some investigators from the danger of “data mining” when multiple tests are performed as if each were the only one considered, do not usually tell us anything we really want to know.

I like the term “omnibus tests” as a description for these kinds of analyses, which fold every available variable into a single hard-to-interpret statistical model. I find the interpretation of “interaction effects” in many of the models I see to be particularly problematic.

As an alternative to these omnibus tests, the authors suggest using “focused contrasts,” which, to me, sounds very similar to the “lots of t-tests” approach I have settled on for many of my analyses. In their book, they present some novel approaches to constructing these contrasts, and while I have not read the whole book to understand exactly what they did, I think the basic idea is the same: learn about the data from many simple analyses rather than one kitchen-sink-style analysis.
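To make the distinction concrete, here is a minimal sketch in Python with made-up data (the dose groups, means, and weights are all hypothetical, and this is my own illustration rather than the procedure from the book). The omnibus F-test only asks whether the group means differ at all; the focused contrast asks a specific, directional question.

```python
# Omnibus one-way ANOVA vs. a single focused (linear-trend) contrast.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Four ordered "dose" groups; the underlying pattern is a steady increase.
groups = [rng.normal(loc=mu, scale=1.0, size=20) for mu in (0.0, 0.3, 0.6, 0.9)]

# Omnibus test: "do any of the group means differ?"
f_stat, f_p = stats.f_oneway(*groups)
print(f"omnibus F = {f_stat:.2f}, p = {f_p:.4f}")

# Focused contrast: "do the means increase linearly with dose?"
weights = np.array([-3.0, -1.0, 1.0, 3.0])   # linear-trend weights, sum to zero
means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])

# Pooled within-group variance (the ANOVA mean-square error).
mse = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / (ns.sum() - len(groups))

contrast = weights @ means
se = np.sqrt(mse * np.sum(weights**2 / ns))
t_stat = contrast / se
df = ns.sum() - len(groups)
p = 2 * stats.t.sf(abs(t_stat), df)
print(f"linear-trend contrast: t({df}) = {t_stat:.2f}, p = {p:.4f}")
```

The contrast comes with a sign and an effect estimate tied to a specific hypothesis, which is exactly the kind of answer the omnibus F-statistic does not give you.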

I also think they are correct that fear of errors from multiple comparisons is a big reason people gravitate toward omnibus tests. My feeling is that the epistemological challenges that multiple tests create are better dealt with in the interpretation phase than in the calculation phase of an analysis.
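One way to act on that view, sketched below with hypothetical data and group names, is to run the simple tests as-is, then report the multiplicity adjustment alongside the raw p-values as an aid to interpretation rather than folding it into the original calculations.

```python
# Many simple pairwise t-tests, with a Holm adjustment reported afterward.
import itertools
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
conditions = {name: rng.normal(loc=mu, size=25)
              for name, mu in [("control", 0.0), ("low", 0.2), ("high", 0.8)]}

labels, pvals = [], []
for (name_a, a), (name_b, b) in itertools.combinations(conditions.items(), 2):
    _, p = stats.ttest_ind(a, b)
    labels.append(f"{name_a} vs {name_b}")
    pvals.append(p)

# The adjusted p-values inform interpretation; the individual tests stand on their own.
_, p_adj, _, _ = multipletests(pvals, method="holm")
for label, p, pa in zip(labels, pvals, p_adj):
    print(f"{label}: raw p = {p:.4f}, Holm-adjusted p = {pa:.4f}")
```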
