When all you have is a hammer…

…everything looks like a nail.

Daniel Lakens, the 20% Statistician, takes a rare but easy shot at statisticians and null hypothesis significance testing.

Our statistics education turns a blind eye to training people how to ask a good question. After a brief explanation of what a mean is, and a pit-stop at the normal distribution, we jump through as many tests as we can fit in the number of weeks we are teaching. We are training students to perform tests, but not to ask questions.

He defines

…the Statisticians’ Fallacy: Statisticians who tell you ‘what you really want to know’, instead of explaining how to ask one specific kind of question from your data.

My favorite is the two-tailed test of the difference of two means, which can provide evidence that the two are different, but never that they are (nearly) the same.  My runners-up are goodness-of-fit tests, which can reject a proposed model but can never show that it actually fits.  Sometimes I feel like I’m selling the researcher’s version of Snake Oil, rather than teaching sound data analysis and interpretation.
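Showing that two means are (nearly) the same calls for an equivalence test rather than the usual significance test.  Here’s a minimal sketch of the TOST (two one-sided tests) idea in Python; the simulated data and the equivalence margin of 0.5 are illustrative assumptions of mine, not anything from Lakens’ post.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=10.0, scale=2.0, size=50)  # hypothetical sample 1
b = rng.normal(loc=10.1, scale=2.0, size=50)  # hypothetical sample 2

# The usual two-tailed t-test: it can reject "equal means,"
# but a large p-value is NOT evidence that the means are equal.
t, p = stats.ttest_ind(a, b)
print(f"two-tailed t-test: p = {p:.3f}")

# TOST: declare equivalence within a margin delta by rejecting BOTH
# "difference <= -delta" and "difference >= +delta".
delta = 0.5  # equivalence margin, chosen purely for illustration
p_lower = stats.ttest_ind(a, b - delta, alternative="greater").pvalue
p_upper = stats.ttest_ind(a, b + delta, alternative="less").pvalue
p_tost = max(p_lower, p_upper)
print(f"TOST equivalence: p = {p_tost:.3f}")  # small p = evidence of near-equality
```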

Lakens closes with an excellent addendum, a reference to David Hand’s Deconstructing Statistical Questions,  which goes into much more detail.


Seven Pillars

Wisdom hath built her house, she hath hewn out her seven pillars.  –Proverbs 9:1

I just finished Stephen Stigler’s The Seven Pillars of Statistical Wisdom, and I’m daunted, and embarrassed that I waited so long to read it.  Stigler gives us a structure and taxonomy for statistical thinking* that shows us the “big picture” of statistics.

[Image: Stigler’s seven pillars]

Quite a difference from the descriptives-to-inference-to-models approach that most textbook authors follow.  This is making me rethink how I approach my introductory courses, especially those for statistics majors.  I’m starting with a baby step: adding the (inexpensive, paperbound) book as a required reading in my statistical research methods class.

*the 7 pillars: aggregation, information, likelihood, intercomparison, regression, design, and residual (and that’s just the table of contents!)

Some Hard Stats about University Teaching

Thinking about becoming a university professor?  Read Kevin Birmingham’s “The Great Shame of Our Profession” before making definite plans.

A 2014 congressional report suggests that 89 percent of adjuncts work at more than one institution; 13 percent work at four or more. The need for several appointments becomes obvious when we realize how little any one of them pays….

According to the 2014 congressional report, adjuncts’ median pay per course is $2,700. An annual report by the American Association of University Professors indicated that last year “the average part-time faculty member earned $16,718” from a single employer. Other studies have similar findings. Thirty-one percent of part-time faculty members live near or below the poverty line.

[Image: queueing the adjuncts]

It’s amusing to think of all the underpaid university adjuncts striking for a “living wage.”  Unfortunately, the pool of potential “scabs” is way too deep for any strike to be effective for more than one semester.

Of course, not all disciplines have the same problems.  My department is chronically desperate to find enough statisticians to teach all our courses, and I’ve been comfortably ensconced in a non-tenure track job for over 15 years.  But statisticians are rare birds, and everyone I’ve talked to allows as how it’s far too late for them to swot up on their math and stats to become employable.

Tip from the Instapundit, who knows exploitation when he sees it.

Multiple Comparisons, Made Easy

Adrian Colyer at the morning paper, takes a stab at explaining the problem with p-values and multiple comparisons.  He shoots!  He scores!  The crowd* goes wild!

[Image: illustration of a p-value, from Wikipedia]
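For the record, the arithmetic behind the problem fits in a few lines.  A quick sketch (the choice of 20 tests is mine, not Colyer’s):

```python
# With m independent tests each at level alpha, the family-wise error
# rate (the chance of at least one false positive) is 1 - (1 - alpha)^m.
alpha, m = 0.05, 20
fwer = 1 - (1 - alpha) ** m
print(f"P(at least one false positive in {m} tests) = {fwer:.2f}")  # ~0.64
# Bonferroni's fix: test each hypothesis at alpha/m to cap the FWER at alpha.
print(f"Bonferroni per-test level: {alpha / m:.4f}")  # 0.0025
```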

Tip from an O’Reilly Daily Newsletter, which I found languishing in Clutter purgatory.

*OK, the crowd of two or three statistics lecturers who struggle to explain the multiple comparison problem.

An end run around an impossible integral

Ever-insightful polymath John Cook shows how to integrate the Gaussian PDF, in less time than it takes to make breakfast.  The trick?  Coordinate transformations and the Jacobian are your friends.

[Image: the normal distribution PDF]
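For readers who don’t click through, here is the trick in a nutshell (my compressed rendering of the standard argument, not Cook’s exact presentation): square the integral, pass to polar coordinates, and the Jacobian factor r turns the integrand into something with an elementary antiderivative.

```latex
I = \int_{-\infty}^{\infty} e^{-x^2/2}\,dx, \qquad
I^2 = \iint_{\mathbb{R}^2} e^{-(x^2+y^2)/2}\,dx\,dy
    = \int_0^{2\pi}\!\!\int_0^{\infty} e^{-r^2/2}\, r\,dr\,d\theta
    = 2\pi,
```

so I = \sqrt{2\pi}, which is exactly the normalizing constant in the Gaussian PDF.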

A suitably-embellished version of Cook’s post will appear in my lecture notes in the Spring semester.  Thanks, J.C.


The Beginning of the End for 5%

My students repeatedly ask about setting the critical values or interpreting p-values in statistical hypothesis testing.  My stock answer is they should do their tests at the 5% level, since this is the most common and accepted practice in the biomedical community (my translation: it’s what all the KooL KiDz do.)

But now some upstart Bayesian Aggie  (who’s only published 122 papers) has taken a closer look at p-values and significance levels, and claims the critical values are too loose, and need tightening up.  Good-bye 5%, hello 0.5% (for slackers) or 0.1% (for “real” researchers).  I suspect this would eliminate entire forests of bullshit journal articles with p-values of 0.05 minus epsilon, and otherwise wreak havoc in academia.

My only grumble is that I need bigger samples for many of my teaching examples.  I just wrote up a neat demo of the Breusch-Pagan test for heteroskedasticity, which rejected with a p-value of 0.0308.  That ain’t gonna cut it in the Your-Evidence-Ain’t-Good-Enough New World Order. #@$*&++@#!, twice.
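For anyone who wants to cook up a demo in that spirit, here’s a minimal sketch with statsmodels; the simulated regression and its coefficients are stand-ins of mine, not the data behind the 0.0308 figure.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 10, size=n)
# Noise whose spread grows with x, so the null of homoskedasticity is false.
y = 2.0 + 0.5 * x + rng.normal(scale=0.5 + 0.3 * x, size=n)

X = sm.add_constant(x)            # design matrix with intercept
resid = sm.OLS(y, X).fit().resid  # OLS residuals to feed the test

# het_breuschpagan returns (LM stat, LM p-value, F stat, F p-value).
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
print(f"Breusch-Pagan LM p-value: {lm_pvalue:.4f}")
# Whether that clears 0.05, 0.005, or 0.001 depends on the sample size and
# on how strongly the variance grows; hence the grumble about bigger samples.
```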

Tip from Briggsy, the Bayesian Bomb-Thrower.