One thing that will need changing is getting a stronger GRIP. Treating our incoming freshmen like trauma victims doesn’t seem to be working.
Thinking about becoming a university professor? Read Kevin Birmingham’s “The Great Shame of Our Profession” before making definite plans.
A 2014 congressional report suggests that 89 percent of adjuncts work at more than one institution; 13 percent work at four or more. The need for several appointments becomes obvious when we realize how little any one of them pays….
According to the 2014 congressional report, adjuncts’ median pay per course is $2,700. An annual report by the American Association of University Professors indicated that last year “the average part-time faculty member earned $16,718” from a single employer. Other studies have similar findings. Thirty-one percent of part-time faculty members live near or below the poverty line.
It’s amusing to think of all the underpaid university adjuncts striking for a “living wage.” Unfortunately, the pool of potential “scabs” is way too deep for any strike to be effective for more than one semester.
Of course, not all disciplines have the same problems. My department is chronically desperate to find enough statisticians to teach all our courses, and I’ve been comfortably ensconced in a non-tenure-track job for over 15 years. But statisticians are rare birds, and everyone I’ve talked to allows as how it’s far too late for them to swot up on their math and stats to become employable.
Tip from the Instapundit, who knows exploitation when he sees it.
Adrian Colyer at the morning paper takes a stab at explaining the problem with p-values and multiple comparisons. He shoots! He scores! The crowd* goes wild!
Tip from an O’Reilly Daily Newsletter, which I found languishing in Clutter purgatory.
*OK, the crowd of two or three statistics lecturers who struggle to explain the multiple comparison problem.
A suitably-embellished version of Cook’s post will appear in my lecture notes in the Spring semester. Thanks, J.C.
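Before the embellished version lands in the lecture notes, here’s the multiple-comparison problem demonstrated by simulation: run twenty honest tests on pure noise and count how often at least one of them “discovers” something at the 5% level. This sketch is mine, not Colyer’s; the helper name `z_test_p` and all the numbers (20 hypotheses, samples of 30, 2000 simulated studies) are illustrative choices.

```python
import math
import random

random.seed(42)

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for H0: mean == mu0, with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

m = 20            # hypotheses tested per "study"; every null is true
alpha = 0.05
trials = 2000
hits = 0          # studies where at least one p < alpha (a false discovery)
for _ in range(trials):
    pvals = [z_test_p([random.gauss(0.0, 1.0) for _ in range(30)])
             for _ in range(m)]
    if min(pvals) < alpha:
        hits += 1

fwer = hits / trials
print(f"empirical family-wise error rate: {fwer:.3f}")
print(f"theoretical 1 - (1 - alpha)^m:    {1 - (1 - alpha)**m:.3f}")
```

With 20 independent tests the chance of at least one spurious 5%-level “finding” is 1 − 0.95²⁰ ≈ 0.64, and the simulation lands right around that. Hence the green-jelly-beans problem.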
My students repeatedly ask about setting the critical values or interpreting p-values in statistical hypothesis testing. My stock answer is that they should do their tests at the 5% level, since this is the most common and accepted practice in the biomedical community (my translation: it’s what all the KooL KiDz do).
But now some upstart Bayesian Aggie (who’s only published 122 papers) has taken a closer look at p-values and significance levels, and claims the critical values are too loose, and need tightening up. Good-bye 5%, hello 0.5% (for slackers) or 0.1% (for “real” researchers). I suspect this would eliminate entire forests of bullshit journal articles with p-values of 0.05 minus epsilon, and otherwise wreak havoc in academia.
My only grumble is that I need bigger samples for many of my teaching examples. I just wrote up a neat demo of the Breusch-Pagan test for heteroskedasticity, which rejected with a p-value of 0.0308. That ain’t gonna cut it in the new Your-Evidence-Ain’t-Good-Enough World Order. #@$*&++@#!, twice.
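The “bigger samples” grumble can be made concrete: shrinking α raises the critical value, so you need more data to detect the same effect with the same power. A minimal back-of-the-envelope sketch for a one-sample two-sided z-test (the function name, the 80% power target, and the 0.5-standard-deviation effect size are my own illustrative choices, not from the papers in question):

```python
import math
from statistics import NormalDist

nd = NormalDist()

def n_required(alpha, power=0.80, effect=0.5):
    """Sample size for a two-sided one-sample z-test to detect a mean
    shift of `effect` standard deviations with the given power."""
    z_a = nd.inv_cdf(1.0 - alpha / 2.0)   # critical value of the test
    z_b = nd.inv_cdf(power)               # quantile for the target power
    return math.ceil(((z_a + z_b) / effect) ** 2)

for alpha in (0.05, 0.005, 0.001):
    print(f"alpha = {alpha}: n >= {n_required(alpha)}")
```

Going from 5% to 0.5% roughly multiplies the required n by 1.7, and 0.1% costs more than double the old sample size. Multiply that across every example in a semester of lecture notes and the cursing above explains itself.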
Tip from Briggsy, the Bayesian Bomb-Thrower.