The obvious problem with a great many studies is that they’re cited for a proposition by people who never read them. They become the stuff of myth, such as the Lisak study persistently used to show that only 2-10% of rape accusations are false. That’s not remotely what the study says, and yet it’s become an article of faith, repeated constantly, believed without question. But that’s just one issue.
For people who care enough, as opposed to people who simply cite a study assuming its validity, there are numerous problems that arise, from a study conflating definitions or issues (such as a study about “rape” that includes in its definition the “ear rape” of hearing unwanted words) to methodological problems of sample size, self-selection, and payment or incentives. It’s as if I did a study of what everyone living at Casa de SJ thought about something. It might look as official as any other study, but it wouldn’t be of much value to anyone but us.
But there is another huge gap in the knowledge base upon which we rely to ground our claims of truth. What if someone with a couple of letters after their name had a thesis they desperately believed to be true and kinda made sure their study proved it? What if it was just complete nonsense, but was embraced as a darling study, cited a million times to prove a thesis that conformed with the current elite orthodoxy?
Except there are gatekeepers, the people who decide which studies get published and which do not. Granted, they can be pranked, as Sokal Squared showed, and there are journals that pretend to be legit but are merely house organs for junk science grifters, but there are serious journals too, the ones we all know and believe, like the New England Journal of Medicine. Surely they can be trusted to limit what they publish to serious studies about serious matters?
> Academic publishing is famously brutal. You might have a great manuscript that is under review then is rejected based on comments of one anonymous reviewer who thinks that you use too many exclamation points. Or a reviewer who is bitter because you didn’t cite his particular work. Or a reviewer who didn’t really read the manuscript and who goes on to criticize your work for neglecting some important statistical process that you, in fact, implemented plainly and correctly.
>
> And this is just the tip of the iceberg.
>
> I know, because I have published more than 100 academic pieces in my career to date. I’ve pretty much been through it all.
Glenn Geher tried to get a paper published “on the topic of political motivations that underlie academic values of academics,” inspired by a talk by Jonathan Haidt, who founded Heterodox Academy. Nobody wanted to publish it.
> Each rejection came with a new set of reasons. After some point, it started to seem to us that maybe academics just found this topic and our results too threatening. Maybe this paper simply was not politically correct. I cannot guarantee that this is what was going on, but I can tell you that we put a ton of time into the research and, as someone who’s been around the block when it comes to publishing empirical work in the behavioral sciences, I truly believe that this research was generally well-thought-out, well-implemented, and well-presented. And it actually has something to say about the academic world that is of potential value.
>
> I’ve never had a paper that was so difficult to publish. Not even close.
Since no journal would take it, he ended up taking the advice of Clay Routledge and publishing it on his own.
> Honestly, this suggestion seemed kind of genius to me. After all, I don’t need more publications for any extrinsic reason at all. I’ve held tenure since 2004. Further, I know full well that my Psychology Today blog posts receive way more views than do my academic articles. And I know that, in fact, many of these views come from academics themselves.
A bit of irony is that blog posts are often far more widely read than academic articles, but lack the ascribed credibility of “serious” journals. Not to mention, they don’t cite as well, so they’re easily dismissed. But what did Geher’s study find?
> We designed a study with academics in mind. In short, we surveyed nearly 200 academics from around the US and asked them to rate the degree to which they prioritize each of the five following academic values:
>
> - Academic rigor
> - Knowledge advancement
> - Academic freedom
> - Students’ emotional well-being
> - Social Justice
Do the “gatekeepers” of what gets cited value academic rigor and the advancement of knowledge, or do they value ideological objectives?
Some highlights of the findings are as follows:
- Relatively conservative professors valued academic rigor and knowledge advancement more than did relatively liberal professors.
- Relatively liberal professors valued social justice and student emotional well-being more so than did relatively conservative professors.
- Professors identifying as female also tended to place relative emphasis on social justice and emotional well-being (relative to professors who identified as male).
- Business professors placed relative emphasis on knowledge advancement and academic rigor while Education professors placed relative emphasis on social justice and student emotional well-being.
- Regardless of these other factors, relatively agreeable professors tended to place higher emphasis on social justice and the emotional well-being of students.
Of course, if you want to know more than just the highlights, or whether these highlights are legitimate, or whether the methodology of the study is sound, you would have to read the actual study, even if these highlights confirm what you always suspected about what’s become of academia.
Then again, you won’t be able to cite to this study in a prestigious journal, because no prestigious journal would have anything to do with it, unlike the Lisak study, which has since been debunked as a worthless piece of crap yet remains irrefutable in campus rape mythology.