You’ll recall the so-called Sokal Squared dust-up: three academics who were able to publish a series of outrageously fake articles in peer-reviewed critical studies journals. Fenster wrote about it here when the story broke.
For many inside academe the hoax had the desired effect: it was seen as exposing not only the weaknesses in the peer review process but also the inanity of the critical studies field as a whole. Unfortunately for some, it also reverberated outside higher education, casting all of academe in a harsh light at a particularly brittle moment.
Higher education as an institution has lost a great deal of public trust. It is also under severe financial pressure, with commentators like Clayton Christensen doubling down on his prediction of mass college closings and consolidations in the next decade.
When under multiple sources of pressure, what does one do? As always, you have a choice. You can address the problems head-on or you can double down in your own way. Alas, higher education has done far too much of the latter and not enough of the former. That's true of broad-gauged matters like changing the business model, the main focus of Christensen's warning. But it is also true of the little Sokal Squared tempest in a teapot.
Here is a good example of the academic pushback from an academic blog put out by PZ Myers, a biology professor.
First, the problem is minimized: peer review has its flaws.
(Y)es, if you carry out a badly designed experiment, you will sometimes get a positive hit, but you can’t conclude anything from it. No one is surprised that, in the volume of papers submitted to the peer-reviewed literature, clunkers get through. We know the system is not perfect.
And then there’s the ever-popular Squirrel! defense: look over there!
You know, if you’re a left-leaning liberal, there are plenty of gigantic targets you could be taking aim at: all you need to do is look at all three branches of the federal government, or police activities nation-wide, or the military-industrial complex, or the undermining of regulations by big corporations, or wealth inequality. We have no shortage of big, serious problems. But for some reason, these left-leaning liberals have decided that academia is too left-leaning, and must be exposed.
Then there’s the appeal to the heart.
But punching down at marginal journals because they have a soft spot for tendentious prattle ain’t it. It’s also exploiting a feature of academia that I rather like — trust.
One might rather think that the abuse of trust is more readily apparent in the way tendentious prattle is accepted without much thought or, if you will, critical thinking.
The best argument presented falls back on the Baconian thinking that undergirds science and is held to undergird academic inquiry as a whole. According to Myers there is a good way to tackle whatever problems may be out there:
it’s called writing rebuttals and critical arguments that directly address bad work.
That’s true, actually. Especially in an ideal world. But in the world in which we actually live critical studies has rolled merrily along despite the original Sokal hit years ago, shielded by numerous layers of institutional fat. Scholarly criticism of critical studies would be better than hoaxes all else being equal but all else is not equal. There is such a thing as a bad idea whose time has come.
But if the pushback to Sokal Squared were confined to the odd academic blog and the occasional outraged letter to the editor that would be fine. Everybody has a chance to join in the debate and if some academics wish to criticize the hoaxers so be it.
But beware if and when the institution itself begins to use institutional power against the heretics. That process appears to be underway.
The vehicle is the Institutional Review Board, or IRB.
An institutional review board (IRB), also known as an independent ethics committee (IEC), ethical review board (ERB), or research ethics board (REB), is a type of committee that applies research ethics by reviewing the methods proposed for research to ensure that they are ethical. Such boards are formally designated to approve (or reject), monitor, and review biomedical and behavioral research involving humans. They often conduct some form of risk-benefit analysis in an attempt to determine whether or not research should be conducted. The purpose of the IRB is to assure that appropriate steps are taken to protect the rights and welfare of humans participating as subjects in a research study. Along with developed countries, many developing countries have established national, regional or local Institutional Review Boards in order to safeguard ethical conduct of research concerning both national and international norms, regulations or code.
Sounds good, and it has an illustrious history:
Formal review procedures for institutional human subject studies were originally developed in direct response to research abuses in the 20th century. Among the most notorious of these abuses were the experiments of Nazi physicians, which became a focus of the post-World War II Doctors’ Trial, the Tuskegee Syphilis Study, a long-term project conducted between 1932 and 1972 by the U.S. Public Health Service, and numerous human radiation experiments conducted during the Cold War. Other controversial U.S. projects undertaken during this era include the Milgram obedience experiment, the Stanford prison experiment, and Project MKULTRA, a series of classified mind control studies organized by the CIA.
But many critics say it is at best cumbersome and ineffective and at worst inherently flawed. This is from an interview with Carl E. Schneider, Professor of Law and Professor of Internal Medicine at the University of Michigan and author of the book The Censor’s Hand.
The problem is that the IRB system is so fundamentally misconceived that it is virtually a model of how to regulate badly. Good regulation is accountable, but IRBs are effectively answerable to nobody. Good regulation has clearly defined jurisdictional limits, but IRBs may intervene as they wish. Good regulation is guided by clear rules, but IRBs have little more than empty principles. Good regulation is disciplined by fair procedures, but IRBs can ignore every fundamental precept of due process. Good regulation is transparent, but IRBs need not even explain — much less justify — their decisions. Good regulation is staffed by experts, but IRB members cannot be competent in all the specialties they regulate. Good regulation has manageable workloads, but IRBs regulate more details of more research in more ways than they can review responsibly, and they have steadily broadened and intensified their hold over research.
In short, the IRB system makes unreliable decisions because it is lawless and unaccountable, because its organization, procedures, membership and imperialism are so inappropriate. The problem is not regulation, it is bad regulation.
With the result that IRBs have become censors.
IRBs have become censors because they have been told to censor. IRBs may decide what questions researchers can ask, how to ask them, how to analyze answers and how to report findings. IRBs’ incentives lead them into the classic censors’ faults — like constraining too much for too little reason. If IRBs say no, little harm can come to them: researchers may be dismayed, but they cannot afford to alienate regulators who have unreviewable authority over their work. If IRBs say yes, they risk blame for trouble that (justly or not) the research provokes. This trouble can be painful, including institutional embarrassment, lawsuits and federal sanctions.
In a way IRBs occupy a kind of sacred space. Very bad deeds prompted their creation but the moral fervor seems to have led to a walling off of accountability. We will always need the sacred but the sacred always invites this kind of walling off, accompanied by rot.
A renegade but apolitical IRB is bad enough in that it can retard or even harm scientific progress. But what happens when the Olympian remove that is part of the IRB culture is married to political, or politically correct, concerns?
So now we see what might happen.
Peter Boghossian, an assistant professor of philosophy at Portland State University and the only one of three researchers on the project to hold a full-time academic position, was found by his institutional review board to have committed research misconduct. Specifically, he failed to secure its approval before proceeding with research on human subjects — in this case, the journal editors and reviewers he was tricking with his absurd but seemingly well-researched papers. . . . “An IRB protocol application should have been submitted to the Office of Research Integrity,” reads a determination letter from Portland state’s IRB dated last month. “University policy requires that all research involving human subjects conducted by faculty, other employees and students [on campus] must have prior review and approval by the IRB.”
Boghossian says in a YouTube video:
“I think that they will do anything and everything in their power to get me out. And I think this is the first shot in that.”
At this point in the story there is really not that much to be said. Updates will follow.