A while back, I asked a psychology professor why replication studies were frowned upon. She said something like "studies need to be unique and never been done before" and "there's no money in replicating someone else's work". That, apparently, is the whole reason.
If that's true, then I'm guessing we'll continue to get more Amy Cuddys popping out of the social sciences.
We need more people like this.
EDIT: In addition to this, perhaps we should encourage new students to do a replication study as part of their education.
As you correctly point out, the skillset is slightly different from doing novel research, and the attitude is different as well, so it would make sense to pool that talent. Since there is no money to be made directly, governments should step in. And to avoid the problem of journals not wanting to publish replications, such an institute would have to be funded well enough to self-publish with prestige.
My guess is that part of the resistance is fear from established PIs that their work won't actually replicate. Even if it's only a minority of profs, I suspect some of the most famous and powerful ones have cut corners or equivocated at one point or another to get where they are. It's easy for many other PIs to then get swept up in the "opinion of the field".
A paper claiming a new result generally gets more citations than a paper saying "we replicated the study of Doe et al. and reproduced the results", even if the latter is equally or more useful from a scientific standpoint.
Indeed, it is not just about money either. Intangible prestige and status among the scientific expert community is just as coveted, or even more so. Do people read your papers and talk about them at dinner parties? Do you get invited to give talks at prestigious institutions? Do a lot of interesting and similarly active people turn out for your talks? Do people with good connections and resources want to collaborate with you on exciting ideas?
And it turns out that what people, including scientists, actually care about is novel, bold, visionary ideas, not drone-like, repetitive, meticulous, detail-oriented work following in the footsteps of some other group. People want something new, something cool, something flashy, something sexy, something surprising. Not just the media! Scientists themselves, too!
Moreover, many labs organize around a methodology like fMRI rather than around any particular type of question. Imagine planning a multi-decade future career around a single tool that exists today and then pretending you truly care about novelty.
You're spot on that people care deeply about academic status. But what is valued in gaining that status has become deeply broken. The fastest way to status is to put out multiple overhyped individual publications that meet the minimum viable novelty threshold for inclusion in a good journal. Half of the battle is a marketing game.
Increased rigor is a time and money commitment that many aren't willing to make, but true novelty is a much bigger risk and a good way to kill a career for anyone not yet tenured.
It's not gonna change; one has to learn to accept it and adapt to it.
> The theory is often cited as an example of the replication crisis in psychology, in which initially seductive theories cannot be replicated in follow-up experiments.
Example: the thousands of fraudulent XRD spectra of made-up compounds.
This isn't to say there are no good works, but in a field that produces >10,000 papers per year, the bulk of them can't be all that great. Academics have to keep their jobs, PhD students have to graduate, etc. So everyone keeps pretending.
Interesting, didn't hear about this case before. Can you provide a link?
It's noteworthy that the crisis is a huge deal in psychology, a field that "real" scientists were sneering at a century or more ago.
I once knew a guy who majored in psychology at a pretty prestigious U.S. research university, back in the mid-1980s. He said that the psych department there did a big survey of psych undergrads, asking what they thought of the subject. The most significant finding? The psych majors thought the first two years of psych classes were real facts about the real world. But after that, they thought it was all bullsh*t, and learning how to create and spew bullsh*t yourself. (The guy went on to law school and was quite successful. Which could be interpreted in interesting ways.)
Other fields definitely have similar problems: Amgen famously found that they could replicate only 6 of the 53 landmark cancer biology papers they tested.
The "replication crisis" showed up in psychology first for a few reasons. First, psychology probably runs the most replications of any field. The experiments themselves are easy to reproduce: all you need to redo a survey project is a Qualtrics account—--or a photocopier. They can often be done fairly quickly, making replications a good "warm-up" project for a new grad student. The field also seems to value replications, at least a little.
Second, psychology is actually quite hard! Some failed replications surely reflect sloppy or sketchy research practices, but many of them probably reflect uncontrolled variables. A physicist can produce an endless stream of photons, all of which are exact copies and totally independent of each other. Psychologists have nothing like that. People's reactions are shaped by their own idiosyncratic biases, their past history, and even what they think the experimenter wants to see. It's often not clear which variables actually matter, and in that sense, "failed" replications are sometimes more interesting than successes.
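Here's a toy illustration of that point (hypothetical numbers, not any real study): suppose a treatment only works for some subpopulation of "responders". Two labs running the exact same protocol on different populations will honestly reach opposite conclusions, and neither has done anything wrong.

```python
# Hypothetical illustration: a real effect exists, but only for a subgroup.
# Two labs run the same study on populations with different subgroup mixes;
# one finds the effect, the other "fails to replicate" it.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

def run_study(frac_responders, n=100):
    """Treatment shifts scores by 1.0, but only for 'responders'."""
    responders = rng.random(n) < frac_responders
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(0.0, 1.0, n) + np.where(responders, 1.0, 0.0)
    return ttest_ind(treated, control).pvalue

print(f"Lab A (80% responders): p = {run_study(0.8):.4f}")  # very likely < 0.05
print(f"Lab B (10% responders): p = {run_study(0.1):.4f}")  # very likely > 0.05
```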
In structural engineering we literally tore metals apart to find their physical properties. Concrete was mixed. Giant crushing machines were used. Etc. Sometimes I think half the reason some fields don't have a replication crisis is that literal lives are on the line if you get it wrong, so people actually do the work to build the testing gear and align the incentives that need to be aligned.
You don't think lives are on the line with psychology? The replication crisis there should be terrifying, but because of the extra degree of separation between theory and result, it just gets hand-waved away.
You are right that if people died because of a structural engineering mistake based on incorrect information, people would be sued and possibly sent to prison. In mental health, however, people can die in droves and nothing happens! Or public policy that destroys the lives of millions can be made on bad research and just shrugged off later.
You can't tear people apart to figure out why they're depressed. Not only will no one volunteer to be torn apart, but it wouldn't tell you much anyway: depression (probably) isn't a property of a single protein or cell. In a sense, this is actually my day job: I work with animals where we can dig down to the level of a single neuron or sometimes even a single molecule... but it's still different enough that you need to go back to intact, living people to see if your proposed treatment works.
A lot of these appear to be cases of flat-out fraud or dubious research practices. These all get muddled together under the heading of "replication crisis" but I do think they're a bit of a different beast.
Did you link to the wrong article?
Clinical psychology is still useful in my opinion. It still helps people. There's a real feedback loop where understanding can change outcomes.
I dislike the falsifiability approach (if it's falsifiable, it's science) and the peer review attitude (the peer review process and scientific consensus are what define science).
My approach is that you need to close a loop. You need to do something useful whose success depended on the truthfulness of the research, and only to the extent of this dependence was anything proved.
Curiosity to understand how the world works is enough as motivation in science. Applications can also provide ideas but then you can also diverge from them and follow directions that look intrinsically interesting or perplexing.
What you note does not address the core problem, namely that academia selects for certain non-diverse personality traits, like very high conscientiousness, wanting to please authority, tolerance of monotony, etc., which all lead to it becoming inside baseball, with everyone scratching each other's backs.
You want to force them to turn away from their navel-gazing by orienting them toward concrete, useful applications, but what you really should incentivize is research whose main characteristic isn't that it's useful, or that the current trendy hivemind consensus approves of it, but the pursuit of understanding, the desire to see clearly and get closer to the truth. And this doesn't just depend on what you incentivize with metrics, but also on which personalities are allowed in in the first place.
I hardly care if nobody will gain anything from what you built - I only care that in building it, you proved your understanding. It doesn't even have to be building something useful. Even mathematical proofs count. It wasn't enough to hold a nice story in your head about the behavior of mathematical objects to get a proof - you had to use that understanding to write the mathematical proof. You did something that would fail if your understanding was incorrect - and everyone can objectively judge your success.
Things psychological research has taught us about clinical psychology: the therapeutic relationship is the only thing that reliably predicts helping the client. Not the school of thought, not training, just whether you vibe with the therapist.
Work on perception and memory has held up astonishingly well. Behavioral experiments from the 1800s and 1900s basically nailed down the properties of the retina, a hundred years before we had the techniques to measure it physiologically.
It's also integrated in a lot of "real world" applications: a lot of work has gone into building psychoacoustic models for audio codecs, color spaces for image reproduction, etc. Findings about attention and eye movements influence UX, and all sorts of products exploit behavioral biases (often for nefarious ends).
My view is that replication is just one safeguard within science, but is not the only thing needed. Confirming or refuting isolated factoids doesn't tell us much more than the factoids themselves.
I think a science needs to develop toward theories that connect those factoids into a framework. Psychology has not reached that stage yet. Compare it to physics or chemistry. Some commentary has suggested that the replication crisis extends to those fields too; my graduate research project in physics was never replicated. But physics has a theoretical framework that connects studies to one another, so that if one or more isolated studies fail to replicate, the whole framework remains strong.
In some sense, "useful" could include merely "useful to science."
Guess what: it's very easy to find things when 1 in 20 of your papers can produce them by random chance (the conventional p < 0.05 threshold). And that's when people aren't outright committing scientific fraud or p-hacking.
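To make the arithmetic concrete, here's a minimal simulation (mine, not from any paper): run thousands of experiments where no effect exists at all, test each one at the conventional p < 0.05 level, and about 1 in 20 come back "significant" anyway.

```python
# Minimal sketch of the base-rate problem: every experiment below tests a
# true null (both groups come from the same distribution), yet ~5% of them
# clear the p < 0.05 bar by chance alone.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    a = rng.normal(0.0, 1.0, 30)  # no real effect: same distribution...
    b = rng.normal(0.0, 1.0, 30)  # ...for both groups
    if ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

# Roughly 0.05, i.e. about 1 in 20 "findings" out of pure noise.
print(f"False positive rate: {false_positives / n_experiments:.3f}")
```

And that's the honest case: p-hacking (testing many outcomes and reporting only the ones that cross the bar) pushes the rate far above 5%.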
Perhaps the real lesson to learn from all of this is that we should defund every field that does not adhere to strict replicability controls. When funding follows findings, you have set up a perverse incentive: to get funding, find something, even when there is nothing.
I wish I could be more hopeful, but it seems like a large portion of researchers in fields like psychology are too worried about their prior, poor quality research to embrace change.
A bigger problem may be that the entire body of published knowledge, and even the choice of categories that are the focus of study (such as "personality" and "intelligence") are too numerous and prevalent throughout our entire culture to readily abandon.
The right thing to do if half of your knowledge base is bunk, might be to erase all of it and start over, but that's virtually impossible.
My conjecture is that all truths must be experienced. This aligns with the notion of “nullius in verba”, the original motto of the Royal Society, arguably the birthplace of modern science.
Science takes place in a laboratory. Whether or not ink on a page is true depends on nothing other than replicating the methods for oneself.
That the current environment is for printing ink on paper and calling it a day tells me that we’ve moved on from science as an epistemological solution to the notion of truth and regressed to an era of truth emanating from privileged authorities.
How would we ever progress without a change in method from observation to belief?
It wouldn't, because Newton considered himself primarily an alchemist and philosopher, and viewed all of his "scientific" endeavors as a means to the end of understanding God through understanding Creation.
You made the mistake of reaching too far back in time, searching for an example of someone who should have been aghast at the premise of applying faith to science, and found someone for whom they were one and the same. Although he did reject a lot of the orthodox views of the church, he certainly didn't reject religion outright for its lack of verifiability.
If we want to talk about religion and taking no one's word for it, you've already started us in this direction by pointing out his rejection of church orthodoxy, which is basically the extreme end of Protestant practice.
If you believe that God wrote the Bible, that the church is made up of corrupt men, and that only your personal understanding of the word of God is the path toward the truth, well, you're completely at odds with the epistemology of the Catholic Church with regards to religion, and very much primed to take the same approach of a personal relationship with God and his word to a personal relationship with natural philosophy.
That's like doctors studying phrenology or physicists studying phlogiston. Outside of a history lesson on what not to do, it's not really useful.
If you think Freud is well-accepted, you might be getting your views on psychologists from TV shows, or at minimum are vastly overgeneralizing.
For example, the general public, and even scientists and engineers, still talk about "heat flow" as though temperature itself were a fluid being transferred between objects. This is physically incorrect, but there are nevertheless clear mathematical analogies between how objects in contact with each other reach equilibrium and other physical systems, like two containers filled with water to different levels and connected by a pipe at the base. The reason for this is entirely historical, and if one is not mindful of that, the terminology can be very misleading.
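To show how tight that analogy is, here's a toy sketch of my own (assuming, for simplicity, equal heat capacities and equal tank cross-sections): both systems obey the same first-order equilibration law, dx1/dt = -k(x1 - x2), so the same few lines simulate either one.

```python
# Toy model: two bodies exchanging heat and two water tanks joined by a
# pipe both follow dx1/dt = -k*(x1 - x2), relaxing to the same equilibrium.
def relax(x1, x2, k=0.5, dt=0.01, steps=2000):
    """Euler-integrate the shared equilibration law."""
    for _ in range(steps):
        flow = k * (x1 - x2) * dt  # "heat flow" or "water flow": same math
        x1, x2 = x1 - flow, x2 + flow
    return x1, x2

print(relax(100.0, 20.0))  # temperatures in deg C: both tend to 60.0
print(relax(3.0, 1.0))     # water levels in m: both tend to 2.0
```

The math is identical even though nothing fluid-like is actually moving in the thermal case, which is exactly why the caloric-era language survives.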