Replication Crisis – Next Scene

Everybody is talking about replication again, this time because of the results of Many Labs 2. You can read about it, among other places, in NATURE, THE ATLANTIC, FUTURISM.COM, etc.


Over the past few years, an international team of almost 200 psychologists has been trying to repeat a set of previously published experiments from its field, to see if it can get the same results. Despite its best efforts, the project, called Many Labs 2, has only succeeded in 14 out of 28 cases. Six years ago, that might have been shocking. Now it comes as expected (if still somewhat disturbing) news.  (https://www.theatlantic.com/science/archive/2018/11/psychologys-replication-crisis-real/576223/)


What we are facing today is a consequence of many problems described in our book a few years earlier. Chapter 6 of Psychology Gone Wrong was entitled:

Scientific Conspiracy? The Myth of Replication

Below you will find some excerpts from our book devoted to the problem of replication.

We would like our readers to imagine a group of villainous scientists who cooperate in order to falsify and distort all the knowledge about humans. They are completely devoted to their conspiracy. It is not an easy task. After all, the world’s most brilliant minds will immediately mobilize to counter their devious plot. The world’s most penetrating gazes will be upon them, and the most advanced technologies will be used against the conspirators. Such an evil plot would seem to have no chance of succeeding. It would have to remain hidden from watchful eyes; the plot’s mechanisms would have to work flawlessly, even without supervision. You have probably already spotted that our science fiction, even after just three sentences, is nothing but a potboiler. But you’d be surprised to learn that such a mechanism actually exists! People created it, although they were not in a conspiracy. It is not conspicuous, though nobody even tries to hide it. It works without any supervision, and it works flawlessly, perhaps with minor exceptions. Furthermore, the majority seems to actually appreciate the fact of its existence. How is this possible? What is it? Who created it?

Every scientist who attempts to test any hypothesis should start his work with a detailed literature review in order to check whether someone has already conducted similar studies. It is possible that his hypothesis was verified a long time ago. Following this simple rule saves tremendous amounts of time and work, thus protecting scientists from wasting their scientific resources. Unfortunately, it does not always work: if someone has already failed to confirm a given hypothesis, a researcher has almost no chance of finding out about it! Even if the same hypothesis was analyzed and studied by many scientists, and none of them reached interesting results, the chances of learning about their failures are still close to zero. Why? Psychology journals do not publish the results of negative studies, especially those considered inconclusive. The majority of journals even state this in their instructions for authors. If a study did not confirm its hypotheses, most journals will reject it, reserving precious journal space for “publishable” results.

Why do they do it? This might be justified to a certain degree. Imagine a group of researchers making some completely irrational assumptions about human nature and conducting intensive studies that will, obviously, yield negative results. They would soon flood all journals’ pages with their complete and utter nonsense. Similarly, authors unable to properly plan, conduct and conclude a project would demand to be published as well. These reasons we understand. But the unwillingness (it has become an unwritten law) to publish negative results extends to all research, even research that exactly replicates previously conducted studies. It appears that another custom in science developed spontaneously: most editors and reviewers believe that if there are reasons not to publish certain negative results, it is best not to publish them at all. Let’s take a look at the consequences of this common approach, as such behavior may well be the proverbial throwing out of the baby with the bathwater.

Lack of access to negative research results causes an enormous waste of time, energy and scientific resources, simply because researchers cannot know if someone has ever worked on the issues that interest them. Therefore numerous scientists, all around the world, waste their efforts, time and public money from research grants trying to investigate the same problems. For this reason alone, a couple of decades ago, attempts were made to create a journal that would publish negative research results only. Marvin Goldfried and Gary Walters even proposed a name for such a journal: Journal of Negative Research, printed in a similar way to Psychological Abstracts.[i] After many years, the situation has not changed much. Actually, it is getting worse: “Negative results now account for only 14% of published papers, down from 30% in 1990.”[ii]

For all readers who have managed to read our book up to this point, the gigantic waste of time and energy produced by psychology researchers isn’t really a major surprise. The consequences of the non-publication of negative results undermine the very objective of science: the pursuit of truth. In the following parts of this book we will discuss, among other things, the efficacy of various psychotherapies. For now, let’s imagine a researcher who checked several therapies for their efficacy and did not achieve any statistically significant results. For most editors of psychological journals, such work is not worth publishing, as it is inconclusive. However, if the research was methodologically correct, it represents a tremendously important discovery, one that has no chance of reaching a wider public. The distortion goes further when we consider the plethora of articles presenting so-called meta-analyses. A researcher who conducts a meta-analysis does not study the topic on his own. Instead, he uses articles previously published by other scientists to reach his conclusions. A meta-analysis can therefore be extremely useful, as it gathers the experience and results of many scientists in one single analysis. If a researcher were interested, for example, in the efficacy of Gestalt therapy, he could collect tens or hundreds of articles that report similar parameters and draw a common conclusion from all previous research. You can probably see the problem already: this powerful tool becomes useless if researchers can only collect the articles that journal editors considered publishable. Without insight into all research conducted on a given topic, including all negative trials, a meta-analysis will only further distort our perception of reality.
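To make the scale of this distortion concrete, here is a small, purely hypothetical simulation (our illustration, not part of any study discussed here): a thousand little experiments test an effect that in reality does not exist at all, yet only the “positive and significant” results escape the file drawer. Anyone pooling just the published studies would conclude the effect is substantial. All numbers in the sketch are made-up assumptions.

```python
import math
import random

random.seed(42)

def one_study(true_effect=0.0, n_per_group=30):
    """Simulate one two-group experiment; return the observed mean
    difference and a simple z-test p-value (population SD assumed = 1)."""
    treated = [random.gauss(true_effect, 1.0) for _ in range(n_per_group)]
    control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
    diff = sum(treated) / n_per_group - sum(control) / n_per_group
    se = math.sqrt(2.0 / n_per_group)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(diff) / se / math.sqrt(2))))
    return diff, p

all_studies = [one_study() for _ in range(1000)]               # true effect is exactly zero
published = [d for d, p in all_studies if p < 0.05 and d > 0]  # only "positive, significant" results survive

print(sum(d for d, _ in all_studies) / len(all_studies))  # pooled over everything: ~0.00
print(sum(published) / len(published))                     # pooled over the published subset: ~0.6
```

A meta-analysis, however carefully done, can only be as honest as the set of studies it is allowed to see.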

This phenomenon of severe distortion of knowledge has gained a few original labels. The “file drawer effect” was first proposed by R. Rosenthal.[iii] It is also sometimes called publication bias.[iv] An unimaginable amount of knowledge is lost in desk drawers, on archive shelves and in document shredders. After all, storing useless, unpublishable research results is pointless.
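Rosenthal’s paper, cited above, also proposed a rough way to gauge how bad the drawers would have to be: the “fail-safe N,” an estimate of how many unpublished null results would have to be hidden away before a published, apparently significant literature loses its significance. Below is a minimal sketch of that calculation as it is commonly presented; the five example z-scores are made up for illustration.

```python
import math

def fail_safe_n(z_scores, critical_z=1.645):
    """Rosenthal-style fail-safe N: the number of unpublished studies
    averaging z = 0 needed to pull the combined result below
    one-tailed significance at alpha = .05 (critical z = 1.645)."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    return max(0, math.ceil(z_sum ** 2 / critical_z ** 2 - k))

# Five published studies, each just past significance:
print(fail_safe_n([2.0, 2.1, 1.9, 2.2, 2.0]))  # ~34 hidden null studies would be enough to overturn it
```

A small fail-safe N is a warning that even a modest file drawer could wipe out an entire published “effect.”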

Can you now see the mysterious intrigue that successfully distorts all knowledge about humans? It would be hard to think of anything more original, less conspicuous or more effective than the mechanism we have just described. This tendency to hide negative research results from other people is not limited to psychology. It is dangerously common in other social sciences and in medicine as well.[v] Research shows, however, that it is by far the worst in psychology and psychiatry. Daniele Fanelli found that “the odds of reporting a positive result were around five times higher for papers published in Psychology and Psychiatry and Economics and Business than in Space Science.”[vi]

Unfortunately, this is not the end of the negative consequences of publication bias. Do you recall the case of Cyril Burt? Thanks to falsified research data, his made-up theory on the mechanisms of inheriting intelligence became widely accepted. As you remember, other published studies yielded results very similar to those published by Burt and his followers. If someone at that time had conducted an inconclusive study on the inheritance of intelligence, would he have had any chance to publish his results? Or would they have been classified as “unpublishable”?

Things might have looked the same when it comes to replicating the results of Watson’s Little Albert experiment, described previously in chapter five. The file drawer effect probably claimed the research results that contradicted the concepts of phobia formation derived from Little Albert’s case.

Publication bias will always favor the hypotheses and theories that have a chance to shine in printed journals and that thereby become, at least temporarily, accepted. The mechanism resembles what psychologists describe as a self-fulfilling prophecy, and it easily explains why so many ridiculous ideas keep circulating in scientific thought. Unfortunately, it is a very primitive mechanism, the same one that sustains many people’s faith in horoscopes…

The priming effect, well known and popular in psychology, serves as an excellent illustration of this phenomenon. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. One of the most famous studies, from 1998, showed that thinking about a university professor before answering general knowledge questions led to higher scores than imagining a football hooligan.[vii] The media happily report such research results: they are easy to understand, they usually run slightly against common intuition, and they are very promising. Wouldn’t you like to ace all general knowledge quizzes by applying simple mental tricks? Research on the priming effect spread rapidly across psychology over the past decade, and some of its insights have already made it out of the labs and into the brains of policy geeks eager to poke the masses. But in April 2013, a paper in PLoS ONE, a renowned journal, reported that nine separate experiments had failed to reproduce the results of that famous 1998 study.[viii]

Similar doubts regarding the validity of priming, especially in such a simplified form, had been raised long before. Unfortunately, the number of research papers confirming the effect overwhelms the few voices of common sense. It is worth mentioning that at the time of writing these words, scientific databases in the field of psychology reported 20,878 papers containing the word ‘priming.’

The problem surfaced, however, not because of the few attempts to replicate the original research results, but due to the intervention of one of the world’s most renowned psychologists, the Nobel Prize winner in economics Daniel Kahneman. He summarized his worries and concerns about the state of research on priming in an open letter sent in 2012 to several dozen social psychologists. The media quickly picked this up, and the case became widely known. What was in Kahneman’s letter?

As all of you know, of course, questions have been raised about the robustness of priming results. The storm of doubts is fed by several sources, including the recent exposure of fraudulent researchers, general concerns with replicability that affect many disciplines, multiple reported failures to replicate salient results in the priming literature, and the growing belief in the existence of a pervasive file drawer problem that undermines two methodological pillars of your field: the preference for conceptual over literal replication and the use of meta-analysis. Objective observers will point out that the problem could well be more severe in your field than in other branches of experimental psychology, because every priming study involves the invention of a new experimental situation.

For all these reasons, right or wrong, your field is now the poster child for doubts about the integrity of psychological research. Your problem is not with the few people who have actively challenged the validity of some priming results. It is with the much larger population of colleagues who in the past accepted your surprising results as facts when they were published. These people have now attached a question mark to the field, and it is your responsibility to remove it…. My reason for writing this letter is that I see a train wreck looming. I expect the first victims to be young people in the job market. Being associated with a controversial and suspicious field will put them at a severe disadvantage in the competition for positions. Because of the high visibility of the issue, you may already expect the coming crop of graduates to encounter problems….

I believe that you should collectively do something about this mess. To deal effectively with the doubts you should acknowledge their existence and confront them straight on, because a posture of defiant denial is self-defeating. Specifically, I believe that you should have an association with a board that might include prominent social psychologists from other fields. The first mission of the board would be to organize an effort to examine the replicability of priming results, following a protocol that avoids the questions that have been raised and guarantees credibility among colleagues outside the field. …

Success (say, replication of four of the five positive priming results) would immediately rehabilitate the field. Importantly, success would also provide an effective challenge to the adequacy of outsiders’ replications. A publicly announced and open effort would be credible among colleagues at large, because it would show that you are sufficiently confident in your results to take a risk.[ix]

The Kahneman phrase most quoted by the media was: “I see a train wreck looming.” It is difficult to say whether the author’s action, driven by his concerns about the validity of a particular research area (and of psychology as a whole), did more good or harm, but it definitely drew attention to the problem of the reliability of conducted research, and the discussion is still ongoing. As we write these words, the Head Conference organized by the Edge Foundation and entitled What’s New in Social Sciences is taking place. Rob Kurzban gave a talk at this conference on “P-Hacking and the Replication Crisis,” in which he clearly declared that the situation currently prevailing in psychology worldwide should finally be recognized and called a crisis.

I really wanted to take this opportunity to have a chance to speak to the people here about what’s been going on in some corners of psychology, mostly in areas like social psychology and decision-making. In fact, Danny Kahneman has chimed in on this discussion, which is really what some people thought about as a crisis in certain parts of psychology, which is that insofar as replication is a hallmark of what science is about, there’s not a lot of it and what there is shows that things we thought were true maybe aren’t; that’s really bad.[x]

The file drawer effect is ubiquitous, and it virtually renders replication, one of the most precious inventions in the history of science, useless. Kurzban referred to replication as the hallmark of what science is. Repeating the work of other scientists allows us to pick up mistakes and random occurrences, and also allows us to confirm results that others have achieved. However, the previously described limitations imposed by scientific journals simply mean that replication can only work in one direction: by confirming previous results. Unfortunately, even this statement is overly optimistic, as replication in psychology is nothing but a myth. Most editors and reviewers refuse to publish research that replicates the work of others. The rejection letter usually sounds like this: “We regret to inform the authors that the submitted manuscript is only a simple replication of already existing work. Our journal publishes only articles that introduce new discoveries and extend the knowledge about human behavior.” And that’s it; an article that would add another brick to the wall of human knowledge ends up in a drawer. Such letters from journal editors are very common, and the requirement that research submitted for peer review be original is almost always written into the instructions for authors.

Significant in this regard is what happened in March 2011, after Daryl Bem published his paper in the Journal of Personality and Social Psychology that found evidence for Psi, or extra-sensory perception. Bem stands by his work, but many psychologists questioned his analysis and found it hard to believe that the study was even published.

At least one team, led by Richard Wiseman, PhD, of the University of Hertfordshire, was unable to replicate Bem’s findings — but JPSP didn’t publish that research because the journal does not publish replications. Psychological Science also rejected Wiseman’s replication study, for the same reason. It was eventually published in the open-access journal PLOS One, and Wiseman set up a website for others to document their attempts to replicate Bem’s study. And, in December, JPSP published a meta-analysis of the original studies, and replication attempts, by Carnegie Mellon University’s Jeff Galak, PhD. But critics say that the situation exemplifies the difficulty of getting any replication work published.[xi]

Such habits were bitterly criticized by Christopher J. Ferguson and Moritz Heene in their article “A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null.” They wrote: “The aversion to the null and the persistence of publication bias and denial of the same, renders a situation in which psychological theories are virtually unkillable. Instead of rigid adherence to an objective process of replication and falsification, debates within psychology too easily degenerate into ideological snowball fights, the end result of which is to allow poor quality theories to survive indefinitely.”[xii]

A notable exception to the “non-publication principle,” and one of the very few examples that we know of, is Representative Research in Social Psychology. This journal was founded in the late 1960s on the initiative of students from the University of North Carolina, in response to the very long waiting times for publication in other journals. The main reason for its establishment was the complete lack of any journal that would publish methodologically correct research papers, including replications, regardless of whether their results were positive, negative or inconclusive. Since 1970, when the first issue of Representative Research in Social Psychology was released, the journal’s editors have remained faithful to the original promise: they still indicate that the journal specializes in publishing replications as well as negative and inconclusive results.

Another venue worth mentioning that publishes replications and null-hypothesis studies is the Journal of Articles in Support of the Null Hypothesis, established in 2002. It is an open-access journal; however, the number of published articles is very modest, reaching only a few per year.[xiii]

When mentioning initiatives aimed at restoring the deserved honour of the process of replication, we cannot skip the Many Labs project, in which a group of scientists from 36 laboratories and research institutes around the world decided to verify and replicate 13 classic psychological experiments. The project was coordinated by Richard Klein and Kate Ratliff from the University of Florida, together with Michelangelo Vianello from the University of Padova (chief statistician) and Brian Nosek from the Center for Open Science. A total of 6,344 people from 12 different countries were involved in the attempt to replicate the classic experiments. The results of the Many Labs replications were openly documented and will be published in Social Psychology. It is worth mentioning that the project was accepted for publication before the replications even started. All data, results, analyses, methods of data collection and videos documenting the experiments are publicly available on the project’s webpage.[xiv]

Within the same Open Science Framework, another, more ambitious effort, the Reproducibility Project, is being implemented.[xv] The previously mentioned Brian Nosek and dozens of other psychologists are attempting to reproduce as many studies as possible from the 2008 volumes of three prominent journals: the Journal of Personality and Social Psychology, Psychological Science and the Journal of Experimental Psychology: Learning, Memory, and Cognition. The group is working on about 50 studies and hopes to get 100 or more researchers to join the project. “The goal, Nosek says, is both to investigate the reproducibility of a representative sample of recent psychology studies, and to look at the factors that influence reproducibility. That’s important, he says, because ‘being irreproducible doesn’t necessarily mean a finding is false. Something could be difficult to reproduce because there are many subtle factors necessary to obtain the results. And that’s important too, because we tend to overgeneralize results.’”[xvi]

One possible solution that would allow the publication of larger numbers of repeated studies is the idea of registered replication. At this moment, the website of the Association for Psychological Science declares that it will start a series of publications called Registered Replication Reports in the journal Perspectives on Psychological Science. The description found in their Mission Statement precisely explains the idea of registered replications.

A central goal of publishing Registered Replication Reports is to encourage replication studies by modifying the typical submission and review process. Authors submit a detailed description of the method and analysis plan. The submitted plan is then sent to the author(s) of the replicated study for review. Because the proposal review occurs before data collection, reviewers have an incentive to make sure that the planned replication conforms to the methods of the original study. Consequently, the review process is more constructive than combative. Once the replication plan is accepted, it is posted publicly, and other laboratories can follow the same protocol in conducting their own replications of the original result. Those additional replication proposals are vetted by the editors to make sure they conform to the approved protocol.[xvii]

Unfortunately, at this moment only one research project has been reported.

A website called PsychFileDrawer.org, where researchers can post their unpublished negative results, plays an important role in the attempt to save replication research. It was launched by Hal Pashler from the University of California, San Diego, and Barbara Spellman, a psychologist from the University of Virginia. The site has gotten some positive attention, but so far only 43 studies have been posted. “We have a high ratio of Facebook likes to actual usage,” Pashler says. “Everybody says that it’s a good idea, but very few people use it.”

The problem, he says, is probably that researchers — especially grad students and other young researchers likely to do replication work — have little incentive to post their findings. They won’t get publication credit for it, and may only annoy the authors of the original studies.

“I’m happy we made [the site],” Pashler says, “but so far it has just ended up spotlighting the incentive problem.”[xviii]

Pashler’s reflections do not sound optimistic. Don’t forget that, among thousands of psychological journals, all the initiatives described above are nothing more than a drop of water in an ocean of published research papers, ever more interesting and more creative, yet yielding ever less dependable and less replicable results. It is also telling that most of the attempts to change the unfavorable status quo are undertaken by single individuals, small groups of researchers, or (rarely) by non-mainstream associations. Large organizations like the American Psychological Association, responsible in fact for what is really happening in the field of psychology, seem to ignore the problem completely. Even when they seem to notice it for a moment, their actions are negligible in relation to the seriousness and scale of the problem; usually they are limited to initiating discussions.

The policy of not accepting such articles for publication, which we have written so much about, is not the only reason for the reluctance to conduct replication research. “Among the top problems are that funding agencies aren’t interested in giving money for direct replication studies and most journals aren’t interested in publishing them. So researchers whose careers depend on winning grants and publishing studies have no incentive to spend time and effort redoing others’ work.”[xix] Moreover, as Brian Nosek, a social psychologist at the University of Virginia, says, the incentive system at work in academic psychology is heavily weighted against replication: “There are no carrots to induce researchers to reproduce others’ studies, and several sticks to dissuade them.”[xx]

But even if we assume that the institutional obstacles are removed, there are still psychological barriers that will continue to discourage scientists from reproducing their colleagues’ studies. This problem was accurately summarized by Robert Kurzban:

I would go to give talks in places and, lo and behold, it turns out there’s this kind of background radiation—there’s the dark matter of psychology, which is a few people who fail to replicate and don’t publish their work and also don’t talk about it because the fact that you’ve failed to replicate has a reputational effect, right? The person who’s in charge of this literature says, “Oh, these guys were going after me,” and so maybe you don’t talk about it in polite company. Right? It’s sort of like sex; it’s the thing that we’re all doing, we’re all replicating, we just don’t want to talk about it too much, right?[xxi]

In the simplest of studies, where the strength of the linear relationship between two variables is expressed as a correlation coefficient, the resulting number is usually given with an accuracy of two decimal places. John Hunter, in his article “The Desperate Need for Replications,” calculated that in order to give the result of such a correlation to that accuracy we would need a staggering number of at least 153,669 test subjects! For accuracy to one decimal place we “only” need to examine 1,544 subjects. In social science, it is very unusual and extremely rare to reach even the second of these numbers. The average number of test subjects in one study reported in the Journal of Consumer Research is 200. In order to achieve the required accuracy for a correlation to one decimal place, we would therefore have to repeat the study at least eight times; if we wanted accuracy to two decimal places, we would need to repeat it at least 800 times, provided that each run involves at least 200 participants. For the above to be true, we must also assume that all of those studies are perfectly conducted and free of methodological errors (such as non-random selection of participants), etc. And don’t forget that this is all just to pin down the simplest and most basic relationship between two variables, a test so simple… that nowadays almost nobody bothers with it. In reality, even simple studies analyze many more variables, and somehow we tend to forget that sample-size requirements for more complicated studies increase exponentially.
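For readers who want to check the arithmetic, here is a minimal sketch of where numbers of this order come from. It assumes (our guess at the derivation, not something Hunter states here) that “accuracy to k decimal places” means a 95% confidence interval whose half-width is half of the last reported digit, and it uses the standard Fisher z approximation for a correlation near zero. The exact constants differ slightly from Hunter’s figures, but the orders of magnitude match.

```python
import math

def n_for_correlation_precision(half_width, z_crit=1.96):
    """Approximate sample size so that a 95% CI for a correlation near r = 0
    has the given half-width, using the Fisher z transform: SE(z) = 1/sqrt(n - 3)."""
    return math.ceil((z_crit / half_width) ** 2 + 3)

n_two_decimals = n_for_correlation_precision(0.005)  # ~153,667 -- Hunter cites 153,669
n_one_decimal = n_for_correlation_precision(0.05)    # ~1,540   -- Hunter cites 1,544
print(n_two_decimals, n_one_decimal)

# With typical studies of ~200 participants each:
print(math.ceil(n_one_decimal / 200), math.ceil(n_two_decimals / 200))  # roughly 8 and 769 repetitions
```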

According to Hunter, the ‘novelty approach’ as the main criterion for selecting papers for publication is fatal to science, hence his call for replications. In his view, science needs hard facts and evidence good enough to base accurate predictions on, yet we still lack them. Other authors also notice, however rarely, the absence of replications and the need for them.[xxii] They all point to the life sciences as a model to follow in this regard, but according to critics it is primarily in the life sciences that the lack of replications results in the printing and spreading of many absurd and misguided concepts.[xxiii]

Let’s put some numbers to Hunter’s postulates. Arif Jinha calculated that up to 2010, when he was doing his computations, about 50 million scientific articles had been published around the world.[xxiv] Today, over 2,000,000 papers are published every year, and the number of publications per year doubles every 20 years. This is a result of rapid progress in science, but it also has some negative implications.

While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.[xxv]

Taking into account the fact that scientists should cite all the articles their research is based on, this simply means that if an article is never cited, it had absolutely no influence on any scientist in the world, and therefore had precisely zero impact on the advancement and progress of science. Even if it was read, the reader must have concluded that what he had just read was worthless. This never-cited mountain of articles produced by the scientific community remains unproven, unverified, unreplicated and probably unread. If we subtract from those numbers some of the previously mentioned scientific sins, such as self-plagiarism or single citations (often a polite gesture or an act of reciprocity), the pile of scientific junk stretches even further. Wouldn’t it be better, instead of “creatively” producing completely useless research papers, to properly replicate the existing ones in order to finally create solid foundations for our knowledge?

But those are not the only consequences. Think about it: someone has to devote a lot of time to reading and reviewing all this nonsense. Reviewers get ever more papers to review and most likely analyze them less thoroughly. Libraries have to catalogue all this junk and spend significant amounts of money to do it. Scientific databases are becoming overloaded. Young scientists find it increasingly difficult to find valuable, relevant evidence hidden in stacks of worthless scientific “creativity.” It is good to know what is happening in your own field, isn’t it?

During the massive floods in Europe in 1997, one of the major problems people faced in flooded areas was the lack of drinking water. Isn’t it similar, when the world of science is being flooded with unnecessary, unwanted and useless streams of new data, results and publications, but lacks scientific evidence and replication? Isn’t the flood of scientific papers grotesque?

Philosophers of science proudly mention replication as a mechanism that distinguishes the system of knowledge created by science from other structures that lack self-correcting mechanisms that would allow detection of misconduct, fraud and lies. Unfortunately, in the real world, replication is a myth. It is the myth in which researchers of human nature are willing to believe, the myth that psychologists are proud of, the myth whose presence in psychology was accurately summarized by Robert Kurzban: “I’m saying that in many ways the replication crisis in psychology is a little bit like the weather, right? We all talk about it but no one really does anything about it. We do a little about it here and there.”[xxvi]

[i] M. R. Goldfried and G. C. Walters, “Needed: Publication of Negative Results,” American Psychologist, 14, (1979): 598.

[ii] No Author, “How Science Goes Wrong,” The Economist (October 2013): http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong

[iii] R. Rosenthal, “The ‘File Drawer Problem’ and Tolerance for Null Results,” Psychological Bulletin, 86, (1979): 638-641.

[iv] See I. Peterson, “Publication Bias: Looking for Missing Data,” Science News, 135, (1989): 5.

[v] See J. Cohen, “A New Publication Bias: The Mode of Publication,” Reproductive BioMedicine Online, 13, (2006): 754-755.

[vi] D. Fanelli, “Positive Results Increase Down the Hierarchy of the Sciences,” PLoS ONE, 5, (2010): 4.

[vii] A. Dijksterhuis and A. van Knippenberg, “The Relation between Perception and Behavior, or How to Win a Game of Trivial Pursuit,” Journal of Personality and Social Psychology, 74, (1998): 865–877.

[viii] D. R. Shanks, B. R. Newell, E. H. Lee, D. Balakrishnan, L. Ekelund, et al., “Priming Intelligent Behavior: An Elusive Phenomenon,” PLoS ONE, 8 (2013).

[ix] “Kahneman on the Storm of Doubts Surrounding Social Priming Research,” Decision Science News (October 2012): http://www.decisionsciencenews.com/2012/10/05/kahneman-on-the-storm-of-doubts-surrounding-social-priming-research/

[x] R. Kurzban, “P-Hacking and the Replication Crisis,” The Head Conference – What’s New in Social Sciences (December 2013): http://edge.org/panel/headcon-13-part-iv

[xi] L. Winerman, “Interesting Results: Can They Be Replicated?” APA Monitor, 44, 38 (February 2013): http://www.apa.org/monitor/2013/02/results.aspx

[xii] C. J. Ferguson and M. Heene, “A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null,” Perspectives on Psychological Science, 7, (2012): 555-561. http://pps.sagepub.com/content/7/6/555.full

[xiii] See, http://www.jasnh.com/

[xiv] See, https://openscienceframework.org/project/WX7Ck/files/

[xv] See, https://openscienceframework.org/project/EZcUj/wiki/home/

[xvi] Winerman, “Interesting Results.”

[xvii] See: http://www.psychologicalscience.org/index.php/replication, retrieved December 7, 2013.

[xviii] Winerman, “Interesting Results.”

[xix] Ibid.

[xx] Ibid.

[xxi] Kurzban, “P-Hacking.”

[xxii] See B. Schneider, “Building a Scientific Community: The Need for Replication,” Teachers College Record, 106, (2004): 1471-1483.

[xxiii] W. Broad and N. Wade, Betrayers of the Truth: Fraud and Deceit in the Halls of Science (New York: Simon & Schuster, 1982).

[xxiv] A. Jinha, “Article 50 Million: An Estimate of the Number of Scholarly Articles in Existence,” Learned Publishing, 23, (2010): 258-263.

[xxv] M. Bauerlein, M. Gad-el-Hak, W. Grody, B. McKelvey, and S. W. Trimble, “We Must Stop the Avalanche of Low-Quality Research,” The Chronicle of Higher Education. (June 2010): http://chronicle.com/article/We-Must-Stop-the-Avalanche-of/65890/

[xxvi] Kurzban, “P-Hacking.”
