Why is there so pitifully little evidence for psychotherapy effectiveness? Part 1

Rembrandt van Rijn, The Anatomy Lesson of Dr. Deyman

Some readers of my blog have complained about the disproportionate attention I give to the problems of academic psychology as opposed to those of psychotherapy. So I am taking a break from presenting scientific frauds. Today’s post is the first in a series devoted to the problems of finding objective data from research on the effectiveness of psychotherapy.

In 1959, Robert A. Harper identified 36 distinct therapies.[1] By the end of the 1970s, Herink reported that the number of name-brand psychotherapies exceeded 250.[2] By 1986, Daniel Goleman mentioned more than 460 kinds of psychotherapy in his ‘guidebook’,[3] and by the turn of the twenty-first century revised estimates reached 500.[4] Nowadays there are certainly many more.

However, if we try to find objective data from research results, we will encounter considerable difficulties. These are inherent in the very nature of the subject and, as we will soon see, not only originate from its complexity, but are also indicative of psychotherapy’s weaknesses.

The first difficulty, which results in the relative scarcity or incompleteness of data, is the unwillingness of the inventors of therapies, and of therapists themselves, to submit their ideas or practices to empirical verification.

Freud himself was opposed to doing empirical research on defense mechanisms, including repression. In reply to the letters of Sears, who had written to him about his idea of investigating projection, Freud said that conducting experimental research on projection and other defense mechanisms was pointless, as these were so vague and delicate phenomena that they might only be observed during clinical interviews, and not during experimental research which imposes a certain level of standardization.[5]

Thus, it is clear that ever since its creation (with which Freud is commonly credited) psychotherapy has resisted empirical verification and scientific confirmation of its efficacy. But Freud was not the only one to systematically discourage it. His successors claim that therapy is “a deep experience that occurs between two people and that this can only be communicated very inadequately to another person.”[6] Proponents of Gestalt therapy also admit they are not interested in statistical evaluation of its effectiveness. Since Gestalt therapists, even during their first encounter with patients, do not assume that their behavior will change in any particular way, the only measurable outcome might be patients’ subjective conviction that their condition has improved, a conviction which cannot be accurately gauged. Similarly, Richard Bandler, co-originator of neuro-linguistic programming, went on record with his repeatedly expressed disdain for scientific testing of NLP hypotheses; he argued that his system was an art, not a science, and hence the probing of its assumptions was pointless or even impossible, since art cannot be examined with the methodology used in psychology.

Hans Strotzka analyzed the reasons for this reluctance to submit psychotherapy to empirical confirmation. He described them in 1983 in his paper meaningfully entitled “The Psychotherapist’s Fear of Empirical Research” where he cited the following arguments of therapists:

The reasons for the reluctance of psychotherapists to submit psychotherapeutic sessions to empirical research are the following: (1) Psychotherapy is a “hermeneutic” process that cannot be measured (basic argument); (2) empirical research interferes in the process (technical argument); (3) empirical research contradicts the specific private working contract of psychotherapy (ethical argument); (4) in comparative psychotherapy, individual therapists, not systems, are compared (personal argument); and (5) in many institutions, care has a higher priority than documentation (pragmatic argument). It is asserted that these arguments are mainly rationalizations and that empirical research is necessary for the survival of psychotherapy as a respected discipline in medicine.[7]

However, the problem was fully brought to light only in 2009, by a paper by Timothy B. Baker, Richard M. McFall and Varda Shoham and a review of it published in Newsweek. Sharon Begley, the magazine’s science editor, wrote:

It’s a good thing couches are too heavy to throw, because the fight brewing among therapists is getting ugly. For years, psychologists who conduct research have lamented what they see as an antiscience bias among clinicians, who treat patients. But now the gloves have come off. In a two-years-in-the-making analysis to be published in November in Psychological Science in the Public Interest, psychologists led by Timothy B. Baker of the University of Wisconsin charge that many clinicians fail to “use the interventions for which there is the strongest evidence of efficacy” and “give more weight to their personal experiences than to science.” As a result, patients have no assurance that their “treatment will be informed by science.” Walter Mischel of Columbia University, who wrote an accompanying editorial, is even more scathing. “The disconnect between what clinicians do and what science has discovered is an unconscionable embarrassment,” he told me, and there is a “widening gulf between clinical practice and science.”[8]

As might have been expected, such strong statements set off a fierce public debate during which the author of these words found herself at the receiving end of criticism. The main attack, naturally, came from the camp of clinician-practitioners, but the arguments were not new. They can readily be classified according to Hans Strotzka’s scheme quoted above; as in his analysis, so in this debate the commonest argument was the one about the specificity of the psychotherapeutic process and the difficulty of measuring it. Indeed, the variables involved in psychotherapy are hard to measure; they are numerous, and they interact on many levels. Perhaps this field does require a specific approach. But if so, then the burden is on the originators of a therapy (just as on the inventors of a drug) to develop a methodology or approach that would unambiguously demonstrate the positive outcome of the therapy (or the lack thereof) and the differences among modalities (or the lack thereof). This simple truth is expressed in the words of David Myers:

“If at 5 feet 7 and age fifty-nine I claim to be able to dunk a basketball, the burden of proof would be on me to show that I can do it, not on you to prove that I couldn’t.”[9]

Complaints about methodological inadequacies in measuring therapeutic outcome have for decades accompanied every criticism leveled at it. However, with no possibility of evaluating the effects of therapy, practicing it might have far worse consequences than bragging about being able to do a slam-dunk without as much as stepping onto the court.
Unfortunately, this state of affairs seems to be rather firmly established, and hence almost impossible to change.

In any enterprise that invokes science in order to establish its authority, the responsibility rests with the new treatment, invention, or product to prove effectiveness and safety; the burden of proof rests with the engineer, the physician, the drug maker, and the psychotherapist. As a scientific enterprise, psychotherapy clearly carries the burden of its clinical ambitions to certify the achievement of its clinical goals – cure, prevention, and rehabilitation. Just as clearly, the burden has been shifted by cultural influences to the shoulders of the skeptic. The long-standing tolerance for the ambiguities of psychotherapy’s outcomes speaks volumes about social attitudes, about the quiet but deep meaning of psychotherapy in the United States as a secular religion – a social ideology and a series of rituals that justify and dramatize embedded culture preferences.[10]

In order to make the process of evaluating therapeutic outcomes more concrete, and to steer clear of the quarrel between scientists and clinicians, let us assume for the sake of our analysis that it is not aimed at settling academic disputes. Let us assume, instead, that we would like to find a therapy that we might, with a clear conscience, recommend to someone dear to us. Could we, on the basis of the statements above, recommend to our loved ones one of these ways of having a “deep experience” that cannot be verified? I don’t know about you, dear reader, but I, for now, will hold off and keep looking for objective indices of effectiveness.
A rather similar approach was adopted in this regard by the British Psychological Society when it explained its stance on the compulsory registration of psychotherapists:

Existing psychotherapeutic techniques are insufficiently grounded in formal evaluative research … until this is done, the claims of the different psychotherapeutic methods rest on the clinical experience of practitioners rather than on public evidence capable of withstanding critical scrutiny.[11]

Allowing subjectivity in the evaluation of therapeutic outcome causes considerable confusion. Who should assess it? The patient? The therapist? The patient’s family and friends? Several studies have employed multiple perspectives in outcome evaluation: that of the patient, of his or her nearest and dearest, of competent judges, and of the therapist. Many of these studies revealed divergences in the assessment of the same therapy’s effectiveness.[12] The most striking were those which showed that therapists were less accurate in assessing changes in patients’ behavior than other people, such as outside observers or even the patients themselves.[13] Practitioners, for their part, warn:

But you cannot be sure that when patients say they are better they really are. Patients, who are always trying to be good, kind people, are likely to tell the therapist what he wants to hear. After all, if this nice young chap has gone to all this trouble, it’s a pity to disappoint him.[14]

To be continued.

[1] Harper, R. A. (1959). Psychoanalysis and psychotherapy: Thirty-six systems. Englewood Cliffs NJ: Prentice-Hall.
[2] Herink, R. (Ed.). (1981). The psychotherapy handbook: The A to Z guide to more than 250 different therapies in use today. New York: Meridian Books.
[3] Goleman, D. (1986, September 23). Psychiatry: guide to therapy is fiercely opposed. New York Times.
[4] Eisner, D. A. (2000). The death of psychotherapy: From Freud to alien abductions. Westport, Conn.: Praeger.
[5] Maruszewski, Pamięć autobiograficzna [Autobiographical memory].
[6] Symington, N. (1986). The analytic experience: Lectures from the Tavistock. London: Free Association Books.
[7] Strotzka, H. (1983). The psychotherapist’s fear of empirical research. Psychotherapy and Psychosomatics, 40(1-4), 228-231.
[8] Begley, S. (2009, October 1). Ignoring the evidence: Why do psychologists reject science? Newsweek. Retrieved April 6, 2014, from http://www.newsweek.com/why-psychologists-reject-science-begley-81063
[9] Myers, D. G., (2002). Intuition. Its powers and perils. New Haven & London: Yale University Press, p. 235.
[10] Epstein, W. M. (2006). Psychotherapy as religion. The civil divine in America.  Reno & Las Vegas: University of Nevada Press, p. xi.
[11] British Psychological Society. (1980). Statement on the statutory registration of psychotherapists. Bulletin of the BPS, 33, 353-356. As cited in: R. Persaud, Stay sane, p. 64.
[12] E.g., Keniston, K., Boltax, S., & Almond, R. (1971). Multiple criteria of treatment outcome. Journal of Psychiatry, 8, 107-119; Horenstein, D., Houston, B. K., & Holmes, D. S. (1973). Clients’, therapists’, and judges’ evaluations of psychotherapy. Journal of Counseling Psychology, 20, 149-158; Garfield, S. L., Prager, R. A., & Bergin, A. E. (1971). Evaluation of outcome in psychotherapy. Journal of Consulting and Clinical Psychology, 37, 307-313.
[13] Harty, M., & Horowitz, L. (1976). Therapeutic outcome as rated by patients, therapists and judges. Archives of General Psychiatry, 33, 957-961.
[14] Rowe, D. (2012). Introduction. In J. M. Masson, Against therapy. [Kindle version]. Retrieved from Amazon.com.
