(In the order of their appearance; revised 3/2006)
The Altshuler study is cited in almost every discussion of this issue. But it is not a randomized trial; it is an “observational study,” one that simply watched what happened to patients over time. Here’s the key: the research team had nothing to do with medication decisions. Doctors and their patients decided whether to continue antidepressants or not. Those who stopped them are very likely to differ, in some way, from those who continued them. And thus we are not comparing apples to apples here.
Dr. Altshuler and her colleagues were very clear about this, but sometimes when their results are being cited, this “observational” nature of the study is not emphasized. Mood experts, like other scientists, readily agree that a study in which patients are randomly assigned to one approach or another is a completely different kind of science than observational work, and that the data from such a randomized trial are far more reliable. On that basis, only the Ghaemi study described below is worthy of attention. However, it too is a preliminary study, not a “final word” by any stretch of the imagination.
As you are likely to see all these studies cited, here are details on each, proceeding roughly in the order in which they appeared. (I’ve not changed the wording of my original analysis of each study as later ones appeared, so the sequence may read a little oddly in places.)
Here is the crucial figure, then a breakdown of the numbers — then a discussion of the article overall.
This paper was one of the first sources of data on this approach. However, it is important to keep this study in perspective, perhaps all the more so because there is so little data to go on in the first place. Although this study is of great importance and is already influencing treatment, we should keep in mind some of its design features.
Most importantly, this was not a randomized trial. It was a “naturalistic follow-up study”: these patients were monitored while they and their doctors did what they thought was the right thing. They could choose to continue the antidepressant or not, depending on what seemed right for that patient.
The group on whom they report was selected from 549 patients who had received an antidepressant in addition to a mood stabilizer, and had recovered from a bipolar depression. Only those who took the antidepressant for more than two months were included. Thus, any patients who had a negative reaction to the antidepressant in some way, and thus had their antidepressant stopped, were not included in the study. Only those patients who clearly tolerated being on the antidepressant were included. Presumably, any patient who showed some possible evidence of her/his bipolar disorder worsening in the first two months of being on it would probably have had the medication stopped — and that patient would not be included in this analysis.
This “first cut” eliminates two thirds of all the patients (360 out of 549). Two thirds of the patients did not do well enough on antidepressants to stay on them for more than two months. That’s interesting. You don’t usually hear that result from this study reported. This “cut” is illustrated by the first red arrow in Figure 2 above.
The remaining one-third of the patients are the only ones we’re looking at as we wonder what happens to patients with bipolar disorder who take antidepressants. So just remember, this study does not really tell us about the safety of using antidepressants in bipolar disorder (that’s Controversy one) — but rather about the outcomes of those who can clearly tolerate them for two months.
Now, in the 189 patients who did stay on their antidepressants more than two months, what happened? A little less than half of them (44%, 84 people) improved enough to be considered truly recovered. The other 105 people were dropped from further consideration in this study, as shown by the second red arrow in Figure 2.
Remember, the original question was whether those who recovered on antidepressants would be more likely to stay recovered if they stayed on their antidepressant, or if they tapered it off. So, finally now, let’s watch what happens in the 84-out-of-549 patients who originally got an antidepressant and improved enough to get into this comparison in the first place.
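The patient selection described above can be tallied in a few lines. Here is a back-of-envelope check using only the counts quoted in this section (not the paper’s own tables):

```python
# Attrition in the Altshuler naturalistic study, using the counts quoted above
total = 549        # patients who received an antidepressant plus a mood stabilizer
tolerated = 189    # stayed on the antidepressant for more than two months
recovered = 84     # improved enough to enter the final stop-vs-continue comparison

print(total - tolerated)                         # 360 excluded at the two-month cut
print(round(100 * (total - tolerated) / total))  # 66 -> about two thirds dropped
print(tolerated - recovered)                     # 105 not counted as recovered
print(round(100 * recovered / tolerated))        # 44 (% of those who tolerated)
print(round(100 * recovered / total))            # 15 (% of the original cohort)
```

So the headline comparison rests on roughly 15% of the patients the study started with.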
In this rather select group of patients, those who stayed on the antidepressant were much less likely to relapse into depression than those who stopped it. Only a quarter (24%) of those who stayed on at least a year relapsed, whereas nearly three quarters (70%) of those who stopped within 6 months relapsed: roughly a threefold difference in relapse rates.
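A quick check of the relative rates, using the percentages quoted above (a sketch only; the paper’s own statistic may have been computed differently, e.g. as a hazard ratio over time):

```python
# Relapse rates quoted above for the Altshuler study
relapse_if_stopped = 0.70    # stopped the antidepressant within 6 months
relapse_if_continued = 0.24  # stayed on the antidepressant at least a year

relative_rate = relapse_if_stopped / relapse_if_continued
print(round(relative_rate, 1))  # 2.9 -> roughly a threefold difference
```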
But, remember, those who stayed on are those who were doing well enough to stay on. They and their doctors decided to keep going. Might there have been something about those who stopped earlier that led to their relapse, and also led them and/or their doctors to stop the antidepressant? What if those who were more unstable, who had subtle signs suggesting a “roughening” of their course, like worsening sleep, were among those whose antidepressant was stopped (e.g. because their doctor saw the roughening, attributed it as perhaps associated with the antidepressant, and therefore began tapering it)? And what if some stopped their antidepressant on their own, thinking they were doing well, but did not taper it? Often a sudden change like that can lead to roughening, and roughening can be the beginning of a full relapse.
So in some ways it seems that this “naturalistic follow-up” design runs the risk of demonstrating a rather circular conclusion, something like: “those patients who do well on their antidepressant will do well on their antidepressant.”
Nevertheless, one can also interpret these results as comparing the treatment strategies of two different groups of psychiatrists — those who believe it’s best to stop the antidepressant sooner, and those who prefer to continue it. From that point of view, the results do suggest that the “continue” strategy works far better, at least for patients who have done well for two months on their antidepressant.
Unlike the reports by Altshuler et al above, and Joffe et al below, this study randomly assigned patients to stopping or staying on antidepressants. It was presented at a bipolar meeting in June 2005. Details here are based on the poster presented by Dr. Ghaemi, used by permission.
These patients were part of a larger group studied by a network of bipolar researchers (STEP-BD, the Systematic Treatment Enhancement Program for Bipolar Disorder). When they recovered from depression on a mood stabilizer and an antidepressant, they were invited to participate in this study. Roughly half (36 of 66) were assigned to the Stop group, and the rest to the Continue group. Choices about other medications, including mood stabilizers, were left to collaborative decision-making between patients and their doctors, not dictated by the study.
The results must be interpreted carefully. There are three main results, to my eye:
1. When examined overall, without statistical adjustment of any kind, outcomes for the two groups were roughly the same. Unlike in the non-randomized Altshuler and Joffe studies, the Stop group did no worse than the Continue group. However, there were several factors which tended to make the groups differ. For example, there was an effect of randomization itself: those who happened to be assigned to a group they were hoping to be in did better (whether that was to Stop or Continue). Interesting, isn’t it?
2. Those patients who had rapid cycling (more than 4 mood episodes per year) tended to relapse into mood symptoms more quickly if they were in the Continue group. This is consistent with the idea that antidepressants can destabilize the long-term course of bipolar disorder and can prevent full response to a mood stabilizer, at least for those with a rapid cycling course — although really proving that concept will take several more studies along these lines.
3. When adjusted for these kinds of group-associated differences (e.g. whether you were in the group you were hoping for, or whether you have rapid cycling), the results were the opposite of the Altshuler and Joffe findings: the Stop group did better than the Continue group.
This study was similar in design to the Altshuler et al investigation, except that it was much smaller and, importantly, we don’t know how many patients were followed in order to come up with the 59 whose outcome is reported. As emphasized in the accompanying editorial by Calabrese, these “…data do not confirm or extend the results of Altshuler”, as the study authors had asserted, because “the reader is unable to ascertain the magnitude of the universe, the denominator.” In other words, this study could have the same problem shown above for the Altshuler et al report: how many patients were followed to arrive at this final result? Note that this problem does not arise in the approach used by Ghaemi et al (which also has the even more important advantage of a randomized design).
Here is Dr. Ghaemi commenting on the Joffe study (personal communication, used by permission), followed by a translation to less technical English:
I like Joe’s editorial, really great, but I would go even further. The problem with this study, and Altshuler’s (and any observational study including mine [referring to one of his earlier studies]), is that it is non-randomized, which is a problem of validity, and not just generalizability.
We don’t know if the results are valid, because there are all sorts of confounding factors that could explain them (in other words, why they got or did not get antidepressants is driving the outcome, rather than the actual drug effect). Confounding bias is systematic error, in contrast to chance which is random error. Thus it is an axiom of clinical trials that if one has confounding bias, then replication is irrelevant. Since there is SYSTEMATIC error, studies with the same design will systematically replicate the same error.
Thus “confirming and extending” is irrelevant…
In other words, without a randomized design, we cannot really examine whether the Joffe study really means what it appears on the surface to mean, and what the authors interpreted it to mean. Only a randomized trial would allow that — or at least it would be far superior. Now that we have one, even one, even a small one, the Altshuler and Joffe studies should be pushed to the background in favor of these more reliable data. Basing clinical decisions on open trials, instead of a randomized trial, would go against the usual approach in clinical reasoning.
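Dr. Ghaemi’s point about systematic error can be made concrete with a toy simulation (entirely hypothetical, not modeled on any of these studies): assume the antidepressant truly has no effect on relapse, but sicker patients are more likely both to stop it and to relapse. An observational comparison then “shows” that continuing is protective, and enrolling more patients only replicates the same bias:

```python
import random

random.seed(0)

def observational_comparison(n):
    """Relapse rates in a simulated cohort where the drug does nothing,
    but hidden illness severity drives both stopping and relapsing."""
    outcomes = {"stopped": [], "continued": []}
    for _ in range(n):
        severity = random.random()             # hidden confounder, 0..1
        stopped = random.random() < severity   # sicker -> more likely to stop
        relapsed = random.random() < severity  # sicker -> more likely to relapse
        outcomes["stopped" if stopped else "continued"].append(relapsed)
    return {group: sum(vals) / len(vals) for group, vals in outcomes.items()}

# The "stopped" group relapses about twice as often (~0.67 vs ~0.33),
# purely because of the confounder; a bigger n just repeats the same bias.
print(observational_comparison(100_000))
```

This is why replication of observational studies with the same design does not make their conclusions more trustworthy: the error is built into the design, not into the sample size.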
Published later as: Ghaemi et al. Antidepressant discontinuation in bipolar depression: a Systematic Treatment Enhancement Program for Bipolar Disorder (STEP-BD) randomized clinical trial of long-term effectiveness and safety. J Clin Psychiatry. 2010 Apr;71(4):372-80.