Does lamotrigine work, or not?

(updated 12/2015)


This page used to summarize two reviews of lamotrigine. The whole story is less important now, in Dec 2015, with the publication of another study that ought to put to rest some of the doubt that those older papers introduced.

Lamotrigine works. Not much question anymore. Indeed, it clearly works better than a placebo. Just don’t take folate at the same time, as it could block lamotrigine’s benefits. Those are the two results from the new study.

I’m going to leave the old stuff below in case I need it later, but if you’re reading this (who cares this much?) you’d be better off taking a good close look at the study above, or just getting on with other learning about lamotrigine.

The old stuff

I mainly wrote it to keep track of these two papers, and especially to figure out: why is there any skepticism at all about whether it works? In our shop, it seems like one patient after another benefits. Who did they study, to think it was not effective? Read on only if you care…

First an analysis appeared saying lamotrigine really was no better than a placebo. Then a more recent paper reached the opposite conclusion (and one author was on both papers!). The more recent analysis shows that people with severe depression respond better to lamotrigine than to a placebo, whereas for less severe depression, the results are not as clear.

First: It doesn’t work better than placebo

Usually when two different research studies reach opposite conclusions, it is important to look at both of them. However, in this case, the same data were used in both studies. The second, more recent paper just used a more sophisticated way of looking at the results (these are “meta-analyses”, which take previous research and re-examine the results, combining multiple studies into a single grand tally).

Therefore, in this case, the first paper is somewhat moot. The second paper effectively replaces it. Here’s the reference if you want it: Calabrese. If you’re satisfied with that, you may skip to my summary of the second paper below.

However, no less an authority than Nassir Ghaemi, one of psychiatry’s best logical thinkers, wrote a lengthy essay about evidence in psychiatry, and used lamotrigine research as an example. I would not dismiss his view lightly. He says that if research data do not support the efficacy of a treatment, then we should not use it. I disagree, despite having discussed my views with Dr. Ghaemi several times (in other words, he has not won me over entirely yet).

Here’s my view, to contrast with Dr. Ghaemi’s. The challenge for a clinician is to balance research results with clinical experience. Her or his experience is not useless. When research data do not jibe with clinical experience, then we have to re-examine our practices. However, as Dr. Ghaemi emphasizes, the trick is not to be led by limited research data (especially when we see only the positive, published studies; not the unpublished, negative studies, which have been systematically hidden from us by the pharmaceutical companies; see his article for details there).

Dr. Ghaemi would very likely agree about a further risk: limited data might actually bias our view of our own clinical experience, because often one sees what one expects to see. If research results lead us to expect that a treatment approach really works, this will help us produce benefits for our patients, even if the treatment is no better than a placebo, because better placebo effects are generated when a clinician wants to help, and believes she is likely to help (this nature of placebo effects was understood as far back as the 1700s, by the way.Phelps). To the extent that this occurs (to my knowledge it’s not been studied), this is a serious problem. And yet ultimately I think — if one is really paying close attention to what patients say about their experience — treatments that really work better than a placebo will demonstrate their advantages, and those which do not will prove repeatedly disappointing, performing below expectations and thereby changing those expectations, increasing skepticism. Of course, one must also remember leeches and bloodletting. Physicians believed in their efficacy for a very long time.

However, the more recent of these two meta-analyses, to which we now turn, much better matches my clinical experience. It’s a more refined analysis. So I give it more weight than does Dr. Ghaemi. See what you think.

Then: It does work better than placebo 

Dr. Calabrese worked with a different team, on the same data, looking more closely at who responded to lamotrigine and who didn’t. If patients with severe depression were examined separately from those with mild-moderate depression, a very different result emerged.Geddes

To understand the results in the graph below, you need to understand what a “meta-analysis” is and how its results are presented. If you’re new to this, hang on, it’s not that tough (you’re about to get a simplified view of this statistical approach). If you know that statistical approach, skip to the Results.

In simple terms, a meta-analysis is like taking 5 different bus-loads of people going to a football game, putting them in the same room, and asking “who are you rooting for?” Whereas any given bus might be overwhelmingly for the Beavers, and another clearly in favor of the Ducks, when you put all 5 buses together you get a more representative sample of the attendees at this game. Not perfect, of course, but better than sampling a single bus, right?

Lamotrigine was researched as a treatment for bipolar depression in 5 different major studies. In four out of five of them, it was no better than a placebo (leading to the earlier Calabrese paper noted above). But in each study, there was a clear trend toward being better than a placebo. It’s as though in each bus the crowd is leaning toward the Beavers, but there are a significant number of Ducks fans in there diluting the enthusiasm.

But when you combine the folks from all 5 buses, the room is now quite overwhelmingly filled with Beaver fans drowning out the plaintive cries of Duck lovers. Hey, in Oregon, this really happens, every year. The U. of Oregon is just down the road from my little Corvallis, home of the mighty Beavers. Well, not so mighty, most of the time; but many people here live in annual (perpetual?) hope of a triumph. At least every year there’s a chance to beat Oregon. But I digress . . .

The point: in the graph below, you’ll see each of the 5 individual studies displaying results, lamotrigine versus placebo. But then those results will be lumped together, and averaged — and presto, where most of the studies did not show lamotrigine as better than placebo, when the studies are averaged, the medication does emerge as superior. How can that be? It’s as though in each bus, there are slightly more Beavers than Ducks. For any given bus, the difference is small, almost unnoticeable. But if you put enough busloads together, then you can see the numerical superiority of the Beaver fans. Not their teams, necessarily . . .
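For the numerically inclined, here is a toy sketch of that idea in Python. The response rates are invented for illustration (they are not the lamotrigine trial numbers), and simply lumping raw counts from all the studies together is a simplification of what a real meta-analysis does (it weights each study’s effect estimate, as in the forest plot below). Still, it shows how the same small drug-versus-placebo gap can fall short of significance in one study and clear it easily when the sample is five times larger:

```python
import math

def two_proportion_z(r1, n1, r2, n2):
    """z statistic for the difference between two response rates."""
    p1, p2 = r1 / n1, r2 / n2
    p = (r1 + r2) / (n1 + n2)          # pooled rate under the null hypothesis
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# One study alone (made-up numbers): 50/100 respond on drug, 42/100 on placebo.
z_single = two_proportion_z(50, 100, 42, 100)

# Five such studies lumped together: 250/500 vs 210/500 -- the same 8-point gap.
z_pooled = two_proportion_z(250, 500, 210, 500)

print(f"single study z = {z_single:.2f}")   # below 1.96: not significant
print(f"pooled z       = {z_pooled:.2f}")   # above 1.96: significant
```

Same gap, same direction, in every “bus” — only the headcount changed.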

Okay, now let’s look at how this appeared in the key graph from that study.


If you already understand how the results of a meta-analysis are displayed, the result below will speak for itself. But if this graph means little or nothing to you, jump to the next bit of text and I’ll walk you through it, okay?



Two sets of results are shown here. The first set of 5 squares appears above the subtotal indicated by the upper open diamond, presenting the results for patients, in 5 different research studies (shown by code number), whose initial depression scores were not very high. Less than 24 on the Hamilton Depression Rating Scale (HDRS) is not severely depressed, but it’s not mild depression either. A person can get in a depression study with an HDRS of 17 or more. (I’ll tell you what the squares and the lines mean in a minute, if you’re not familiar with these graphs.)

By contrast, the second set of squares above the widest open diamond present the results for patients with moderate to severe depression — HDRS 24 or higher when they entered any of these same five research studies.

As you can see, the squares for the more severely depressed group (HDRS ≥ 24) are farther to the right. Here’s what that means. The bold vertical line marks the question “was lamotrigine better than a placebo?” If a square is to the right of the bold vertical line, the answer is yes. (I won’t confuse you with the meaning of the other lines and the square sizes, but have explained those in a note below* if you’d like to hear more).

As you can see, for the more severely depressed patients (the lower set of 5 squares), lamotrigine was “more better” than placebo, versus the less severely depressed patients, where lamotrigine is much closer to placebo. Technically the average of those upper squares (represented by the upper open diamond) does not statistically outpace placebo, whereas it does, officially, for the lower set (lower open diamond).

*Further explanation of chart details:

  • Each square represents the size of the study (how many patients were involved): bigger squares mean bigger numbers of patients.
  • The position of the square represents the average result in that study, for that group of patients.
  • The bottom open diamond represents overall average improvement, versus placebo, of all patients in all studies. The dotted line extending upward from that diamond is supposed to help you compare the two groups, relative to the overall average.
  • The horizontal stripe for each square gives you a sense of how big the range of the results was in that study, from some patients doing really well to other patients not changing much relative to placebo. Note that the small samples have wider lines. That’s because the line represents a “confidence interval”, which is wider when the sample is small.
  • Technically, if a line or a square touches the bold vertical line, the results shown there are not “statistically significant”. The most important result, statistically, is shown by the middle open diamond, which is well to the right of that vertical line. This means that the more depressed subset of patients is quite clearly responding to lamotrigine, versus placebo.
  • Finally, for you statistically oriented types, note that the lowest diamond, representing the average of the entire group of patients, also does not quite touch the vertical line. This means that when you combine all the patients from these studies, the average improvement is “officially” (statistically speaking) greater than placebo. Interestingly, this is true even though many of the studies were “negative”, as emphasized by Dr. Ghaemi in his discussion: in those studies the square or its horizontal line crosses the vertical bar.
  • One more try to make that last point clear: even though most of the studies are officially “negative”, meaning that statistically lamotrigine did not outperform placebo, when you combine all the results, the slight positive trend in several of the “negative” studies yields an officially “positive” result. Dr. Ghaemi emphasizes that this statistical re-hash, while acceptable, should not be given the level of credence that we give to each individual study considered separately, because it is an after-the-fact analysis. If you already thought lamotrigine was better than a placebo, you might be inclined to trumpet the results of the meta-analysis; whereas if you thought lamotrigine was a dud, you could point out that most of the studies were negative, only two of them positive (as indicated by their horizontal line not touching the bold vertical line).
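For readers who want to see the arithmetic those diamonds summarize, here is a sketch in Python of fixed-effect (“inverse-variance”) pooling, the standard way a forest plot’s summary diamond is computed. The five effect sizes below are invented for illustration, not taken from the lamotrigine trials: each study’s 95% confidence interval crosses zero (an officially “negative” study), yet the pooled interval does not.

```python
import math

# Invented per-study effects, for illustration only: (log odds ratio, SE).
# A positive log odds ratio favors the drug. Each study's 95% confidence
# interval (estimate +/- 1.96 * SE) crosses zero, so each is "negative" alone.
studies = [
    (0.25, 0.20),
    (0.30, 0.22),
    (0.20, 0.18),
    (0.28, 0.25),
    (0.22, 0.19),
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2,
# so bigger, more precise studies (bigger squares on the plot) count more.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled log odds ratio: {pooled:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Because the pooled standard error shrinks as studies accumulate, the diamond’s interval is narrower than any single study’s horizontal line — which is exactly how the summary can clear the bold vertical “no difference” line even when none of the individual squares do.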