Three Card Monte

A Convenient Re-Analysis
Six days before Austin Pledger swallowed his first Risperdal, Janssen scientists and marketing executives met with an advisory board of doctors in a luxury hotel suite in New York. The group wrestled with problems concerning the prolactin and gynecomastia data that had come in from the clinical study Gorsky and his team had ordered up, hoping to put the issue to rest.
This new study was actually a study of studies. It pooled one study, called “INT-41”—which had the largest number of participants and the worst results, and had devoted what those who conducted it called “special attention to prolactin”—with four smaller, more general studies that had produced less troubling numbers.
Although this approach diluted the bad news for Janssen, there were still two problems.
First, the gynecomastia rates remained high.
Second, one table showed a statistically significant relationship between elevated prolactin and breasts among boys who had been taking the drug for eight weeks. In other words, it looked like causation had been established.
According to later testimony, at that meeting, the doctor-advisors and the Janssen team came up with a solution that, they decided, could remove many of the gynecomastia cases in a way that was scientifically legitimate.
There would later be bitter disputes in court about whether it was the outside doctor-advisors or the Janssen people who came up with what they thought could be a defensible way of doing what notes of the meeting called a “re-analysis” of the data. But everyone in the room was being paid by Janssen, and there can be no dispute that the method they devised would make Janssen’s numbers look a lot better. Nor was there any dispute that the idea of re-analyzing the data only came up after they had seen the initial negative numbers.
The retroactive redesign of the study began when someone pointed out that the children in the group who were 10 or over were likely to be going through puberty. Therefore, their hormone levels, including prolactin levels, were likely to be elevated. So why not remove them from the count of gynecomastia cases?
The group agreed to see how that “re-analysis” affected the numbers.
The Phony Denominator
Two months later, on August 22, 2002, a revised version of the all-important study was circulated among Janssen development executives.
The result was a table showing a much lower rate of gynecomastia—just eight-tenths of 1 percent. Moreover, the relationship between the boys with raised prolactin levels and those who had ended up with the disease was no longer statistically significant. Proving that the relationship wasn’t statistically significant was, after all, the key purpose of the study.
The re-analysis had worked. The data that Gorsky and his team had envisioned nearly four years earlier to rebut competitors’ claims about gynecomastia was finally ready.
However, even assuming the legitimacy of removing the boys who were 10 years old and over, the table produced numbers created by an obvious arithmetic sleight of hand. You only need to have gotten past a third-grade math lesson in numerators and denominators to understand how this group of scientists from the world’s leading health care company and its hired-hand doctors distorted this series of complicated clinical findings and dense set of data.
The cases of boys 10 and over with gynecomastia had been eliminated from the numerator—the count in the table of those suffering from gynecomastia. However, all the children, no matter their age, were still counted in the denominator. In other words, five boys under 10 years old had been shown to have developed breasts, but all 592 children—over and under 10—were included in the total used to tabulate the percentage: five is 0.8 percent of 592. However, only 358 of the children were under 10. Thus, the supposed 0.8 percent was 0.8 percent of all 592 children, when the real denominator should have been 358, the number of children under 10. That would have yielded 1.4 percent, not 0.8 percent, because five is 1.4 percent of 358. In fact, the real percentage should have been derived from the number of boys, not boys and girls, under 10—which was 255. And five is 2.0 percent of 255, a number that likely would have gotten the attention of Benita Pledger and her doctor.
And, again, that assumes that retroactively removing the boys 10 and over was justifiable, which those who had originally designed the study had not assumed. Had those boys not been removed, the percentage of all boys with gynecomastia would have been 4.5 percent: 22 cases out of 489 boys. A J&J witness in a case brought by a boy who had developed male breasts later attempted to offer a rationale for including the original denominator, but even one of the doctors involved in the study would later concede that the denominator should have been changed.
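The shifting denominators described above can be checked directly. A minimal sketch, using only the figures quoted in the text, shows how the same five cases yield three very different percentages depending on which group is used as the denominator:

```python
# The denominator sleight of hand, reproduced with the figures quoted above.
cases_under_10 = 5        # boys under 10 who developed gynecomastia
all_children = 592        # every child in the pooled study, all ages, boys and girls
children_under_10 = 358   # children (boys and girls) under 10
boys_under_10 = 255       # boys under 10 -- the group the 5 cases actually came from

def pct(cases, denominator):
    """Rate of gynecomastia, as a percentage rounded to one decimal place."""
    return round(100 * cases / denominator, 1)

print(pct(cases_under_10, all_children))       # 0.8 -- the published rate
print(pct(cases_under_10, children_under_10))  # 1.4 -- with an age-matched denominator
print(pct(cases_under_10, boys_under_10))      # 2.0 -- with an age- and sex-matched denominator

# And had the boys 10 and over never been removed at all:
print(pct(22, 489))                            # 4.5 -- all 22 cases among all 489 boys
```

The arithmetic itself is trivial; the distortion lies entirely in pairing a numerator drawn from one group with a denominator drawn from a much larger one.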
More important to the statisticians worrying about a statistically significant cause-effect relationship between raised prolactin levels and gynecomastia, the new table obscured a deeply troubling finding: that when all the boys who had been taking Risperdal for eight to 12 weeks were examined, those with raised prolactin levels tracked 98 percent of the time with those suffering from gynecomastia.
That eight-to-twelve week treatment time period, in fact, was consistent with medical theories that lawyers in suits against Johnson & Johnson would later introduce—that the gynecomastia didn’t take hold and become permanent until the breast tissue fiber generated by the prolactin had been given time to grow.
After going through various drafts and table reformulations, this re-done study, with the fictional denominator and without the table showing that statistically significant relationship, is what Dr. Findling and two other academic luminaries would ultimately attach their names to as co-authors. Also listed as co-authors would be three Johnson & Johnson employees—two doctors and Carin Binder, the executive spearheading the publication of the article. The article disclosed the affiliations of all of the authors and that a J&J Canadian subsidiary had “supported” their research. But that raised no eyebrows in the academic medical community because by now most such published research was paid for by the drug companies whose products were its subjects.
The study’s findings would immediately be circulated to the sales teams in the field and be formally published in the highly respected Journal of Child Psychiatry under the title, “Prolactin Levels During Long-Term Risperidone Treatment in Children and Adolescents.”
The original table that did not eliminate the boys ages 10 and older—and that was stipulated in the original protocol for the study—was removed after the first draft. It was then put back in the final draft of the article after some of the doctors asked that it be reinstated—over the complaint of Janssen’s Binder, the article’s only non-doctor, who wrote in an email to the team of Janssen marketers and scientists that it contained a “nauseating amount of information” about side effects.
However, that table was given little discussion in the text, except to explain why including boys 10 and over made the table unimportant. Moreover, the specific chart of data showing the statistical significance for the eight-week treatment period was neither mentioned in the text nor shown in any table.
Scholarly articles in medical journals always include an abstract at the top, so that doctors can glean the gist. The abstract of what would become known in court as “the Findling article” declared, without qualification, “There was no direct correlation between prolactin levels and [side effects].”
Pay No Attention to the Label
The data related to children’s prolactin levels may have been massaged into shape. But when it came to the other key Risperdal target that had been roped off by the FDA—the elderly—Johnson & Johnson faced a new threat by the fall of 2002.
For more than a year, FDA staff had been reviewing the data about side effects experienced by the elderly who were taking Risperdal or its competitors. In September 2002, the FDA wrote to Janssen ordering that an additional warning be posted on the Risperdal label highlighting the strokes and other cerebrovascular adverse events related to its use among the elderly. This was the same month that an internal Janssen email from a woman working on the Omnicare deal had reported that the sales team had been “hammered” by the company’s own lawyers because the carefully choreographed deal with Omnicare might look like an illegal kickback scheme.
Through a series of meetings and multiple written submissions, the company dragged out implementing the label change until April 2003, by which time J&J’s sales force was armed with materials that attempted to make it a non-event. As the new label was being distributed along with an FDA-required “Dear Doctor” letter alerting physicians to the change, Janssen called an emergency meeting of its regional and district managers. According to a suit later filed by a Janssen saleswoman, they were told “to continue promoting the off label use of Risperdal because Janssen was going to eventually secure a dementia indication [on the label].”
However, the continuing flow of negative data from clinical studies did not make that seem likely. In March 2002 another study came in that showed not only a high incidence of strokes, but also that the drug was no more effective than a placebo in treating dementia.
One scientist involved in the study soon developed symptoms of Credo-itis. After the results had been kept locked up for nearly six months, he sent an email on August 31 to his boss. “Respecting fully any confidentiality agreement that I have with Janssen,” he wrote, “it is obvious to me, and others who may not be so bound and who have learned about the data, that this trial is on its face nearly completely negative. … Janssen has been sitting on the trial results for a long time. Yet it has a moral and ethical responsibility to publish quickly and in a way that can be understood. …”
The study was not released to the medical community (via mentions in seminars) until 2004.
Meanwhile, call reports continued to reflect the team’s focus on dementia-related and other mood disorders.
“Dr. ___ said he had little experience with Risperdal and the elderly,” a Michigan sales rep reported. “Explained how it could help with sundowning syndrome”—a term for confusion and agitation that occurs in the late afternoon among people with dementia. “Told me that he would try it on Gloria’s mother.”
The Michigan saleswoman did not know that her report and hundreds of others like it were destined to become evidence in a series of investigations about to be sparked by an auditor in Pennsylvania, and by a fellow saleswoman who didn’t share her enthusiasm for pushing Risperdal on people like Gloria’s mother.