
'Clinical Trial Transparency and ALLTrials' by Ben Goldacre [Video]


The pharmaceutical industry has always had ethics to consider in one form or another. Here, Dr Ben Goldacre, author of 'Bad Pharma', presents a keynote speech on clinical trial transparency and the ALLTrials campaign.

In this video hear Ben present 'ALLTrials: Transparency is moving forwards, industry can benefit from doing the right thing'. This presentation covers:

  • The prevalence of missing results
  • Arguments for and against transparency
  • The history of failed efforts to solve the problem
  • 4 things we want in clinical trials
  • The ALLTrials campaign

After Ben Goldacre's keynote session, a Q&A on data transparency in clinical trials was held. It was a chance for those within the pharmaceutical industry, including representatives from some of the companies mentioned in Bad Pharma, to engage in questions and debate with Dr Ben Goldacre about some of their differences of opinion.

 

Q&A Video Transcript

[Katherine]

Thank you very much, and we will now move to the panel discussion. So if I can ask Kevin and Ben to come up and take their microphone spots. I think I'm going to be fed some questions; we've had questions coming in through the webcast, and there's been an opportunity for people to post questions. In the initial phase, I can't think anybody would want to ask me a question, that's not an affront. Please do indicate if you would like to ask a question within the room.

[Ben]

Are these chairs quite low, do we look like teeny tiny parrots?

[Katherine]

I'll stay standing. Kelsey.

[Kelsey]

I guess this question could be addressed to all of you, and I think it is a very interesting aspect that we need to consider, especially when we think of releasing confidential patient information. How do you foresee this continuing as we start to include more genomic biomarker information for personalized and preventative medicine? When medicine uses genomic information, how will that affect the confidentiality and release of patient information? That can go to whoever wants to take it.

[Katherine]

I'm going to pass it to Kevin to answer that one. 

[Kevin]

I'm sorry. Right in my eyeline, let's move that out of the way.

[Ben]

It's because you're so teeny tiny.

[Kevin]

Yes, right? You and me both. In principle... sorry, I can see somebody maybe yearning to get closer to this microphone. I don't know that that really impacts any of the principles that have been discussed.

For me, data will become more and more transparent, that is, more and more of it will become available, and whether a part of that involves a big chunk of genomic data, either captured as part of the trial or sometimes afterwards in retrospective tissue evaluation, I don't think that has any bearing on the principles, on the big issues that we discussed.

It just opens up a richer source of data. It may lead to a situation where you can more accurately, or after the fact more accurately, identify patients who are benefiting most, based upon some genomic marker. But the whole analysis of such data is extremely complex, as you know, and there is the issue of the scientific reproducibility of any of those findings, especially ones based on a whole bunch of genes: can you really do it the next time around?

Maybe transparency, and more data, will help you check these things out. But the principles, I don't think, are affected.

[Ben]

Well no, I think the principles are exactly the same, I think it's a more extreme case. What I find really interesting is everybody wants to talk about the practicalities of individual patient data sharing and the complexities that that throws up, and I don't think we could say anything useful on how to manage that in less than a week of constant speaking. 


I find it fascinating, though, that everybody wants to talk about that. Nobody wants to talk about the summary results; we act as if this has been fixed. The best currently available evidence is the HTA review, but since then there have been further studies. For example, one published in PLOS just last month, which looks at the most recent three years' worth of trials registered on ClinicalTrials.gov and finds, again, that only half are published within two years.

We've completely failed on this, and I want to know why everybody wants to talk about individual patient data and nobody wants to talk about summary results, when that is the clear, easy win to improve patient care, and when it is an unanswerable case: there is no legitimate reason for withholding summary results.

I wonder if it's because we're all quite techie people in this room, so we like difficult technical problems; we don't like difficult cultural problems. And also, I think the issue of access to summary results has kind of got the stench of failure around it, because people have been talking about it since at least 1986, and we have failed to fix it.

And actually, I also suspect some of you probably recognize that if you stand up and talk about it, as I have, as an early career researcher and relatively junior doctor, you get smeared, because people don't want to talk about it. And so I get that you want to have a really interesting conversation about what kind of escrow strategies we can have, and SAS are building this brilliant system that everyone can buy into, where GSK can't see Boehringer's data but a third-party researcher can get summary results out of all of it.

But, and not to rant, I wish that you could all be as interested in the issue of summary results, because I can't make informed decisions about what's best for my patients, because information is being withheld, right now, today, about the drugs I am prescribing right now, today.

[Kevin]

Can I make a quick comment on the individual patient data point?

With the individual patient data issue, Ben, I think that is driven a little bit by the EMA's policy, this idea of validating their decision-making and so on. An independent researcher can look at the summary level data, but they can't re-create the analyses unless they have individual patient data.

So in a way, part of the EMA's policy kind of requires it in order to do that.

[Ben]

But there are endless trials where we don't even have, we're not talking about trying to reproduce the summary results. There are whole trials where we have no information about the methods or results at all.

[Kevin]

Sure. 

[Ben]

And that is a public health emergency. And everybody wants to talk about how difficult it is to share individual patient data. 

[Kevin]

I don't have an issue with sharing summary-level data; personally, I don't understand the objection. But all I'm saying is that some of the discussion around individual patient data is partly due to this desire to take a pharma-reported study and not just accept its summary data, but get hold of the real data and figure out, you know, what the real answer is.

I think there is something behind that, but I do agree that for the vast majority of the public good they could be trying to access, availability of the summary level data should in and of itself be sufficient, and personally I agree that it's hard to imagine why there would be an issue in making summary data available.

[Ben]


Well, I thought your slide about what can be done with individual patient data was actually very good and very useful, and you said that it can be used to validate somebody's own analysis of their own trial. And of course that is one use, but it's only one use. We know that is important because, if we look at Vioxx for example, I don't want to use the wrong word here, but the fact that you couldn't really see that there was a harmful signal in there came about because of the way that the data was analyzed, and there was, some have argued, obfuscation around that. Access to individual patient data was clearly valuable and important in that situation.

It would have helped to spot that flawed, if we want to be generous, analysis. But also, you know, the other examples you gave are really, really important. If we want to know whether it is really true that antidepressants, SSRIs, are more effective in people with severe depression, at the moment all we can do is bundle up the trials which have inclusion criteria of more severe depression or mild to moderate depression.

But actually we know, when we start looking at IPD from these trials, that people break the inclusion and exclusion criteria routinely, or people get screened and then included when their actual baseline depression scores are completely different. So we can get enormously important information from pooling together that kind of information.

But the thing that I found most strange about your presentation, and I don't want to say anything mean. 

[Kevin]


Oh go ahead. I don't mind. People have said a lot worse. I promise you.

[Ben]

You spoke as if all industry analyses of their own data are absolutely perfect, and as if anybody else who tried to have a go would get it wrong. That is simply not the case. You say in support of this: these are not all trials, these are the trials that are given over in support of an initial marketing authorization. But I can give you endless examples of crap analysis by industry.

I mean, all of the trials in the paper on SSRIs, where we saw it switch from 38 positive results to 48 positive results, where they turned 11 trials with negative results into trials with apparently positive results. Those were all industry trials, and the analysis was rigged. So we don't have a world in which there is the dichotomy, which I think may have been accidental on your part, in which there are perfect industry analyses that are perfectly regulated and then other people who want to come along and do haphazard studies. And crucially, secondly, there will always be some crap work out there.

The public will be misled by some inconsistent observational epidemiology. Welcome to the real world. Welcome to my world. You know, we have to make decisions based on inconsistent data. We critically appraise it, and there is a rich ecosystem out there of researchers and other organizations like Cochrane who critically appraise the stuff, who review it, who come to informed judgements about what's best.

And I think you implicitly seek to replace that rich ecosystem of people transparently and openly criticizing each other's work with a single monotheistic authority of a regulator. You perhaps want more transparency for that regulator in the conversations that they have, but you want the monotheistic regulator, and I don't think that reflects the character of science.

I also don't think it's very safe, and I also don't think it's very realistic in an environment where the credibility of regulators has been recently undermined. We know that regulators in all fields, in the energy industry for example, get captured by the big six players. We don't take things entirely on trust.

We need transparency, and we also have to remember that the best currently available precedent for industry adjudicating on who gets access to data is Roche, in which they argued that Cochrane were not technically competent to conduct a systematic review and meta-analysis. That is the best currently available precedent we have for what happens if we allow industry to take control of who has access to the information: they argued that Cochrane, probably the organization most associated with exactly the kind of systematic reviews and meta-analyses these measures are going to give us, were not competent.

I think that we don't live in a world of perfect industry analyses, and we have to accept a rich ecosystem with flawed stuff in it. And if somebody does a crap analysis and they didn't give their statistical analysis plan beforehand, I'm perfectly capable of saying: that's a crap analysis, and you didn't give your SAP.

[Kevin]

Yeah, so, sorry, no, I know you made lots and lots of points.

Firstly, just to kind of go back, I didn't want to give the impression that all industry analysis is perfect, just the ones I do. What I was trying to articulate, and obviously I have not been clear enough, is that this is within the context of the EMA and their transparency policy, so they are making available the information that has been given to them.

Which I accept is only a small bit; you saw Katherine's iceberg. But in the process, it just seemed to me to be strange that the EMA would have their set of standards, and how effectively they hold companies to account we can debate, but there are some standards to which companies are held to account in the evaluation of such data when it is submitted.

And it seems strange to me that they then take the step of releasing that information through their policy and not requiring the same standards to be applied, not requiring those receiving that information to comply with the same set of standards. That's the point I'm making. Whether that would make everything "tickety-boo" is a different matter, but it just seems very strange.

[Ben]

But the problem is, your argument works backwards.

[Kevin]

No, no, no. The flip side would be, what is the... well, let's say it another way. If, under the post-release standards that the EMA might apply, it's okay to put forward an evaluation without some kind of plan as to what the analysis is that you're going to do, and if from that we can gain information, make public health decisions and so on and so forth, why can't the same set of standards be applied before then? Why isn't there one set of standards?

[Ben]

Because it's risky to have a single adjudicator on the quality of science, and it's risky for that adjudicator to be the regulator. So nobody is saying for one minute that it is okay to do a crap post hoc analysis with no statistical analysis plan. Nobody is saying that's good.

Nobody is saying that we are going to pull out the results and make clinical decisions on the basis of that. The problem is you can't realistically hope to have one organization, the EMA, adjudicating on the entire scientific community's work.

[Kevin]

But I'm not suggesting that. 

[Ben]

But that is what you are saying. You're saying the EMA should make a decision about whether somebody's analysis is of an acceptable quality.


[Kevin]

No, what I said was you can't have one body... They're not the adjudicators of all things that are scientifically sound, but they are the adjudicators of the quality of the scientific evaluation that goes to them in support of getting a license. Everybody here is subject to that. Now, that same organization is saying: I know I am going to release this information, that I've just ensured complied with a whole list of standards, I'm going to release that out there, and I'm going to put that forward.

But I'm not going to ask whether the requester has anybody competent to do the analysis. I'm not going to ask the requester whether they even have a plan as to what their analysis will be. You can just do what you like.

[Ben]

That's not quite what the EMA are proposing. 

[Kevin]

That's what's written in the guideline. 

It reads that way, and it makes me a little suspicious that part of this, part of that, is a kind of reaction to being told off by the ombudsman, as if they came to say, "Okay, bang, we'll put it all out there and you can do what you like. We are not going to be applying the same set of standards as beforehand," because somehow that kind of says, we are done, we've addressed the issue we were criticized over.

And that's how it reads when you read that policy. I mean, it could change, we've got until the end of the year, but it's like that right now.

[Ben]

If we crystallize the disagreement that we have... I mean, firstly, to be absolutely clear, it's a slightly ridiculous argument to be having, because I think everybody should post their statistical analysis plan.


[Kevin]

So do I. We can agree on that. 

[Ben]

But I can understand why the EMA have done what they've done, because although I think people should, I wouldn't make it mandatory that they have to. And I think the disagreement that we have is this: you think that the quality of statistical analysis of trial data should be policed by a single body.

I think it should be policed by the academic and medical community, who already have to deal with the fact that there is a barrage of crap research out there all the time.

[Katherine]

And on that point I'm going to suggest we park that discussion, hold that thought, and I'm going to move on to some more questions.

[Neil]

Can I ask a question? 

[Katherine]


I'm sorry, could you give your name and affiliation first? 

[Neil]

My name is Neil, I'm from Kowa, a small company that nobody's ever heard of. The discussion has focused on phase three trials and particularly on the regulatory approval, which is a snapshot in time, but it seems to me that the studies that you're looking at are like the aeroplanes that come back with holes in their petrol tanks.

The missing data, the studies that have been downed, are for drugs which have either not completed a development program, so the regulators never get to see them, or are phase four studies. And all of the examples that you've given of drugs that have had to be withdrawn from the market because of problems have actually come to grief because of phase four studies, and if there had been widespread review at the time of their approval, I think probably everybody would have agreed that they warranted approval. So what's causing a lot of discussion at the moment is about dealing with the EMA and what the EMA are doing.

But I suspect that there is very little to gain from a widespread review of a group of studies that the EMA had just reviewed. The EMA do get to see all of the studies, and they will get to hear about the studies that are being conducted in Russia, Mexico and India, and they may or may not get to see the data; it depends on what they're interested in. Usually they're interested in studies which are done in, well, Europe and the US. And I just wonder whether the focus is too much on what's happening at the regulation stage.

[Ben]

I have no interest in phase three trials as a special case. The AllTrials campaign is called the AllTrials campaign for a very good reason. We are interested in all the trials conducted in any part of the world, in all of the uses, licensed or unlicensed, of all of the treatments currently being used by doctors and healthcare professionals around the world, and that's the problem.

Nobody can give you that list. The European Medicines Agency cannot give you that list. Nobody can tell you all of the trials that have been done on Tamiflu. Nobody can tell you all of the trials that have been done on any drug. And when you tell the public that, they go: well, how many people work in this industry? What on earth are you playing at?

Now, as you go further back in time it gets more and more difficult to have a hope of creating a comprehensive list, and there are shadier corners. For example, if one part of a company in one part of the world gives a grant to a third party to do a piece of research, then that may be more difficult to find than GSK doing their own trial on their own drug. But I'm amazed at how unflustered the medical and academic community has allowed itself to be over the simple urgency of just creating a simple list.

It's very interesting when you talk to policy makers, many of whom are, I would say, actually quite angry, not just with industry but with their own civil servants and regulators. And I won't name any names here; it's nobody I have mentioned so far. I think some of them are quite angry with the people who are working for them, because when I've come along to speak with very senior members of the UK and other governments, they have said: we were told there were trial registers that have all of this stuff on.

And when you start explaining to them, well, the European trials register only covers trials done within European nations since 2004 and it doesn't really include this or that, they just look amazed. Because I think they were either implicitly or explicitly misled by their own staff, they assumed this was a list of all the trials; they thought there was a list of all the trials ever done on paroxetine.

They thought there was a list of all the trials ever done on citalopram, or any of these drugs, and they are amazed. And the public are amazed. And that's why the ABPI have been so eager to smear me: because I have dared to mention this extraordinary flaw in the information architecture of evidence-based medicine.

So I don't care about phase three trials as a special case.

[Neil]

I'm not sure that the ABPI speaks for all of the industry on that.

[Ben]

Well no, they speak for you. You have to accept that. Stephen Whitehead, when he says these issues are not current, he speaks for you. You, sir. If he does not speak for you, then you must stand up and say: Stephen Whitehead of the ABPI is incorrect to say that these issues are not current.


And until you do so then it is reasonable for policy makers, who I think are not impressed, and for the wider academic and medical community to say, well how extraordinary. This man who speaks for you says that these issues are not current. 

[Neil]

So what can be done to try to fix the issue which is elsewhere? That is, industry is happily jumping through hoops now to try to get the phase three issue fixed, but there are academic investigators who are conducting trials and not registering them.

They don't write CSRs; should they have to? Are we going to have double standards again, or are we going to have one set of standards for everybody? And indeed, does the same apply outside of the field of clinical trials? Because there's a lot of discussion about climate change, which is going to have huge effects on everybody, and yet that data is far more opaque than trial data.

[Ben]

Selective reporting is an issue in psychology, in neuroscience, absolutely everywhere, and I think in all of those places it needs to be fixed. But in evidence-based medicine, in the information we use to make decisions about what treatments a patient gets, it's most urgent. But I know people are animated by the other stuff.


You say, what can we do; is your question, what can we do to get a complete list? Because if it is, I have some concrete examples, and some of them are business opportunities for people in this room.

[Neil]

Well, partly . . . But I mean

 

[Ben]

There is this legislation already, or codes of practice. People, academic researchers in the UK are already supposed to register and they just fail to. 


Actually, one of the best outcomes, I think, of the campaign that we've led is, for example, that the select committee triggered the MRC and NIHR to do a simple audit of compliance with reporting and registration of trials, and actually, that simplest first step I think is very important. It's amazing how we don't apply to the world of research the simple tools that we use in clinical medicine all the time.

Routine open audit of compliance with the things that we aspire to comply with is a very simple thing. I am constantly approached by people from industry who come up to me and say... I mean, realistically, the industry kind of splits in half. It's a CEO issue. There are some CEOs who are, I think, very old school and too big to fail and think they can wish the world into the shape they'd like it, and there are some who will accept that transparency is an inevitability and want to show leadership on this issue. And the best companies come to me and they say, look.

We think we're actually pretty good, at least since about 2001, or at least since ICH-GCP, maybe '98. We think we're pretty good at registering everything and posting all results. We think we've got about 98% compliance. I always say: prove it. Do an audit. Take the credit. If you have done well, then be open and clear about that. And then secondly, what we need, and I'm building this at the moment with the Open Knowledge Foundation, is a database called OpenTrials, which is a giant linked index, a database of all information about all trials, for the first time ever matching together all of the trials that have been conducted and completed on registries with all of the information that is available about them: not just results on registries or in individual industry repositories or in academic journals, but also links to CSRs where they've been made available, press releases where they've been made available, protocols where they've been made available, and so on.

And from that, we can then derive secondary variables: the following companies are the worst for missing data, the following sites are the worst, the following PIs are the worst, the following countries are the worst. And also the best. And I think routine open audit, and at least the ambition to create a single coherent list, are the way forwards.

And, you know, that is what the public, and actual policy makers, think we have been doing since the year dot. They are blown away that this is a new idea, and they are blown away that somebody saying we should do this has received such an extraordinary reaction from organizations like the ABPI, and actually much less so from EFPIA, which I think is widely regarded as the more serious organization.
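
For readers curious what the "giant linked index plus derived league tables" idea could look like in practice, here is a minimal, purely illustrative Python sketch. It is not the OpenTrials schema or code: the record fields, sponsor names, dates and the two-year reporting window are assumptions made up for illustration only. It links each registered trial to whatever result documents are known for it, then derives a per-sponsor "missing results" rate for trials that are past due.

```python
# Illustrative sketch only: a toy linked index of trials in the spirit of the
# OpenTrials idea described above. All records and field names are invented.

from dataclasses import dataclass, field
from datetime import date
from collections import defaultdict


@dataclass
class TrialRecord:
    registry_id: str                  # e.g. a registry identifier (fictional values below)
    sponsor: str
    completion_date: date
    documents: list = field(default_factory=list)  # linked CSRs, papers, registry results, etc.


def missing_results_rate(trials, as_of, grace_days=730):
    """Share of each sponsor's completed trials with no linked results
    more than `grace_days` (roughly two years) after completion."""
    counts = defaultdict(lambda: [0, 0])  # sponsor -> [missing_and_overdue, total_due]
    for t in trials:
        if (as_of - t.completion_date).days < grace_days:
            continue                       # not yet due for reporting, so not counted
        counts[t.sponsor][1] += 1
        if not t.documents:
            counts[t.sponsor][0] += 1
    return {s: missing / due for s, (missing, due) in counts.items() if due}


# Entirely fictional example data, just to show the shape of the computation.
trials = [
    TrialRecord("REG-0001", "Sponsor A", date(2010, 1, 15), ["journal article"]),
    TrialRecord("REG-0002", "Sponsor A", date(2010, 6, 1)),
    TrialRecord("REG-0003", "Sponsor B", date(2011, 3, 20), ["CSR", "registry results"]),
    TrialRecord("REG-0004", "Sponsor B", date(2012, 9, 5)),
    TrialRecord("REG-0005", "Sponsor B", date(2013, 2, 10)),
]

if __name__ == "__main__":
    rates = missing_results_rate(trials, as_of=date(2015, 1, 1))
    # Rank sponsors from worst to best on the missing-results measure.
    for sponsor, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{sponsor}: {rate:.0%} of due trials have no results linked")
```

On real registry data the hard part is the linking step, matching identifiers and titles across registries, journals and document repositories; the league-table arithmetic itself is as simple as shown here.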

[Katherine]


So on that note I think I will allow one more question. I see a hand over there. Yeah. One more question, and then we will take a tea break.

[Andy]

I'm Andy. I work for a small organization called Roche. From Kevin's talk, I realized that I'm on omeprazole, I have high cholesterol, and I had a granddad who worked on World War II planes.

And I work on orphan niche indications, and obviously, being with Roche at this time, I see the connection as well. I just want to say that I think that with access to clinical trial data, in the future, we will see the extent of the conservatism within a lot of the analysis that we do. As a statistician I work for patients; I've been 20-odd years in the industry.

And I don't work to get a Porsche, I don't work for the money in that way. I work for the patients, and we fight all the time as statisticians to protect the blind and protect against bias as far as we can, and I think that will come out. But I don't have that many arguments with Ben actually, even though I work for Roche, on clinical trials data being more readily available.

But when we talk about this data, access to clinical trial data, there is potential to provide huge insights, I agree, and especially across compounds and across companies, where so far we have failed. But I think there's a very real danger that access to that data by certain people offers the potential to give misleading results, which will be blown out of proportion and cause harm to patients, to my patients, and to ourselves. So, for me...

The question really is: how do you think we can mitigate against that? Because we know this will happen, and we know there is potential for harm.

[Ben]

There are crap analyses published all the time in the academic literature, and they're laughed at, or they're derided, or some people who aren't very bright take them seriously. But you would be contributing a tiny, tiny uplift to the overall quantity of crap work published in the endlessly blooming academic literature.

Actually, individual patient data has been pooled and shared for decades now, so if the thing you're worried about were going to happen, it would already have happened. So can you give an example of a situation where sharing individual patient data from clinical trials has led to the public or clinicians being misled in large numbers, and then changing their behaviour, and then an impact on morbidity and mortality?

[Andy]

A specific example? It's just... it's the worry in my mind.

[Ben]

The fact that it's never happened, the fact that you can't even give an example, even though individual patient data sharing has happened for a long time, I think is very reassuring. And also, we already have a rich ecosystem for dealing with bad quality research. In my column, or in any of the reviews of the journals that people do on a weekly basis, like Richard Lehman's, you'll see trials published all the time in the top five journals that are crap.

They switch their primary outcome. They do subgroup analyses. They measure a surrogate outcome that's never been validated. We already have bad trials in the world, we already have bad analyses in the world, the sky hasn't fallen in, and I think the benefits outweigh the risks. And it's a bit like any other problem that a regulator faces.

It's about trading off the risks and the benefits, and I think the risks, which industry are suspiciously keen to talk up, are greatly outweighed by the benefits. And I don't mean to be mean when I say that.

[Andy]

No, no, that's okay. We're really of a mind; it's the media more than anything. And that is the big worry.

Obviously when you talk about measles vaccinations it's kind of a separate thing, but it's the big media foray where I think there is a big risk. There's a lot of big data to come out, and there's a bigger risk than we think.

[Ben]

If we worry about public trust and the public being misled: I've probably written more than almost anyone else, with the definite exception of Brian Deer, about measles and the MMR hoax, as I think of it now.

The reason why stories about industry hiding information about harms of medicines, the reason why those stories are believable to the public, is because this has actually happened, right? When the public see the representative of AbbVie at the launch of the EFPIA transparency plan standing up and saying that adverse event reports are commercially confidential information, that's what makes the MMR measles hoax believable to the general public.

You do not increase public trust in medicine by hiding information from the public, by withholding it from doctors and researchers. If we want public trust, if we want to stop stupid stories like the MMR hoax, then we have to improve transparency, not reduce it.

[Andy]

I agree, but I think we need to extend some of the very good statistical practices that we have in the industry, and practices around data handling rules and documentation, as that data comes out.

[Kevin]

I'm going to make one quick comment if I can, because I know we've run out of time. 


The issue, you asked for an example, Ben: within the context of drug regulation, as imperfect as it is, there are many examples of sponsors who have come forward with flaky post hoc, subset, failed-trial, let's-figure-out-how-to-make-it-positive analyses. You see them publicly sometimes in advisory committee proceedings at the FDA, where they just expose people and say: this is not good enough, go away.

And the reason that we're all under this huge number of regulations in industry is precisely to stop that stuff happening. Now, I'm not saying it works or that it's fantastically effective, but there's a bunch of standards in place to prevent crazy stuff. That's why the regulations are in place. And I think it's fair to say that when you release some of these data into the public domain, especially with clinical trials of big drugs from big companies, there is this unease, this kind of sense that there is a long line of folks who want to go out and prove, by access to some of the individual patient data, how bad bad pharma really are.

And they'll do that in an environment where they're not subject to the same set of constraints that were applied at the time the original analysis was done. I understand what you said, but there is this broader argument you're offering that says the academic world in and of itself, through the process of science, peer review, and everything else, should weed out the rubbish.

I understand that. The truth of the matter is, I don't know that that weeding works brilliantly well. I'm not saying that the industry, the regulated route, works any better, but at least it's a codified, laid-down set of minimum standards. That is, you must walk this line, otherwise it won't wash its face.

[Ben]

What about phase four trials? You're just giving up on phase four trials. 

We're giving up on trials of unlicensed usage.

[Kevin]

That's not true; that assertion is false, because it's still a trial sponsored by the industry. They're still subject to the same standards, I hope.

[Katherine]

On that note we are going to stop for tea and coffee. We are allowed five minutes by the organizers, so I propose any further discussions continue in the tea break. Thank you very much indeed.

 
