
All informed health decisions are based on an evaluation of expected risks and known benefits. Nothing is without risk. Drugs can provide an enormous benefit, but they all have the potential to harm. Whether it’s to guide therapy choices or to ensure patients are aware of the risks of their prescription drugs, I spend a lot of time discussing the potential negative consequences of treatments. It’s part of my dialogue with consumers: You cannot have an effect without the possibility of an adverse effect. And even when used in a science-based way, there is always the possibility of a drug causing either predictable or idiosyncratic harm.

An “adverse event” is an undesirable outcome related to the provision of healthcare. It may be a natural consequence of the underlying illness, or it could be related to a treatment provided. The use of the term “event” is deliberate, as it does not imply a cause: it is simply associated with an intervention. The term “adverse reaction,” or more specifically “adverse drug reaction,” is used where a causal relationship is strongly suspected. Not all adverse events can be causally linked to health interventions. Consequently, many adverse events associated with drug treatments can only be considered “suspected” adverse drug reactions until more information emerges to suggest the relationship is likely to be true.

Correlation fallacies can be hard to identify, even for health professionals. You take a drug (or, say, are given a vaccine). Soon after, some event occurs. Was the event caused by the treatment? It’s one of the most common questions I receive: “Does drug ‘X’ cause reaction ‘Y’?” We know correlation doesn’t equal causation. But we can do better than dismissing the relationship as anecdotal, as it could be real. Consider an adverse event that is believed to be related to drug therapy:

  • First, is the event an extension of the drug’s pharmacology? Is it predictable, based on how the drug works? For example, narcotics predictably cause constipation and cognitive impairment, and oral antibiotics commonly cause diarrhea because they kill off the normal flora in the colon.
  • Second, what was observed in clinical trials? The product monograph or prescribing information usually summarizes which adverse events were reported in trials, and which were more frequently observed than in the placebo group (a small worked example of that comparison follows this list).
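
To make that placebo comparison concrete, here is a minimal sketch of the arithmetic a product monograph summarizes. The drug, the event, and all of the counts below are invented purely for illustration:

```python
# A toy comparison of an adverse event's frequency between trial arms.
# All counts are invented for illustration.

def adverse_event_rates(events_drug: int, n_drug: int,
                        events_placebo: int, n_placebo: int):
    """Return the event rate in each arm and the absolute risk difference."""
    rate_drug = events_drug / n_drug
    rate_placebo = events_placebo / n_placebo
    return rate_drug, rate_placebo, rate_drug - rate_placebo

# e.g., nausea reported by 48 of 400 patients on the drug vs. 21 of 395 on placebo
drug_rate, placebo_rate, excess = adverse_event_rates(48, 400, 21, 395)
print(f"drug arm: {drug_rate:.1%}, placebo arm: {placebo_rate:.1%}, "
      f"excess plausibly attributable to the drug: {excess:.1%}")
```

It is the excess over placebo, not the raw rate in the treatment arm, that suggests the drug itself is responsible.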

Neither of these approaches is comprehensive. If the suspected event is rare, it may simply not have shown up in the clinical trial by chance alone. Or the event may develop slowly: clinical trials have a fixed duration, while treatments in the real world can last decades. The patient population in clinical trials is usually healthier and on fewer other medications than patients in the real world. So to truly understand the adverse event profile, we need to look at real-world data. We could turn to epidemiological studies that evaluate safety in real-world settings, often using massive treatment databases. While not as robust as data from a randomized controlled trial, and prone to misuse, epidemiological studies can answer questions about vaccine safety or identify subtle effects of drugs on common events, like heart attacks.

If the event in question doesn’t show up in any other data source, but is suspicious, it might be appropriate to submit a report to the manufacturer or to the national drug regulator. Importantly, this is a suspected reaction — we cannot be certain the event was caused by a drug based on a single observation. But multiple reports, sent independently, could “signal” the need for more investigation. Consequently, countries with robust drug regulation have all established systems for collecting spontaneous reports of harms. These systems generally include:

  • mandatory reporting from drug manufacturers of any adverse event reported to the company
  • optional or mandatory reporting from health professionals who become aware of possible adverse events
  • the option for consumers to report harms directly to regulators (bypassing manufacturers or health professionals)
  • collaboration with other countries to share the adverse event information that is collected.

There are four essential elements to any adverse event report (sketched as a simple record after this list):

  • An identifiable patient: there must be a specific patient known to be involved, and adequate information is necessary (gender, age, etc.).
  • An identifiable reporter.
  • A suspected drug: the drug should be known. If a patient is on multiple drugs, they all need to be described in the report.
  • A suspected adverse event or fatal outcome: specific signs and symptoms are required, and ideally a diagnosis as well (e.g., “rash” isn’t specific enough to compare between reports).
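
Purely as an illustration, those four elements map naturally onto a simple record. The field names below are invented for this sketch and are not any regulator’s actual reporting schema (real-world formats are far more detailed):

```python
# A minimal sketch of a suspected adverse drug reaction report.
# Field names are invented for illustration -- not a regulator's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SuspectedReactionReport:
    # 1. An identifiable patient (enough detail to characterize the case)
    patient_age: int
    patient_sex: str
    # 2. An identifiable reporter
    reporter: str                        # e.g., "community pharmacist"
    # 3. The suspected drug(s) -- all concurrent drugs should be listed
    drugs: List[str] = field(default_factory=list)
    # 4. The suspected adverse event, as specific as possible
    event_description: str = ""          # signs and symptoms
    diagnosis: str = ""                  # ideally a diagnosis, not just "rash"
    fatal: bool = False

report = SuspectedReactionReport(
    patient_age=67, patient_sex="F",
    reporter="hospital pharmacist",
    drugs=["suspect drug A", "concurrent drug B"],
    event_description="maculopapular rash on trunk, onset day 4 of therapy",
    diagnosis="suspected drug eruption",
)
```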

Multiple reports are typically required to generate the safety “signal”, a flag that an association merits further investigation. The FDA compiles all adverse event reports it collects in the Adverse Event Reporting System (AERS), and continually analyzes that database, publishing possible signals it has identified. While consumers and health professionals can submit reports to the FDA directly, the overwhelming majority of adverse events are reported by pharmaceutical manufacturers, who are required by law to forward all adverse events they identify. If an event described is serious and not already listed in the prescribing information, it must be forwarded to the FDA within 15 days. Here are the annual reporting statistics from AERS:

The blue bars are direct reports, submitted by health professionals or the public — about 4% of the total submitted in 2010. The remainder are submitted by manufacturers. Even massive numbers like these may not be sufficient to identify potential signals of adverse events related to treatments, so countries collaborate and share information: a bigger net can catch more safety signals, it seems. The Uppsala Monitoring Centre is an international collaboration, combining reports submitted by over 100 countries. It has amassed a database of over 7 million reports. Adverse event databases are great resources for identifying safety signals — albeit with some significant limitations.
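
How does a pile of spontaneous reports become a “signal”? One widely used screening approach is disproportionality analysis: does the suspected event make up a larger share of this drug’s reports than it does of every other drug’s reports? The sketch below shows one such measure, the proportional reporting ratio (PRR), with invented counts; it illustrates the general idea rather than the exact algorithms the FDA or the Uppsala Monitoring Centre use:

```python
# A sketch of a common disproportionality measure, the proportional reporting
# ratio (PRR). Counts are invented; this is not any regulator's exact method.

def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """
    a: reports of the event of interest for the suspect drug
    b: reports of all other events for the suspect drug
    c: reports of the event of interest for all other drugs
    d: reports of all other events for all other drugs
    """
    share_drug = a / (a + b)       # share of the drug's reports naming the event
    share_others = c / (c + d)     # same share across every other drug
    return share_drug / share_others

# Hypothetical counts: the event is a much larger share of this drug's reports
# than of everyone else's, so the PRR is well above 1.
prr = proportional_reporting_ratio(a=40, b=960, c=600, d=99_400)
print(f"PRR = {prr:.1f}")
# One commonly cited screening rule flags PRR >= 2 with at least 3 reports and
# a supporting chi-squared statistic -- a prompt for investigation, not proof.
```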


The Trouble with (V)AERS

The misuse of vaccine-related adverse event reports — collected in VAERS, the Vaccine Adverse Event Reporting System — is a common tactic of antivaccine groups who believe that vaccines are unsafe and cause significant harms. And a database of suspected harms is a gold mine to those seeking anecdotes. Antivaccine groups have been known to mine the VAERS database and draw causal relationships where none have been established.

There is no question that adverse event databases can serve as a valuable resource as part of an overall program to monitor the safety and efficacy of a drug or vaccine. However, in isolation, these databases have limited utility. Patterns or “signals” are recurrent events observed in the data. They are hypothesis-generating — not hypothesis-answering. Most importantly, these databases cannot estimate the incidence of any adverse event. To estimate incidence, we need both the number of times the event has occurred and a denominator: the total number of patients who have taken the drug (or vaccine, as the case may be). No denominator, no incidence. AERS cannot provide that information, given that reporting is spontaneous and incomplete, and the size of the population taking the drug isn’t known. The FDA makes this very, very clear:

AERS data do have limitations. First, there is no certainty that the reported event was actually due to the product. FDA does not require that a causal relationship between a product and event be proven, and reports do not always contain enough detail to properly evaluate an event. Further, FDA does not receive all adverse event reports that occur with a product. Many factors can influence whether or not an event will be reported, such as the time a product has been marketed and publicity about an event. Therefore, AERS cannot be used to calculate the incidence of an adverse event in the U.S. population. [emphasis added]
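
To see why the denominator matters so much, here is a toy calculation with entirely invented numbers:

```python
# Why "no denominator, no incidence": a toy illustration with invented numbers.

reports_of_event = 500        # spontaneous reports naming the event (an incomplete numerator)

# From AERS alone this is as far as we can go: a count of reports, not a rate.
# Only an outside source (e.g., prescription or insurance claims data) can
# supply a denominator.
estimated_users = 2_000_000   # hypothetical exposure estimate from such a source

crude_reporting_rate = reports_of_event / estimated_users
print(f"crude reporting rate: {crude_reporting_rate:.4%} of users")  # 0.0250%

# Even then it is only a *reporting* rate. If only a fraction of events are
# ever reported, the true incidence could be many times higher -- which is
# exactly why AERS counts cannot be read as incidence.
```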

The old adage that garbage in = garbage out holds true for adverse event databases. The quality of reports is as important as, if not more important than, their quantity. Low-quality reports with incomplete data make it harder to find true safety signals. Trends in AERS and VAERS reports can also be driven by external pressures unrelated to drug effects: relationships have been established between H1N1 media stories and VAERS reports, and vaccine litigation can do the same thing. Neither reflects a real change in safety, but you need to look beyond the databases to establish that. It would be interesting to see whether some of the current medical-legal drug controversies (e.g., oral contraceptives, antipsychotics, and antidepressants) also show identifiable litigation-related trends in the AERS database.
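
One way such “stimulated reporting” could be probed (a sketch only, with invented counts) is to compare monthly report volumes for a product before and after a media story or the start of litigation:

```python
# A sketch of flagging stimulated reporting: compare monthly report counts
# after an external event (media story, litigation) against the prior baseline.
# All counts are invented for illustration.
from statistics import mean, stdev

monthly_reports = [31, 28, 35, 30, 33, 29,    # six months before the event
                   84, 97, 76, 61, 52, 44]    # six months after

baseline, post_event = monthly_reports[:6], monthly_reports[6:]
threshold = mean(baseline) + 3 * stdev(baseline)

spikes = [count for count in post_event if count > threshold]
print(f"baseline mean {mean(baseline):.0f}/month; "
      f"{len(spikes)} post-event months exceed the 3-SD threshold")

# A run of months far above baseline, timed to the news cycle rather than to
# any change in how the product is used, points to reporting pressure rather
# than a genuine change in safety.
```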


The Power of Nocebo

Are all reported side effects real effects? Probably not. Any double-blind clinical trial will describe the side effects reported with both the active treatment and the placebo. And adverse events from placebos can be so significant that they lead to treatment discontinuation. A systematic review of trials in patients with fibromyalgia noted that 67% of patients reported adverse events in the placebo arms, and 10% discontinued “treatment” with the placebo due to adverse effects. In a study of allergy treatments, 27% of participants reported allergic symptoms to a placebo challenge. And in a study that compared ASA (aspirin) with placebo, describing potential adverse effects of therapy led to a “sixfold increase in the number of subjects withdrawing from the study because of subjective, minor, gastrointestinal symptoms” compared to study sites that did not provide that caution.

The recognition of nocebo effects is another consideration when evaluating individual adverse event reports.


Free the data!

While collecting and analyzing adverse event reports has been standard practice since the thalidomide disaster, the data only became publicly available more recently. Several years ago, the Canadian Broadcasting Corporation (CBC) used access-to-information laws to obtain Health Canada’s entire adverse event database, and posted the data online. Health Canada subsequently made the data available directly. The FDA makes similar data available, which you can query directly. But given the limitations described above, the utility of these databases to the public or health professionals isn’t clear.

It seems clear to a new company, adverseevents.com, which aims to make AERS data more easily accessible, and to make money while doing so, as the CMAJ described last week. CEO Brian Overstreet made the following comments:

We live in an information age, and there is an overwhelming pool of potential information for consumers to look at online. What’s lacking is a real statistical overview. We can come in and say, listen it’s nice that 200 people on this discussion board say they got an upset stomach, but we have 50 000 case reports, and from those we know 27% have an upset stomach. Having hard data to back up the real world perception I think is very, very valuable.

The FDA, and even they don’t know for sure, but they estimate maybe 10% of the serious adverse events are reported. But as much as we’re talking about a limited data set, three million case reports in the last seven years, it’s not a small data set. It’s a pretty robust data set. The data’s never going to be perfect, but it’s better than nothing, and so long as we’re treating it properly, the end result should be valuable.

The obvious problem with this approach, as I’ve pointed out above, is that it ignores the significant limitations of the data itself. You cannot estimate incidence without a denominator. And there is no denominator in the AERS data. These data limitations don’t stop the company from comparing drugs within a class based on reported events, or even identifying which drugs are most likely to be associated with death.
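
To see how misleading that kind of ranking can be, consider a toy comparison with invented numbers:

```python
# Why ranking drugs by raw report counts misleads: a toy example with invented
# numbers. The "riskier-looking" drug is simply the one more people take.

death_reports = {"widely used drug A": 5_000, "niche drug B": 500}             # numerators only
estimated_users = {"widely used drug A": 10_000_000, "niche drug B": 50_000}   # hypothetical exposure

for drug, n_reports in sorted(death_reports.items(), key=lambda kv: kv[1], reverse=True):
    per_user = n_reports / estimated_users[drug]
    print(f"{drug}: {n_reports} reports, ~{per_user:.2%} of users")

# Ranked by raw counts, drug A looks ten times worse; adjusted for a
# (hypothetical) denominator, drug B's reporting rate is twenty times higher.
# AERS supplies only the numerators, so rankings built on it alone are suspect.
```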

Another group that recently announced a query service for adverse event reports is headed by Dr. David Healy, psychiatrist and author of the book Pharmageddon. He has launched an independent adverse event collection site, Rxrisk.org, which states:

There comes a point where, even if the clinical trial data says otherwise, it is just not reasonable to say the problem can’t be happening in at least some people. You and your healthcare team have been handed a megaphone!

It appears that Rxrisk.org will both analyze adverse events reported to the FDA and collect reports directly, facilitating their submission. In an interview, Healy made the following comment:

People need to wake up and stop thinking that clinical trials have provided all the answers. We need to get back to believing the evidence of our own eyes. The key thing is to get the data. There isn’t anyone else getting data like this.

Given the dangers of drawing conclusions from spontaneous reports, and considering how the VAERS database has been misused to make erroneous inferences about vaccine safety, I’m somewhat skeptical of what Rxrisk.org and adverseevents.com will contribute to our understanding of drug safety. The sites are not yet fully operational, so this may be a topic I’ll revisit in a future post.


Improving our drug safety monitoring systems

We all share the goal of wanting to understand the true risks of drug treatments. Part of supporting informed evaluations of risk and benefit means regulators must continuously monitor the real-world safety profile of licensed drugs. Adverse event databases perform a critical role in identifying possible safety signals and generating hypotheses that require additional analysis. The significant challenge is to differentiate between the useful signals and the noise, and recognize biases in our observations and in the way we collect this data. Otherwise we may well assign causality where none may exist — another sort of poor outcome.

Adverse effect reporting systems are designed to enhance patient safety. They are one tool, unquestionably useful, but limited in utility. If we don’t keep these limitations in mind, we run the risk of worsening, not improving, our understanding of a treatment’s risk and benefit.



Posted by Scott Gavura

Scott Gavura, BScPhm, MBA, RPh is committed to improving the way medications are used, and examining the profession of pharmacy through the lens of science-based medicine. He has a professional interest in improving the cost-effective use of drugs at the population level. Scott holds a Bachelor of Science in Pharmacy degree, and a Master of Business Administration degree from the University of Toronto, and has completed an Accredited Canadian Hospital Pharmacy Residency Program. His professional background includes pharmacy work in both community and hospital settings. He is a registered pharmacist in Ontario, Canada. Scott has no conflicts of interest to disclose. Disclaimer: All views expressed by Scott are his personal views alone, and do not represent the opinions of any current or former employers, or any organizations that he may be affiliated with. All information is provided for discussion purposes only, and should not be used as a replacement for consultation with a licensed and accredited health professional.