ChatGPT 3.5 went from nonexistent to ubiquitous in less than a year. If you haven’t used it yet, it’s a conversational artificial intelligence (AI) model developed by OpenAI that understands and generates human-like text. It appears to demonstrate an ability to understand complex questions and provide responses in what looks like natural English. However, it can also generate responses that sound plausible but are wrong. Two different groups recently demonstrated this, testing whether ChatGPT could answer questions about drug interactions – situations where two (or more) drugs taken together can change the way a drug behaves in the body.
These findings come from posters presented at the American Society of Health-System Pharmacists Midyear conference in December 2023. Unfortunately, I could not locate the posters online, so I will summarize the work based on other reports.
In the first study, real questions submitted to a drug information center were posed to ChatGPT. Drug information centers exist to provide pharmacists and other health care professionals with comprehensive support for drug-related questions. Usually, if a question is submitted to a drug information center, it’s because the answer isn’t obvious or readily available.
In this study, 39 questions that had been answered by a drug information service were entered into ChatGPT. The AI provided no response, an inaccurate response, or an incomplete response to 74% of the questions. One example cited was whether it was safe to take Paxlovid (nirmatrelvir/ritonavir) with verapamil. ChatGPT did not identify the interaction between ritonavir and verapamil. Ritonavir is a potent CYP3A4 inhibitor, and because the CYP3A4 enzyme metabolizes so many medications (verapamil among them), ritonavir interacts with hundreds of drugs. (This is why each prescription of Paxlovid for COVID-19 requires a detailed review of any existing medications and natural products/supplements.)
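To illustrate what that kind of screen actually involves, here is a minimal sketch of a rule-based interaction checker. The interaction table here is a toy assumption for illustration only; real checkers query curated databases (such as Lexicomp or Micromedex) covering thousands of drug pairs:

```python
from itertools import combinations

# Toy interaction table -- an assumption for illustration only.
# Real systems query curated databases with thousands of entries.
INTERACTIONS = {
    frozenset({"ritonavir", "verapamil"}):
        "ritonavir inhibits CYP3A4, which can raise verapamil levels",
    frozenset({"ritonavir", "simvastatin"}):
        "ritonavir inhibits CYP3A4, which can raise simvastatin levels",
}

def check_interactions(medications):
    """Screen every pair in a medication list against the interaction table."""
    alerts = []
    for pair in combinations(medications, 2):
        note = INTERACTIONS.get(frozenset(pair))
        if note:
            alerts.append((tuple(sorted(pair)), note))
    return alerts

# A Paxlovid review: nirmatrelvir/ritonavir added to an existing regimen.
regimen = ["nirmatrelvir", "ritonavir", "verapamil"]
for pair, note in check_interactions(regimen):
    print(f"ALERT {pair[0]} + {pair[1]}: {note}")
```

The pairwise screen is also why a single CYP3A4 inhibitor is so consequential: every additional medication on the patient’s list is another pair that has to be checked.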
The researchers also asked ChatGPT to provide references to support its recommendations. It did so for only eight of the 28 questions, and all eight responses contained fabricated references, with made-up titles and non-functional URLs.
In the second poster, Shunsuke Toyoda and colleagues from Torrance Memorial Medical Center in California assessed the performance of ChatGPT against a drug database and found that ChatGPT failed to detect at least half of the known side effects for 26 out of 30 FDA-approved drugs. While Toyoda is quoted as saying that AI-based drug information will not replace a pharmacist “in our lifetime”, I am not so certain, as these models are progressing very, very quickly.
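The comparison described in the poster amounts to measuring recall: treat the database’s side-effect list as ground truth and ask what fraction of it the model’s answer covers. A minimal sketch, using invented placeholder data rather than the poster’s actual drugs and results:

```python
def side_effect_coverage(model_effects, reference_effects):
    """Fraction of the reference side effects that the model mentioned."""
    return len(model_effects & reference_effects) / len(reference_effects)

# Hypothetical placeholder data -- not figures from the Torrance poster.
reference = {"headache", "nausea", "dizziness", "rash", "fatigue", "insomnia"}
model = {"headache", "nausea"}

print(f"Coverage: {side_effect_coverage(model, reference):.0%}")
# Coverage: 33% -- this drug would count as a miss under a 50% threshold
```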
In February, our own Steven Novella seemed optimistic about how long it will be before an AI like ChatGPT is a routine part of medical practice. I agree with Steve that there is enormous potential, and given the rate at which these models are evolving, perhaps these criticisms will be irrelevant in another year (or more), once these types of errors have been eliminated.
Another paper published recently perhaps offers some justification for Steven’s optimism. This paper, entitled *Effectiveness of ChatGPT in clinical pharmacy and the role of artificial intelligence in medication therapy management*, was published in the *Journal of the American Pharmacists Association* in March 2024. In this study, researchers used ChatGPT 4, the next iteration of ChatGPT, which is supposed to be more accurate and less likely to “hallucinate” than 3.5.
Researchers generated 39 patient cases: 13 simple, 13 complex, and 13 very complex. ChatGPT was asked to:
- identify the interactions between drugs, diseases, substances, and supplements;
- recommend different medications to prevent potential interactions and enhance patient outcomes; and
- devise management and monitoring plans.
Here, the findings were very different. ChatGPT solved 39/39 (100%) of the patient cases, including identifying 100% of the drug interactions, even in cases deemed “very complex” by pharmacists. While some limitations were identified in the AI’s ability to recommend alternative medical therapy, the results were still very impressive. Imagine if a health-specific AI could preferentially use validated drug information resources or other trustworthy health datasets.
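That idea, grounding a model’s answers in a curated source rather than relying on whatever it absorbed during training, is the pattern usually called retrieval-augmented generation (RAG). Here is a minimal sketch of the flow; the monograph store and the prompt-building function are hypothetical placeholders, not any real product’s API:

```python
# Minimal retrieval-augmented generation (RAG) sketch. MONOGRAPHS stands in
# for a validated drug information resource; answer_with_context() shows the
# kind of grounded prompt a real system would send to its language model.

MONOGRAPHS = {
    "ritonavir": "Potent CYP3A4 inhibitor; dose-adjust or avoid with many "
                 "CYP3A4 substrates, including verapamil.",
    "verapamil": "CYP3A4 substrate; exposure rises with CYP3A4 inhibitors.",
}

def retrieve(question: str) -> list[str]:
    """Pull monograph excerpts for any drug named in the question."""
    q = question.lower()
    return [text for drug, text in MONOGRAPHS.items() if drug in q]

def answer_with_context(question: str) -> str:
    """Build a grounded prompt; a real system would send this to an LLM
    with instructions to answer only from the excerpts provided."""
    excerpts = retrieve(question)
    return ("Answer using ONLY these excerpts:\n"
            + "\n".join(f"- {e}" for e in excerpts)
            + f"\nQuestion: {question}")

print(answer_with_context("Is it safe to take ritonavir with verapamil?"))
```

The appeal of this approach is that the model’s factual claims are anchored to a source a pharmacist can actually verify, rather than to references the model may have fabricated.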
As a Gen-X pharmacist who started practice just as the internet was arriving, I admit that I’d been looking at LLMs like ChatGPT with some skepticism. But as I’ve come to understand the rate of progress of these systems, it’s now apparent that AI could transform health care delivery, and the role of health professionals like pharmacists, far more than the internet has. My understanding of AI’s potential was too limited, and I didn’t appreciate just how quickly these systems are improving. There’s enormous potential for AI to improve the patient experience and lead to better, safer, and possibly even more cost-effective care. That may mean the traditional roles of health professionals will change.