We asked ChatGPT and Dr Google the same questions about cancer. Here’s what they said

You may have heard of ChatGPT, a type of chatbot that uses artificial intelligence (AI) to write essays, turn computer novices into programmers and help people communicate.

ChatGPT could also play a role in helping people make sense of medical information.

While ChatGPT won’t replace talking to your doctor anytime soon, our new research shows its potential to answer common questions about cancer.

Here’s what we found when we asked the same questions of ChatGPT and Google. You might be surprised by the results.



Learn more:
Dr. Google probably isn’t the worst place to get your health advice


What does ChatGPT have to do with health?

ChatGPT was trained on massive amounts of text data to generate conversational responses to text queries.

ChatGPT represents a new era of AI technology, which will be paired with search engines, including Google and Bing, to change the way we browse information online. This includes how we search for health information.

For example, you can ask ChatGPT questions such as “Which cancers are the most common?” or “Can you write me a summary in plain English of common cancer symptoms that should not be ignored?”. It produces fluent and consistent responses. But are they correct?
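For readers who want to experiment beyond the chat window, the same kind of question can be sent to the underlying model programmatically. Below is a minimal sketch assuming the OpenAI Python client; our study used the public ChatGPT web interface, and the model name shown is illustrative only.

from openai import OpenAI

# Assumes an API key in the OPENAI_API_KEY environment variable.
client = OpenAI()

# Illustrative model name; the study itself used the ChatGPT web interface.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Which cancers are the most common?"}],
)
print(response.choices[0].message.content)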



Learn more:
Bard, Bing, and Baidu: How Big Tech’s AI Race Will Transform Research and All of Computing


We compared ChatGPT with Google

Our recently published research compared how ChatGPT and Google answered common questions about cancer.

These included simple factual questions such as “What exactly is cancer?” and “What are the most common types of cancer?”. There were also more complex questions about cancer symptoms, prognosis (how a disease is likely to progress) and side effects of treatment.

For simple fact-based questions, ChatGPT provided succinct answers similar in quality to Google’s featured snippet. The featured snippet is “the answer” Google’s algorithm highlights at the top of the results page.

While there were similarities, there were also big differences between ChatGPT’s and Google’s responses. Google provided easily visible references (links to other websites) with its answers. ChatGPT gave different answers when asked the same question multiple times.

We also assessed a somewhat more complex question: “Is a cough a sign of lung cancer?”.

Google’s featured snippet said a cough that doesn’t go away after three weeks is a primary symptom of lung cancer.

But ChatGPT gave more nuanced answers. It indicated that a long-standing cough can be a symptom of lung cancer. It also clarified that coughing is a symptom of many conditions, and that seeing a doctor would be needed for a proper diagnosis.

Our clinical team thought these clarifications were important. Not only do they minimize the likelihood of unnecessary alarm, they also give users clear direction on what to do next: seek medical attention.

How about even more complex questions?

We then asked a question about the side effects of a specific cancer drug: “Does pembrolizumab cause fever and do I need to go to the hospital?”.

We asked ChatGPT this question five times and received five different answers. This is due to the randomness built into ChatGPT, which helps it communicate in a human-like way but means it can generate different answers to the same question.
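To see where that randomness comes from, consider how language models choose words: each next word is sampled from a probability distribution, often shaped by a “temperature” setting. The sketch below is a simplified illustration of that sampling step in general, not ChatGPT’s actual code; the function name and example scores are our own.

import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick one token index from model scores (logits).

    Lower temperatures make the highest-scoring token dominate;
    higher temperatures spread probability across more options,
    so repeated runs can produce different outputs.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract the max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# The same scores can produce different picks on repeated calls,
# which is why one prompt can yield five different answers.
logits = [2.0, 1.5, 0.3]
print([sample_next_token(logits, temperature=0.8) for _ in range(5)])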

All five responses recommended talking to a medical professional. But not all of them conveyed the urgency or clearly described the potential seriousness of this side effect. One response said fever was not a common side effect, but did not explicitly say it could still occur.

In general, we rated the quality of ChatGPT’s answers to this question as poor.

This is in contrast to Google, which didn’t generate a featured snippet, likely due to the complexity of the question.

Instead, Google relied on users to find the necessary information. The first link directed them to the manufacturer’s product website. This source made it clear that people should seek immediate medical attention if they develop a fever with pembrolizumab.



Learn more:
ChatGPT has many uses. Experts explore what this means for healthcare and medical research


What next?

We have shown that ChatGPT does not always provide clearly visible references for its answers. It gives varying responses to the same query and is not updated in real time. It can also deliver incorrect answers with confidence.

Bing’s new chatbot, which is different from ChatGPT and was released after we completed our study, has a much clearer and more reliable process for showing the sources behind its answers, and it aims to stay as up to date as possible. This shows how quickly this type of AI technology is developing, and the availability of increasingly advanced AI chatbots is likely to grow significantly.

However, any AI used as a virtual healthcare assistant in the future will need to communicate uncertainty about its answers, rather than invent an incorrect one, and consistently produce reliable responses.

We need to develop minimum quality standards for AI interventions in healthcare. This includes ensuring that they generate evidence-based information.

We also need to assess how AI virtual assistants are implemented to make sure they improve people’s health and don’t have any unintended consequences.

There is also the possibility that medically focused AI assistants will be expensive, which raises questions of equity and of who has access to these rapidly evolving technologies.

Finally, health professionals must be aware of these AI innovations to be able to discuss their limitations with patients.


Ganessan Kichenadasse, Jessica M. Logan, and Michael J. Sorich co-authored the original research paper mentioned in this article.
