A (human) doctor advises on using ‘Dr’ AI wisely


We should be diligent about removing personal information from the scans, lab reports and doctor’s letters we upload into AI as we don’t know how that data might be used. — Freepik

Across Malaysia, patients are quietly turning to artificial intelligence (AI) for health information.

This is happening regardless of whether doctors approve or not.

Blood test results are copied into chatbots late at night.

Symptoms are typed out when worry strikes.

Explanations are searched for before clinic visits, or after when questions remain unanswered.

This is not unexpected behaviour.

It reflects patients' unmet need for a better understanding of their health condition or illness, an understanding that may be lacking due to brief consultations or complex medical terminology.

Easy to query

Quite often, questions arise only after the consultation is over.

This is where digital tools offer reassurance, especially during moments of anxiety or uncertainty.

For some, asking an AI chatbot feels easier than speaking up during a rushed appointment.

There is no fear of sounding foolish, no pressure to keep questions short, and no embarrassment in repeatedly asking the same thing.

In daily clinical practice, this shift is already visible.

Many patients now arrive with a basic understanding of their condition.

They may know the names of their tests, the reason they were ordered or the usual treatment options.

This would have been unusual a decade ago.

In the past, most patients waited for doctors to explain everything from scratch.

Today, conversations often begin where AI explanations leave off.

When handled properly, this can lead to better discussions and more meaningful shared decisions.

Easing understanding

AI is particularly good at explaining simple medical concepts.

It can clarify what a blood test measures, what HbA1c means in diabetes, or why blood pressure control matters even when symptoms are absent.

For common long-term conditions such as diabetes, hypertension (high blood pressure), asthma or iron deficiency, AI explanations are often clear and structured.

In many cases, they are easier to understand than printed brochures.

Patients who use AI sensibly often come away with a better grasp of why tests are done and why treatment needs to continue long term.

Used appropriately, AI can be very helpful.

It can help patients prepare questions before seeing a doctor.

It can clarify laboratory or radiology reports that are usually difficult for patients to decipher.

It can explain discharge summaries or clinic letters written in technical language that laypeople find difficult to understand.

It can reinforce lifestyle advice and remind patients of warning signs to watch for.

It can provide insight and deeper understanding of their treatment.

In Malaysia, where consultation time is often limited and specialist access varies, this support can make a real difference.

Lack of context

The main limitation of AI is context.

Medicine is not just a collection of facts.

It involves judgement, probability and an understanding of the whole person.

AI struggles when symptoms are unusual, when patients have multiple medical conditions, or when treatment decisions involve trade-offs.

It may list many possible causes without explaining which are common and which are rare.

This can create unnecessary fear.

A mildly abnormal result may sound alarming even when it is clinically insignificant.

AI cannot examine a patient, read body language or understand individual circumstances.

Misunderstanding test results is another common problem.

Reference ranges are not treatment targets, but this distinction is often lost.

Different AI platforms may also give different answers to the same question, leaving patients confused about what to believe.

There is also the danger of false reassurance or unnecessary alarm.

Important symptoms may be minimised if questions are poorly framed.

At the same time, harmless symptoms may be linked to serious illnesses.

Both outcomes are problematic.

Patients may delay seeking care, or they may become anxious without reason.

Just plain wrong

Another issue patients should be aware of is something known as AI “hallucinations”.

This refers to situations where an AI system gives information that sounds confident and well-written, but is actually wrong or partly made up.

This may include incorrect explanations, outdated guidelines, or references to tests or treatments that do not exist or are not relevant locally.

The problem is not that AI is intentionally misleading, but that it is designed to produce fluent answers, even when it is uncertain.

For patients, this is risky because errors are not always obvious.

A response may look professional and reassuring, yet still be inaccurate.

This is why AI output should always be checked with your doctor, especially when it involves diagnosis, treatment changes or urgent symptoms.

AI should also not be mistaken for a true second opinion.

A medical second opinion comes from another doctor who reviews records, examines the patient and takes responsibility for the advice given.

AI does none of this.

It carries no legal or ethical accountability.

It is best seen as an educational aid.

When AI advice differs from a doctor’s recommendation, that difference should prompt discussion, not independent action.

Care about your privacy?

Privacy deserves serious attention.

Many patients upload lab reports, scans and clinic letters into AI tools without considering where that data goes.

In Malaysia, awareness of health data protection is still uneven.

Patients should assume that anything shared on a public platform may be stored or reused.

Identifying details such as names and identification numbers should be removed whenever possible.

Recognising the increasing use of AI for healthcare purposes, large AI companies such as OpenAI and Anthropic are already planning specific tools to help consumers seeking health-related information.

ChatGPT Health, for instance, safeguards patient privacy by maintaining a fully isolated environment within ChatGPT, where health-related conversations and files from connected apps such as Apple Health or medical records are kept separate from general chats and encrypted by default.

Data usage is strictly limited, with health conversations excluded from training OpenAI's foundation models by default and accessible only to a limited number of authorised personnel for safety purposes.

Aiming for better care

Responsible use of AI is straightforward.

Use it to improve understanding, not to make final decisions.

Share AI-generated questions openly with your doctor.

Never change or stop treatment based solely on what an AI tool suggests.

Clear questions help, but interpretation must come from a clinician who knows the full picture.

Doctors are already adapting to more informed patients.

Less time is spent defining basic terms, and more time is spent correcting misunderstandings and explaining why general advice may not apply to a particular individual.

This is not a threat to medicine.

It is a shift that demands better communication and clearer counselling.

There are situations where human doctors must always take over.

New diagnoses, major treatment decisions, worsening symptoms and emotionally complex situations cannot be handled by AI.

Empathy, judgement and responsibility remain human roles.

AI does not face consequences; doctors do.

AI in healthcare is not about replacing doctors.

It is about changing how information flows between patients and clinicians.

Used wisely, it can help patients understand their health better.

Used poorly, it can mislead and cause harm.

Both patients and doctors share responsibility.

The goal is not smarter technology; the goal is better care.

Dr Alan Teh is a consultant haematologist and bone marrow transplant physician with an interest in the practical use of digital tools in clinical practice. For more information, email starhealth@thestar.com.my. The information provided is for educational and communication purposes only, and should not be considered as medical advice. The Star does not give any warranty on accuracy, completeness, functionality, usefulness or other assurances as to the content appearing in this article. The Star disclaims all responsibility for any losses, damage to property or personal injury suffered directly or indirectly from reliance on such information.
