Artificial intelligence: will I be my own doctor?

Science & Medicine
In recent years, we've heard a lot about Artificial Intelligence (AI). From phones to search engines, from social networks to hospitals: AI is changing our world. In medicine, its arrival has raised many questions, especially among patients: Will AI help doctors? Will it replace them? Or will it make every patient their own doctor?

edited by Niccolò Maurizi

A recent article published in the prestigious journal Nature addresses this very topic. Let's take a look at what's happening, in simple, concrete terms.

What can Artificial Intelligence do in medicine?

AI can help medical staff better organize appointments and manage hospitals, analyze complex tests like X-rays, CT scans, and MRIs, and suggest personalized therapies based on each patient's data. The goal is not only to speed up work, but above all to improve access to care and reduce errors.

But are there also risks?

Like any powerful tool, AI also carries risks. Specifically, decisions made by AI are not always easy to explain; errors can arise from faulty data or imperfect reasoning models; and trust can be lost if we do not understand how AI reaches its conclusions. For these very reasons, the European Union has introduced the AI Act, a new regulation that carefully governs the use of AI in "high-risk" sectors such as medicine.

Artificial intelligence in medicine

What should AI that truly helps doctors look like?

The authors of the article propose an interesting idea: AI must learn to reason like a team of doctors, not like a single computer. In clinical practice, especially in complex cases like cancer, decisions are never made by a single doctor. Multiple specialists gather (as in the case of tumor boards) to discuss the data, comparing different perspectives. During these meetings, doctors don't speak in incomprehensible technical jargon, but use simple, shared concepts: the type of tumor, its stage, the presence of other health problems, the patient's age and frailty. According to the researchers, AI should learn to communicate in the same way: not just saying "I saw something unusual," but explaining how it analyzed the data and the clinical concepts it bases its recommendations on.

AI in medicine should not act like a single, isolated computer, but rather function like a team of doctors: capable of comparing data from different perspectives and, above all, of explaining itself clearly and understandably.

What are the most promising tools?

One of the most interesting approaches is called the Concept Bottleneck Model (CBM). These AI models do more than just give an answer: they also expose the "stages" of reasoning that led to it. For example, an AI that helps diagnose lung cancer will not only say "there is cancer," but also:

  • There is a mass in that area.
  • The mass has certain characteristics of shape and color.
  • Other clinical factors of the patient suggest a certain type of tumor.

In this way, the doctor (and the patient too) can better understand the proposed decision and have more confidence in it.
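For readers curious about what "concepts first, answer second" means in practice, here is a minimal sketch of the CBM idea in Python. Everything in it (the feature names, the thresholds, the concepts, the decision rule) is invented purely for illustration; a real CBM is a trained neural network, not hand-written rules. What the sketch preserves is the structure: the final answer is computed only from human-readable concepts, so a doctor can inspect, and even correct, each step.

```python
# Illustrative sketch of a Concept Bottleneck Model (CBM).
# All names and thresholds below are hypothetical, chosen only to show
# the two-stage structure: raw data -> named concepts -> final answer.

def predict_concepts(scan):
    """Stage 1: map raw input data to human-readable clinical concepts."""
    return {
        "mass_present": scan["density"] > 0.6,
        "irregular_shape": scan["edge_irregularity"] > 0.5,
        "large_size": scan["size_mm"] > 30,
    }

def predict_diagnosis(concepts):
    """Stage 2: the answer depends ONLY on the concepts (the 'bottleneck')."""
    score = sum(concepts.values())  # count how many concepts are present
    if score >= 2:
        return "suspicious - refer to specialist"
    return "likely benign - routine follow-up"

def explain(scan):
    """Return both the answer and the reasoning stages, as a CBM does."""
    concepts = predict_concepts(scan)
    return {"concepts": concepts, "diagnosis": predict_diagnosis(concepts)}

# Hypothetical scan: dense, irregular, but small.
example = {"density": 0.8, "edge_irregularity": 0.7, "size_mm": 12}
result = explain(example)
```

Because the answer flows through the named concepts, a doctor who disagrees with one of them (say, "irregular_shape") can flip that single concept and see how the recommendation changes; this is exactly the kind of transparency and correctability the article's authors call for.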

Will AI really replace doctors?

No. At least not in the future being built now.
The idea is for AI to become an aid, not a substitute. The most advanced systems will be designed to work alongside doctors, always leaving the final say to the human doctor, and to be transparent and correctable: if they make mistakes, the doctor will be able to intervene and correct them.

“The doctor will not be replaced: their role will become even more central.”

Will I be my own doctor?

Another common fear is that, with AI, patients will have to do everything on their own.
Here too, the answer is reassuring: no. AI will be able to help monitor health (with apps, smartwatches, and rapid diagnostic tools), but the doctor's role will remain crucial in interpreting data, recommending treatments, and providing emotional support to the patient.
Indeed, precisely because there will be more information available, the doctor will be even more important in helping the patient navigate the many options.

The problems to deal with

However, for all this to work, it will be necessary to work on some key points:

  • Education: doctors, engineers, and patients will have to learn to understand and use AI correctly.
  • Collaboration: mixed teams of doctors, technologists, and legal experts are needed to develop truly useful tools.
  • Clear rules: like the new European AI Act, to ensure security and transparency.
  • Human control: AI must always be supervised by competent people.

Artificial Intelligence in medicine is not an enemy, but a tool. If done well, it can help doctors work better and patients receive more precise and humane care.
The doctor will not be replaced: their role will change, becoming more central in coordinating information, choosing the best treatments, and accompanying the patient on their health journey.
We patients will not be alone in front of a machine: we will continue to find a real person ready to listen to us and guide us.
Technology changes, but care remains human!

“AI is not an enemy, but a tool: control always remains human.”

References:

Banerji CRS, Chakraborti T, Ismail AA, Ostmann F, MacArthur BD. Train clinical AI to reason like a team of doctors. Nature. 2025 Mar;639(8053):32-34.