The French High Authority for Health has recently approved the use of generative AI in hospitals across France. With the rise of technologies like ChatGPT and Mistral, the authority has issued guidelines to govern this promising yet risky practice, emphasizing the importance of verification and confidentiality.
A Tool for Improvement in Healthcare
Artificial intelligence, especially generative tools such as ChatGPT, Mistral, and Copilot, is quickly becoming widespread across various sectors, including healthcare. Observing this trend, the French High Authority for Health has endorsed their use within hospitals, acknowledging that these systems could serve as a tool for improvement within the healthcare system. The High Authority believes AI could help in particular by saving healthcare providers time, enhancing the quality of their work, and supporting their professional practice, for example by synthesizing scientific literature or managing hospital resources.
Medical Confidentiality and AI ‘Hallucinations’: Critical Concerns
Despite its enthusiasm, the French High Authority for Health remains extremely cautious. The authority stresses the need for a measured and careful approach, as these tools come with risks that must be controlled. The main hazard identified is the potential for AI 'hallucinations,' in which the system presents completely false information as factual. Another significant concern is confidentiality: the authority emphasizes that no information covered by medical confidentiality, and nothing that could identify a patient, should be disclosed in prompts.
The A.V.E.C. Method to Guide Usage
To assist healthcare providers, the authority has published a concise guide structured around the somewhat quaint acronym A.V.E.C. (French for 'with'): A for learning (apprendre) to master the tool and its limitations; V for consistently verifying (vérifier) the sources and the generated content; E for assessing (évaluer) the tool's relevance in daily practice; and C for communicating (communiquer) with patients and colleagues about its use. The authority also insists that doctors should not rely entirely on AI, so as not to dull their own skills, citing the example of a triage nurse who must be able to work without the tool.
What’s the Verdict?
The French High Authority for Health has shown pragmatism: rather than banning a tool that many already use, it aims to regulate its use. A 2024 Ifop survey showed that one in five doctors had already used AI for interpretation, and that more than half were open to it, so such an approach seems necessary. The real challenge lies not in the guide itself but in actually training healthcare providers and implementing technical safeguards. AI is a powerful ally for sorting information, but the risk of a rushed diagnosis born of blind trust in an AI 'hallucination' is real. And you, how do you feel knowing that your doctor might use ChatGPT to treat you?