How AI is entering EU clinics: Key findings from the first WHO Europe report

Author: Tatyana Hurynovich


On April 19–20, 2026, WHO Europe and the European Commission released a report titled "Artificial intelligence is reshaping health systems: state of readiness across the European Union." This publication provides the first systematic overview of how European Union member states are integrating artificial intelligence into healthcare and evaluates the preparedness of their systems for such implementation. The study is based on data collected between 2024 and 2025 and covers the majority of EU member states.

According to the report, 74% of EU countries are already utilizing AI-assisted diagnostics. These systems primarily focus on analyzing medical imagery—such as X-rays, CT scans, and MRIs—while also assisting physicians in data interpretation and clinical decision-making. In many nations, these tools are operating in experimental, pilot, or partially routine capacities, yet they are already integrated into the workflows of diagnostic departments.

Simultaneously, 63% of EU countries use chatbots for patient interaction. These systems help answer basic questions, direct patients to the appropriate specialists, streamline appointment scheduling, and alleviate the burden on primary care. However, the report does not claim that chatbots are replacing human doctors; instead, it emphasizes their role as supportive tools during the initial stages of patient contact.

At the same time, another clear trend has emerged: only 8% of EU countries have established dedicated national AI strategies specifically for healthcare. Most states currently rely on broader national or regional digitalization and AI frameworks rather than developing specialized, in-depth programs for medicine. The report highlights that this lack of focus creates gaps in coordination, regulation, and risk assessment.

WHO Europe and the EU attribute the rise of AI in medicine to the aftermath of the COVID-19 pandemic, which accelerated digitalization and increased the strain on healthcare systems. With facilities overwhelmed and staff in short supply, AI tools are viewed as a way to speed up diagnostics, reduce physician workloads, and improve service accessibility in remote areas. Within this same context, the report notes that AI implementation must be coupled with infrastructure improvements, staff training, and robust data protection.

An assessment of the regulatory landscape was a key component of the report. It emphasizes that the introduction of the EU AI Act in 2024 established a common legal framework for AI within the EU, covering high-risk sectors such as healthcare. The document notes that this framework allows technological advancement to be combined with oversight of safety, transparency, and ethics. Nevertheless, individual countries and regions point out that meeting the law's requirements demands additional resources and time.

The report also identifies three primary categories of benefits provided by AI in healthcare. The first is the quality of medical care, including faster image analysis, support in detecting pathologies at early stages, and the standardization of clinical approaches. The second is accessibility and equity, expanding coverage particularly in remote and rural areas where specialists are scarce. The third is system efficiency, which involves reducing staff burden through the automation of routine procedures and the optimization of logistics and documentation.

However, the report does not shy away from the downsides. Among the key risks identified by WHO Europe experts and their partners are potential algorithmic errors, especially when models are tested on limited datasets or applied in unconventional clinical scenarios. It is emphasized that issues regarding transparency and explainability remain critical, as many AI systems are still difficult for both doctors and patients to interpret.

Furthermore, the document specifically addresses the need for ethics, privacy, and accountability. It notes that any decisions made with AI involvement must be clear, controllable, and compatible with patient rights. The report highlights that several countries are already launching projects to monitor the impact of AI systems while strengthening testing and certification requirements.

A dedicated section of the report focuses on international and cross-sector cooperation. It points out that in several countries, UNICEF and other global partners are involved in projects to implement AI solutions in healthcare, particularly in resource-limited regions. The document stresses that sharing expertise, data, and standards between nations can accelerate the development of safe and effective solutions, though it requires unified approaches to transparency and responsibility.

At the same time, the report itself lacks some of the phrasing often found in media coverage. For instance, there is no generalized claim by WHO Europe regarding "90%+ AI accuracy"; specific accuracy figures depend on individual systems, tasks, and studies rather than being presented as a universal standard. Similarly, the document contains no direct assertion that AI "reduces errors by 30%"; such a generalized figure is not included in the report, though such effects may be observed in specific pilot projects.

Ultimately, the report both records current progress and highlights risks, gaps, and the need for further work. It demonstrates that AI is already becoming part of everyday healthcare practice in most EU countries, but a significant journey remains before full integration and mature regulation are achieved across all areas.
