UNESCO AI Safety Conference: Key Takeaways for Healthcare and Life Sciences

Author: Celia Wilson, Global AI Performance Manager
Published on: 11/03/2026
 

Thought Leadership

AI Without the Neon Lights

Reflections from the International Conference for Safe and Ethical AI at UNESCO

 
 

No loud neon signs.
No branded giveaways.
And certainly no overly enthusiastic salespeople trying to demo their shiny new AI.

Instead, the halls of UNESCO were filled with flags from every nation, quiet intensity, and conversations that moved far beyond product features. We heard discussions about life, death, national security, social cohesion, and even equations. Real equations, used to confront the challenges of today and those coming at us faster than many expect.

 

International Conference for Safe and Ethical AI at UNESCO

UNESCO 2026 International Conference on Safe and Ethical AI

Held a couple of weeks ago, the International Conference for Safe and Ethical AI was nothing like the usual pay-to-play tech gathering. It was an invitation-only event featuring UNESCO’s Director-General, Turing Award laureates, leading researchers, and policymakers from around the world.

Excelya was the only CRO invited, and the only representative of clinical-trial-related AI in healthcare. That came as a genuine surprise. Many vendors and consultancies in pharma claim their AI is safe and ethical. In reality, meeting that standard requires real expertise, scientific rigor, and continuous commitment.

 
 
Personal Reflection

A Personal Moment of Validation and Challenge

 
 
 

On a personal level, the conference was both validating and challenging. In a field where the frontier is constantly moving, it is easy to feel behind, as though your best work is never quite enough.

That is why it was genuinely meaningful to see AI architectures I have built presented as best in class for grounding outputs.

But the conference was also deeply challenging. Over three days, the conversations extended far beyond medicine and regulation. We heard about:

  • Child safety and cognitive development,
  • The implications of AI in warfare and national defense,
  • Economic and environmental disruption, even from highly optimized AI systems.

 
Key Insights

Two High-Impact Takeaways Worth Sharing

 

There were dozens of papers and posters, far too many to cover in a single blog post. But two clear, high-impact insights stayed with me.

Insight 1

Technical Insight: A Simple Twist on LLM-as-Judge with Outsized Value


One effective method for evaluating the quality of AI outputs, raised by OpenAI researchers in discussions around superintelligence and steganography, is what they described as “a spin on classic LLM-as-judge”.

Have two independent reasoning models answer one question:

“If you were the system, would you have produced this result?”

Each model can respond yes, no, or abstain.

It is simple, practical, and more robust than rule-based evaluation alone. For teams building agentic systems, it is an elegant addition to any quality framework.
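To make the idea concrete, here is a minimal Python sketch of how such a two-judge check might be wired up. The names (ask_model, judge_verdict, evaluate_output) and the exact prompt wording are illustrative assumptions, not the implementation the researchers presented; any two independent reasoning models can be plugged in as the judges.

```python
# Sketch of the two-judge variant of LLM-as-judge described above.
# All names and the prompt wording are illustrative assumptions.
from typing import Callable, Literal

Verdict = Literal["yes", "no", "abstain"]

JUDGE_PROMPT = (
    "You are reviewing the output of another AI system.\n"
    "Task given to the system:\n{task}\n\n"
    "Output produced:\n{output}\n\n"
    "If you were the system, would you have produced this result? "
    "Answer with exactly one word: yes, no, or abstain."
)

def judge_verdict(ask_model: Callable[[str], str], task: str, output: str) -> Verdict:
    """Ask one reasoning model for its verdict and normalise the reply."""
    reply = ask_model(JUDGE_PROMPT.format(task=task, output=output)).strip().lower()
    return reply if reply in ("yes", "no", "abstain") else "abstain"

def evaluate_output(judges: list[Callable[[str], str]], task: str, output: str) -> dict:
    """Collect verdicts from independent judges and flag anything not unanimously accepted."""
    verdicts = [judge_verdict(j, task, output) for j in judges]
    return {
        "verdicts": verdicts,
        "accepted": all(v == "yes" for v in verdicts),
        # Anything short of unanimous "yes" is routed to a human or a rule-based check.
        "needs_review": any(v != "yes" for v in verdicts),
    }
```

In practice, the value comes from the independence of the two judges and the abstain option: disagreement or abstention becomes a cheap signal for escalating an output rather than silently accepting it.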

 
Insight 2

Child Safety: Don’t Expose Children to AI “Companions”


This topic landed heavily with the audience.

Neurologists and child-protection experts explained that trust in children is built through micro-interactions, and that the personification of AI, especially through friendly voices, avatars, or conversational cues that mimic friendship, has a stronger neurological effect on people under 25 than on adults.

That effect peaks during the teenage years, when the brain is still learning boundaries through human, and sometimes antagonistic, social interactions.

The result:

Children influenced by AI “companions” are more likely to follow harmful suggestions, from impulsive purchases and social withdrawal to, in extreme cases, self-harm and even death.

This is not speculative harm. It is already happening in clinical and child-protection settings.

There is no justification for children to be exposed to AI-driven toys, AI “friends,” or AI tutors, especially in the home environment.

I am proud that Excelya takes this seriously. We will soon launch optional internal sessions to support parents and schools with practical guidance on helping children navigate AI safely.

Why Excelya’s Approach to AI Deserves Trust

Excelya takes the implications of AI seriously. In doing so, we have built real-world expertise grounded in rigor, responsibility, and practical application.

Our ambition is to develop AI capabilities that are not only innovative and high impact, but also genuinely aligned with human needs. In life sciences, that standard matters even more. The technologies we build and use must support better decisions, stronger research, and ultimately better outcomes for patients.

In a world where AI is shaping healthcare at unprecedented speed, Excelya is building the kind of expertise that deserves to be trusted.
