Personalised Lighting with Machine Learning
Product and UX lead on an EU-funded research project applying machine learning to personalise health-promoting lighting for individual patients — navigating the design challenges of AI-driven behaviour in a clinical setting.
- Role: Lead Product Experience Engineer
- Year: 2022–2026
- Organisation: Chromaviso A/S
Outcome
Contributed to a validated research framework for ML-based lighting personalisation, with direct product implications for Chromaviso's next-generation platform.
Health-promoting lighting works better when it’s personalised. A circadian light protocol optimised for the average patient is less effective than one calibrated to an individual’s sleep pattern, light sensitivity, and clinical condition. The research case for personalisation is clear. The product and UX case is harder.
This EU-funded research project gave me the opportunity to work at the intersection of machine learning and clinical product design — in a domain I’d spent years understanding from the ground up.
The problem space
Personalised lighting in a clinical setting raises questions that don’t arise in consumer personalisation:
Transparency — a patient has a right to understand why their lighting is behaving as it is. An opaque ML system that silently adjusts their light exposure is a harder ethical position than a fixed protocol. How do you surface enough of the model’s reasoning without overwhelming clinical staff with ML internals?
Trust and override — clinical staff will override any automated system when they judge it necessary. How do you design the override interaction so it provides useful signal back to the model rather than just breaking the feedback loop? (One possible shape for that signal is sketched below.)
Individual vs. ward — personalising lighting for one patient in a shared room affects other patients. The model has to work at the room level, not just the individual level. This is a design constraint with no good analogue in consumer personalisation.
Data and privacy — the model learns from patient behaviour, which means handling sensitive health data with appropriate care. This shaped both the system architecture and the interface design around consent and data visibility.
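To make the override question concrete, here is a minimal sketch of how a staff override might be captured as structured feedback rather than a silent interruption. It is illustrative only: the `OverrideEvent` type, its field names, and the reason taxonomy are assumptions made for this page, not Chromaviso's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class OverrideReason(Enum):
    """Hypothetical reason codes staff could pick in a single tap."""
    PATIENT_REQUEST = "patient_request"
    CLINICAL_PROCEDURE = "clinical_procedure"
    TOO_BRIGHT = "too_bright"
    OTHER = "other"


@dataclass
class OverrideEvent:
    """One staff override, recorded as a labelled signal for the model."""
    room_id: str                  # room-level, since lighting is shared
    patient_id: str               # pseudonymous identifier
    model_setting: dict           # what the model had chosen (e.g. lux, colour temperature)
    staff_setting: dict           # what staff changed it to
    reason: OverrideReason        # cheap for staff to give, valuable as a label
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

The design point is the `reason` field: a one-tap reason code distinguishes "the model chose wrongly" from "a procedure needed task lighting", so the model can learn from the former and discount the latter, instead of treating every override as an error.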
My role
As the product and UX lead on the project, I worked alongside researchers and ML engineers to:
- Define the interaction model between the ML system and clinical staff — what the system communicates, when it intervenes, and how overrides are handled
- Design the feedback mechanisms that allow the model to learn from clinical staff behaviour without creating perverse incentives
- Conduct user research with clinical staff around their attitudes toward automated lighting systems — their trust thresholds, their override habits, and their information needs
- Prototype and test interfaces for communicating model state and confidence to non-technical users (one approach is sketched below)
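As an illustration of that last point, here is a minimal sketch of mapping raw model confidence to plain-language states a nurse can read at a glance. The thresholds, the `confidence_label` function, and the wording are hypothetical; the actual design work was in finding bands and phrases that clinical staff found meaningful.

```python
def confidence_label(confidence: float, overrides_last_24h: int) -> str:
    """Map raw model confidence to plain language for clinical staff.

    Thresholds and wording are illustrative assumptions, not the
    project's actual bands.
    """
    if overrides_last_24h >= 3:
        return "Adjusting to recent staff changes"
    if confidence >= 0.8:
        return "Following this patient's learned pattern"
    if confidence >= 0.5:
        return "Still learning this patient"
    return "Using the standard ward protocol"
```

The intent is that staff reason in states ("still learning", "following a pattern") rather than probabilities: raw numbers tend to invite either over-trust or dismissal, while a small state vocabulary supports the calibrated trust described below.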
What this project clarified about AI product design
The core design challenge in AI-powered clinical tools is calibrated trust — exactly the problem I’d explored academically in my earlier research on decision aids. The question isn’t “how do we get users to trust the system?” It’s “how do we get users to trust it at the right level — enough to let it work, not so much that they stop paying attention?”
That framing — designing for appropriate trust, not maximum trust — is the most important thing I’ve taken from this work, and it applies far beyond lighting.