Applying ISO 14971 to IVD Devices
A low-light tube luminometer is, on the surface, a straightforward instrument. A sample tube is placed in a dark enclosure, light emission is collected, converted to current by a photodiode, integrated over time, digitised, and reported as a number. There are no moving parts, no high voltages, no obvious safety hazards.
And yet, from an ISO 14971 perspective, it is a deceptively risky device.
The primary hazard is not electrical or mechanical. It is the possibility that the instrument produces a stable, plausible light reading that no longer corresponds to reality — and does so without any indication to the user.
Defining the Hazard Properly
The first step in applying ISO 14971 meaningfully was to explicitly define incorrect light measurement within a plausible range as a hazardous situation. Not a defect, not a nuisance, but a hazard. This reframing changed everything that followed.
Once this was written down, it became clear that many traditional controls — power-on self-tests, checksum validation, pass/fail production tests — were irrelevant. The instrument could pass all of them and still quietly drift.
The Measurement Chain as a Risk Surface
In a tube luminometer, the measurement chain spans optics, analogue electronics, timing, firmware, and interpretation. Risk emerges not at the weakest component, but at the interfaces.
For example, the photodiode front end operated at femtoampere to picoampere levels. Early prototypes relied on software offset subtraction to correct slow drift. On paper, this worked. In practice, small leakage paths formed over time due to contamination and humidity. The leakage was stable, temperature-correlated, and well within the dynamic range of the ADC. The result was a clean baseline shift that software happily subtracted — until it didn’t.
The solution was not a better algorithm. It was to redesign the analogue front end so that leakage currents were physically suppressed rather than mathematically removed. Guarded high-impedance nodes, conservative spacing, removal of solder mask in critical areas, and material choices that reduced moisture absorption all pushed leakage far enough below the noise floor that offset subtraction became a minor correction rather than a crutch. Under ISO 14971, this was an inherent risk control: the hazardous situation became much harder to realise.
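The failure mechanism is easy to sketch numerically. The leakage model, temperatures, and magnitudes below are illustrative assumptions, not measured values; the point is that a stable, temperature-correlated leakage path survives a one-time dark-offset subtraction and reappears as a plausible bias rather than as noise:

```python
def leakage_fa(temp_c):
    """Hypothetical contamination leakage path, in femtoamperes.
    Stable and temperature-correlated (roughly doubling per 10 degC) --
    exactly the behaviour that made it hard to spot."""
    return 20.0 * 2 ** ((temp_c - 25.0) / 10.0)

def corrected_fa(true_signal_fa, temp_c, dark_offset_fa):
    """Software offset subtraction: raw reading minus a dark baseline
    captured once, at calibration time and temperature."""
    raw = true_signal_fa + leakage_fa(temp_c)
    return raw - dark_offset_fa

dark_offset = leakage_fa(25.0)  # dark baseline taken at 25 degC calibration

at_cal = corrected_fa(100.0, 25.0, dark_offset)  # 100.0 fA: correct
warmer = corrected_fa(100.0, 35.0, dark_offset)  # 120.0 fA: plausibly wrong
print(at_cal, warmer)
```

Physically suppressing the leakage path shrinks `leakage_fa` toward zero, so the same subtraction becomes a minor correction rather than a load-bearing one.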
Optical Stability as a Risk Control, Not a Performance Detail
Optically, the original design assumed that the LED excitation source and the tube-to-detector coupling were stable between calibrations. Field data proved otherwise. LED output degraded slowly, and tube holders showed subtle wear and contamination.
The critical insight was that optical degradation does not announce itself. It masquerades as a change in sample emission.
The redesigned system introduced a reference optical path that sampled the excitation source independently of the sample measurement path. This reference was deliberately made insensitive to the sample geometry but sensitive to source ageing. When the reference and measurement channels drifted together, the system flagged loss of validity rather than silently correcting the result.
This did not eliminate recalibration, but it transformed recalibration from a blind schedule into a risk-based action. From an ISO 14971 standpoint, detectability improved dramatically.
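A minimal sketch of that validity check, with an assumed 5 % drift limit and invented channel names:

```python
def drift(now, at_cal):
    """Fractional change of a channel since calibration."""
    return (now - at_cal) / at_cal

def assess(ref_now, meas_now, ref_cal, meas_cal, limit=0.05):
    """The reference channel sees source ageing but not the sample.
    If it drifts beyond the limit in the same direction as the
    measurement channel, declare loss of validity instead of
    silently rescaling the result."""
    ref_d = drift(ref_now, ref_cal)
    meas_d = drift(meas_now, meas_cal)
    if abs(ref_d) > limit and ref_d * meas_d > 0:
        return "INVALID: recalibrate"
    return "VALID"

print(assess(ref_now=1.00, meas_now=0.98, ref_cal=1.00, meas_cal=1.00))  # VALID
print(assess(ref_now=0.90, meas_now=0.91, ref_cal=1.00, meas_cal=1.00))  # INVALID
```

The design choice worth noting is that correlated drift is treated as evidence about the optical path, not as an error term to be divided out.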
Timing and Digital Quiet: Avoiding Believable Bias
Another failure mode emerged from timing interactions. Digital activity — USB communication, background processing — was phase-locked to the integration window. This introduced small, repeatable biases without increasing noise.
The solution was architectural rather than algorithmic. Integration windows were synchronised to a clean reference clock, and digital activity was explicitly frozen during sensitive analogue periods. This reduced peak throughput slightly, but it ensured that residual errors were stable and characterisable rather than firmware-dependent. Again, the aim was not perfection, but observability.
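The insidiousness of phase-locked interference can be reproduced numerically. In the sketch below (disturbance amplitude, period, and window length are all illustrative), the same small periodic disturbance is integrated with the window either phase-locked or randomly phased; locking produces a repeatable bias with zero run-to-run scatter, which is precisely what makes it believable:

```python
import math
import random
import statistics

SIGNAL = 1.0
AMPLITUDE = 0.05  # small coupled digital disturbance
PERIOD = 8        # samples per disturbance cycle
N = 100           # samples per window (not a whole number of cycles)

def window_mean(phase):
    """Mean over one integration window of signal + periodic disturbance."""
    return statistics.fmean(
        SIGNAL + AMPLITUDE * math.sin(2 * math.pi * k / PERIOD + phase)
        for k in range(N)
    )

random.seed(0)
locked = [window_mean(0.0) for _ in range(200)]
free = [window_mean(random.uniform(0, 2 * math.pi)) for _ in range(200)]

# Phase-locked: every run identical (no extra noise), but offset from SIGNAL.
# Random phase: the same disturbance at least shows up as visible scatter.
print(statistics.pstdev(locked), statistics.fmean(locked) - SIGNAL)
print(statistics.pstdev(free))
```

Synchronising the window to a clean clock and freezing digital activity removes the uncontrolled disturbance entirely, leaving residual errors that are fixed and characterisable rather than dependent on whatever the firmware happened to be doing.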
Designing Degradation to Be Visible
One of the most valuable ISO 14971-driven changes was the addition of measurements that were never shown to the user. Dark measurements taken under controlled conditions, bias current monitors, and internal reference checks were logged and trended over time.
None of these corrected the reported luminescence value directly. Their role was to reveal when assumptions were no longer holding. When dark current crept upward or reference divergence exceeded limits, the instrument stopped asserting validity.
This was a conscious decision to fail honestly. A temporarily unavailable result is inconvenient. A confidently wrong one is dangerous.
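The gating logic can be sketched in a few lines; the metric names and limits below are invented for illustration:

```python
def result_status(dark_history_fa, ref_divergence, dark_limit_fa=50.0,
                  trend_limit_fa=2.0, divergence_limit=0.05):
    """Decide whether the instrument may assert a valid result.
    None of these metrics corrects the reading; they only decide
    whether the reading is allowed out the door."""
    latest = dark_history_fa[-1]
    # Crude trend: mean step per measurement over the logged history
    trend = (dark_history_fa[-1] - dark_history_fa[0]) / (len(dark_history_fa) - 1)
    if latest > dark_limit_fa:
        return "WITHHELD: dark current above limit"
    if trend > trend_limit_fa:
        return "WITHHELD: dark current trending upward"
    if abs(ref_divergence) > divergence_limit:
        return "WITHHELD: reference divergence"
    return "VALID"

print(result_status([20.0, 21.0, 20.5, 22.0], ref_divergence=0.01))  # VALID
print(result_status([20.0, 30.0, 42.0, 55.0], ref_divergence=0.01))  # WITHHELD
```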
Output Presentation as a Safety Decision
Finally, output presentation was reconsidered. Earlier versions aggressively averaged readings to present impressively stable numbers. The revised design preserved a small amount of visible variability and exposed confidence indicators derived from internal health metrics.
Users initially worried about “noisier” output. In practice, they quickly learned to trust a system that occasionally admitted uncertainty more than one that never did.
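One way to derive such a confidence indicator is shown below; the metric names and scaling are illustrative assumptions, not the shipped design:

```python
def report(reading_rlu, health_margins):
    """Pair the raw reading with a confidence score derived from internal
    health margins (1.0 = full headroom, 0.0 = at limit). Worst-case
    margin rather than an average: one sick subsystem should drag
    confidence down on its own."""
    confidence = max(0.0, min(1.0, min(health_margins.values())))
    return {"value_rlu": reading_rlu, "confidence": round(confidence, 2)}

print(report(1234.0, {"dark": 0.9, "reference": 0.8, "noise": 0.95}))
# confidence is set by the worst margin (here the reference channel)
```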
The Outcome
None of these changes dramatically improved headline sensitivity. Some increased BOM cost. Several added design complexity.
But together, they transformed the risk profile of the instrument. Incorrect readings became rarer, and — crucially — much harder to hide. Under ISO 14971, residual risk was no longer dominated by silent measurement error but by detectable, manageable conditions.
Looking back, the most important lesson was this: in low-light luminometry, correctness is not a single calibration event. It is a behaviour over time. ISO 14971 is valuable precisely because it forces engineers to design for that behaviour, not just for specifications.
An instrument that tells the truth is not one that never drifts.
It is one that makes drift impossible to ignore.