Stefano Gaburro, PhD, CCC on ECG Software Risk
Expanding Stefano Gaburro, PhD, CCC's viral post on why ECG software drives arrhythmia detection, QTc decisions, and strategy.
Stefano Gaburro, PhD, CCC recently shared something that caught my attention: "Your preclinical ECG analysis software doesn't just process data. It decides what you can see." He also called out the common habit of treating the choice between ecgAUTO (emka TECHNOLOGIES), Ponemah (Data Sciences International), and NOTOCORD-hem (Instem) as "a procurement exercise" where you "pick whatever matches your telemetry hardware" and move on.
I agree with the provocation. In preclinical cardiac safety, analysis software is not a neutral accessory. It is part of the measurement system. And when the measurement system changes, the scientific conclusion can change with it.
Below, I want to expand on what Stefano is really pointing to: software selection is a risk decision that can alter arrhythmia detection rates, shift your confidence in QTc findings, and even influence whether a program triggers additional clinical work under evolving regulatory expectations.
The uncomfortable truth: software choices change the answer
Stefano highlighted a data point that should make any safety pharmacology leader pause:
- Snapshot ECG analysis detects premature ventricular beats in about 0.16% of subjects.
- Continuous 24-hour analysis with the right software detects them in roughly 19-26%.
That difference is not a small uplift. It is a fundamentally different view of the animal's rhythm burden.
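A simple back-of-the-envelope model shows why window length alone can produce gaps this large. The sketch below assumes ectopic beats arrive as a homogeneous Poisson process, which is a deliberate oversimplification (real ectopy clusters and follows circadian rhythm), and the burden figure of 5 beats per day is hypothetical:

```python
import math

def detection_probability(events_per_day: float, window_seconds: float) -> float:
    """Probability that at least one event falls inside the analysis window,
    assuming events arrive as a homogeneous Poisson process. A deliberate
    simplification: real ectopy clusters and varies over the day, which is
    part of why continuous analysis matters."""
    rate_per_second = events_per_day / 86400.0
    return 1.0 - math.exp(-rate_per_second * window_seconds)

# Hypothetical burden: 5 premature ventricular beats over 24 hours
p_snapshot = detection_probability(5, 30)       # ~0.002 for a 30-second strip
p_continuous = detection_probability(5, 86400)  # ~0.993 for a 24-hour record
```

Even this toy model puts a 30-second strip in the sub-1% range while a full-day record approaches certainty, so the reported snapshot-versus-continuous gap is not surprising; it is close to what the geometry of the observation window predicts.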
If you have ever had a debate like "Did we really see ectopy?" or "Is that finding real or just noise?" then you already understand the stakes. The tool you use defines:
- What gets detected (sensitivity)
- What gets labeled as an event (classification rules)
- What gets presented for human review (workflow and triage)
- What gets summarized in a report (metrics and default settings)
When people say "the data is the data," they are often forgetting that a large portion of "data" in ECG analysis is the result of detection and classification algorithms. Software does not only show the signal. It interprets it.
Snapshot vs continuous: not just time, but context
On paper, snapshot ECG and continuous ECG are both "ECG." In practice, they answer different questions.
What snapshot ECG tends to do well
Snapshot (or short-epoch) analysis is often used because it is operationally simpler:
- Shorter review time
- Lower data volume and storage burden
- Easier standardization across studies
- Fewer moving parts in the pipeline
It can be appropriate when you are primarily focused on steady-state intervals (PR, QRS, QT) under controlled conditions.
What continuous analysis adds
Continuous 24-hour analysis adds context and captures reality:
- Circadian effects (autonomic tone changes)
- Transient arrhythmias that would never appear in a 10-30 second window
- Drug effects that occur only at specific exposure windows
- Event clusters (burden) rather than isolated beats
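Burden, in particular, is a time profile rather than a single number. A minimal sketch of the idea (illustrative only; `hourly_burden` is a hypothetical helper, and real pipelines also track beat types and rates):

```python
from collections import Counter

def hourly_burden(event_times_s):
    """Bin detected ectopic-beat timestamps (seconds from the start of a
    24-hour record) into hourly counts. A burden profile exposes event
    clustering and circadian structure that no single snapshot can show."""
    counts = Counter(int(t // 3600) % 24 for t in event_times_s)
    return [counts.get(h, 0) for h in range(24)]
```

Two records with the same total beat count can have very different profiles, for example a flat scatter versus a tight cluster during one exposure window, and only the latter points at a drug effect.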
Stefano's point is that when you switch from snapshot to well-executed continuous analysis, you are not merely "upgrading." You are asking, and answering, a different regulatory question: what is the true arrhythmia burden, and how confidently can we quantify it?
QTc sensitivity now has strategic consequences
Another part of Stefano's post that deserves expansion is the regulatory implication. He connected preclinical QTc sensitivity to the updated ICH E14/S7B Q&As and the concept often described as the "double negative."
In simple terms, the direction of travel in modern cardiac safety is toward integrated evidence: if preclinical and early clinical data both fail to show a concerning signal (the double negative), you may reduce the need for a standalone Thorough QT (TQT) study for some programs.
That makes the quality and sensitivity of your preclinical QTc package more than a scientific concern. It becomes a development lever. If your tools under-detect or poorly quantify effects, you may:
- Miss a signal you should have managed earlier, increasing downstream risk
- Or fail to build a strong enough negative package, increasing the chance you still need additional clinical work
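It helps to remember that QTc is itself a derived quantity: the measured QT interval is corrected for heart rate, commonly with Fridericia's or Bazett's formula, so any systematic error in interval detection propagates straight into the corrected value. A minimal sketch (the correction formulas are standard; the example values are hypothetical):

```python
def qtc_fridericia(qt_ms: float, rr_ms: float) -> float:
    """Fridericia correction: QTcF = QT / (RR in seconds) ** (1/3)."""
    return qt_ms / (rr_ms / 1000.0) ** (1.0 / 3.0)

def qtc_bazett(qt_ms: float, rr_ms: float) -> float:
    """Bazett correction: QTcB = QT / sqrt(RR in seconds)."""
    return qt_ms / (rr_ms / 1000.0) ** 0.5

# At RR = 1000 ms (60 bpm) both corrections leave QT unchanged; away from
# 60 bpm, every millisecond of QT measurement error shifts QTc directly.
```

If the software's fiducial-point detection is biased by even a few milliseconds, that bias survives the correction and lands in the summary statistics your negative package rests on.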
Either way, the software choice can materially affect timelines, cost, and regulatory confidence.
Why pattern recognition can beat attribute rules
Stefano also noted that pattern recognition approaches can outperform pure attribute-based methods by an order of magnitude for arrhythmia burden accuracy, validated against board-certified veterinary cardiologists.
That statement lands because it maps to a real technical limitation in many ECG pipelines.
Attribute-based methods (what they are)
Attribute-based classification often relies on fixed thresholds and rules tied to measurable features:
- RR variability thresholds
- QRS width cutoffs
- Morphology correlation thresholds
- Template matching rules
These can work, but they can be brittle when signals are noisy, when morphology shifts with posture, or when ectopy blends into normal variability.
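A toy illustration of that brittleness, with made-up threshold values that stand in for no real platform's rules:

```python
def classify_beat_by_rules(rr_ratio: float, qrs_width_ms: float,
                           template_corr: float) -> str:
    """Attribute-based classification: each feature is tested against a
    fixed cutoff in isolation. Thresholds are illustrative placeholders.
    A clearly premature, wide, abnormal beat trips all three rules, but
    a beat that is borderline on every feature can pass each individual
    rule and be reported as normal."""
    is_premature = rr_ratio < 0.80            # RR shorter than running average
    is_wide = qrs_width_ms > 70.0             # QRS width cutoff
    is_abnormal_shape = template_corr < 0.90  # poor match to normal template
    if is_premature and is_wide and is_abnormal_shape:
        return "ectopic"
    return "normal"
```

The failure mode is visible immediately: a beat at rr_ratio 0.81, width 72 ms, correlation 0.89 is suspicious on every axis, yet clears every threshold one at a time.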
Pattern recognition (why it matters)
Pattern recognition approaches (including advanced morphology clustering and more contextual classifiers) can incorporate:
- Beat-to-beat relationships (sequence context)
- Multiple features simultaneously (not one threshold at a time)
- Better handling of borderline cases and gradual morphology drift
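One way to see the difference is a nearest-centroid sketch that weighs all features jointly. The centroid values below are illustrative placeholders, not trained parameters, and a real system would learn them from adjudicated beats and normalize the feature scales:

```python
import math

# Illustrative class centroids in (rr_ratio, qrs_width_ms, template_corr)
# space. Placeholders only; real systems train these on adjudicated beats
# and normalize features so no single dimension dominates.
CENTROIDS = {
    "normal":  (1.00, 60.0, 0.98),
    "ectopic": (0.70, 95.0, 0.60),
}

def classify_beat_jointly(features):
    """Consider the whole feature vector at once: assign the beat to the
    nearest class centroid instead of testing one threshold at a time."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda label: distance(features, CENTROIDS[label]))
```

A beat at (0.81, 80 ms, 0.85), unremarkable on any single axis, still lands on the ectopic side because the features point that way together; that is the borderline-case behavior threshold rules struggle with.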
The practical outcome is not "fancier analytics" for its own sake. The outcome is fewer missed events and fewer false positives, plus better estimates of true burden.
And burden is what regulators and internal governance teams increasingly care about. A single isolated beat is rarely the story. The story is whether a compound meaningfully changes rhythm behavior.
Procurement is not the right frame: think system-level capability
Stefano criticized the industry habit of selecting software based on what matches the telemetry hardware and calling it done. I would push that even further: the selection should be based on system-level capability and program strategy.
Here is a more useful way to frame it.
1) Define the decision you need to support
Before comparing platforms, define what you must be able to conclude:
- Do we need high confidence in ectopy detection and burden?
- Are we likely to face close QT scrutiny (class risk, exposure margins, mechanism)?
- Do we need robust audit trails and reviewer workflows for a CRO-client chain?
2) Evaluate sensitivity, workflow, and review ergonomics
Ask questions like:
- How does the tool handle continuous data review at scale?
- What is the human-in-the-loop experience for confirming arrhythmias?
- Can you validate performance against expert adjudication?
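Validation against expert adjudication ultimately reduces to beat-by-beat agreement statistics. A minimal sketch of what that comparison computes (`adjudication_metrics` is a hypothetical helper; a real validation would also report specificity, burden error, and confidence intervals):

```python
def adjudication_metrics(software_labels, expert_labels, positive="ectopic"):
    """Compare software calls against expert adjudication beat-by-beat.
    Returns (sensitivity, positive predictive value)."""
    pairs = list(zip(software_labels, expert_labels))
    tp = sum(1 for s, e in pairs if s == positive and e == positive)
    fn = sum(1 for s, e in pairs if s != positive and e == positive)
    fp = sum(1 for s, e in pairs if s == positive and e != positive)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    return sensitivity, ppv
```

If a vendor cannot support this kind of head-to-head comparison against adjudicated records, the sensitivity claims in the datasheet are hard to act on.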
3) Consider the broader ICH S7A core battery
A nuance Stefano added is often missing from surface-level comparisons: DSI is no longer only "cardiovascular telemetry." The ecosystem spans multiple core battery endpoints (for example, Ponemah for cardiac safety, NeuroScore for CNS, and Buxco FinePointe for respiratory).
This matters because a single hardware and software ecosystem that credibly covers cardiovascular, CNS, and respiratory endpoints can change:
- Total cost of ownership
- Training burden and SOP harmonization
- Data integration across safety domains
- Vendor and qualification complexity
In other words, you are not just buying ECG software. You are choosing an operational model for generating nonclinical safety evidence.
My takeaway: treat software like a scientific instrument
The heart of Stefano's post is a warning: if you treat ECG analysis software as an afterthought, you may accidentally accept lower sensitivity, different classifications, and ultimately different answers.
"That is not a feature upgrade. That is a different answer to a regulatory question."
If you are in a position to influence tool selection, my recommendation is to bring the decision back to first principles: what are we trying to detect, what level of confidence do we need, and what downstream decisions will this evidence drive?
Because in cardiac safety, the tool does not just process the signal. It shapes the claim.
This blog post expands on a viral LinkedIn post by Stefano Gaburro, PhD, CCC. View the original LinkedIn post →