Ozan Okutan on CSA vs CSV: A Reality Check
A practical take on Ozan Okutan's viral post: what CSA changes vs CSV, and how to apply risk-based assurance in regulated teams.
Ozan Okutan recently shared something that caught my attention: "It’s Friday again, so naturally, we’re gathering to pay tribute to the holy cow of CSV (Computerised System Validation). What could be more fulfilling?" Then he doubles down, calling CSA an "idolised figure" and warning readers to "proceed with extreme caution" and take it "in small doses."
That mix of sarcasm and fatigue will feel familiar to anyone who has sat through one more slide deck about validation, only to leave with the same questions you had before. And the truth is: the friction Ozan is poking at is real. Many organizations talk about CSA (Computer Software Assurance) like it is a miraculous replacement for CSV. Others treat it as a rebrand that adds more paperwork. Neither extreme is helpful.
So let’s respond to Ozan’s point in a more practical way: what CSA is actually trying to fix, what it changes (and what it does not), and how quality teams can use it without turning it into yet another endurance test.
Why CSV became the "holy cow"
CSV earned its reputation for a reason. In regulated environments, computerized systems can directly affect patient safety, product quality, and data integrity. When the consequences are high, teams understandably reach for control. Over time, those controls often become ritualized:
- Writing long validation plans and scripts for low-risk functions
- Treating every requirement as equally critical
- Over-relying on vendor paperwork or, conversely, distrusting it completely
- Producing test evidence that proves you ran tests, not that the system is fit for intended use
The result is the situation Ozan is joking about: validation as ceremony. You can be busy for weeks and still feel unsure about what risk was actually reduced.
CSA in plain language (no prophets required)
CSA, as promoted by FDA guidance and industry groups like ISPE, is not "no validation." It is software assurance that is:
- Risk-based: more effort where failure matters most
- Evidence-based: right-sized documentation that demonstrates confidence
- Outcome-focused: proving intended use, not proving you can follow a template
In other words, CSA tries to shift the question from "Did we follow the CSV playbook?" to "Do we have sufficient assurance this software will do what we need it to do, consistently, for its intended use?"
CSA is not a shortcut. It is a different aim: confidence through risk-focused evidence rather than uniform ceremony.
CSA vs CSV: what actually changes
When people argue about CSA versus CSV, they often argue past each other. CSV is the broader discipline of validating computerized systems. CSA is a risk-based approach within that discipline for deciding how much assurance evidence each software feature actually needs.
Here is what tends to change when you genuinely adopt CSA:
1) You stop treating all requirements equally
Under CSA, you explicitly identify which requirements or features are high risk (for example, those affecting patient safety, product acceptance, sterility decisions, or release) and which are administrative or low impact.
Practical shift: You write crisp, testable requirements for high-risk functions and accept lighter-weight statements for low-risk workflows.
2) You evolve testing from scripted theater to smart evidence
Traditional CSV often leans on step-by-step scripts for everything. CSA encourages more unscripted or exploratory testing where appropriate, plus automation, plus leveraging vendor testing, as long as the evidence is credible and traceable to risk.
Practical shift: For a low-risk report filter, a short test note with screenshots might be fine. For a high-risk calculation, you still design rigorous test cases, boundary conditions, and independent review.
3) You document the decisions, not just the activities
CSA does not mean less thinking; it means making your thinking visible. The documentation burden moves toward risk rationale and assurance arguments.
Practical shift: A strong risk assessment and traceability matrix can reduce the need for bloated protocols that repeat the same information.
4) You use suppliers more strategically
CSA encourages leveraging supplier documentation where it is relevant and trustworthy, but also demands you assess it.
Practical shift: Instead of filing a vendor IQ binder and calling it a day, you evaluate: What did the vendor test? Under what conditions? Does it cover your intended use and configuration?
A simple CSA workflow you can actually run
If you want CSA to be harmless (as Ozan jokes it might be for medical device folks), it helps to operationalize it in a way that teams can repeat.
Step 1: Define intended use like you mean it
Write the intended use in terms of:
- Business process (what outcome you need)
- Users (who relies on it)
- Data (what data is created, modified, or used for decisions)
- Interfaces (what connects to what)
This prevents the classic trap where validation is done against vague goals.
Step 2: Do a risk assessment that is not performative
A useful risk assessment ties hazards to software failure modes and patient or product impact. For medical devices, this should align with your quality system and design controls; for pharma, it often aligns with data integrity and GMP impact.
Keep it concrete. Examples of higher-risk areas:
- Automated acceptance decisions and release status
- Calculations that drive dosing, specifications, or limits
- Audit trail functionality and electronic records controls
- Interfaces that transfer critical data between systems
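To make the risk split concrete, here is a minimal Python sketch of how a team might tag requirements by impact area. The area names, requirement IDs, and the high-impact set are illustrative assumptions, not terms from any regulation or real system; a real assessment would also weigh likelihood, detectability, and your QMS criteria.

```python
from dataclasses import dataclass

# Hypothetical high-impact areas, mirroring the examples above.
HIGH_IMPACT_AREAS = {
    "acceptance_decision",   # automated acceptance / release status
    "critical_calculation",  # dosing, specifications, limits
    "audit_trail",           # electronic records controls
    "critical_interface",    # transfers critical data between systems
}

@dataclass
class Requirement:
    req_id: str
    description: str
    area: str  # which functional area the requirement touches

def risk_level(req: Requirement) -> str:
    """Classify a requirement as 'high' or 'low' risk by impact area."""
    return "high" if req.area in HIGH_IMPACT_AREAS else "low"

reqs = [
    Requirement("REQ-001", "Compare assay result against specification", "critical_calculation"),
    Requirement("REQ-002", "Let users pick a report color theme", "ui_preference"),
]
for r in reqs:
    print(r.req_id, risk_level(r))  # REQ-001 high, REQ-002 low
```

The point of even a toy model like this is that the classification is explicit and reviewable, not buried in someone's head.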
Step 3: Pick assurance activities that match the risk
This is where CSA becomes real. For each risky feature, decide what evidence will create confidence:
- Focused functional testing for high-risk requirements
- Negative testing and boundary testing for calculations
- Role-based access verification for record control
- Challenge tests for audit trail and data integrity
- Supplier evidence plus targeted in-house confirmation
For low-risk features, document lighter evidence. The discipline is in defending why it is sufficient.
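The "match evidence to risk" step can be sketched as a simple lookup. The activity names mirror the list above; the mapping itself is an illustrative assumption, not an official taxonomy, and your own plan would come from your risk assessment.

```python
# Hypothetical mapping from risk level to assurance activities.
ASSURANCE_BY_RISK = {
    "high": [
        "focused functional testing",
        "negative and boundary testing",
        "role-based access verification",
        "audit trail challenge tests",
        "supplier evidence plus in-house confirmation",
    ],
    "low": [
        "unscripted/exploratory testing with a short test note",
        "supplier evidence review",
    ],
}

def plan_assurance(risk_level: str) -> list[str]:
    """Return the assurance activities that match a feature's risk level."""
    return ASSURANCE_BY_RISK[risk_level]

print(plan_assurance("low"))
```

Writing the mapping down once, rather than re-deciding per project, is what keeps the approach defensible and repeatable.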
Step 4: Keep traceability tight, not heavy
Traceability should show the line from intended use and risk to test evidence. It does not need to become a spreadsheet monster.
A good rule: if a reviewer asks, "What did you do to ensure the highest-risk functions work?", you should be able to answer in two minutes with links to the evidence.
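That two-minute answer is easy to automate as a gap check over the trace matrix. A minimal sketch, assuming hypothetical requirement IDs and evidence links:

```python
# Illustrative trace data: requirement -> risk level, requirement -> evidence.
risk_by_req = {"REQ-001": "high", "REQ-002": "low", "REQ-003": "high"}
evidence_by_req = {
    "REQ-001": ["TC-014", "TC-015"],  # rigorous test cases for high risk
    "REQ-002": ["NOTE-03"],           # a short test note is fine for low risk
}

def uncovered_high_risk(risk: dict, evidence: dict) -> list[str]:
    """Return high-risk requirement IDs with no traced evidence."""
    return sorted(
        req for req, level in risk.items()
        if level == "high" and not evidence.get(req)
    )

print(uncovered_high_risk(risk_by_req, evidence_by_req))  # → ['REQ-003']
```

Run as a routine check, this keeps traceability lightweight while still surfacing the gaps that matter.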
Step 5: Build it into change control
CSA pays off over time if you treat assurance as continuous. When a system changes, you reassess risk and focus regression testing where it matters.
Common ways CSA gets turned into another endurance test
Ozan’s queasiness about the CSA presentation is a warning sign. Here is what usually goes wrong:
- CSA becomes a new template stack rather than a new decision style
- Teams keep all the old CSV deliverables and add CSA deliverables on top
- Risk assessments are generic and never drive testing choices
- Organizations declare "exploratory testing" but fail to train people to capture credible evidence
- Internal auditors expect the old artifacts, so teams keep producing them
If your CSA rollout adds work but does not remove low-value work, people will (rightfully) resent it.
Medical device vs pharma: different labels, same need for clarity
Ozan also teases that medical device folks can relax because CSA is "supposedly harmless." In reality, device and pharma teams share the same core challenge: demonstrate control over software that affects quality.
What differs is often the regulatory framing and internal culture:
- Device organizations may align CSA with QMSR/QSR expectations, design controls, and software used in production and quality systems (think 21 CFR 820 context).
- Pharma organizations may frame CSA around GMP systems, electronic records expectations, and broader data integrity concerns.
Either way, the winning move is the same: write down intended use, identify risk, then show targeted evidence.
A quick note on why Ozan’s post worked
His post is a mini case study in communicating a dry topic with personality. The hook is immediate, specific, and polarizing (the "holy cow of CSV" line). It gives people permission to admit the pain, which drives comments.
If you are trying to make quality and regulatory topics more readable for your team, start by tightening your opening line and naming the tension.
Making CSA genuinely useful
Here is the standard I use when evaluating whether a CSA approach is working:
After reading your assurance package, a competent reviewer should understand the system’s intended use, the key risks, and why your evidence is enough.
If you can hit that standard with fewer pages, great. If it takes more pages for a high-risk system, that is also fine. The goal is not minimal documentation. The goal is credible assurance.
And that, I think, is the real point behind Ozan Okutan’s satire: stop worshipping the process. Use the process to protect patients, products, and data.
This blog post expands on a viral LinkedIn post by Ozan Okutan, Senior Quality Engineer.