Amber Vodegel on Scaling Digital Health Without Harm

Digital Health

A deep dive into Amber Vodegel's viral post on trust, incentives, and equitable digital health that protects data and serves women.

LinkedIn content · viral posts · content strategy · digital health · women's health · health equity · ethical AI · data privacy · World Economic Forum

Amber Vodegel recently shared something that caught my attention: "I’ve just arrived in Davos for the World Economic Forum... for a few intense days of conversations about health, technology and impact." She added that many discussions circle one tough question: "how do we scale trust, access and quality in digital health without exploiting data or excluding the most vulnerable."

That single line captures the real tension in digital health right now. We all want scale: more users, more insights, more outcomes. But scale without guardrails can turn into extraction. And scale without inclusion can widen the very gaps healthcare is supposed to close.

Amber also said something I keep coming back to: "Women’s health does not fail because of a lack of technology. It fails because the incentives are wrong." I want to expand on that idea, because it explains why so many well-funded health apps feel underwhelming in practice, and why the next era of digital health needs a different operating model.

The Davos question: scale trust, access, and quality

When Amber frames the Davos conversations around trust, access, and quality, she is pointing to three variables that are tightly linked, but rarely optimized together.

  • Trust: People must believe the product is safe, effective, and on their side.
  • Access: The people who most need support must be able to use it, afford it, and understand it.
  • Quality: The experience and outcomes must be clinically sound, culturally appropriate, and continuously improved.

What makes this hard is that many scaling tactics push against at least one of those goals.

Trust is fragile in digital health

Trust can be lost quickly when users suspect:

  • Their intimate health data is being monetized.
  • Their data could be used against them (insurance, employment, stigma).
  • The product is not designed for them, but for an investor deck.

In women+ health, the trust bar is even higher because the data is often deeply sensitive: cycles, fertility intentions, sexual health, menopause symptoms, pregnancy loss, chronic pain, mental health, and more.

Access is more than availability

Access is not just about whether an app exists in an app store. It is about:

  • Language and health literacy
  • Low bandwidth environments
  • Device constraints and shared phones
  • Payment friction and ability to pay
  • Cultural context and safety

If a product assumes credit cards, stable internet, and privacy at home, it can accidentally exclude large populations.

Quality has to be measurable and accountable

Quality is often treated as a feature set: more trackers, more content, more reminders. But quality in health should mean:

  • Evidence-based guidance (and clear boundaries when it is not)
  • Safe triage and escalation pathways
  • Bias-aware AI and data practices
  • Continuous monitoring for harm, not just engagement

Amber’s framing matters because it forces us to ask: what does "growth" look like if we refuse to trade away trust, access, or quality?

“Women’s health does not fail because of a lack of technology”

I agree with Amber: the tech is not the main bottleneck. The incentives are.

Digital health has plenty of technical capability:

  • Smartphones and sensors
  • Telehealth and remote monitoring
  • AI-driven triage and personalization
  • Population analytics

Yet many solutions still do not translate into equitable outcomes. Why? Because business models and success metrics shape product decisions.

"Women’s health does not fail because of a lack of technology. It fails because the incentives are wrong."

If a company is rewarded for time-on-app, it will optimize for attention, not resolution. If it is rewarded for subscriptions, it will focus on users who can pay monthly, not those with the highest need. If it is rewarded for advertising, it will design around targeting, not care.

The hidden costs of common consumer health models

Amber noted that current consumer health tech platforms are often built around "subscription models" and "advertising models," or they are "tools that were never designed for real users, marginalised communities, or global use." Let’s unpack why that matters.

Subscriptions can harden inequity

Subscriptions are simple to understand and easy to forecast. But health needs are not evenly distributed across income.

  • People with chronic conditions or complex life stressors may need support most.
  • Those same users may be least able to pay.

A subscription wall can quietly turn a health product into a premium wellness product.

Advertising pulls incentives away from care

Ad models create pressure to maximize engagement and audience value. In health, that can lead to:

  • Data collection beyond what is necessary
  • Content designed for clicks, not clarity
  • Conflicts between user interest and advertiser interest

Even if a company promises privacy, the structural incentive still leans toward more tracking and more segmentation.

“Not designed for real users” shows up in the details

When a product is not designed for global use or marginalized communities, it looks like:

  • One-size-fits-all symptom libraries that ignore comorbidities
  • Recommendations that assume a specific healthcare system
  • UX patterns that require constant connectivity
  • AI models trained on non-representative datasets

This is where quality and access fail together.

Digital consumer health is not a niche

Amber said, "Digital consumer health is not a niche, my previous company reached over 150 million women globally. It is foundational to health, education and economic participation." That is a powerful reminder that consumer health tools are not just nice-to-have.

Women’s health affects:

  • School attendance and learning outcomes (pain, anemia, stigma, lack of supplies)
  • Workforce participation (symptoms unmanaged, caregiving load, pregnancy and postpartum health)
  • Household economics (medical costs, missed work, long-term chronic disease)

When consumer health tools work, they can reduce friction in daily life and create earlier touchpoints for prevention and care navigation. But that only happens if the product is trusted, accessible, and high quality.

What does “scale without exploiting data” look like?

Amber’s question includes a non-negotiable: scale must not require exploitation. In practice, ethical scaling often means adopting principles like these:

Data minimization and purpose limitation

Collect only what is needed for the user benefit being delivered. Be explicit about purpose, retention, and deletion.
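One way to make that concrete in code is a field registry: every stored field must declare the purpose it serves and how long it is retained, so anything without a declared purpose is never collected. This is a minimal illustrative sketch, not a compliance implementation; the field names, purposes, and retention periods are assumptions for the example.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical registry: every stored field declares its purpose and
# retention period, so undeclared fields cannot be collected at all.
@dataclass(frozen=True)
class FieldPolicy:
    name: str
    purpose: str          # the user benefit this field serves
    retention_days: int   # delete after this many days

REGISTRY = [
    FieldPolicy("cycle_start_date", "cycle prediction", 730),
    FieldPolicy("symptom_log", "personalized guidance", 365),
]

def allowed_fields(incoming: dict) -> dict:
    """Drop any incoming field without a declared purpose (data minimization)."""
    declared = {p.name for p in REGISTRY}
    return {k: v for k, v in incoming.items() if k in declared}

def expired(policy: FieldPolicy, collected_on: date, today: date) -> bool:
    """True once the declared retention period has passed (purpose limitation)."""
    return today > collected_on + timedelta(days=policy.retention_days)
```

The useful property is that purpose and retention live next to the data definition, where product and engineering decisions are actually made, rather than only in a privacy policy.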

User control that is real, not performative

Give users meaningful choices:

  • Clear consent flows
  • Easy export and deletion
  • Understandable explanations of data use

Privacy-preserving analytics and AI

Where possible, use approaches that reduce exposure:

  • Aggregation and de-identification (with realistic risk assessment)
  • On-device processing when feasible
  • Strict vendor controls and audits
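A simple example of reducing exposure in aggregates is small-cell suppression: only report a group's count when the group is large enough to blend individuals together. This sketch is illustrative, not a formal privacy guarantee; the threshold of 10 is an assumed value that a real deployment would set via risk assessment.

```python
from collections import Counter

# Assumed threshold; real values come from a privacy risk assessment.
MIN_GROUP_SIZE = 10

def suppressed_counts(records: list[str], k: int = MIN_GROUP_SIZE) -> dict:
    """Aggregate category counts, suppressing any group smaller than k."""
    counts = Counter(records)
    return {category: n for category, n in counts.items() if n >= k}
```

Stronger techniques (differential privacy, on-device aggregation) build on the same instinct: publish patterns, not people.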

Safety and bias monitoring

AI in health should be treated as a clinical risk surface. That means testing for performance differences across groups, monitoring drift, and having escalation paths when the model is uncertain.
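Group-wise performance testing can start very simply: compute the same metric per subgroup and flag when the gap between groups exceeds a tolerance. This is a minimal sketch under assumed inputs; the group labels and any alert threshold are illustrative, and a production system would also track drift over time and uncertainty-based escalation.

```python
def group_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each subgroup label."""
    totals, correct = {}, {}
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (t == p)
    return {g: correct[g] / totals[g] for g in totals}

def max_accuracy_gap(y_true, y_pred, groups) -> float:
    """Largest pairwise accuracy difference across groups; a candidate alert metric."""
    acc = group_accuracy(y_true, y_pred, groups).values()
    return max(acc) - min(acc)
```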

The role of public-private collaboration and impact capital

Amber invited people working on "health equity, ethical AI, public private collaboration, or impact driven capital" to connect. That cluster of stakeholders matters because the incentive problem is not solved by product teams alone.

If we want models that prioritize equity, we often need:

  • Blended finance that tolerates longer payback periods
  • Procurement pathways that reward outcomes and inclusion
  • Policy that sets baseline privacy and safety expectations
  • Partnerships with community organizations that can validate real-world usability

The uncomfortable truth is that the market sometimes underfunds what society most needs. Impact-driven capital and public-private collaboration are ways to close that gap, but only if accountability is built in.

What I hope more founders take from Amber’s point

Amber’s post is a reminder that digital health is not primarily a tech race. It is an incentive design problem.

If I translate her message into founder-level questions, they sound like this:

  • Who is rewarded when the product succeeds, and who pays the cost?
  • What metrics define success: engagement, revenue, outcomes, equity, safety?
  • Can the model grow without collecting more data than necessary?
  • Can the model serve people with low ability to pay, low bandwidth, or high vulnerability?

If the honest answers are uncomfortable, that is not a failure. It is a design prompt.

Closing thought

Amber arrived in Davos expecting intense conversations about health, technology, and impact. The conversation I want to see more of is the one she raised directly: scaling digital health in a way that earns trust, expands access, and improves quality, without exploiting data or excluding the most vulnerable.

That is not a slogan. It is a set of product, business model, and governance decisions. And it is exactly where the next generation of digital health leaders will differentiate.

This blog post expands on a viral LinkedIn post by Amber Vodegel, Exited Founder | CEO | NED | Investor | Public Speaker | AI & Digital Health Strategist | Advocate for Women+ Health | Mother of two. View the original LinkedIn post →