
Ing. Alejandro Medina on Scaling AI Beyond the Toy

A practical expansion of Ing. Alejandro Medina's post on scaling AI, jobs, and tech sovereignty, with execution-focused steps.

Tags: LinkedIn content, viral posts, content strategy, AI strategy, enterprise AI, AI governance, technology sovereignty, digital transformation, social media marketing

Ing. Alejandro Medina recently shared something that caught my attention: "AI for everyone, or only for those who know how to scale?" He added a hard-to-ignore signal from Davos 2026: "86% of companies will be impacted by AI by 2030, but most are stuck with the toy, not the system."

That contrast - toy versus system - is the point many leaders still miss. We demo a chatbot, pilot a model, celebrate a quick win, and call it transformation. Meanwhile, the organizations that treat AI like architecture (not a gadget) are building durable advantage.

In his post, Medina laid out three uncomfortable truths discussed at the World Economic Forum. I want to expand on each one and translate it into practical choices you can make this quarter.

"Having the model is useless if your culture cannot handle the speed."

1) Scaling is the new code: culture and operating model first

Medina's first truth is blunt: scaling is the new moat. In the early wave of AI adoption, simply having access to a strong model could look like an advantage. That window is closing fast. Foundation models are becoming commoditized, and vendors are shipping similar capabilities.

What does not commoditize easily is your ability to turn AI into reliable, repeatable outcomes across teams.

The real bottleneck is not the model

In practice, scaling fails for a few predictable reasons:

  • Fragmented ownership: AI lives in innovation teams, while operations, risk, and IT treat it as "someone else's project."
  • No shared standards: prompts, evaluation, security rules, and deployment patterns differ by team.
  • Slow decision paths: approvals and procurement cycles cannot match the pace of model iteration.
  • Unclear incentives: teams are not rewarded for adoption, quality improvements, or removing manual steps.

If you want AI to be a system, treat it like a product portfolio with a clear operating model.

What I would implement to "scale" on purpose

Here is a concrete, architecture-led approach that aligns with Medina's idea:

  1. Define the AI value map (not a tool list)
  • Pick 5 to 10 high-leverage workflows (claims handling, invoice matching, clinical documentation, maintenance scheduling, demand forecasting).
  • For each workflow, specify the measurable outcome: cost per case, cycle time, error rate, churn, fraud loss, downtime.
  2. Create an AI delivery lane
  • Standardize intake, experimentation, evaluation, security review, and deployment.
  • Make it easy to go from idea to production without reinventing governance each time.
  3. Institutionalize evaluation
  • Establish baseline metrics and test sets.
  • Add human review where stakes are high.
  • Monitor drift and data leakage.
  4. Build a culture that can handle speed
  • Train teams in "AI literacy" plus workflow redesign.
  • Empower domain experts to co-own solutions, not just "use" them.
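To make step 3 concrete: "institutionalize evaluation" can start as small as a frozen test set and a pass/fail gate that every change must clear before deployment. The sketch below is a minimal illustration; the names (`run_model`, `GOLDEN_SET`, `evaluate`) and the stub model are assumptions, not a real library, and a production harness would add drift monitoring and per-workflow metrics.

```python
# Minimal evaluation-harness sketch: a versioned golden set plus a
# deployment gate. run_model is a stand-in for the real model call.

from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    expected: str

# A frozen test set with known-good answers; in practice it is versioned
# alongside the workflow it guards and grown from production failures.
GOLDEN_SET = [
    EvalCase("Extract the invoice total from: 'Total due: $120.50'", "$120.50"),
    EvalCase("Extract the invoice total from: 'Due: 99 EUR'", "99 EUR"),
]

def run_model(prompt: str) -> str:
    """Stand-in for the real model call (API or local inference)."""
    return prompt.split(": ")[-1].strip("'")

def evaluate(threshold: float = 0.9) -> bool:
    """Score the model against the golden set; gate deployment on the result."""
    passed = sum(run_model(c.prompt) == c.expected for c in GOLDEN_SET)
    score = passed / len(GOLDEN_SET)
    print(f"accuracy: {score:.2f}")
    return score >= threshold
```

The point is not the scoring logic; it is that the threshold and test set exist before anything ships, so "good enough" is a recorded decision rather than a feeling.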

Scaling is not just technical. It is organizational throughput.

The winners will not be the companies with the best demo. They will be the companies with the fastest reliable deployment loop.

2) Economic growth is separating from traditional employment

Medina's second truth is uncomfortable because it challenges an assumption many of us carry: if the economy grows, jobs grow in parallel. AI breaks that linkage in many sectors.

AI can create value by:

  • Automating tasks inside existing roles
  • Compressing time-to-output (one analyst does what a small team did)
  • Enabling 24/7 service capacity without adding headcount

That value can show up as higher margins, faster growth, or better customer experience. But it does not automatically show up as more payroll.

What this means for leaders

If you are responsible for strategy, you cannot treat workforce impact as an afterthought. You need an explicit plan that answers three questions:

  1. Which tasks will disappear, shrink, or expand?
    Map tasks, not job titles. Most roles will be re-bundled.

  2. Where will new value be created?
    Often in areas like model supervision, quality assurance, data stewardship, AI-enabled sales, customer success, and new product lines.

  3. How will you redeploy people?
    Without a redeployment plan, the default becomes silent productivity extraction: the same staff, more output, less time. That can work briefly, then it burns people out.

A more resilient playbook

  • Make "human-in-the-loop" a capability, not a temporary crutch. Define when humans review, how decisions are escalated, and what quality thresholds trigger intervention.
  • Invest in augmentation roles: prompt and workflow design, evaluation engineering, AI risk and compliance, knowledge management.
  • Track productivity gains and reinvest part of them into training. If AI generates value but not payroll, leaders should deliberately fund reskilling to avoid a long-term capability gap.
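Treating human-in-the-loop as a capability means the routing rules are explicit and auditable, not ad hoc. A minimal sketch, assuming a hypothetical `Decision` shape and illustrative thresholds:

```python
# Sketch of human-in-the-loop routing: low-confidence or high-stakes
# outputs go to review instead of being auto-applied. The Decision
# fields and both thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # what the model proposes
    confidence: float  # calibrated model confidence, 0.0 to 1.0
    amount: float      # business stakes, e.g. payout size in dollars

REVIEW_CONFIDENCE = 0.85   # below this, a human checks the output
REVIEW_AMOUNT = 10_000.0   # above this, a human checks regardless

def route(decision: Decision) -> str:
    """Return 'auto' or 'human_review' based on explicit, auditable rules."""
    if decision.confidence < REVIEW_CONFIDENCE:
        return "human_review"
    if decision.amount > REVIEW_AMOUNT:
        return "human_review"
    return "auto"
```

Because the thresholds are code, they can be reviewed, versioned, and tightened or relaxed deliberately as quality data accumulates.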

This is not about resisting automation. It is about managing the transition responsibly and competitively.

3) Technology sovereignty is at stake: build, integrate, or depend

Medina's third truth goes beyond individual companies and touches geopolitics and resilience: sovereignty. In his framing, you either integrate high-impact solutions in sectors like health, energy, and supply chain, or you become "just a consumer of someone else's APIs."

This matters even if you are not a government or a national champion. Heavy dependence on external AI services creates practical risk:

  • Cost risk: pricing changes and usage-based bills can explode at scale.
  • Availability risk: outages, throttling, or vendor policy changes can stop critical workflows.
  • Data risk: sensitive data exposure, cross-border compliance, and auditability challenges.
  • Capability risk: you lose internal expertise and bargaining power.

Sovereignty does not always mean building your own model

For most organizations, sovereignty is not "train a frontier model." It is:

  • Owning your data pipelines and governance
  • Owning your evaluation and monitoring stack
  • Owning the workflow logic and integration layer
  • Maintaining portability across vendors (so you can switch)
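Portability across vendors usually comes down to one design choice: workflow code depends on an interface you own, never on a concrete provider's SDK. A minimal sketch, with placeholder vendor names and a trivial `complete` method standing in for real API calls:

```python
# Sketch of vendor portability: an owned interface between workflow
# logic and any specific model API, so switching providers is a config
# change, not a rewrite. Vendor classes here are placeholders.

from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorAClient:
    def complete(self, prompt: str) -> str:
        # a real implementation would call vendor A's API here
        return f"[vendor-a] {prompt}"

class VendorBClient:
    def complete(self, prompt: str) -> str:
        # a real implementation would call vendor B's API here
        return f"[vendor-b] {prompt}"

PROVIDERS = {"vendor_a": VendorAClient, "vendor_b": VendorBClient}

def get_model(name: str) -> TextModel:
    """Workflow code asks for a TextModel, never a concrete vendor."""
    return PROVIDERS[name]()
```

The evaluation and monitoring stack then sits on top of `TextModel`, which is what lets you rerun the same test sets against a new vendor before committing to a switch.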

Think of it as architectural leverage. If your core processes depend on black-box external services without a fallback plan, you are renting your future.

High-impact domains: where architecture matters most

Medina mentioned health, energy, and supply chain because failures there are not just inconvenient - they are expensive and sometimes dangerous.

  • Healthcare: AI can accelerate documentation, triage, imaging support, and capacity planning. But it demands traceability, privacy, and clinical governance.
  • Energy: forecasting, predictive maintenance, grid optimization. Here, reliability and cyber resilience matter as much as accuracy.
  • Supply chain: demand sensing, routing, inventory optimization, supplier risk. The value is real, but so is the risk of over-automation without robust controls.

In these domains, architecture is strategy.

Less smoke, more execution: turning "toy" into "system"

Medina closed with a line I keep coming back to: "Less hype, more real execution. The future is not awaited, it is built with architecture." I agree, and I would add one practical filter:

If an AI initiative cannot survive contact with security, compliance, and operations, it is still a toy.

Here is a simple execution checklist you can use:

1) Pick one workflow with real stakes

Not a novelty chatbot. Choose a process that moves money, time, risk, or customer trust.

2) Define success metrics and guardrails upfront

Include quality thresholds, escalation paths, and what "failure" looks like.

3) Design the full system

Model + data + integration + evaluation + monitoring + human oversight.

4) Build for change

Assume the model will change. Make components replaceable.

5) Ship, learn, and iterate

Speed matters, but reliability matters more. Build the loop that lets you do both.
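Steps 2 and 5 can be wired together: declare the success metrics and guardrails upfront, then gate every iteration of the loop on them. The thresholds below are illustrative assumptions, not recommendations:

```python
# Sketch of a release gate: guardrails defined before the first deploy,
# checked on every candidate release. All thresholds are illustrative.

GUARDRAILS = {
    "min_accuracy": 0.92,      # quality threshold from the baseline eval
    "max_p95_latency_s": 3.0,  # operational ceiling for the workflow
    "max_error_rate": 0.01,    # what "failure" looks like, made explicit
}

def release_gate(metrics: dict) -> bool:
    """Return True only if a candidate release meets every guardrail."""
    return (
        metrics["accuracy"] >= GUARDRAILS["min_accuracy"]
        and metrics["p95_latency_s"] <= GUARDRAILS["max_p95_latency_s"]
        and metrics["error_rate"] <= GUARDRAILS["max_error_rate"]
    )
```

A gate like this is what makes "ship, learn, and iterate" safe: speed comes from automating the check, reliability from never skipping it.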

The real question: AI for everyone, or only for those who can scale?

Medina's opening question is the right one. AI access is becoming universal. Competitive advantage is shifting to the ability to scale responsibly: across teams, across workflows, and across time.

If you want your organization to be on the "system" side of the line, the work is not to find a better toy. It is to build the architecture, governance, and culture that can carry AI into production repeatedly.

This blog post expands on a viral LinkedIn post by Ing. Alejandro Medina.