
Arek Skuza on Designing AI Transformations That Stick

A practical expansion of Arek Skuza's point that AI succeeds when designed: clear purpose, process integration, and metrics.

LinkedIn content, viral posts, content strategy, AI transformation, AI strategy, enterprise AI, process design, AI governance, social media marketing

Arek Skuza recently shared something that caught my attention: "The difference between a successful AI transformation and a 'failed pilot' usually comes down to one thing: Design." He added that most companies are "doing AI," but very few are designing for it. That framing nails what I see across teams: busy with prototypes, yet struggling to create durable business impact.

When Arek says AI should not be an afterthought or a plugin, he is pointing at the uncomfortable truth: many initiatives start with a model and a demo, not with an operating design. The result is a short-lived pilot that never becomes a dependable capability.

Below, I want to expand on his core idea: if you want AI to actually transform a business process, you have to design for AI from the start, the same way you design for security, reliability, and customer experience.

The hidden reason pilots fail: they are not designed to live

A pilot can "work" in a sandbox and still fail in production because production demands more than accuracy. It demands:

  • Clear ownership (who runs it, who approves changes)
  • Stable inputs (data that keeps arriving in the right shape)
  • Safe outputs (guardrails, review paths, escalation)
  • Measurable value (a metric tied to a business goal)
  • A place in the workflow (so people actually use it)

Without those design elements, teams end up with what Arek described: an engine with no steering wheel. The technology exists, but it is not guided into the organization in a repeatable way.

"AI is the engine, but strategy and design are the steering wheel." - Arek Skuza

Start with purpose: Why are we building AI?

Arek asks the most important first question: "Why are we building AI?" This is not a philosophical warmup. It is the difference between a capability that compounds and a feature that confuses.

A good answer is specific and measurable, for example:

  • Reduce customer support handle time by 15% while maintaining CSAT
  • Increase sales pipeline quality by improving lead qualification consistency
  • Cut cycle time in invoice processing by automating exception triage

A weak answer sounds like:

  • We need to use AI to stay competitive
  • Leadership wants an AI initiative this quarter
  • Our competitors have a chatbot

Purpose should also define the "job" of the AI. Is it drafting? Classifying? Searching? Recommending? Deciding? The closer you get to decision making, the more design you need around risk, accountability, and human oversight.

A quick test: the one-sentence value statement

If you cannot complete this sentence, pause and design first:

"We are building AI to [do what] for [which users] so that [which metric] improves, while we control [which risk]."

Design the integration: How will AI fit the business process?

Arek's second question is where most teams underestimate the work: "How will we integrate AI with our current business processes?" This is where transformation either happens or stalls.

In practice, AI rarely replaces an end-to-end process. It reshapes steps inside it. The design work is to decide:

  1. Where AI sits in the workflow

    • Before a human (prep and draft)
    • Alongside a human (co-pilot and assist)
    • After a human (QA, review, monitoring)
  2. What inputs it receives

    • Structured fields, documents, knowledge base content, CRM notes
    • Who is responsible for data quality and access
  3. What outputs it produces

    • Draft text, recommended next step, classification label, summary
    • Where the output appears (ticketing tool, CRM, internal app)
  4. What the "last mile" looks like

    • Human approval or auto-apply
    • Confidence thresholds and fallback behavior
    • Escalation when the AI is uncertain or detects sensitive topics
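The "last mile" decisions above can be sketched as a small routing function. This is a minimal illustration, not a reference implementation: the threshold values, topic labels, and route names are all assumptions for the example.

```python
# Illustrative "last mile" routing for an AI output. The thresholds,
# route names, and sensitive-topic list are assumptions for this sketch.
from dataclasses import dataclass

AUTO_APPLY_THRESHOLD = 0.90   # assumed cutoff for applying output directly
REVIEW_THRESHOLD = 0.60       # assumed cutoff below which we skip AI entirely
SENSITIVE_TOPICS = {"legal", "medical", "billing_dispute"}  # assumed list


@dataclass
class ModelOutput:
    text: str
    confidence: float
    topic: str


def route(output: ModelOutput) -> str:
    """Decide what happens to an AI output before it reaches anyone."""
    if output.topic in SENSITIVE_TOPICS:
        return "escalate"        # sensitive topics always get a human
    if output.confidence >= AUTO_APPLY_THRESHOLD:
        return "auto_apply"      # high confidence: apply directly
    if output.confidence >= REVIEW_THRESHOLD:
        return "human_review"    # medium confidence: draft for approval
    return "fallback"            # low confidence: standard manual process


print(route(ModelOutput("...", 0.95, "shipping")))  # auto_apply
print(route(ModelOutput("...", 0.95, "legal")))     # escalate
print(route(ModelOutput("...", 0.40, "shipping")))  # fallback
```

The point of writing this down, even as pseudocode, is that it forces the team to name the fallback and escalation paths before launch rather than after the first incident.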

Example: support agent assist that actually gets used

A common pilot: a chatbot that can answer questions from internal docs.

A designed-for-use version: inside the agent console, the AI suggests a response draft plus citations, highlights policy conflicts, and pre-fills fields. The agent can accept, edit, or request alternative drafts. The system logs acceptance rates and reasons for edits, which become training signals.

Notice the difference: the second version is designed as a workflow component, not a standalone demo.
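The feedback loop in that example (logging acceptance rates and edit reasons) can be sketched in a few lines. The event fields and action names here are assumptions; a real system would write to a durable store, not an in-memory list.

```python
# Minimal sketch of the acceptance-logging loop described above.
# Field names and actions ("accepted", "edited") are assumptions.
from collections import Counter

events = []  # stand-in for a durable event store


def log_decision(ticket_id: str, action: str, edit_reason: str = "") -> None:
    """Record what the agent did with the suggested draft."""
    events.append({"ticket": ticket_id, "action": action, "reason": edit_reason})


def acceptance_rate() -> float:
    """Share of drafts accepted as-is; a simple adoption signal."""
    actions = Counter(e["action"] for e in events)
    total = sum(actions.values())
    return actions["accepted"] / total if total else 0.0


log_decision("T-1", "accepted")
log_decision("T-2", "edited", "tone too formal")
log_decision("T-3", "accepted")
print(acceptance_rate())  # 0.6666666666666666
```

The edit reasons are the valuable part: they tell you whether to fix the prompt, the knowledge base, or the experience.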

Design the measurement: What metrics define success?

Arek's third question is the guardrail against vanity outcomes: "What metrics will we use to measure AI's success?" If you only measure model metrics (accuracy, BLEU, benchmark scores), you will miss whether the business changed.

I like to design metrics in three layers:

1) Business outcomes (the reason it exists)

Examples:

  • Cost per ticket
  • Revenue per rep
  • Fraud loss rate
  • On-time delivery

2) Operational leading indicators (what moves first)

Examples:

  • Average handle time
  • First-contact resolution
  • Time to draft a proposal
  • Exception backlog volume

3) Model and product health (quality and risk)

Examples:

  • Citation coverage and correctness for RAG systems
  • Hallucination and policy violation rate from audits
  • Latency and uptime
  • Rate of human overrides and reasons

The key design move is to link these layers. If handle time drops but escalations rise, you did not design the right guardrails. If citations are accurate but nobody accepts suggestions, you did not design the experience to fit the job.
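Linking the layers can be as simple as one weekly check that flags exactly those two failure patterns. The metric names and thresholds below are illustrative assumptions, not prescriptions.

```python
# Sketch of a weekly review that links the three metric layers.
# All metric names and thresholds are assumptions for illustration.

def weekly_review(metrics: dict) -> list[str]:
    """Flag combinations where one layer improves at another's expense."""
    flags = []
    # Operational indicator improves but a risk signal worsens.
    if metrics["handle_time_delta"] < 0 and metrics["escalation_delta"] > 0.05:
        flags.append("handle time down but escalations up: check guardrails")
    # Model health is fine but adoption is not.
    if metrics["citation_accuracy"] > 0.9 and metrics["acceptance_rate"] < 0.3:
        flags.append("accurate citations but low adoption: check the experience")
    return flags


report = weekly_review({
    "handle_time_delta": -0.12,  # handle time fell 12%
    "escalation_delta": 0.08,    # escalations rose 8 points
    "citation_accuracy": 0.95,
    "acceptance_rate": 0.22,
})
for flag in report:
    print(flag)
```

Even a crude check like this keeps the three layers in one conversation instead of three dashboards nobody cross-references.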

The missing architecture: AI as an intentional capability

Arek said AI must be an intentional architecture, not a plugin. I interpret "architecture" broadly: technical architecture plus operating model.

Here are the design components I see in successful AI transformations:

Data and knowledge design

  • What is the source of truth?
  • How is content curated, versioned, and permissioned?
  • How do you prevent the AI from using stale or unapproved guidance?

Guardrails and governance design

  • Which use cases are allowed, restricted, or prohibited?
  • What are the review requirements for high-risk domains?
  • How do you handle privacy, IP, and regulatory constraints?

Experience design

  • Where does the AI appear in the tools people already use?
  • What does the user do when the AI is wrong?
  • How do you show confidence, evidence, and limitations without slowing work?

Change management design

  • Who is trained first and why?
  • How do you update playbooks and SOPs?
  • How do you create feedback loops so the system improves?

If any of these are missing, the pilot often "works" but does not survive contact with real workflows, real incentives, and real accountability.

A practical design checklist you can use this week

If you are in the middle of an AI initiative, here is a lightweight way to apply Arek's questions quickly:

  1. Purpose

    • Write the one-sentence value statement.
    • Name the primary metric and the primary risk.
  2. Process fit

    • Draw the current workflow in 6 to 10 steps.
    • Mark where AI will act, where humans will act, and where approvals happen.
  3. Data readiness

    • List required inputs and who owns them.
    • Identify the top three failure modes (missing fields, inconsistent labels, stale documents).
  4. Guardrails

    • Define fallback behavior when confidence is low.
    • Add a simple audit plan (for example, 50 samples per week reviewed).
  5. Measurement

    • Pick one business outcome and two leading indicators.
    • Add one adoption metric (weekly active users, acceptance rate, time saved).
  6. Ownership

    • Name a product owner and a process owner.
    • Schedule a monthly review for metrics, incidents, and iteration.
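The audit plan in step 4 can be sketched as a reproducible weekly sample. The sample size of 50 comes from the checklist above; the fixed seed and ID format are assumptions.

```python
# Sketch of the simple audit plan: sample a fixed number of AI outputs
# per week for human review. The seed and ID naming are assumptions.
import random


def weekly_audit_sample(output_ids: list[str], n: int = 50, seed: int = 0) -> list[str]:
    """Pick up to n outputs uniformly at random for manual review."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return rng.sample(output_ids, min(n, len(output_ids)))


ids = [f"out-{i}" for i in range(1200)]
sample = weekly_audit_sample(ids)
print(len(sample))  # 50
```

A reproducible sample matters because the monthly review in step 6 should be able to re-pull exactly the outputs that were audited.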

This is the steering wheel Arek is talking about. It is not complicated, but it is deliberate.

Closing thought: transformation is designed, not installed

Arek Skuza's post is a reminder that AI does not transform a business by showing up. It transforms a business when it is designed into the way work happens, measured against outcomes, and governed like a real capability.

If you are "doing AI" but not designing for it, you are not transforming processes. You are idling.

This blog post expands on a viral LinkedIn post by Arek Skuza.