Walid Boulanouar: Stop Chasing AI Trends, Ship Work

AI Tools & Productivity

A practical response to Walid Boulanouar on avoiding AI hype, running weekly experiments, and building a simple system that ships.

AI productivity, AI tools, experimentation, Claude Code, automation, LinkedIn content, viral posts, content strategy, social media marketing

Walid Boulanouar recently shared something that caught my attention: "99.99% of the internet in 2026… please stop following trends. From someone who tests new stuff all the time - I allocate 2 hours every Friday to test new things, but I do not bother myself with trends - I just make shit done."

That combination of curiosity and discipline is the point. Walid is not saying "never learn". He is saying: stop letting trend-chasing replace output.

And he adds a very grounded path forward: "you can create one claude.md + Claude Code and build your own way, and discover new stuff". In other words, pick a simple, repeatable workflow, then use controlled experimentation to expand it. Not the other way around.

Below is my take on what Walid is really advocating, why it matters more in 2026 than it did in 2024, and how to apply it if you want to build with AI tools without getting trapped in the hype cycle.

The real problem: trend-following is a productivity tax

Every week there is a new model, a new agent framework, a new "must-have" workflow video, and a new hot take about how "everyone will do X by next month".

If you build products, run a team, or even just want to level up your personal output, trend-following creates three hidden costs:

  1. Context switching: every new tool comes with settings, prompt patterns, integrations, and a new mental model.

  2. False urgency: trends feel time-sensitive, so they hijack priorities that should be driven by customer needs or your own roadmap.

  3. Identity drift: you start building a stack that reflects what the internet is excited about, not what you are trying to ship.

Walid's line "I just make shit done" is blunt, but it is also a useful diagnostic. If your week ends and you cannot point to shipped work (a feature, a landing page, a hiring loop, a content asset, a pipeline), you are probably paying the trend tax.

"Please stop following trends" is less a rant and more a strategy: protect your attention so output compounds.

Experimentation is not trend-chasing

Walid also says something many people miss: he tests new stuff "all the time" and still avoids trends. That sounds contradictory until you separate two behaviors:

  • Experimentation is private, time-boxed, and measured against your goals.
  • Trend-chasing is public, reactive, and measured against what everyone else is sharing.

The detail I love is the constraint: "I allocate 2 hours every Friday". That single sentence solves a lot.

A simple Friday AI lab (2 hours) you can copy

Here is a lightweight version of Walid's cadence that works whether you are solo or leading a team:

  1. Pick one bottleneck to improve (10 minutes)

    • Examples: writing specs, customer support drafts, lead enrichment, QA, meeting notes, code review, outbound, content repurposing.
  2. Test one new thing only (60 minutes)

    • One model, one agent pattern, one automation, one integration. If you test three, you learn none.
  3. Turn it into an artifact (40 minutes)

    • A template, a prompt, a script, a checklist, or an automation that someone else could reuse.
  4. Decide: adopt, park, or discard (10 minutes)

    • Adopt means it enters your default workflow next week.
    • Park means it was promising but not worth switching costs.
    • Discard means you stop thinking about it.

This is the key: experimentation should end with a decision, not an open tab.
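To make "end with a decision" concrete, here is a minimal sketch of a Friday experiment log in Python. Everything in it (the field names, the example entry) is hypothetical, not something Walid prescribes; the point is that each entry is forced to carry exactly one decision.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ADOPT = "adopt"      # enters the default workflow next week
    PARK = "park"        # promising, but not worth the switching costs yet
    DISCARD = "discard"  # stop thinking about it

@dataclass
class FridayExperiment:
    bottleneck: str      # the one bottleneck you picked
    tool: str            # the one new thing you tested
    artifact: str        # template, prompt, script, or checklist produced
    decision: Decision   # every experiment must end with one

# Hypothetical example entry: one bottleneck, one tool, one artifact, one decision.
log = [
    FridayExperiment(
        bottleneck="meeting notes",
        tool="new summarization model",
        artifact="reusable summary prompt",
        decision=Decision.PARK,
    )
]
```

Because the `decision` field is required, an experiment literally cannot be logged as an open tab.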

Build your own "default" workflow with one source of truth

Walid mentions "one claude.md + Claude Code". Read that as: one place where your working rules live, plus one tool that can execute.

The specific files and tools can vary, but the principle is stable: reduce complexity by standardizing how you work, then let AI accelerate inside that system.

What to put in a claude.md (or equivalent)

If you have never maintained a simple "AI operating manual" for yourself or your team, you are leaving leverage on the table. A starter structure:

  • Project context: what you are building, who it is for, what matters.
  • Definitions: product terms, naming conventions, do-not-break rules.
  • Quality bar: what "done" means (tests, docs, performance, tone).
  • Reusable prompts: spec prompt, refactor prompt, test-writing prompt, customer email prompt.
  • Workflow steps: how you go from idea to shipped change.
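To show the shape rather than leave it abstract, here is a hypothetical starter claude.md. The project, terms, and prompts are invented placeholders; swap in your own.

```markdown
# claude.md — hypothetical starter (adapt every detail to your project)

## Project context
Building an invoicing tool for freelancers. Reliability matters more than features.

## Definitions
- "Client" means the freelancer's customer, never our user.
- Do not rename public API fields.

## Quality bar
Done = tests pass, docs updated, no new lint warnings, plain-English tone.

## Reusable prompts
- Spec prompt: "Write a one-page spec for <feature> covering scope, risks, and acceptance criteria."

## Workflow steps
1. Draft spec → 2. Review → 3. Implement → 4. Test → 5. Ship.
```

Even a file this small pays off: every session starts from the same context instead of a re-explanation.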

This is not bureaucracy. It is memory. Once you have it, AI becomes more consistent, and you spend less time re-explaining yourself.

The fastest builders are not the ones who know every new tool. They are the ones with a stable system that new tools can plug into.

Practical examples: what "ship-first" looks like with AI tools

To make Walid's point tangible, here are a few examples of choosing output over trends while still staying current.

Example 1: You want to build agents

Trend-chasing version:

  • You hop between frameworks weekly.
  • You demo cool behaviors.
  • Nothing reaches production.

Ship-first version:

  • You pick one narrow job: "summarize inbound leads and create CRM notes".
  • You implement it with boring reliability (logs, retries, human review).
  • You then upgrade components over time (models, tools, memory) without changing the mission.
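What "boring reliability" might look like in code: a sketch of the lead-summarizing job with logs, retries, and a human-review flag. The model call is a stand-in (any provider can be swapped in without changing the mission), and all names here are illustrative.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("lead-summarizer")

def summarize_lead(lead: dict) -> str:
    # Stand-in for the model call; upgrade the model later without
    # changing the job: "summarize inbound leads, create CRM notes".
    return f"Lead from {lead['company']}: {lead['message'][:80]}"

def run_with_retries(lead: dict, attempts: int = 3, delay: float = 1.0) -> dict:
    for attempt in range(1, attempts + 1):
        try:
            summary = summarize_lead(lead)
            log.info("summarized lead on attempt %d", attempt)
            # Nothing touches the CRM without a human looking first.
            return {"summary": summary, "needs_human_review": True}
        except Exception:
            log.exception("attempt %d failed", attempt)
            if attempt < attempts:
                time.sleep(delay)
    # All retries exhausted: escalate instead of failing silently.
    return {"summary": None, "needs_human_review": True}
```

Notice that the interesting part is not the model call; it is the scaffolding that makes the job survive bad days.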

Example 2: You want to use the newest model

Trend-chasing version:

  • You rewrite prompts for every release.
  • You compare benchmarks that do not match your use case.

Ship-first version:

  • You keep the same evaluation set (20 real tasks you do weekly).
  • You test the new model for one hour.
  • If it does not improve speed or quality meaningfully, you keep your current setup.
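The fixed-evaluation-set idea can be sketched in a few lines. The model functions below are stand-ins for real API calls, and the scoring rule and 5% threshold are assumptions for illustration; only the structure (same tasks, same scoring, every release) is the point.

```python
# Hypothetical evaluation harness: the same real tasks, scored the same
# way, for every model release.

def current_model(task: str) -> str:
    return task.upper()          # stand-in for your existing setup

def new_model(task: str) -> str:
    return task.upper() + "!"    # stand-in for the shiny new release

def score(output: str, expected: str) -> bool:
    return output == expected    # replace with your real quality check

# Extend this to the ~20 real tasks you actually do weekly.
eval_set = [("write subject line", "WRITE SUBJECT LINE")]

def pass_rate(model) -> float:
    hits = sum(score(model(task), expected) for task, expected in eval_set)
    return hits / len(eval_set)

# Keep the current setup unless the new model meaningfully improves
# (here, an assumed 5-point margin).
keep_current = pass_rate(new_model) <= pass_rate(current_model) + 0.05
```

One hour with a harness like this answers "should I switch?" far faster than reading benchmark threads.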

Example 3: You want to automate with n8n (or similar)

Trend-chasing version:

  • You build a complex graph because it is impressive.

Ship-first version:

  • You automate one painful loop end-to-end: intake → classify → enrich → notify → log.
  • You add observability so it does not silently fail.
  • You document the workflow so it survives you.
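In n8n each stage would be a node; here is the same loop sketched as plain Python so the observability point is visible. The stages and routing rules are invented examples, not a prescribed design.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("intake-pipeline")

# Hypothetical stages of the one painful loop.

def classify(item: dict) -> dict:
    item["category"] = "support" if "help" in item["text"].lower() else "sales"
    return item

def enrich(item: dict) -> dict:
    item["priority"] = "high" if item["category"] == "support" else "normal"
    return item

def notify(item: dict) -> dict:
    log.info("notify: %s item routed (%s)", item["category"], item["priority"])
    return item

def run_pipeline(item: dict) -> dict:
    # Observability: every stage is logged, and any failure raises
    # loudly instead of silently dropping the item.
    for stage in (classify, enrich, notify):
        try:
            item = stage(item)
        except Exception:
            log.exception("stage %s failed", stage.__name__)
            raise
    return item
```

A boring linear pipeline that screams when it breaks beats an impressive graph that fails in silence.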

A simple filter for 2026: does it increase shipped work?

Walid's "99.99% of the internet in 2026" line (with a nod to Greg Isenberg for the visual) speaks to the coming flood: more content, more "best stacks", more templated advice, more noise.

So you need a filter that is not about novelty. Mine is close to what Walid implies:

The Shipped Work Test

Before adopting a new AI tool or workflow, ask:

  • Will this reduce cycle time this month? Not someday. This month.
  • Will it reduce errors or rework? Not just make drafts faster.
  • Can I explain it in 2 sentences to a teammate? If not, switching costs are too high.
  • Can I roll it back easily? If rollback is hard, adoption should be slow.

If the answer is mostly no, treat it as a Friday experiment, not a core change.
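For fun, the Shipped Work Test collapses into a tiny function. The question labels and the "majority yes" rule are my own framing of the filter above, not a formal method.

```python
# Hypothetical encoding of the Shipped Work Test as a checklist.
QUESTIONS = (
    "reduces cycle time this month",
    "reduces errors or rework",
    "explainable in two sentences",
    "easy to roll back",
)

def shipped_work_test(answers: dict) -> str:
    yes = sum(answers.get(q, False) for q in QUESTIONS)
    # Mostly "no" means it stays a Friday experiment, not a core change.
    return "adopt as core" if yes > len(QUESTIONS) / 2 else "friday experiment"
```

Unanswered questions count as "no", which matches the spirit of the filter: the burden of proof is on the new tool.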

The takeaway: be curious on schedule, decisive by default

What I hear in Walid Boulanouar's post is a mature builder mindset:

  • Curiosity is healthy.
  • Trends are optional.
  • Shipping is non-negotiable.

You do not have to ignore what is new. You just have to stop letting what is new decide what you do.

If you want to internalize this quickly, borrow Walid's constraints: reserve a small weekly block to explore, keep a single source of truth for how you build (your claude.md or equivalent), and measure every tool by whether it helps you ship.

Because in 2026, the advantage will not belong to the person who knows the most tools. It will belong to the person whose system turns tools into outcomes.

This blog post expands on a viral LinkedIn post by Walid Boulanouar (building more agents than you can count | aiCTO at automate & humanoidz | building with n8n, a2a, cursor & ☕ | advisor | first AI agents talent recruiter). View the original LinkedIn post →