
Laurie Scheepers 🚀 on Tech Intent: Make It Fun

Technology Ethics

A reflection on Laurie Scheepers 🚀's post on music, fun, and human-centered tech that avoids debates and boosts positivity.

LinkedIn content · viral posts · content strategy · technology ethics · human-centered design · AI product development · creativity and music · positive impact · social media marketing

Laurie Scheepers 🚀 recently shared something that caught my attention: "Music always inspires me. Like Gandalf, the message appears when it’s needed." She was about to publish an article she had spent three hours writing, but a new song from her friend and producer, Thomas - "Fun is nr. 1" - changed her mind.

That small pivot carries a bigger lesson for anyone building, debating, or deploying technology right now. Laurie’s point is not anti-critique, and it is not "ignore the hard questions." It is subtler: sometimes the way we debate "this vs. that" spreads the wrong message, even if we mean well and are aiming for clarity. In her words, it can "only add fuel to the fire." And when the fire is already raging in tech discourse - around AI, privacy, automation, bias, labor, and power - adding fuel rarely produces light.

So I want to expand on what Laurie is really advocating: start with intent, build for human benefit, and keep room for joy. Not as a distraction from ethics, but as a compass for it.

The moment that redirected the post

Laurie describes a familiar creator experience: you work hard on a piece of writing, but then something else arrives that feels more timely and more true. In her case, music delivered the nudge.

"Like Gandalf, the message appears when it’s needed."

Whether you take that literally or metaphorically, it is a good reminder that inspiration is not just aesthetic. Sometimes it is corrective. The right input can interrupt a spiral of over-analysis, outrage, or defensiveness, and re-center us on what we are trying to do.

The song title "Fun is nr. 1" is almost disarmingly simple. But that is the point. Simplicity can cut through the noise. In a landscape where every new tool becomes a culture war, the idea of fun can feel naive. Yet Laurie is arguing that fun is not the enemy of seriousness. It is a way to protect our humanity while we make serious things.

When "this vs. that" becomes the product

Laurie notes that debating one approach against another can unintentionally send the wrong signal. I think this happens in tech in three common ways.

1) We turn nuance into teams

The internet rewards identity-based positions: pro-AI vs. anti-AI, open vs. closed, regulation vs. innovation, artists vs. engineers. Real life is rarely that binary, but debate formats push us there. Once people feel assigned to a team, the goal shifts from understanding to winning.

2) We confuse heat with progress

A thread can be "high engagement" and still be low learning. Arguments intensify, language hardens, and eventually we are mostly reacting to each other’s tone, not each other’s ideas.

3) We forget the audience we are shaping

Even if you are right, the style of the debate trains onlookers to mimic it. Over time, your community inherits your posture: suspicion, sarcasm, constant dunking, constant defensiveness. That is the "fuel to the fire" Laurie is warning about.

None of this means we should avoid disagreement. It means we should notice when our discourse is optimizing for conflict instead of clarity.

The ethical center: intent that benefits humanity

The most direct line in Laurie’s post is this: "What matters, is our primary intent with technology, making things that benefit humanity - positive cheer."

That is a powerful framing because it moves ethics upstream. Many ethics conversations happen late, after a product exists, after incentives are set, after harms appear. Starting with intent changes what you build in the first place.

"Make someone smile - Have Fun!"

Taken seriously, this is not "add confetti to a harmful system." It is: measure whether your work supports human flourishing. If it does not, no amount of polish fixes the core.

Here are a few concrete interpretations of "benefit humanity" that teams can actually use:

  • Reduce unnecessary friction in people’s lives without increasing surveillance.
  • Expand capability without stripping agency.
  • Improve access and dignity, especially for people with less power.
  • Create tools that respect attention instead of extracting it.

And "positive cheer" does not have to mean shallow optimism. It can mean building experiences that feel respectful, calm, and empowering. Sometimes the kindest product decision is to be quieter.

Build like an artist: iterate, refine, and stay human

Laurie writes: "Vibe a cool new idea. Iterate and refine. Like you would do with art :) cherish the incredible new technology and use it for good."

I love the "like art" analogy because it carries three implications that tech culture often forgets.

Craft matters

Artists obsess over details because details change the emotional outcome. In technology, details are ethics. A default setting, a confusing consent screen, a dark pattern, a vague policy, an unexplained model output - these are moral choices disguised as UI.

Feedback is part of the process

Artists test their work against real human response. Builders should do the same, not just with metrics, but with qualitative reality: Did this confuse people? Did it pressure them? Did it make them feel small? Did it mislead them?

Tools are not the point

New technology is incredible, but it is still a means. The point is the human experience it enables. When teams fetishize the tool, they often stop caring about the person.

A practical checklist for "fun-first" tech that is still serious

To make Laurie’s philosophy usable, here is a simple set of prompts you can run in product reviews, content planning, or team debates.

1) Name your intent in one sentence

If you cannot say what human good you are aiming for, you are not ready to scale.

2) Identify who could be harmed by default

Not hypothetically. Specifically. Who loses time, money, privacy, opportunity, or dignity if you get this wrong?

3) Design for agency

Can people opt out easily? Can they understand what is happening? Can they correct errors? Can they leave?

4) Audit your incentives

Are you rewarded for engagement at any cost? For speed over safety? For growth over trust? If so, ethics will be downstream theater.

5) Choose discourse that cools the room

When debating internally or publicly, ask: does this clarify, or does it polarize? Are we adding light, or adding heat?

6) Make space for joy as a quality metric

Does the experience create relief, delight, confidence, or connection? If the best you can offer is compulsion, something is off.
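For teams that like to make rituals concrete, the six prompts above can even live in code. Here is a minimal, illustrative sketch of the checklist as a review script; every name in it (`ReviewPrompt`, `run_review`, the example answers) is hypothetical and not from Laurie's post.

```python
# A lightweight, illustrative version of the six review prompts as code.
# All names and example answers here are made up for the sketch.

from dataclasses import dataclass, field

@dataclass
class ReviewPrompt:
    question: str
    passed: bool
    note: str = field(default="")

def run_review(prompts):
    """Return (ready, failures): ready only if every prompt passed."""
    failures = [p for p in prompts if not p.passed]
    return (len(failures) == 0, failures)

# Example: a team filling in the six prompts for a feature review.
prompts = [
    ReviewPrompt("Can we name our intent in one sentence?", True,
                 "Help people find files faster without tracking them."),
    ReviewPrompt("Have we identified who could be harmed by default?", True),
    ReviewPrompt("Can people opt out, understand, correct, and leave?", False,
                 "No easy opt-out yet."),
    ReviewPrompt("Do our incentives reward trust over raw engagement?", True),
    ReviewPrompt("Does our discourse clarify rather than polarize?", True),
    ReviewPrompt("Does the experience create relief, delight, or connection?", True),
]

ready, failures = run_review(prompts)
print("Ready to ship:", ready)
for f in failures:
    print("Needs work:", f.question, "-", f.note)
```

The point is not the tooling; it is that writing the answers down, pass or fail, forces the team to name its intent before scaling.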

Bringing fun back without becoming unserious

Laurie’s "Y so seriousss" line is playful, but it is also a critique of performative seriousness. There is a kind of tech commentary that is permanently tense: everything is framed as doom, every disagreement is moral collapse, every update is a scandal.

That stance can feel principled, but it often burns people out and narrows imagination. Fun, in this context, is a pressure release that keeps creativity alive. It helps us stay curious enough to design better alternatives.

And importantly, fun can be communal. Laurie thanks Thomas and says she will cover the song and send her version. That is collaboration, remixing, and gratitude. It is a reminder that our tech culture can be more like a studio session and less like a battlefield.

The goal is not to stop caring.
The goal is to care in a way that produces better outcomes.

If you are building with powerful tools, you owe people rigor. But you also owe them humanity. Laurie’s post is a timely prompt to check whether our conversations and creations are making life better, or just making the internet louder.

This blog post expands on a viral LinkedIn post by Laurie Scheepers 🚀, betting on the human spirit. View the original LinkedIn post →