
Addy Osmani on Vibe-Coding Game UI with AI

Generative AI in Game Development

A deeper look at Addy Osmani's viral take on vibe-coding game UIs using Gemini, React, Three.js, and fast 3D workflows.


Addy Osmani recently shared something that caught my attention: "Vibe-coding a ship selection UI for a space exploration game" and calling it a "great use of Gemini 3 Pro and Nano Banana." That short line, plus the speed claims that followed, says a lot about where modern prototyping is headed.

What Addy highlighted was not just a cool demo. It was a compact case study in a new kind of creative throughput: when someone with taste and a clear vision combines a lightweight web stack (React + Three.js) with a few focused generative tools, they can jump from idea to interactive UI shockingly fast.

As Addy put it: "What I'm enjoying about this moment is how folks with taste, creativity and vision can unlock their ideas on the web and native so much faster than they could before."

Below is my expanded take on what is actually happening in workflows like this, why it matters for game UI and product UI, and how to apply the same pattern without turning your build into a fragile pile of AI outputs.

The demo is impressive, but the workflow is the story

Addy pointed to a ship selection UI built by Dilum Sanjaya and emphasized a multi-model workflow:

  • Nano Banana for character design and UI exploration
  • Tencent Hunyuan3D for image-to-3D (single mesh)
  • Gemini Pro for UI work
  • Midjourney, sometimes, as an additional ingredient

On paper, that is just a tool list. In practice, it is a pipeline. The real unlock is that each tool is used for what it is best at, and the rest of the work is classic craft: choosing, editing, integrating, and iterating.

Addy also shared timing that should make any builder pause:

  • The image-to-3D step took only a few minutes
  • A basic flow took about an hour
  • End-to-end was about five hours with tweaks

Those numbers are believable if you treat AI as a rapid ideation and asset bootstrap layer, not a magical replacement for engineering.

What "vibe-coding" means in UI-heavy experiences

When people say "vibe-coding," they often mean building by feel: moving fast, iterating visually, and letting the interface guide the next decision. In UI-heavy experiences like a game selection screen, that approach can outperform a requirements-first process because the product is experiential. You need to see it, click it, and sense the motion and depth.

Where generative AI fits is simple:

  1. Reduce blank-page time (concepts, layout ideas, component variations)
  2. Generate placeholder assets that are good enough to evaluate the experience
  3. Help you write or refine the glue code that turns a mock into an interaction
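Point 3 is the least glamorous and the most important. As a minimal sketch (the ship names and helpers here are illustrative, not from the actual demo), the glue code that turns a static list of generated placeholder ships into a browsable selection can be as small as an index and two wrap-around helpers:

```typescript
// Hypothetical placeholder assets from step 2: good enough to evaluate
// the experience, not final art.
const mockShips = ["Aurora", "Hauler 9", "Vanta"];

// Wrap-around navigation: the modulo keeps the index inside the list,
// so "next" from the last ship cycles back to the first.
function nextIndex(current: number, length: number): number {
  return (current + 1) % length;
}

function prevIndex(current: number, length: number): number {
  return (current - 1 + length) % length;
}

let selected = 0;
selected = nextIndex(selected, mockShips.length); // → 1 ("Hauler 9")
selected = prevIndex(selected, mockShips.length); // → 0 ("Aurora")
```

Trivial on its own, but it is exactly the kind of code that converts a mock into something you can click through and judge by feel.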

The best part of Addy's framing is that it centers taste. Tools can generate options, but they do not automatically generate restraint, hierarchy, or pacing.

Why React + Three.js is a sweet spot for modern game UI prototypes

Addy called out the tech stack: React and Three.js. That combination is powerful because it compresses two worlds:

  • React handles state, UI composition, and predictable updates
  • Three.js handles 3D rendering, animation, lighting, and camera work

For a ship selection UI, this means you can treat the screen like a product UI (filters, tabs, selection state, hover and focus states) while still delivering a 3D, game-like presentation (turntables, parallax, shader-driven highlights).
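That split can be made concrete. One common shape (a sketch under my own naming assumptions, not code from the demo): React owns filters, hover, and selection as plain reducer state, while the Three.js render loop stays a pure function of that state plus the clock:

```typescript
// React side: the selection screen is just product-UI state.
type ShipClass = "scout" | "freighter" | "fighter";

interface UIState {
  filter: ShipClass | "all";
  hoveredId: string | null;
  selectedId: string | null;
}

type Action =
  | { type: "filter"; value: ShipClass | "all" }
  | { type: "hover"; id: string | null }
  | { type: "select"; id: string };

// Drop-in for React's useReducer; predictable updates, trivial to test.
function reducer(state: UIState, action: Action): UIState {
  switch (action.type) {
    case "filter":
      return { ...state, filter: action.value };
    case "hover":
      return { ...state, hoveredId: action.id };
    case "select":
      return { ...state, selectedId: action.id };
  }
}

// Three.js side: derive the turntable rotation from elapsed time, so the
// render loop reads state + clock instead of mutating its own counters.
function turntableAngle(elapsedSeconds: number, degreesPerSecond = 30): number {
  return (elapsedSeconds * degreesPerSecond) % 360;
}
```

The payoff of keeping the 3D layer a consumer of UI state is that hover highlights, selection camera moves, and shader effects all become reactions to one source of truth instead of a second, parallel state machine.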

If you want to go further, React Three Fiber can make the integration feel more idiomatic for React teams, but the core point stands: web-native tooling is now good enough to prototype game-grade interfaces quickly.

