Dan Rosenthal's Claude + GitHub Company OS Starter Kit


A deeper look at Dan Rosenthal's AI-native GTM Company OS: Claude Code plus GitHub structure, playbooks, and reusable GTM skills.



Dan Rosenthal recently shared something that caught my attention: "We're building an AI-native GTM services company. Claude Code has become a huge part of it." He followed that with a detail that explains why the idea resonated: his team is "building our Company OS on GitHub" and using that foundation to "power ops through Claude Code." After 100k+ impressions and a flood of questions, he packaged the setup into a Claude + GitHub Company OS Starter Kit.

I want to expand on what Dan is really describing here, because it's bigger than a toolkit. It's a practical operating model for modern go-to-market teams: store your institutional knowledge in a structured, versioned place (GitHub), then let an AI agent (Claude Code) navigate, transform, and execute against it.

What Dan Rosenthal means by an "AI-native GTM services company"

When someone says "AI-native," it can sound like marketing. Dan's post makes it concrete: Claude Code is not a side utility; it's embedded in day-to-day delivery. In an AI-native GTM services model, the service is not only expertise; it's also the system that consistently produces outputs: positioning docs, ICP models, outreach copy, discovery prep, and weekly operating rhythms.

The difference matters:

  • Traditional services rely on individual consultants' brains and scattered docs.
  • AI-native services rely on a shared, evolving repository of patterns and artifacts that AI can reuse and remix.

That is why the "Company OS" concept is so important. If you want AI to help reliably, you need a place where "truth" lives and is kept current.

Key insight: AI gets dramatically more useful when it has a well-organized, version-controlled source of context.

Why GitHub works as a Company OS (even for non-engineers)

Dan's earlier post (the one he references) was about building a Company OS on GitHub. That choice is deliberate. GitHub is not just for code. It's a powerful home for:

  • Processes (documented as Markdown)
  • Templates (copy blocks, checklists, prompt patterns)
  • Decision logs (what you tried, why it worked, what changed)
  • Client deliverables (structured, repeatable outputs)
  • Internal tools (small scripts, automations, integrations)

The hidden advantage is versioning. In GTM, your "best practices" change constantly. A version-controlled OS lets you:

  • See what changed and when
  • Review changes before they go live
  • Roll back if something breaks
  • Standardize improvements across a team

If you have ever had five different "final" versions of an ICP doc in different Google Drive folders, you already understand the appeal.

Where Claude Code fits in the loop

Dan specifically calls out Claude Code as "a huge part" of their build. The important nuance is that Claude Code is not just writing content. It's acting as an interface to the OS.

Think of it as three layers:

  1. Storage layer (GitHub): the canonical source of your playbooks, templates, and client artifacts.
  2. Instruction layer (Markdown guides): the rules of how work gets done, and how to use the assets.
  3. Execution layer (Claude Code): the agent that can read, draft, refactor, and assemble outputs using the assets.

When these are connected, you can ask for results in a way that is repeatable:

  • "Generate a first-draft outbound sequence using our tone guide and the ICP scoring rubric."
  • "Prepare a discovery call plan using our standard qualification framework and this prospect's website copy."
  • "Update the LinkedIn post outline template based on the last three posts that performed best."

The point is not that AI can write. The point is that AI can write in your system.

What's in the Claude + GitHub Company OS Starter Kit (and why each piece matters)

Dan listed the components of the starter kit. Here's how I'd interpret each one, and what it unlocks.

1) "How to get started on Cursor (Notion guide)"

Cursor is becoming the "IDE" for knowledge work the same way developer IDEs became the cockpit for code. A getting-started guide is not fluff. It reduces setup friction so the OS actually gets used.

Practical takeaway: if you want adoption, document the first 30 minutes. Not the philosophy.

2) "Company OS blueprint"

A blueprint is the map: what major areas exist, how they relate, and where new artifacts should live.

A good blueprint answers:

  • What are the main workstreams in our GTM motion?
  • What artifacts should exist for each workstream?
  • Who owns updates?
  • What is considered "done" and "ready to reuse"?

3) "Full folder structure + .md guides"

This is where the OS becomes real. Folder structure is not aesthetics. It is retrieval.

If Claude (or any teammate) cannot quickly find:

  • the latest ICP model
  • your value props and proof points
  • outbound sequences that already worked
  • your discovery checklist

then the OS is just storage, not leverage.

A simple structure many teams can start with:

  • /00-README (how to navigate)
  • /01-Positioning
  • /02-ICP-and-Segmentation
  • /03-Outbound
  • /04-Discovery
  • /05-Content
  • /06-Analytics-and-Experiments
  • /07-Client-Delivery
  • /99-Archive
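If you want to bootstrap that layout, a short script can create the skeleton with a stub README in each folder. This is an illustrative sketch, not part of Dan's kit; the folder names simply mirror the sample structure above.

```python
from pathlib import Path

# Folder names mirror the sample structure above; adjust to your own workstreams.
FOLDERS = [
    "00-README",
    "01-Positioning",
    "02-ICP-and-Segmentation",
    "03-Outbound",
    "04-Discovery",
    "05-Content",
    "06-Analytics-and-Experiments",
    "07-Client-Delivery",
    "99-Archive",
]

def scaffold(root: str) -> None:
    """Create the Company OS folder skeleton, with a stub README per folder."""
    base = Path(root)
    for name in FOLDERS:
        folder = base / name
        folder.mkdir(parents=True, exist_ok=True)
        readme = folder / "README.md"
        if not readme.exists():
            readme.write_text(
                f"# {name}\n\nWhat lives here, who owns it, and what 'approved' means.\n"
            )
```

Run `scaffold("company-os")` once, commit the result, and you have a navigable home before any content exists.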

4) "Workflows GTM Engineering plugin"

This is the automation bridge. The plugin concept implies repeatable execution: turning templates and guides into guided workflows, internal tools, or semi-automated deliverables.

Even if you never build a full product, a lightweight plugin approach can:

  • standardize steps
  • prefill documents
  • enforce checklists
  • reduce quality variance
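To make "enforce checklists" and "prefill documents" concrete, here is a minimal sketch of what a lightweight workflow step could look like. The template, field names, and checklist are hypothetical, not from Dan's plugin: the point is that the workflow fails fast when required inputs are missing, then prefills a standard document.

```python
from pathlib import Path
from string import Template

# Hypothetical outbound-brief workflow: required inputs act as the checklist.
REQUIRED_INPUTS = ["company", "icp_segment", "value_prop"]

BRIEF_TEMPLATE = Template(
    "# Outbound brief: $company\n"
    "- ICP segment: $icp_segment\n"
    "- Lead value prop: $value_prop\n"
)

def prefill_brief(inputs: dict) -> str:
    """Enforce the input checklist, then prefill the brief template."""
    missing = [key for key in REQUIRED_INPUTS if not inputs.get(key)]
    if missing:
        raise ValueError(f"Checklist incomplete, missing: {missing}")
    return BRIEF_TEMPLATE.substitute(inputs)
```

Even this much reduces quality variance: nobody (human or AI) starts drafting until the inputs are actually there.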

5) "Base set of GTM skills"

Dan calls out a set of skills like "outbound copywriter, LinkedIn post writer, ICP modelling, GTM strategy, discovery prep." This is a subtle but powerful framing.

Instead of thinking "we need prompts," think "we need roles." Each role becomes a reusable skill module:

  • Purpose: what the skill is for
  • Inputs: what information is required
  • Process: the steps the AI should follow
  • Outputs: what the deliverable format must be
  • Quality bar: examples, constraints, do's and don'ts
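As one possible shape for such a module, here is a sketch of what a skill file might look like as Markdown in the repo. The filename and details are illustrative, not taken from Dan's kit; the headings map directly to the structure above.

```markdown
<!-- /skills/outbound-copywriter.md (hypothetical example) -->
# Skill: Outbound Copywriter

## Purpose
Draft first-pass cold outbound sequences in our house tone.

## Inputs
- Approved ICP doc (/02-ICP-and-Segmentation)
- Tone guide (/01-Positioning)
- Target account notes

## Process
1. Read the ICP doc and pick the matching segment.
2. Draft a 3-step sequence: opener, value follow-up, breakup.
3. Flag any claim not backed by a proof point.

## Outputs
Markdown file with one section per email, subject lines included.

## Quality bar
- Under 90 words per email. No jargon from the banned-phrases list.
```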

That structure is what makes AI outputs consistent across team members and clients.

If your AI instructions are organized as skills, you can improve one module and raise output quality everywhere.

A practical way to set this up from scratch (the "Day 1" version)

Dan said, "This is how I would start if I was rebuilding from scratch." Here's a pragmatic sequence that matches that spirit.

Step 1: Create a single "Company OS" repo

Start with one repo. Avoid the temptation to split everything. The goal is a shared home for your operating knowledge.

Step 2: Add a navigation README

Include:

  • what lives where
  • how to request changes
  • the definition of "approved" templates

Step 3: Document 3 core workflows

Pick the workflows that drive revenue and delivery:

  • ICP modeling and target list creation
  • outbound messaging and sequencing
  • discovery call preparation

Write each as a Markdown playbook with:

  • objective
  • steps
  • input checklist
  • output templates
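A playbook following that structure can be very short to start. This skeleton is illustrative (the specifics are placeholders, not Dan's content); the value is that every playbook in the repo reads the same way.

```markdown
<!-- /04-Discovery/playbook-discovery-prep.md (hypothetical example) -->
# Playbook: Discovery Call Prep

## Objective
Walk into every first call with a tailored plan, not a generic script.

## Steps
1. Pull the prospect's website copy and recent announcements.
2. Map them to an ICP segment and likely pains.
3. Fill in the discovery plan template below.

## Input checklist
- [ ] Prospect domain
- [ ] ICP segment
- [ ] Qualification framework version

## Output template
See /04-Discovery/templates/discovery-plan.md
```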

Step 4: Build the first skills library

Create one folder for skills (role modules). Even 5 to 10 solid modules can beat 200 scattered prompts.

Step 5: Connect Claude Code and enforce reuse

The cultural change is this: before creating something new, search the OS. Then extend what's already there.
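"Search the OS first" can even be a one-function habit. Here is a naive full-text search sketch (illustrative only; in practice you might just ask Claude Code to search, or use ripgrep) that lists the Markdown files mentioning a topic so you extend them instead of duplicating them.

```python
from pathlib import Path

def search_os(root: str, query: str) -> list:
    """Naive full-text search over the Company OS: return the Markdown
    files that mention the query, so existing assets get extended
    rather than duplicated."""
    hits = []
    for path in Path(root).rglob("*.md"):
        if query.lower() in path.read_text(errors="ignore").lower():
            hits.append(path)
    return sorted(hits)
```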

Common pitfalls (and how to avoid them)

A GitHub-based OS can fail if it becomes a dumping ground. A few guardrails help:

  • Make ownership explicit: every folder has an owner who reviews updates.
  • Separate drafts from approved assets: label clearly.
  • Write for scanning: short sections, checklists, consistent templates.
  • Capture decision logs: when you change a template, write why.

Also remember: AI will amplify whatever you store. If the OS contains inconsistent tone, outdated positioning, or conflicting ICP definitions, the AI will produce confident chaos.

Why this approach creates compounding GTM advantage

The reason Dan's post sparked so many comments is that it speaks to a real pain: GTM work repeats, but teams rebuild it from scratch every time.

A Company OS flips that:

  • Every client engagement improves the library.
  • Every experiment updates the playbook.
  • Every template becomes faster to deploy.
  • Claude Code turns the library into execution, not just documentation.

Done well, this becomes a flywheel: structured knowledge makes AI better, better AI makes it easier to create and maintain structured knowledge.

Closing thought

Dan offered his kit with a simple CTA: comment "OS" and he'll send it over. The real takeaway is not the download. It's the model: treat GTM like engineering, store the system in GitHub, and use Claude Code as the teammate that can actually operate inside that system.

This blog post expands on a viral LinkedIn post by Dan Rosenthal, Co-Founder @ Workflows.io | Growth playbooks using AI. View the original LinkedIn post →
