
Adriano Herdman on the New Hiring Bar in the AI Era


Adriano Herdman warns your hiring bar is outdated. Here is a practical plan to refresh signals, JDs, interviews, and scorecards.



Adriano Herdman recently shared something that caught my attention: "If you're a Head of TA and you haven't updated your hiring bar in the last 12 months, AI just made it obsolete." He followed it with a clear promise: "We built a free interactive guide to help you fix that."

That opening hits because it names a problem most recruiting leaders can feel but struggle to quantify. The market did not just add another tool. Over the last year, AI crossed a threshold where it started changing the actual work, not just the workflow around the work. And when the work changes, the signals we use to predict performance change too.

In this post, I want to expand on Adriano's core point and turn it into a practical, blog-friendly blueprint. Not a theory piece. More like: what should a Head of Talent Acquisition actually do on Monday morning to rebuild the hiring bar for Engineering, GTM, and Design when AI is now part of the job?

The uncomfortable truth: your best signals might be lagging indicators

Adriano frames it as a shift: something crossed a threshold "over winter 2025," and the signals we have been hiring on "no longer predict performance." I think that is exactly right, and the reason is simple.

When AI becomes a baseline capability, two things happen at once:

  1. Output becomes easier to produce (more code, more copy, more mockups).
  2. Judgment becomes harder to evaluate (what to build, what to ignore, what is safe, what is true, what is worth shipping).

Old hiring bars tend to overweight effort proxies (years of experience, pedigree, tool checklists, familiarity with a framework) because those proxies used to correlate with output. But AI compresses the distance between "knows the tool" and "produces something plausible." The new differentiator is whether someone can reliably produce outcomes when AI is in the loop.

So the hiring question changes from:

"Can this person do the work?"

to:

"Can this person do the work with AI, and still be right, safe, and effective?"

What "new signals" look like (and why they matter)

Adriano's guide breaks the problem into useful parts: by function, a skills matrix, a JD rebuilder, an interview guide, and a candidate scorecard. That is a smart sequence because it matches how hiring systems actually run.

Here is how I would expand each component.

1) The Shift: what crossed the threshold, practically speaking

Even if you do not buy any specific date, the threshold idea is helpful. In many teams, AI moved from "optional productivity hack" to "default teammate" across core tasks:

  • Engineers use AI for scaffolding, debugging, tests, migrations, and docs.
  • GTM teams use AI for research, targeting, personalization, sequencing, and call prep.
  • Designers use AI for ideation, exploration, UI variants, copy, and rapid prototyping.

This forces a change in performance measurement. High performers are not the ones who use AI the most. They are the ones who:

  • Ask better questions and define constraints clearly.
  • Detect hallucinations, edge cases, and silent failures.
  • Integrate AI output into real systems and real customer contexts.
  • Maintain quality under speed.

If your hiring bar does not test those abilities, it is testing for a world that is fading.

2) By Function: old signals vs new, side by side

Adriano calls out Engineering, GTM, and Design explicitly. Here is a practical translation of "old signals" versus "new signals" you can use in calibration sessions.

Engineering

Old signals losing predictive value:

  • Memorized syntax and trivia-heavy interviews
  • Framework name-dropping without system thinking
  • "Solo hero" narratives that ignore collaboration and review

New signals that matter more:

  • Problem framing: clarifying requirements, constraints, and tradeoffs
  • AI-aware debugging: using AI to propose hypotheses, then validating with instrumentation
  • Quality systems: tests, observability, security thinking, and guardrails
  • Product judgment: shipping the right thing, not just shipping fast

A concrete marker: ask a candidate to describe a time AI sped them up but also introduced risk, and how they caught it.

GTM (Sales, Marketing, CS)

Old signals losing predictive value:

  • High volume activity metrics as a proxy for effectiveness
  • Generic "personalization" claims with no evidence
  • Tool list resumes (CRM, outreach, analytics) without strategy

New signals that matter more:

  • Customer research depth: synthesizing messy information quickly and accurately
  • Message testing loops: hypothesis, experiment design, iteration
  • AI-assisted personalization with compliance and brand control
  • Objection handling with truthfulness and nuance (no confident nonsense)

A concrete marker: can they build a targeted point of view with AI, then critique it for accuracy and differentiation?

Design

Old signals losing predictive value:

  • Portfolio polish without clarity on decision-making
  • Over-indexing on tool mastery alone
  • Output-only stories (screens) with little discovery and validation

New signals that matter more:

  • Taste plus rationale: why this solution, for this user, now
  • Rapid exploration with AI while preserving coherence and accessibility
  • Systems thinking: components, content rules, and scalable patterns
  • Cross-functional influence: aligning PM and Eng on what matters

A concrete marker: can they show how they used AI to widen the option space, then narrowed it with user and business constraints?

3) Skills Matrix: define "solid" vs "exceptional" in an AI world

Adriano mentions a "Skills Matrix" that maps competencies from junior to senior with clear definitions of "solid" versus "exceptional." This is where most companies still struggle, because they list skills but do not operationalize them.

If you build only one thing this quarter, build a matrix that includes AI-era competencies such as:

  • Problem framing and prompt clarity (not prompt tricks)
  • Verification habits (sources, tests, second-order checks)
  • Data handling and privacy judgment
  • Workflow design (where AI fits, where it must not fit)
  • Communication of uncertainty (what is known vs assumed)

Then define levels:

  • Solid: uses AI to accelerate tasks, checks work, documents decisions.
  • Exceptional: designs repeatable AI-assisted systems, teaches others, raises quality and safety while increasing speed.

This prevents the most common failure mode: hiring people who can demo AI fluency but cannot be trusted with AI in production.
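
If it helps to make "operationalize" concrete, here is a minimal Python sketch of a matrix where each competency pairs a "solid" and an "exceptional" definition written as observable behaviors. The competency names and descriptions below are hypothetical placeholders, not Adriano's actual matrix; the point is that levels become something a panel can calibrate against, not adjectives.

```python
from dataclasses import dataclass

@dataclass
class Competency:
    name: str
    solid: str        # observable behavior that clears the bar
    exceptional: str  # observable behavior that raises it

# Hypothetical AI-era competencies; swap in your own definitions.
ENGINEERING_MATRIX = [
    Competency(
        name="Verification habits",
        solid="Checks AI output against tests and sources before shipping",
        exceptional="Builds verification steps the whole team adopts",
    ),
    Competency(
        name="Problem framing",
        solid="States constraints and success criteria before prompting",
        exceptional="Turns ambiguous requests into testable specs",
    ),
]

def print_matrix(matrix: list[Competency]) -> None:
    """Print each competency so a panel can calibrate on shared definitions."""
    for c in matrix:
        print(f"{c.name}\n  Solid: {c.solid}\n  Exceptional: {c.exceptional}")

print_matrix(ENGINEERING_MATRIX)
```

The design choice that matters: every level definition is a behavior someone could observe in an interview or work sample, so two interviewers reading the same matrix should score the same evidence the same way.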

4) The JD Rebuilder: cut the noise, add what you will measure

Adriano says: "see exactly what to cut, what to replace it with, and why" and offers "copy-ready language." The deeper point is that job descriptions are not marketing. They are instruments of alignment.

What to cut (common examples):

  • Inflated years of experience requirements
  • Laundry lists of tools that change every year
  • Vague traits like "rockstar" or "fast-paced" without behavioral definitions

What to replace them with:

  • Outcome-based responsibilities (what success looks like in 90 days)
  • AI-in-the-loop expectations (where AI is used, and quality expectations)
  • Evaluation transparency (what the interview will test)

If AI changed the work, your JD should say so explicitly. Otherwise, you will attract candidates optimized for the old job.

5) Interview Guide: 7 questions that surface adaptation (not buzzwords)

Adriano includes an "Interview Guide" with "7 questions" plus green and red flags. I love that framing because interview questions are only as good as the signals you attach to them.

Here are examples of what you want your questions to reveal:

  • Can the candidate describe a real workflow where AI is a component, not the hero?
  • Do they talk about validation, or only about speed?
  • Can they explain tradeoffs, or do they over-claim certainty?

One high-signal question format is: "Walk me through a recent piece of work where you used AI. Where did it help, where did it mislead you, and what checks did you run before shipping?"

Green flags: specific steps, concrete checks, humility, learning loops.
Red flags: hand-waving, secrecy, or "I just trust it" thinking.

6) Candidate Scorecard: make the new bar impossible to ignore

Adriano finishes with a "Candidate Scorecard" that rates candidates across five dimensions in real time. That matters because most hiring failures are not due to lack of opinions. They are due to inconsistent scoring.

A modern scorecard should:

  • Force a separation between "output" and "judgment"
  • Include AI-era competencies (verification, workflow design, risk awareness)
  • Require evidence snippets (what the candidate said or did)
  • Reduce recency bias by anchoring on definitions of solid vs exceptional

If your panel cannot articulate why someone is exceptional under the new bar, they are probably evaluating the old one.
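
To show how a scorecard can enforce that separation in practice, here is a minimal Python sketch. The five dimension names are hypothetical stand-ins for whatever your own matrix defines; the key design choice is that a rating without an evidence snippet counts as incomplete, which forces interviewers to anchor scores in what the candidate actually said or did.

```python
from dataclasses import dataclass, field

# Hypothetical dimension names; replace with your own matrix's competencies.
DIMENSIONS = [
    "output_quality",
    "judgment",
    "verification_habits",
    "workflow_design",
    "risk_awareness",
]

@dataclass
class DimensionScore:
    rating: int    # 1 = below bar, 2 = solid, 3 = exceptional
    evidence: str  # what the candidate actually said or did

@dataclass
class Scorecard:
    candidate: str
    interviewer: str
    scores: dict[str, DimensionScore] = field(default_factory=dict)

    def is_complete(self) -> bool:
        """A card only counts once every dimension has a rating and evidence."""
        return all(
            dim in self.scores and self.scores[dim].evidence.strip()
            for dim in DIMENSIONS
        )

# Usage: a card with one evidenced rating still fails the completeness check.
card = Scorecard(candidate="Jane Doe", interviewer="Panel A")
card.scores["judgment"] = DimensionScore(
    rating=3, evidence="Flagged a hallucinated API in the take-home"
)
print(card.is_complete())  # False until all five dimensions have evidence
```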

A simple 30-day plan to update your hiring bar

If I were implementing Adriano's thesis inside a TA org, I would do this in four weeks:

  1. Week 1: Run a calibration workshop with hiring managers. List old signals you rely on today. Identify where AI breaks them.
  2. Week 2: Build function-specific competencies and level definitions (solid vs exceptional). Keep it short and testable.
  3. Week 3: Rewrite the JD templates and align interview loops to the competencies.
  4. Week 4: Launch scorecards and do a retro after 5-10 interviews to tighten signals.

This is not about making hiring harder. It is about making it predictive again.

Closing thought

Adriano Herdman did not just share a resource. He made a strategic warning: if you have not updated your hiring bar in 12 months, you may be selecting for a version of performance that AI has already outgrown. The good news is that the fix is not mysterious. It is systems work: define new signals, update your JDs, ask better questions, and score consistently.

This blog post expands on a viral LinkedIn post by Adriano Herdman, Talent Solutions for Technology businesses. View the original LinkedIn post ->

Grow your LinkedIn to the next level.

Use ViralBrain to analyze top creators and create posts that perform.

Try ViralBrain free