
How to assess AI Fluency in devs

Practical tips for using the new AI Fluency [Beta] section of candidate profiles in your hiring process

Written by Natalie Hoal
Updated today

The AI Fluency [Beta] section in candidate profiles helps you understand how comfortable a candidate is with AI tools, features and infrastructure. Used well, it can speed up screening, sharpen your interviews, and surface candidates who’ll thrive in AI‑enhanced teams.

👉 You can learn more about this Beta release here.

Below are some practical tips on how to read the section, how to use it during sourcing and screening, how to interview with it in mind, and how to combine it with your overall evaluation.

1. How to read the AI Fluency [Beta] section

Candidates can indicate how they use AI across a few dimensions, for example:

  • “I use AI in my process”: They use AI tools to support everyday work: coding, code review, testing, documentation, research, analysis, etc.

  • “I build AI features”: They ship product features powered by AI models, APIs or agents (e.g. recommendations, summarisation, chatbots, content generation).

  • “I build AI infrastructure”: They work on the plumbing behind AI: data pipelines, embeddings/vector stores, model orchestration, evaluation, monitoring.

  • “I’m early in my AI journey and learning”: They’re experimenting and upskilling but don’t yet use AI deeply in production.

Candidates also add:

  • A short summary of their AI experience and mindset

  • Links to projects, work experience and courses where AI played a role

  • A note on which areas of AI raise caution for them (e.g. privacy, bias, IP)

What to look for at a glance

  • Depth vs. breadth
Are they just mentioning tool names, or describing concrete workflows (“use Copilot for tests and refactors; built a RAG search on top of OpenAI + Pinecone”)? For a sense of the level of detail to expect, see the sketch after this list.

  • Production vs. experimentation
    Have they shipped AI features/infrastructure to real users, or are they mostly experimenting in side projects and tutorials?

  • Ownership
    Do they talk about designing solutions, evaluating models, and making trade‑offs, or only about wiring up APIs someone else chose?

  • Risk awareness
    How do they think about privacy, security, bias and reliability when using AI? A thoughtful “caution” answer is a strong signal for production readiness.
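
To calibrate what “depth” sounds like in practice: a candidate describing a concrete workflow can usually narrate something at the level of the sketch below. This is a minimal, hypothetical Python version of the “RAG search on top of OpenAI + Pinecone” example above; the model names, index name and metadata field are illustrative assumptions, not a recommended setup.

```python
# Hypothetical sketch of a "RAG search on OpenAI + Pinecone" workflow.
# Assumes the openai and pinecone client libraries, API keys in the
# environment, and an existing "docs" index whose vectors carry a
# "text" metadata field. All names here are illustrative.
import os

from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("docs")


def rag_answer(question: str) -> str:
    # 1. Embed the question.
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # 2. Retrieve the most relevant chunks from the vector store.
    results = index.query(vector=embedding, top_k=3, include_metadata=True)
    context = "\n".join(match.metadata["text"] for match in results.matches)

    # 3. Generate an answer grounded in the retrieved context.
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

You don’t need candidates to write this in an interview. The signal is that they can walk through the same steps (embed, retrieve, ground the answer) and explain how they checked the results.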

2. How to use AI Fluency in your hiring process

a) During sourcing and screening

  • Match to your AI needs, not a generic “AI person”

    • If you mainly need engineers who use AI tools effectively, look for “I use AI in my process” with concrete examples.

    • If you’re building AI‑powered features, prioritise candidates who selected “I build AI features” and linked relevant projects.

    • If your challenge is data and infra, look for “I build AI infrastructure” plus experience with pipelines, vector stores, orchestration, etc.

  • Cross‑check with Projects and Work Experience

• Click through to projects marked as using AI and see where they’ve linked AI work to specific roles. Strong profiles are usually consistent across the AI Fluency, Projects and Work Experience sections.

  • Use AI Fluency as a tiebreaker

    • When two candidates are similar on core skills, the one with stronger, better‑explained AI fluency is often more adaptable in fast‑changing environments.

b) Before the interview

• Pull 2–3 concrete examples from their AI Fluency and Projects
    Note specific tools, features or problems they mention. Base your interview scenarios and follow‑up questions on those, instead of asking only generic “tell me about AI” questions.

  • Align on expectations internally
    Decide what level of AI fluency you actually need for the role (tool usage vs. feature development vs. infra), and calibrate your questions and evaluation accordingly. This avoids penalising strong candidates just because they’re not “AI experts” when the role doesn’t require it.

3. Interviewing with AI Fluency in mind

Below are example questions and what to listen for, organised by the type of AI experience a candidate indicates.

A. Candidates who “use AI in my process”

Goal: Assess whether they use AI tools thoughtfully to improve quality and speed, not just as a crutch.

Example questions

  1. “Walk me through a typical task where you use AI. What do you use it for, and what do you still do manually?”

    • Look for: clear workflows, awareness of tools’ strengths/limits, how they review AI‑generated output.

  2. “Tell me about a time AI saved you significant time or helped you get unstuck. How did you validate the result?”

    • Look for: critical thinking, testing, code review, not blindly trusting outputs.

  3. “Are there tasks where you deliberately avoid using AI? Why?”

    • Look for: judgment around sensitive data, security, or areas where AI tends to hallucinate or mislead.

Red flags 🚩

  • “I just paste errors into ChatGPT and ship whatever it suggests.”

  • No mention of testing or code review for AI‑generated changes.

  • No awareness of data/IP policies when using external tools.

B. Candidates who “build AI features”

Goal: Gauge how well they understand the models, UX and trade‑offs behind AI‑powered product features.

Example questions

  1. “Pick one AI‑powered feature you’ve built. What problem did it solve, and how did you decide that AI was the right approach?”

    • Look for: problem framing, consideration of non‑AI alternatives, user‑centred thinking.

  2. “How did you choose which model(s) or provider(s) to use? What trade‑offs did you make?”

    • Look for: awareness of latency, cost, quality, data residency, fine‑tuning vs. prompt engineering.

  3. “How did you evaluate whether the AI output was ‘good enough’?”

    • Look for: metrics (accuracy, usefulness, satisfaction), A/B tests, human‑in‑the‑loop processes.

  4. “Tell me about a failure or weird behaviour you saw from the model. How did you debug or mitigate it?”

    • Look for: prompt iteration, guardrails, fallbacks, input validation, monitoring.
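
For question 4, it helps to know what a simple answer can look like in practice. Below is a minimal, hypothetical Python sketch of a guardrail with a fallback for a summarisation feature: validate that the model output parses into the structure the feature expects, retry once, and degrade gracefully otherwise. The schema and the call_model placeholder are illustrative assumptions, not any specific product’s API.

```python
import json

REQUIRED_KEYS = {"summary", "tags"}  # illustrative schema for a summarisation feature


def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call; returns canned JSON so the sketch runs.
    return '{"summary": "A short summary.", "tags": ["demo"]}'


def summarise_with_guardrails(text: str, max_retries: int = 1) -> dict:
    prompt = "Return JSON with keys 'summary' and 'tags' for the text below.\n" + text
    for _attempt in range(max_retries + 1):
        raw = call_model(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output; retry
        # Guardrail: only accept output matching the expected structure.
        if isinstance(parsed, dict) and REQUIRED_KEYS.issubset(parsed):
            return parsed
    # Fallback: degrade gracefully instead of shipping bad output.
    return {"summary": text[:200], "tags": []}
```

Strong candidates describe some equivalent of this, whatever their stack: validation, retries and a deliberate fallback, rather than trusting the first response.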

Red flags 🚩

  • Can’t clearly articulate the problem the AI feature solves.

  • Chose a model purely because it was “popular” or “cool”, with no mention of constraints.

  • No evaluation beyond “it looked fine to me”.

C. Candidates who “build AI infrastructure”

Goal: Probe their experience with the less visible parts of AI systems: data, pipelines, orchestration and reliability.

Example questions

  1. “Describe an AI system you helped build or maintain. What did the architecture look like end‑to‑end?”

    • Look for: understanding of data ingestion, storage, feature/embedding stores, model serving, APIs, observability.

  2. “What were the main reliability or scaling challenges, and how did you handle them?”

    • Look for: queueing, batching, caching, rate limiting, circuit breakers, model/version management.

  3. “How did you monitor performance and detect issues in production?”

    • Look for: metrics for latency, error rates, quality drift; dashboards; alerts; evaluation sets.

  4. “Tell me about data quality or privacy concerns you had to address.”

    • Look for: PII handling, anonymisation, access control, compliance considerations.
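
As a reference point for questions 3 and 4, a lightweight evaluation-set check might look like the sketch below: run a fixed set of labelled examples through the system, track accuracy and latency, and flag drift past a threshold. Everything here (the evaluation set, the classify placeholder, the thresholds) is an illustrative assumption.

```python
import statistics
import time

# Illustrative evaluation set: (input, expected_label) pairs curated by the team.
EVAL_SET = [
    ("refund request for my last invoice", "billing"),
    ("app crashes on login", "bug"),
    ("how do I export my data?", "how_to"),
]


def classify(text: str) -> str:
    # Placeholder for the real model call being monitored; trivially
    # keyword-based here so the sketch runs end-to-end.
    if "refund" in text:
        return "billing"
    if "crash" in text:
        return "bug"
    return "how_to"


def run_eval(accuracy_floor: float = 0.9, p95_latency_ceiling_s: float = 2.0) -> None:
    correct, latencies = 0, []
    for text, expected in EVAL_SET:
        start = time.monotonic()
        prediction = classify(text)
        latencies.append(time.monotonic() - start)
        correct += prediction == expected

    accuracy = correct / len(EVAL_SET)
    p95_latency = statistics.quantiles(latencies, n=20)[-1]  # ~95th percentile

    # In production this would feed dashboards and alerts, not a print.
    if accuracy < accuracy_floor or p95_latency > p95_latency_ceiling_s:
        print(f"ALERT: accuracy={accuracy:.2f}, p95_latency={p95_latency:.4f}s")


run_eval()
```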

Red flags 🚩

  • Very vague description of infrastructure (“we just call the model API”).

  • No mention of observability or monitoring in production.

  • No awareness of data governance or security issues.

D. Candidates who are “early in my AI journey and learning”

Goal: Gauge motivation, learning approach and how quickly they might ramp up.

Example questions

  1. “What have you done so far to get up to speed with AI? Any courses, tutorials or side projects?”

    • Look for: structured learning, not just occasional YouTube videos.

  2. “What’s a recent AI concept or tool you found interesting, and why?”

    • Look for: curiosity, basic understanding of key ideas (e.g. embeddings, LLMs, RAG).

  3. “If you joined us tomorrow, how would you use AI to make your own work more effective?”

    • Look for: practical, incremental ideas rather than abstract hype.

Red flags 🚩

  • Vague “I’m interested in AI” with no concrete steps taken.

  • No clear understanding of how AI could help in their current role.

4. Combining AI Fluency with your evaluation

A few practical recommendations for using AI Fluency alongside your usual bar:

  • Keep your core bar constant. AI fluency is a bonus, not a substitute for fundamentals. Strong engineering, product or data skills still matter most.

  • Match expectations to role level. A junior developer using Copilot well may be a great hire even if they haven’t built production AI features yet. For seniors, look for higher‑level ownership, design and risk thinking.

  • Use AI in the interview itself (where appropriate).
    For example, you can:

    • Ask them to talk through how they’d supervise AI‑generated code or documentation.

    • Present a small, realistic scenario and ask how they would incorporate AI into the solution, step by step.

    • Discuss how they’d design a safe workflow for using AI with your type of data.

  • Be explicit about your AI expectations in the role.
    Share how your team currently uses (or doesn’t yet use) AI and what you’d expect from them in the first 3–6 months. This makes it easier to judge whether their current fluency is “enough” or whether you’ll need to invest in upskilling.


Used thoughtfully, the AI Fluency [Beta] section can give you a head start before you ever speak to a candidate. It lets you focus interviews on real experiences instead of generic AI hype, and helps you identify people who can not only work with AI tools, but also help your team use them responsibly and effectively.

