
Dylan Arnaud's 25 Power BI Interview Questions
A practical guide expanding Dylan Arnaud's viral Power BI interview questions, with deeper context on DAX, modeling, and performance.
Dylan Arnaud recently shared something that caught my attention: "25 Power BI interview questions nobody tells you." He then followed with crisp, interview-ready one-liners like "Desktop = creation, Service = sharing and collaboration" and "CALCULATE modifies the filter context of a measure." That mix of clarity and practicality is exactly why the post traveled fast.
I want to build on Dylan's list and turn it into a study guide you can actually use. Not just to memorize answers, but to show the kind of thinking recruiters are screening for: can you model data cleanly, write DAX that matches business intent, and ship reports that stay fast when the dataset grows?
Key idea Dylan Arnaud emphasized: recruiters prefer one project you truly master over ten projects you only skim.
What interviewers are really testing
Power BI interviews often look like a quiz, but underneath the questions are three themes:
- Can you separate authoring, publishing, and governance? (Desktop vs Service)
- Can you design a model that scales? (star schema, relationships, cardinality)
- Can you write DAX that is correct and performant? (context, CALCULATE, iterators)
If you answer with definitions only, you risk sounding shallow. If you anchor each answer in an example from a real report, you sound like someone who has shipped.
Power BI Desktop vs Service: more than a definition
Dylan Arnaud summarized it well: Desktop is for building, Service is for sharing and collaborating. In interviews, I would add two practical angles:
- Lifecycle: build and validate in Desktop, publish to Service, then manage refresh, permissions, apps, and distribution.
- Team workflows: in Service you prove you understand workspaces, deployment pipelines (if available), and governance (who can edit vs view, certified datasets, endorsements).
A strong answer includes when you would use each: "I prototype measures and model changes in Desktop, then publish and manage access, refresh schedules, and audience-specific apps in Service."
Import vs DirectQuery vs Live Connection: choose the tradeoff
Dylan laid out the essentials: Import loads data in memory, DirectQuery runs queries in real time, Live connects to Analysis Services.
To expand it, frame it as a decision:
- Import: fastest visuals, best DAX flexibility, but refresh-based latency and model size limits.
- DirectQuery: freshest data, but performance depends on the source, some DAX patterns are limited or slow, and transformations must fold back to the source query.
- Live connection (or thin reports): central semantic model managed elsewhere (SSAS or Power BI dataset), better governance, but you often cannot reshape the model from the report.
Interview tip: mention composite models only if you have used them, and explain why you combined Import with DirectQuery.
Data modeling fundamentals: facts, dimensions, and the star schema
Dylan Arnaud included the core building blocks: facts hold transactions and metrics, dimensions provide descriptive context, and the star schema places a central fact table with surrounding dimensions.
What I would add:
- Grain first: define what one row in the fact table means (one order line, one ticket, one session). Many modeling problems come from unclear grain.
- Relationships: Dylan noted the common pattern is 1-to-many from dimension to fact. That is the backbone of clean filtering.
- Avoid chains: star schemas avoid long join chains because they create ambiguity, complicate DAX, and hurt performance.
Many-to-many relationships: bridge tables over shortcuts
Dylan suggested bridge tables or bidirectional filtering (with caution). That caution matters.
A solid interview answer sounds like: "I prefer a bridge table when two entities can relate in multiple ways (like customers to segments or products to tags). I keep filtering single-direction where possible, and I only use bidirectional relationships when I can justify the model behavior and have tested ambiguity."
DAX essentials: measures, columns, and context
Dylan highlighted one of the most important distinctions:
- A calculated column is computed row by row at refresh time and stored.
- A measure is computed at query time and changes with filter context.
That leads directly to his one-liner on CALCULATE: it modifies the filter context of a measure. In practice, that means CALCULATE is how you express business logic like "sales for France" or "sales excluding returns" while respecting slicers and visuals.
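To make that concrete, here is a minimal DAX sketch. It assumes a Sales fact table with an Amount column and a hypothetical IsReturn flag, plus a Geography dimension with Country; all names are illustrative, not from Dylan's post:

```dax
-- "Sales for France": CALCULATE adds a filter on the Country column
Sales France =
CALCULATE (
    SUM ( Sales[Amount] ),
    Geography[Country] = "France"
)

-- "Sales excluding returns": CALCULATE narrows the rows being summed
Sales Excl Returns =
CALCULATE (
    SUM ( Sales[Amount] ),
    Sales[IsReturn] = FALSE ()
)
```

Both measures still respond to slicers and visuals; CALCULATE only overrides the filters it explicitly names.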
SUM vs SUMX: when iterators matter
Dylan noted SUMX when you need row-by-row calculation before aggregating. A classic example is margin:
- If margin is a stored column, SUM might work.
- If margin must be computed as (Price - Cost) per row then aggregated, SUMX is the correct pattern.
Interviewers love this because it reveals whether you understand evaluation context and performance. SUMX is powerful, but it can be slower. Mention that you use it intentionally.
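A quick sketch of the two patterns, assuming a Sales table with hypothetical Price, Cost, and a pre-stored Margin column:

```dax
-- Works only if margin is already stored per row
Total Margin (Stored) =
SUM ( Sales[Margin] )

-- Computes (Price - Cost) row by row, then aggregates
Total Margin (SUMX) =
SUMX ( Sales, Sales[Price] - Sales[Cost] )
```

In an interview, naming the difference out loud (SUM aggregates an existing column, SUMX iterates an expression over rows) shows you understand row context.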
ALL vs ALLSELECTED: reset filters, but not the same way
Dylan captured the difference: ALL ignores all filters, ALLSELECTED keeps external filters.
A practical framing:
- ALL is great for "percent of total" where the denominator should ignore the visual and slicers (it can also target specific columns to remove only some filters).
- ALLSELECTED is great when you want totals within the user selection, like a bar chart showing category share within selected year and region.
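The contrast is easiest to see side by side. A sketch assuming a Sales fact table with an Amount column (names are illustrative):

```dax
-- Denominator ignores all filters on Sales (and, via expanded tables, its dimensions)
% of Grand Total =
DIVIDE (
    SUM ( Sales[Amount] ),
    CALCULATE ( SUM ( Sales[Amount] ), ALL ( Sales ) )
)

-- Denominator keeps the user's slicer selection, removes only visual-level filters
% of Selection =
DIVIDE (
    SUM ( Sales[Amount] ),
    CALCULATE ( SUM ( Sales[Amount] ), ALLSELECTED () )
)
```

With a year slicer applied, the first measure's denominator still covers all years; the second's covers only the selected year.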
Year over Year (YoY): time intelligence needs a real date table
Dylan mentioned SAMEPERIODLASTYEAR or DATEADD with CALCULATE. The interview-friendly expansion is:
- Create or mark a proper Date table (Dylan referenced CALENDAR).
- Ensure relationships from Date to facts are correct.
- Then build measures like YoY, YoY%, and rolling periods.
If you can explain why time intelligence fails without a contiguous date table, you stand out.
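The three steps above can be sketched as follows, assuming a Sales fact table with an Amount column and an illustrative date range:

```dax
-- A contiguous Date table (adjust the range), then mark it as a date table
Date =
CALENDAR ( DATE ( 2020, 1, 1 ), DATE ( 2025, 12, 31 ) )

Total Sales = SUM ( Sales[Amount] )

Sales LY =
CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )

YoY % =
DIVIDE ( [Total Sales] - [Sales LY], [Sales LY] )
```

If the date table has gaps, SAMEPERIODLASTYEAR and DATEADD cannot shift periods reliably, which is exactly the failure mode worth explaining in an interview.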
Power Query vs DAX for missing data
Dylan suggested Power Query replacements or DAX like COALESCE and IF. I agree with the general rule:
- Fix data quality issues upstream when possible (Power Query or source system).
- Use DAX for presentation logic and edge cases.
Interview-ready phrasing: "I handle structural cleaning in Power Query (types, null handling, deduplication), and I use DAX to control how missing values appear in KPIs."
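For the DAX side, a minimal sketch of both options Dylan mentioned, assuming a Sales fact table with an Amount column:

```dax
-- Show 0 instead of blank when no rows match the current filters
Revenue (Zero Filled) =
COALESCE ( SUM ( Sales[Amount] ), 0 )

-- Same intent with IF, useful when the fallback needs extra logic
Revenue (IF Fallback) =
IF ( ISBLANK ( SUM ( Sales[Amount] ) ), 0, SUM ( Sales[Amount] ) )
```

Use this sparingly: forcing zeros onto every combination can bloat visuals that would otherwise hide empty rows.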
Report design and interactivity: visuals, drill-through, bookmarks
Dylan included practical UI questions that matter in real projects:
- Comparing categories: clustered columns or bar charts are often best.
- Drill-through: define target page and fields so users can go from summary to detail.
- Bookmarks: capture report states for navigation and guided storytelling.
Too many KPIs on one page: reduce cognitive load
Dylan suggested prioritization, tooltips, drill-through, and multiple pages. The deeper point is product thinking: one page should answer one primary question. Tooltips and drill-through let you keep a clean canvas while still supporting exploration.
Performance troubleshooting: what to do when a report is slow
Dylan's answer was on point: use DAX Studio to analyze, optimize measures, and reduce cardinality.
To expand that into a strong interview response, walk through a quick checklist:
- Identify the bottleneck: slow visuals, slow refresh, or slow interactions.
- Model hygiene: remove unused columns, reduce cardinality (especially in large text fields), and use proper data types.
- Measure optimization: avoid expensive iterators when unnecessary, reduce repeated calculations, and leverage variables.
- Source and query design: push aggregations to the source where feasible, ensure query folding, and consider aggregation tables.
This is where real experience shows. Even one concrete example (like optimizing a distinct count measure by changing the model or pre-aggregating) will beat a generic answer.
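One of the cheapest wins from the checklist, using variables to avoid repeated calculations, looks like this in DAX (table and column names are illustrative):

```dax
-- Without variables, SUM ( Sales[Amount] ) would be evaluated multiple times
-- Each aggregation below is computed once and reused
Margin % =
VAR TotalSales = SUM ( Sales[Amount] )
VAR TotalCost  = SUM ( Sales[Cost] )
RETURN
    DIVIDE ( TotalSales - TotalCost, TotalSales )
```

Variables also make measures easier to read and debug in DAX Studio, which reinforces the troubleshooting story above.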
The "project" questions: your best story beats perfect theory
Dylan included behavioral prompts: your best project, biggest technical challenge, handling last-minute changes, and training others. These are not fillers. They test whether you can deliver value with stakeholders.
Use Dylan's suggestion to structure the story (STAR: Situation, Task, Action, Result) and include numbers. For example: reduced manual reporting time by 8 hours per week, improved refresh time from 20 minutes to 5 minutes, or increased adoption to 200 weekly users.
A simple prep plan based on Dylan Arnaud's list
If I were preparing for a Power BI interview using Dylan's 25 questions, I would do this:
- Pick one flagship project you can demo and explain end-to-end.
- Review model design: star schema, relationship direction, many-to-many handling.
- Drill DAX fundamentals: context, CALCULATE, iterators, time intelligence.
- Practice performance steps: what you check first, and what tools you use.
Dylan Arnaud's reminder is the best closer: one mastered project beats ten shallow ones.
This blog post expands on a viral LinkedIn post by Dylan Arnaud, Data Analyst | I help companies steer their business and profitability by designing custom dashboards | Power BI & Fabric | ⭐ 5/5 Malt. View the original LinkedIn post →