The wrong interview question can cost more than a bad hire. Asking a candidate about their nationality, family plans, or date of birth exposes the company to discrimination claims, and such questions still appear in roughly 40% of interviews, according to SHRM research. Meanwhile, the right questions predict performance with twice the accuracy of unstructured conversations. This guide covers both problems.

Why Most Interview Questions Fail

Most interviewers ask the same ten questions they were once asked as candidates. That pattern produces three compounding problems: rehearsed answers, legal exposure, and inconsistency across interviewers that makes candidate comparison impossible.

Generic questions produce rehearsed answers

"Tell me about yourself." "What is your greatest weakness?" "Where do you see yourself in five years?" Every candidate has polished responses to these prompts before they walk in the door. The answers reveal preparation, not capability. You learn what the candidate thinks you want to hear, not how they actually think or behave under real conditions.

Structured, role-specific questions — particularly those tied to actual job scenarios — are far harder to pre-rehearse. A candidate can prepare a story about a generic weakness; it is much harder to fabricate a detailed, specific account of the last time they had to reverse a product decision under pressure with incomplete data.

Illegal questions create legal liability

The U.S. Equal Employment Opportunity Commission (EEOC) recovered $665 million in employment discrimination relief in FY 2023 alone. Many of those cases started with interview questions that touched on protected characteristics — age, religion, national origin, disability, pregnancy status, marital status. These questions do not have to be malicious to create liability. A hiring manager asking "where are you originally from?" to break the ice can generate a discrimination complaint if the candidate is not hired.

The legal standard is simple: if a question does not directly probe the candidate's ability to perform the job's essential functions, it should not be asked.

No consistency across interviewers breaks comparison

When three interviewers all ask different questions, their evaluations cannot be compared in any meaningful way. One interviewer explored communication skills; another spent 30 minutes on culture fit; a third went deep on technical knowledge. The hiring committee now has three incompatible data sets and defaults to gut feel — which is where bias enters. Structured interviews, where every candidate answers the same questions evaluated against the same criteria, solve this directly.

The 4 Types of Interview Questions

Not every question type serves the same purpose. Understanding the four main categories, and when to use each, is the foundation of an effective interview process.

1. Behavioural questions (STAR method)

Behavioural questions ask candidates to describe a specific past experience. The premise, validated repeatedly in organisational psychology research, is that past behaviour is the strongest available predictor of future behaviour. Interviewers evaluate answers using the STAR framework: Situation, Task, Action, Result.

The four components matter in different ways. The Situation and Task establish context and responsibility. The Action is the most diagnostic part — it reveals how the candidate actually thinks and operates, not how they think they should operate. The Result tells you whether their approach worked and whether they measure their own impact.

Example behavioural questions:

  • "Tell me about a time you had to deliver a project with fewer resources than originally planned. What did you cut, and how did you decide?"
  • "Describe a situation where you disagreed with your manager's decision. How did you handle it and what happened?"
  • "Walk me through the last time you made a significant error at work. What was your process for addressing it?"

When a candidate gives an incomplete answer — describing the situation but skipping the result — ask a neutral follow-up: "What was the actual outcome?" or "How did you measure success?" Avoid leading follow-ups that hint at the answer you want.

2. Situational questions

Situational questions present a hypothetical scenario tied to the role and ask the candidate to describe how they would respond. They are particularly useful for candidates with limited direct experience — graduates, career changers, or people moving into a new function — where there are not enough real reference situations to draw from.

According to Schmidt and Hunter's meta-analysis of personnel selection methods (most recently updated in 2016), situational interview questions have a predictive validity of around 0.35: significantly better than unstructured conversation (0.20), though somewhat lower than past-behaviour questions (0.51). Combining both types in a single interview produces the strongest predictive results.

Example situational questions:

  • "Imagine your team is two weeks from a product deadline when you discover a critical dependency will not be ready in time. How do you approach the next 24 hours?"
  • "A long-term client escalates a complaint directly to your CEO, copying you. Your team handled the account. What do you do?"
  • "You are leading a project where two senior stakeholders have conflicting requirements. Both report to the same VP. How do you move forward?"

3. Competency-based questions

Competency-based questions target a named skill or behavioural attribute — leadership, communication, analytical thinking, adaptability — rather than a general experience. They differ from pure behavioural questions in that they explicitly name the competency being assessed, which helps interviewers score consistently against a defined standard.

Example competency-based questions:

  • Leadership: "Describe how you brought together a team with different working styles to deliver a shared outcome."
  • Communication: "Tell me about the last time you had to explain a technical decision to a non-technical audience. What did you do to ensure they understood?"
  • Analytical thinking: "Walk me through a data problem you solved recently. What was the question, what data did you use, and what did you conclude?"
  • Adaptability: "Describe a period of significant change at work — reorganization, new leadership, pivot in strategy. How did you adjust your approach?"

For each competency-based question, you need a scoring anchor before the interview: a written description of what a 1, 3, and 5 response looks like. Without this, two interviewers asking the same question will score the same answer differently. See the rating scales section later in this guide for complete BARS examples.

4. Technical and skills-verification questions

Technical questions evaluate job-specific knowledge and skills. The most common mistake with technical questions is making them conversational: "How comfortable are you with SQL?" produces a confidence rating, not an ability rating. Effective technical questions ask candidates to demonstrate, not self-assess.

Example technical/skills-verification approaches:

  • Instead of "How well do you know Excel?" — "Here is a data set with 500 rows. Walk me through how you would calculate the monthly churn rate from this."
  • Instead of "Do you have experience with agile?" — "Tell me about the last sprint planning session you ran. What was your process for prioritizing the backlog?"
  • Instead of "Are you a good writer?" — "Here is a brief for a product announcement. Write a 100-word email version of it in the next 10 minutes."

Work sample tests — short tasks that mirror actual job content — have a predictive validity of 0.54 according to Schmidt and Hunter, which is the highest of any single selection method. Even a 15-minute mini-task embedded in the interview process produces better signal than an hour of unstructured conversation.

Illegal Interview Questions and Legal Alternatives

Under U.S. federal law — primarily Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), the Americans with Disabilities Act (ADA), and the Immigration Reform and Control Act (IRCA) — certain interview topics create direct legal exposure. The EEOC guidance on pre-employment inquiries is the primary reference. State laws (California, New York, Illinois) often add further restrictions, particularly around salary history and criminal records.

The breakdown below maps the eight highest-risk topics, shows the illegal phrasing that still appears in real interviews, and provides the legal alternative that gets you the same information without the exposure.

  • Nationality / Citizenship
    Illegal: "Where are you from originally?" / "Are you a U.S. citizen?"
    Legal alternative: "Are you authorized to work in the United States?"
    Why it matters: IRCA prohibits citizenship-based discrimination against authorized workers. National-origin questions violate Title VII.

  • Religion
    Illegal: "What religion do you practice?" / "Will you need any religious holidays off?"
    Legal alternative: "This role requires occasional weekend availability. Are you able to meet that requirement?"
    Why it matters: Title VII prohibits religious discrimination. Probe schedule availability in job-neutral terms; offer accommodation without asking why it's needed.

  • Age
    Illegal: "How old are you?" / "What year did you graduate high school?"
    Legal alternative: "Are you at least 18 years of age?" (only if legally required for the role)
    Why it matters: ADEA protects workers aged 40+. Graduation-year questions are a common proxy for age and carry the same risk.

  • Disability
    Illegal: "Do you have any health conditions?" / "Do you take prescription medication?"
    Legal alternative: "Are you able to perform the essential functions of this role, with or without reasonable accommodation?"
    Why it matters: The ADA prohibits pre-offer medical inquiries. Post-offer medical questions are permitted only if asked of all candidates for the same role.

  • Pregnancy / Family Plans
    Illegal: "Are you planning to have children?" / "Are you pregnant?"
    Legal alternative: "This role involves up to 30% travel. Is that something you can commit to?"
    Why it matters: Title VII (as amended by the Pregnancy Discrimination Act) prohibits all pregnancy-related questions. Probe travel or time requirements in job-neutral terms.

  • Marital Status
    Illegal: "Are you married?" / "What does your spouse do?"
    Legal alternative: Do not ask. No legal version of this question has job relevance.
    Why it matters: Marital status is a protected characteristic in most states and proxies for gender discrimination. There is no business justification for this question.

  • Financial Status
    Illegal: "Have you ever declared bankruptcy?" / "Do you own or rent your home?"
    Legal alternative: Only permissible for roles with fiduciary responsibility, via a formal background check process under the FCRA, not in-interview questioning.
    Why it matters: Financial-status questions correlate with protected characteristics and can create FCRA/Title VII exposure. Use formal, consented background checks if the role genuinely requires it.

  • Criminal Record
    Illegal: "Have you ever been arrested?" / "Do you have a criminal record?"
    Legal alternative: Defer to the post-offer stage. Use ban-the-box-compliant language and individualized assessment where records are relevant to specific job duties.
    Why it matters: Over 35 U.S. states and 150+ cities have ban-the-box laws. Blanket disqualification based on arrest records (as opposed to convictions) violates EEOC guidance and disparately impacts protected groups.

Small talk is not legally neutral

Many illegal questions surface in casual pre-interview conversation: "Did you travel far today?" leads to "Where do you live?"; "Exciting weekend?" leads to "Do you have kids?" The legal risk applies to any question asked during the hiring process, not just formal interview questions. Train interviewers on what constitutes a protected topic before they go into a room with a candidate.

Best Questions by Hiring Goal

Generic competency categories produce generic questions. The most useful interview question banks are organized by what you are actually trying to learn about the candidate — not just by label. Here are question sets for the five hiring goals that come up most often, along with the diagnostic intent behind each question.

For culture add

Culture add questions identify whether a candidate will contribute something genuinely different to the team — not just whether they fit an existing mould. Hiring for "fit" without defining what that means produces homogeneous teams. Interview bias thrives in vague culture-fit conversations.

  • "Describe an environment where you do your best work. What specifically about that environment enables you?"
  • "Tell me about a time you changed how a team operated for the better. What did you notice that others hadn't?"
  • "What do teams you join tend to get from working with you that they didn't have before?"

For leadership potential

Leadership potential questions surface evidence of initiative, influence without authority, and the ability to develop others. They are as relevant for individual contributors as for managers — the behaviours are present (or absent) regardless of formal title.

  • "Tell me about a time you took ownership of an outcome that wasn't technically your responsibility."
  • "Describe the last time you mentored or developed someone, formally or informally. What did you focus on and what changed for them?"
  • "Walk me through a situation where you had to get people to move in a direction they were initially resistant to."

For problem-solving

Problem-solving questions reveal how a candidate structures ambiguous situations. The key signal is in the Action component of STAR — specifically, whether they define the problem before acting, gather data, involve others, and adjust when their first approach doesn't work.

  • "Tell me about the most ambiguous problem you've worked on in the last 12 months. How did you decide where to start?"
  • "Describe a time your initial solution to a problem turned out to be wrong. What did you do next?"
  • "Walk me through a decision you made with incomplete information. What did you do to reduce uncertainty, and what did you ultimately decide?"

For communication

Communication questions should distinguish between output quality (clear, well-structured messages) and process quality (listening actively, adjusting to the audience, following up). Both matter and both are predictable from past behaviour.

  • "Tell me about a time you had to communicate complex information to someone who pushed back on your conclusions. How did you handle it?"
  • "Describe the last time a misunderstanding between you and a colleague caused a real problem. What was your role in the breakdown?"
  • "Give me an example of written communication you produced that had a significant impact — what was it and how did you approach it?"

For growth mindset

Growth mindset questions assess whether a candidate actively seeks feedback, changes their behaviour as a result of it, and applies learning across contexts. The diagnostic signal is specificity — vague answers about "always wanting to learn" reveal nothing. Concrete examples of changed behaviour reveal a lot.

  • "Tell me about the most useful piece of critical feedback you received in the last two years. What specifically did you change as a result?"
  • "Describe a skill you decided to build that was outside your natural strengths. What did the learning process look like?"
  • "What is the most significant way your professional approach has changed over the past three years, and what drove that change?"

Generate role-specific question sets with Edera AI

Treegarden's Edera AI builds a complete interview framework for each open role by analysing the job description and the candidate's CV together. It generates behavioural, situational, and competency-based questions mapped to the specific skills the role requires — not a generic library copy-pasted across positions. The interviewer gets a ready-to-use guide with BARS anchors before each session. See how it works →

Questions That Predict Performance

The research base on interview validity is clearer than most hiring teams realise. Schmidt and Hunter's 1998 meta-analysis — replicated and updated multiple times, including a 2016 revision covering 100 years of selection research — gives estimated predictive validity coefficients for different question types. Three categories stand out.

Structured situational questions

Situational questions with a scoring guide (what a 1, 3, and 5 answer looks like) outperform unstructured interviews by a factor of more than two. The key is that the "correct" response for each scenario must be defined in advance, based on input from high performers in the role or from job task analysis — not the interviewer's gut. Questions drawn from critical incidents (real situations that distinguish high performers from average performers in that specific role) produce the strongest signal.

Work sample questions

Work sample tests — asking a candidate to do a representative sample of the actual job — have the highest predictive validity of any selection method (0.54 per Schmidt and Hunter). A short writing task for a content role, a data analysis exercise for an analyst, a mock client call for a sales role, or a code review for a developer each produce more predictive data in 20 minutes than most hour-long interviews. The exercise does not need to be elaborate; it needs to mirror the actual work.

Past-behaviour questions

Questions in the form "Tell me about a time you…" outperform hypothetical questions ("What would you do if…") for experienced candidates because they require the candidate to recall specific events rather than construct ideal-sounding scenarios. Candidates who have faced a situation describe it differently — with real detail, real friction, real results — than candidates who are imagining what they would do. Training interviewers to probe for specificity ("What specifically did you say?" "What was the actual result?") filters out theoretical answers quickly.

For a deeper breakdown of how to score these against job outcomes, see interview scorecards in ATS and the structured interview guide.

Questions Interviewers Should Stop Using

Three questions appear in nearly every interview process and produce almost no useful information. They survive through habit, not evidence.

"What is your greatest weakness?"

This question has been so thoroughly coached that it has a fixed answer genre: a strength disguised as a weakness ("I work too hard") or a past weakness that is now resolved ("I used to struggle with public speaking but I took a course"). Neither response gives you diagnostic information about how the candidate actually performs. Replace it with a past-behaviour question about a specific failure: "Tell me about the last time your work fell short of what was expected. What happened and what did you do about it?" The specificity requirement makes it much harder to deliver a prepared non-answer.

"Where do you see yourself in five years?"

This question produces aspirational statements calibrated to what candidates think you want to hear. It does not predict performance, tenure, or fit. If you are trying to understand motivation and career direction, ask specifically: "What have you deliberately chosen to get better at in the last two years, and why?" or "What kind of work energises you most, and what does the opposite look like?" These questions reveal actual patterns of engagement rather than scripted ambition narratives.

"Why should we hire you?"

This is a sales prompt, not an evaluation question. The candidate who answers it best is the one with the most practised pitch — which correlates with interview preparation, not job performance. If you want to understand how a candidate perceives their own strengths and their fit for this role, ask them to connect their specific experience to a specific challenge the role involves: "Based on what you know about this role, where do you think your experience is most directly relevant, and where do you see the biggest gap you would need to close?" That question requires self-awareness and research, not performance.

The pattern behind low-signal questions

Questions that invite self-description rather than evidence — "How would you describe your leadership style?" "Are you a team player?" "How do you handle stress?" — share the same structural flaw: they ask candidates to characterise themselves rather than demonstrate capability through specific examples. Any question that can be answered with "I am a [positive trait]" produces low signal. Any question that requires "Here is a specific time I did X and here is what happened" produces high signal.

Building a Shared Question Bank

An interview question bank is not a list of questions. It is a structured, maintained resource that maps questions to competencies, includes scoring anchors, assigns questions to specific interviewers on the panel, and gets updated as the company learns what distinguishes high performers from average ones in each role.

How to organise it

Start with the competency map for each role: 4–6 skills or behaviours that are genuinely critical for success. These should come from the job description, input from the current top performers in that role, and any post-hire performance data you have. Generic competencies like "teamwork" and "communication" are fine starting points but need to be defined at the role level — what does "strong communication" mean specifically for a product manager versus a customer success manager? Different answers produce different questions.

For each competency, document:

  1. 2–3 behavioural questions that probe for past evidence
  2. 1 situational question for candidates with limited direct experience
  3. BARS anchors for scores 1, 3, and 5
  4. 2–3 suggested follow-up probes when candidates give incomplete answers
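
For teams storing the bank digitally, the four-part structure above maps onto a simple data record. Here is a minimal sketch in Python (the competency name, question wording, and anchor text are illustrative placeholders, not a prescribed schema):

```python
# One question-bank entry, following the four-part structure above:
# behavioural questions, a situational question, BARS anchors for
# scores 1/3/5, and follow-up probes. All wording is illustrative.
problem_solving_entry = {
    "competency": "Problem-solving",
    "behavioural_questions": [
        "Tell me about the most ambiguous problem you've worked on "
        "in the last 12 months. How did you decide where to start?",
        "Describe a time your initial solution turned out to be wrong. "
        "What did you do next?",
    ],
    "situational_question": (
        "Your team is two weeks from a deadline when a critical "
        "dependency slips. How do you approach the next 24 hours?"
    ),
    "bars_anchors": {
        1: "Describes the problem but took no independent action; no result.",
        3: "Defines the problem, acts logically, states a measurable outcome.",
        5: "Identifies a systemic issue others missed and prevents recurrence.",
    },
    "follow_up_probes": [
        "What was the actual outcome?",
        "How did you measure success?",
    ],
}

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems with an entry; an empty list means valid."""
    issues = []
    if len(entry.get("behavioural_questions", [])) < 2:
        issues.append("need at least 2 behavioural questions")
    if not entry.get("situational_question"):
        issues.append("missing situational question")
    if set(entry.get("bars_anchors", {})) != {1, 3, 5}:
        issues.append("BARS anchors must cover scores 1, 3, and 5")
    if len(entry.get("follow_up_probes", [])) < 2:
        issues.append("need at least 2 follow-up probes")
    return issues

print(validate_entry(problem_solving_entry))  # []
```

A validation pass like this catches incomplete entries (a missing anchor, a single probe) before a guide reaches an interviewer, rather than mid-interview.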

Ensuring panel consistency

In multi-interviewer panels, each interviewer should own a distinct set of competencies and their associated questions. This prevents the same question being asked three times while leaving other competencies entirely uncovered. Before the interview, all panel members should review the candidate's CV, the question guide, and the scoring rubric. After the interview, each interviewer completes their own scorecard independently — before any group debrief — to prevent anchoring bias (one strong opinion influencing everyone else's rating).

Calibration sessions — where the team scores the same recorded answer or reviews a recent hire's interview notes and compares them to actual performance — are the most effective tool for keeping ratings consistent across different interviewers. Even one calibration session per quarter reduces inter-rater variability significantly. See structured scoring in ATS platforms for how to run this digitally at scale.

Keeping it current

A question bank that is never updated becomes as generic as the questions you were trying to replace. Track which questions generate the most differentiated answers (some questions will reliably produce rich STAR responses; others will consistently produce vague non-answers). Connect new-hire performance data back to interview scores: if someone who scored 4/5 on "problem-solving" during the interview is underperforming six months later, the question or the scoring anchor needs revision. This feedback loop is what turns an interview process from a static document into a continuously improving hiring system. Interview feedback loops covers this in detail.
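
That feedback loop can start as a spreadsheet-level calculation: correlate each question's interview scores with later performance ratings. A minimal sketch in Python; the eight hires and their ratings below are invented for illustration:

```python
from statistics import mean

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: interview score on the "problem-solving" question
# (1-5 BARS scale) vs the same hire's 6-month performance rating (1-5).
interview_scores = [4, 3, 5, 2, 4, 3, 5, 2]
performance_ratings = [3, 3, 5, 2, 4, 2, 4, 2]

r = pearson(interview_scores, performance_ratings)
print(f"question validity against 6-month performance: r = {r:.2f}")
# A persistently low r suggests the question, or its BARS anchor,
# needs revision.
```

Real validation needs more hires than this toy sample, but even a rough per-question correlation shows which items in the bank are pulling their weight.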

Rating Scales for Interview Answers

A 1–5 rating scale without defined anchors is not a rating scale — it is five different people's intuitions mapped to numbers. Behaviourally Anchored Rating Scales (BARS) solve this by writing, in advance, the specific observable behaviours that distinguish each score level.

Example BARS rubric: Problem-solving

  • 1 (Below expectations): Candidate describes a problem but took no independent action to solve it, or escalated immediately without attempting a solution. No measurable result. Unable to identify what they specifically contributed.
  • 2 (Partial): Candidate took action but without a clear diagnostic process. The result is mentioned but vague ("it worked out"). Missing one or more STAR components. Answer is reactive rather than structured.
  • 3 (Meets expectations): Candidate clearly defines the problem, takes a logical sequence of actions, involves relevant stakeholders where appropriate, and states a measurable outcome. All STAR components present. Actions and results are specific.
  • 4 (Exceeds expectations): Candidate proactively identified the problem before it was flagged. Their approach demonstrates structured thinking (data gathered, hypotheses tested, decision made). Result is quantified and attributed. Shows reflection on what they would do differently.
  • 5 (Exceptional): Candidate identified a systemic issue others had missed, built a structured solution, involved and aligned stakeholders, measured impact, and applied the learning to prevent recurrence or improve the team's process. Demonstrates both technical depth and organisational awareness.

Calibration sessions

Even well-written BARS anchors drift in practice. Two interviewers reading the same anchor will interpret it differently after a few months without recalibration. A 30-minute calibration session — where the team scores the same candidate answer independently, then compares scores and discusses the gap — keeps raters aligned. The goal is not perfect agreement on every score; it is understanding why different scores were given and reaching a shared interpretation of the anchor.

For panel interviews covering 4–6 competencies, run calibration once per quarter or after any significant change in the role profile. Track score variance across interviewers: if one interviewer consistently scores 1–2 points higher than others on the same competencies, their anchor calibration needs revisiting.
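
Tracking that variance does not require special tooling. Here is a minimal sketch in Python; the interviewer names, scores, and one-point threshold are all invented for illustration:

```python
from statistics import mean

# Hypothetical scores each panel member gave on the same competency
# across the last five interviews.
scores = {
    "interviewer_a": [3, 4, 3, 4, 3],
    "interviewer_b": [3, 3, 4, 3, 3],
    "interviewer_c": [5, 5, 5, 5, 5],  # consistently high: recalibration candidate
}

def needs_recalibration(scores: dict[str, list[int]],
                        threshold: float = 1.0) -> list[str]:
    """Return interviewers whose personal mean deviates from the
    overall panel mean by more than `threshold` points."""
    panel_mean = mean(s for ratings in scores.values() for s in ratings)
    return [
        name for name, ratings in scores.items()
        if abs(mean(ratings) - panel_mean) > threshold
    ]

print(needs_recalibration(scores))  # ['interviewer_c']
```

Running a check like this before each quarterly calibration session tells you whose anchor interpretation to review first.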

For a detailed walkthrough of how to conduct a structured interview session from opening to debrief, and how AI scoring tools support the process, see AI-assisted interview question generation.

Build your interview question bank in Treegarden

Treegarden's ATS platform stores your question bank by role and competency, assigns questions to panel members automatically, collects independent scorecards from each interviewer, and surfaces BARS anchor scores in a side-by-side candidate comparison view. Your team stops comparing impressions and starts comparing evidence. Start your first structured interview process →

Frequently Asked Questions

What are the best interview questions to ask candidates?

The most predictive interview questions combine behavioural questions (past-experience STAR format), situational questions (hypothetical scenarios), and work sample questions. Examples: "Tell me about a time you had to convince stakeholders of an unpopular decision — what did you do and what happened?" or "Walk me through how you would approach building a reporting process from scratch for a 20-person team." Past-behaviour questions carry a predictive validity of 0.51 according to Schmidt and Hunter's meta-analysis, compared with 0.20 for unstructured conversation.

What interview questions are illegal to ask candidates?

Under U.S. Equal Employment Opportunity law (Title VII, ADEA, ADA, and IRCA), employers cannot ask about nationality, citizenship status, religion, age or date of birth, disability or medical history, pregnancy or family plans, marital status, financial history, or arrest records without job justification. The EEOC recovered $665 million in employment discrimination relief in FY 2023. Replace illegal questions with legal alternatives that directly probe job-related skills and availability.

What is the STAR method in interviews?

STAR stands for Situation, Task, Action, Result. It is a framework for structuring both interview questions and candidate answers. Interviewers ask for a specific past experience (Situation), the responsibility the candidate held (Task), the specific actions they took (Action), and the measurable outcome (Result). Candidates who provide complete STAR answers demonstrate structured thinking. Interviewers should prompt for missing components, especially the Result, which candidates most often omit.

How many interview questions should you ask per interview?

For a 45–60 minute interview, 6–8 substantive questions is the practical limit when you allow full STAR responses (4–6 minutes each) plus follow-up probes. Asking more than 10 questions forces interviewers to rush answers, which eliminates the depth that makes behavioural questions predictive. A structured question bank with 4–6 core competency questions per role gives you enough data to score and compare candidates objectively.

What questions should interviewers stop asking?

Three questions that produce low-quality data: "What is your greatest weakness?" (candidates answer with rehearsed non-weaknesses), "Where do you see yourself in 5 years?" (produces aspirational answers with no predictive power), and "Why should we hire you?" (a sales pitch, not an evaluation). Replace these with specific past-behaviour questions tied to the competencies your role actually requires.

How do you build a shared interview question bank for your team?

A team question bank starts with mapping the 4–6 competencies critical to each role (from the job description and input from top performers). Then write 2–3 behavioural and situational questions per competency, add BARS anchors defining what a score of 1, 3, and 5 looks like for each question, and store everything in your ATS or shared document. Assign questions to panel members so no question is asked twice. Calibration sessions where the team scores the same recorded answer help keep ratings consistent.

What is a BARS rating scale in interviews?

BARS stands for Behaviourally Anchored Rating Scale. It is a 1–5 scoring rubric where each score is defined by a specific description of what that level of answer looks like — not just a number. For example, on a "problem-solving" question: a score of 1 means the candidate described the problem but took no independent action; a score of 3 means they identified the issue, acted on it, and measured the outcome; a score of 5 means they proactively identified a problem others missed, led the resolution, and applied the learning to prevent recurrence. BARS anchors defined before the interview eliminate post-hoc rationalisation.

Do structured interview questions reduce hiring bias?

Yes. Research published in the Journal of Applied Psychology shows that structured interviews reduce the influence of demographic similarity bias, halo effect, and confirmation bias compared to unstructured conversations. When every candidate answers the same questions in the same order, evaluated against the same BARS anchors, the comparison is based on job-relevant evidence rather than impressions. The SHRM guidelines on interview compliance also recommend structured approaches as the standard for legally defensible hiring.

This article was created with AI assistance. Content has been editorially reviewed by the Treegarden team.