The $200K Dashboard Nobody Trusts
A CHRO at a 2,000-person technology company spent $200,000 on an employee experience platform last year. It collected data from onboarding surveys, quarterly pulse checks, exit interviews, and annual engagement questionnaires. The executive team received monthly slide decks filled with colour-coded heatmaps and trend lines. Yet when the board asked a direct question — “Is our employee experience getting better?” — the CHRO could not answer with confidence.
This is not an isolated case. It is the norm. Most organisations fall into the same trap: they conflate data collection with measurement. They run a single annual engagement survey, get a score, compare it to last year, and call it a day. The data is already stale by the time results are published. Worse, they measure satisfaction — how content people are — instead of experience, which spans every interaction from first job application to final exit interview.
The result is a measurement programme that costs money, consumes HR hours, and produces reports that leadership does not trust. Employees, having filled out surveys without seeing change, stop responding honestly. Managers, lacking specific guidance on what to improve, ignore the results entirely. The entire cycle repeats.
Fixing this requires abandoning the annual-survey mindset and replacing it with employee experience metrics that are continuous, specific, and tied to individual lifecycle stages. Instead of asking “Are employees happy?” you need to ask “What is the 90-day retention rate for new hires in engineering?” or “How many employees applied for internal roles this quarter?” These questions produce answers that drive action.
This guide presents 15 employee experience KPIs organised across five lifecycle stages: Attract, Onboard, Engage, Develop, and Retain. For each metric, you will find a precise definition, a measurement method, an industry benchmark, and a specific corrective action to take when the number falls below target. No vague advice. No abstract theory. Only what you need to measure employee experience in a way that produces board-ready answers and manager-level actions.
Why Annual Surveys Fail
According to Gallup's State of the Global Workplace report, only 23% of employees worldwide are engaged at work. Annual surveys miss the other 364 days of experience data. By the time results are published, the problems they reveal are months old.
Two Reasons Your EX Measurement Programme Is Broken
Before examining individual metrics, it is worth understanding the two structural failures that plague most employee experience measurement programmes. These are not tactical errors — they are foundational mistakes that render the entire system unreliable.
Problem 1: Surveying Annually Instead of Continuously
The annual engagement survey was designed for an era when employees stayed at companies for decades and organisational change happened slowly. Neither condition holds today. The median employee tenure in the United States is 4.1 years, according to Bureau of Labor Statistics data cited by SHRM. In technology and professional services, it is closer to 2.5 years. An annual survey means you get, at best, two or three data points during an employee’s entire tenure. That is not measurement — it is guesswork with a spreadsheet.
Continuous measurement does not mean bombarding people with surveys every week. It means combining passive operational data (turnover rates, application volumes, time-to-productivity) with targeted surveys triggered at specific lifecycle moments (onboarding day 30, post-promotion, return from parental leave). This approach generates a steady stream of actionable data without creating survey fatigue. If you are designing your employee experience programme from scratch, building continuous measurement into the foundation is far easier than retrofitting it later.
Problem 2: Measuring Satisfaction Instead of Experience
Satisfaction asks: “Are you content with your current situation?” Experience asks: “How well does this organisation support you at every stage of your employment?” These are fundamentally different questions. An employee can be satisfied with their salary and office setup while simultaneously having a terrible onboarding experience, no career development path, and a manager who never provides feedback. Satisfaction metrics will show green. Experience metrics will show red.
The distinction matters because satisfaction is a snapshot; experience is a journey. A person can be satisfied today and disengaged next quarter because they were passed over for a promotion or saw a colleague leave without anyone asking why. EX metrics track the full journey, from the moment a candidate lands on your careers page to the moment an alumnus leaves a Glassdoor review. When you measure the journey rather than a single moment, you can intervene before problems become departures.
The Lifecycle Framework: 5 Stages, 15 Metrics
Employee experience is not one thing — it is the sum of hundreds of interactions across distinct phases of the employment relationship. Measuring it requires employee experience KPIs tied to each phase. Here is the framework this guide uses:
- Attract (2 metrics): The pre-hire experience — how candidates perceive your employer brand and career site.
- Onboard (3 metrics): The first 90 days — how quickly new hires become productive and whether they stay.
- Engage (4 metrics): The ongoing experience — day-to-day sentiment, participation, and internal growth opportunities.
- Develop (3 metrics): Learning and advancement — whether employees are growing their skills and moving up.
- Retain (3 metrics): Longevity and departure signals — who is leaving, when, and what departing and staying employees say.
This structure ensures you do not over-index on any single phase. A company with outstanding engagement scores but 40% first-year turnover has an onboarding problem, not an engagement problem. A company with low turnover but zero internal mobility has a development problem that will eventually become a retention problem. For a deeper look at mapping these touchpoints, see our guide on employee journey mapping.
15 Employee Experience Metrics: Complete Overview
The following table summarises all 15 metrics. Each is explored in detail in the sections that follow.
| # | Metric | Lifecycle Stage | How to Measure | Benchmark | Action if Below |
|---|---|---|---|---|---|
| 1 | Application Completion Rate | Attract | Completed applications ÷ started applications × 100 | 60–70% | Shorten the application form; reduce required fields to under 10 |
| 2 | Career Page Conversion Rate | Attract | Visitors who click “Apply” ÷ total career page visitors × 100 | 8–12% | Improve job descriptions; add salary ranges and employee testimonials |
| 3 | Time-to-Productivity | Onboard | Days from start date until new hire meets defined performance baseline | 90–120 days (role-dependent) | Structure first-week schedule; assign onboarding buddy; set 30/60/90 goals |
| 4 | Onboarding NPS | Onboard | Survey at day 30 and day 90: “How likely are you to recommend our onboarding?” (0–10) | +40 or higher | Interview detractors; audit onboarding content for gaps; fix manager hand-off |
| 5 | 90-Day Retention Rate | Onboard | New hires still employed at 90 days ÷ total new hires in cohort × 100 | ≥85% | Conduct stay interviews at day 45; review role-expectation alignment |
| 6 | eNPS (Employee Net Promoter Score) | Engage | “How likely are you to recommend this company as a place to work?” (0–10). Score = %Promoters − %Detractors | +10 to +30 | Run focus groups with detractors; address top three complaint categories |
| 7 | Pulse Survey Score | Engage | Average score across quarterly pulse survey questions (scale 1–5 or 1–10) | ≥3.8 / 5 or ≥7.5 / 10 | Identify lowest-scoring questions; create department-level action plans |
| 8 | Survey Participation Rate | Engage | Surveys completed ÷ surveys distributed × 100 | ≥75% | Shorten survey length; communicate past actions taken from results; ensure anonymity |
| 9 | Internal Mobility Rate | Engage | Employees who changed role or department ÷ total headcount × 100 (annual) | 10–15% | Create internal job board; mandate internal posting before external; fund cross-training |
| 10 | Training Completion Rate | Develop | Employees who completed assigned training ÷ employees assigned × 100 | ≥80% | Shorten modules to under 20 minutes; allow self-paced scheduling; tie completion to goals |
| 11 | Training Application Rate | Develop | Employees applying skills from training on the job within 60 days ÷ total completers × 100 | ≥60% | Redesign training around real projects; add post-training manager check-ins |
| 12 | Promotion Rate | Develop | Employees promoted ÷ total headcount × 100 (annual) | 8–15% | Audit promotion criteria for bias; publish clear career ladders; increase manager training |
| 13 | Voluntary Turnover by Tenure | Retain | Voluntary departures in tenure band ÷ total employees in tenure band × 100 | <15% overall; <20% in year one | Segment by tenure band and department; target interventions at highest-risk cohort |
| 14 | Stay Interview Feedback Score | Retain | Average score from structured stay interviews (scale 1–5): “How satisfied are you with reasons to stay?” | ≥4.0 / 5 | Train managers on stay interview technique; act on top three requests within 30 days |
| 15 | Glassdoor / Employer Review Rating | Retain | Average rating on public employer review platforms (1–5 scale) | ≥3.8 / 5 | Respond to all reviews; address systemic themes in internal town halls; encourage reviews from current staff |
Stage 1: Attract — Measuring the Pre-Hire Experience
Employee experience does not begin on the first day of work. It begins the moment someone encounters your employer brand. A confusing careers page, a 45-minute application form, or a job description full of vague requirements all shape the candidate’s perception of what it is like to work at your organisation. Two metrics capture this stage.
Metric 1: Application Completion Rate
Definition: The percentage of candidates who begin an application and submit it successfully. This metric reveals how much friction exists in your application process.
How to measure: Divide the number of completed applications by the number of started applications, then multiply by 100. Most applicant tracking systems track this automatically. If yours does not, use form-level analytics to count unique sessions that reach the confirmation page versus those that reach the first application step.
Benchmark: 60–70%. Applications that take longer than 15 minutes to complete or require more than 15 fields consistently fall below 50%, according to research by the Society for Human Resource Management (SHRM).
Action if below benchmark: Audit your application form field by field. Remove every optional field. Eliminate redundant document uploads (a resume already contains contact information — do not ask for it again in separate fields). Allow one-click apply through LinkedIn or Indeed profiles. Set a target of under 5 minutes and under 10 fields. Test the application on a mobile device; if it takes more than two thumb-scrolls, it is too long.
Metric 2: Career Page Conversion Rate
Definition: The percentage of visitors to your careers page who click through to an individual job listing or hit “Apply.” This measures how effectively your employer brand converts passive interest into active intent.
How to measure: Divide the number of visitors who click “Apply” (or click through to a specific job listing) by the total number of unique visitors to your careers page, then multiply by 100. Track this through Google Analytics event goals or your ATS analytics dashboard.
Benchmark: 8–12%. Pages with employee video testimonials and transparent salary ranges consistently outperform pages without them.
Action if below benchmark: Add salary ranges to every job posting (candidates increasingly expect this, and legislation in many jurisdictions now requires it). Include short employee testimonials or day-in-the-life content. Ensure the page loads in under 3 seconds on mobile. Replace stock photos with real workplace imagery. Make the “View open roles” button visible above the fold. Treegarden’s ATS features include branded career pages that you can customise to match your employer brand without developer involvement.
Stage 2: Onboard — The Critical First 90 Days
Onboarding is the highest-impact phase of the employee lifecycle. According to Gallup workplace research, employees who strongly agree their onboarding was exceptional are 2.6 times more likely to be extremely satisfied with their workplace. Despite this, only 12% of employees strongly agree their organisation does onboarding well. Three metrics cover this stage. For a deeper dive into onboarding measurement specifically, see our guide on onboarding satisfaction metrics.
Metric 3: Time-to-Productivity
Definition: The number of calendar days from a new hire’s start date until they reach a defined, role-specific performance baseline. This is not the same as time-to-fill (a recruiting metric) — it measures how quickly the organisation converts a hire into a contributing team member.
How to measure: Define what “productive” means for each role before the hire starts. For a sales representative, it might be closing their first deal. For a software engineer, it might be shipping their first pull request to production. For a customer support agent, it might be handling 80% of the ticket volume of a tenured agent. Track the date this milestone is achieved and subtract the start date. Average the result across each hiring cohort.
Benchmark: 90–120 days for professional roles. Highly structured onboarding programmes can reduce this to 60–90 days. Roles requiring domain expertise or security clearances may reasonably take longer.
Action if below benchmark: Map the first 90 days hour by hour for the first week and week by week thereafter. Assign a dedicated onboarding buddy (not the manager — a peer). Set explicit 30/60/90-day goals and review them on schedule. Provide access to all tools and systems before day one, not during it. If time-to-productivity exceeds 120 days consistently, the problem may be in the job design rather than the onboarding process.
Metric 4: Onboarding NPS
Definition: A Net Promoter Score specific to the onboarding experience. It asks new hires: “On a scale of 0 to 10, how likely are you to recommend our onboarding process to a friend joining this company?”
How to measure: Send this single question at two points: day 30 and day 90. Respondents scoring 9–10 are promoters, 7–8 are passives, and 0–6 are detractors. Subtract the percentage of detractors from the percentage of promoters. Track this by department and hiring cohort to identify patterns.
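The promoter/passive/detractor scoring rule above can be sketched in a few lines of Python; the `cohort` scores are hypothetical illustration data, not tied to any survey tool:

```python
def nps(responses):
    """Net Promoter Score from a list of 0-10 survey responses.

    Promoters score 9-10, detractors score 0-6; passives (7-8) count
    toward the denominator but neither group. Returns None if empty.
    """
    if not responses:
        return None
    promoters = sum(1 for s in responses if s >= 9)
    detractors = sum(1 for s in responses if s <= 6)
    return round(100 * (promoters - detractors) / len(responses))

# One cohort's day-30 responses: 5 promoters, 2 detractors, 3 passives
cohort = [10, 9, 8, 7, 6, 9, 10, 3, 8, 9]
print(nps(cohort))  # (50% - 20%) -> 30
```

Running the same function on the day-90 responses and comparing the two numbers per cohort surfaces whether the experience improves or degrades after the first month.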
Benchmark: +40 or higher. An onboarding NPS below +20 signals structural problems in the first-week experience or the manager hand-off.
Action if below benchmark: Interview every detractor within one week of their response. Ask three specific questions: What was missing? What was confusing? What would you change? The answers will cluster around a small number of fixable issues — typically IT access delays, unclear first-week schedules, or absent manager involvement. Fix the top two issues before the next cohort starts.
Metric 5: 90-Day Retention Rate
Definition: The percentage of new hires who remain employed at the 90-day mark. Early departures are the most expensive kind of turnover because they represent sunk costs in recruiting, hiring, and onboarding with zero return.
How to measure: For each hiring cohort, count the number of new hires still employed at 90 days. Divide by the total number of hires in that cohort and multiply by 100. Exclude involuntary terminations to isolate voluntary early departures.
Benchmark: 85% or higher. Industries with high-volume hiring (retail, hospitality) may see lower rates, but professional services, technology, and healthcare should target 90%+.
Action if below benchmark: Conduct stay interviews at day 45 — halfway through the critical window. Ask: “What is different from what you expected?” and “What would make you stay another year?” If the answers reveal misaligned expectations, the problem is in the recruitment process (overpromising in interviews). If the answers reveal cultural or structural issues, the problem is in the onboarding experience itself. For more on structuring this critical window, see our employee experience design guide.
Stage 3: Engage — The Ongoing Daily Experience
Engagement metrics measure what happens after onboarding ends and the employee settles into their role. This is the longest phase of the lifecycle and the one most susceptible to gradual erosion. Four metrics capture the health of the ongoing experience.
Metric 6: eNPS (Employee Net Promoter Score)
Definition: The willingness of employees to recommend their organisation as a place to work. eNPS is the single most widely used gauge of overall employee sentiment and serves as a leading indicator of retention risk.
How to measure: Ask one question quarterly: “On a scale of 0 to 10, how likely are you to recommend [Company Name] as a place to work?” Calculate: (% of respondents scoring 9–10) minus (% of respondents scoring 0–6). The result ranges from −100 to +100. Segment by department, tenure, and role level to identify pockets of concern.
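The segmentation step described above can be sketched in Python; the `(department, score)` pair structure and the department names are hypothetical, not tied to any particular survey platform:

```python
from collections import defaultdict

def enps_by_segment(responses):
    """eNPS per segment from (segment, score) pairs, scores 0-10.

    Returns {segment: eNPS}, where eNPS = %promoters - %detractors,
    ranging from -100 to +100.
    """
    buckets = defaultdict(list)
    for segment, score in responses:
        buckets[segment].append(score)
    result = {}
    for segment, scores in buckets.items():
        promoters = sum(s >= 9 for s in scores)
        detractors = sum(s <= 6 for s in scores)
        result[segment] = round(100 * (promoters - detractors) / len(scores))
    return result

data = [("Engineering", 9), ("Engineering", 4), ("Sales", 10), ("Sales", 9)]
print(enps_by_segment(data))  # {'Engineering': 0, 'Sales': 100}
```

The same function works unchanged for tenure bands or role levels: pass `(tenure_band, score)` pairs instead of department names.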
Benchmark: +10 to +30 is good. +30 to +50 is excellent. Above +50 is exceptional. A negative eNPS indicates more detractors than promoters and requires urgent attention. Qualtrics benchmarking data places the global median between +10 and +20.
Action if below benchmark: Do not treat eNPS as a score to improve in isolation. Run follow-up focus groups with self-selected participants from the detractor group. Ask open-ended questions about what specifically they would change. The themes that emerge — typically management quality, growth opportunities, or compensation fairness — become the action items. Track those themes quarter over quarter to confirm your interventions are working. For a guide on designing effective follow-up surveys, see our article on employee engagement surveys.
Metric 7: Pulse Survey Scores
Definition: The average score across a set of targeted questions asked at regular intervals (monthly or quarterly). Unlike annual surveys, pulse surveys focus on a narrow set of 5–10 questions that track specific dimensions of the work experience over time.
How to measure: Select 5–10 questions that cover the dimensions most relevant to your organisation (e.g., manager support, workload balance, sense of purpose, psychological safety). Use a consistent scale (1–5 or 1–10) and send the same questions at the same cadence. Calculate the average score per question and overall. Track trends over at least four quarters before drawing conclusions.
Benchmark: 3.8 or higher on a 5-point scale, or 7.5 or higher on a 10-point scale. Scores below 3.5/5 on any individual question warrant immediate investigation.
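A minimal sketch of the per-question averaging and the 3.5-out-of-5 investigation threshold, assuming responses are stored as a question-to-scores mapping (the question names and data are hypothetical):

```python
def pulse_averages(responses):
    """Average score per pulse question from {question: [scores]}."""
    return {q: round(sum(scores) / len(scores), 2)
            for q, scores in responses.items() if scores}

pulse = {
    "Manager support":  [4, 5, 3, 4],
    "Workload balance": [3, 2, 4, 3],
}
avgs = pulse_averages(pulse)
# Questions below 3.5/5 warrant immediate investigation
flagged = [q for q, avg in avgs.items() if avg < 3.5]
print(avgs)     # {'Manager support': 4.0, 'Workload balance': 3.0}
print(flagged)  # ['Workload balance']
```

Because the same questions are sent each quarter, storing each quarter's `avgs` dictionary gives you the trend lines the action plans depend on.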
Action if below benchmark: Identify the two lowest-scoring questions. These represent the highest-priority intervention areas. Create department-level action plans with specific owners and deadlines. Share the results and the action plan with the team that produced them. Measure the same questions in the next pulse to validate improvement. Avoid changing questions frequently — trend data is more valuable than any single data point.
Metric 8: Survey Participation Rate
Definition: The percentage of employees who complete a distributed survey. Participation rate is a meta-metric: it does not measure experience directly, but it measures whether your measurement system is trustworthy enough for employees to engage with it.
How to measure: Divide the number of completed surveys by the number of surveys distributed, then multiply by 100. Track separately for each survey type (pulse, annual, lifecycle).
Benchmark: 75% or higher for pulse surveys. 80%+ for annual surveys. Rates below 60% indicate fundamental trust issues — employees either do not believe the survey is anonymous or do not believe their feedback will lead to change.
Action if below benchmark: First, shorten the survey. Every additional question beyond 10 reduces participation by approximately 2–3 percentage points. Second, communicate what changed as a result of the last survey. Publish a “You said, we did” summary before launching the next one. Third, guarantee anonymity in writing and explain exactly how responses will be aggregated. If participation is below 50%, pause the survey programme entirely, address the trust deficit, and restart with a shorter format.
Metric 9: Internal Mobility Rate
Definition: The percentage of employees who move into a new role, department, or function within the organisation during a given period. Internal mobility is a strong proxy for whether employees see a future at the company.
How to measure: Count the number of employees who changed roles (lateral moves, promotions, or department transfers) during the measurement period. Divide by total headcount and multiply by 100. Include only voluntary moves — exclude restructuring-driven reassignments.
Benchmark: 10–15% annually. According to Willis Towers Watson research, companies with internal mobility rates above 15% experience 30% lower voluntary turnover than those below 5%.
Action if below benchmark: Create a visible internal job board that posts all open roles before external advertising. Require managers to support (not block) internal applicants. Fund cross-training budgets so employees can develop skills for adjacent roles. Review whether manager performance evaluations include a component for developing and moving talent. If internal mobility is near zero, you have a cultural problem — managers are hoarding talent rather than developing it.
Track EX Metrics Across the Full Lifecycle
Treegarden centralises recruitment, onboarding, and employee data in a single platform, making it possible to track metrics from application to retention without juggling separate tools. Explore Treegarden’s features to see how lifecycle data connects.
Stage 4: Develop — Learning and Advancement
Development metrics answer the question every ambitious employee asks silently: “Is this company investing in my growth?” When the answer is no, the best performers leave first. Three metrics track whether your development programmes are real or performative.
Metric 10: Training Completion Rate
Definition: The percentage of employees who complete assigned or available training programmes within the designated timeframe. This measures accessibility and relevance — if people are not finishing training, it is either too long, too irrelevant, or too difficult to access.
How to measure: Divide the number of employees who completed the training by the number of employees who were assigned or enrolled, then multiply by 100. Track by training type (mandatory compliance vs. elective skill development), department, and role level.
Benchmark: 80% or higher for mandatory training. 50–60% for elective training is typical; above 70% for elective courses signals strong learning culture.
Action if below benchmark: Audit training content for length and relevance. Modules over 30 minutes consistently see lower completion rates. Break content into 10–20-minute segments. Allow self-paced scheduling instead of fixed-time sessions. Tie training completion to individual development plans and performance conversations. If compliance training completion is below 80%, the issue is likely logistical (employees cannot access the platform or cannot find time during work hours) rather than motivational.
Metric 11: Training Application Rate
Definition: The percentage of training completers who apply what they learned on the job within 60 days. Completion without application is wasted investment. This metric separates box-checking from genuine skill development.
How to measure: This is harder to track than completion and requires manager involvement. At the 60-day mark after training completion, send a brief survey to the employee’s manager: “Has [employee] applied the skills from [training] in their work?” Alternatively, track whether employees who completed specific training programmes produce measurable output changes (e.g., a sales training programme leading to higher conversion rates among participants vs. non-participants).
Benchmark: 60% or higher. Research consistently shows that without reinforcement, employees retain less than 20% of training content after 30 days — a phenomenon known as the Ebbinghaus forgetting curve.
Action if below benchmark: Redesign training around real projects rather than hypothetical scenarios. Assign post-training action items that require applying new skills within two weeks. Schedule a manager check-in at the 30-day mark to discuss how training is being applied. If application rates remain low despite good content, the barrier is likely managerial — managers are not creating space for employees to use new skills. Build this into the HR KPI dashboard so leadership can track application alongside completion.
Metric 12: Promotion Rate
Definition: The percentage of employees who receive a promotion (defined as a move to a higher level with increased responsibilities and compensation) during a given period.
How to measure: Count the number of promotions in the measurement period. Divide by average headcount and multiply by 100. Segment by gender, ethnicity, department, and tenure to identify equity gaps. A promotion rate that is healthy overall but zero in specific demographics signals a systemic barrier.
Benchmark: 8–15% annually, depending on industry and growth stage. High-growth companies may see higher rates; mature organisations with flatter structures may target the lower end.
Action if below benchmark: Audit promotion criteria to ensure they are documented, consistent, and free from subjective bias. Publish career ladders for every role family so employees can see what “next” looks like. Train managers on how to nominate and advocate for promotions. If the overall rate is within benchmark but specific groups are underrepresented, conduct a pay and promotion equity analysis and address systemic barriers. Low promotion rates combined with high voluntary turnover among high performers are a clear signal that your development pipeline is failing.
Stage 5: Retain — Keeping Your Best People
Retention metrics are lagging indicators — by the time turnover rises, the underlying problems have been present for months. That is why retention-stage metrics should be read alongside the leading indicators from earlier stages. Three metrics capture retention health. For context on how wellbeing affects these numbers, see our analysis of employee wellbeing and productivity.
Metric 13: Voluntary Turnover by Tenure
Definition: The voluntary departure rate segmented by how long employees have been at the organisation. Overall turnover rate is useful but insufficient — a company losing 15% of employees annually has a very different problem if most departures happen in year one (onboarding failure) versus year three (growth ceiling).
How to measure: Group employees into tenure bands: 0–6 months, 6–12 months, 1–2 years, 2–5 years, 5+ years. For each band, divide the number of voluntary departures by the total number of employees in that band, then multiply by 100. Calculate quarterly and annually.
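The banding logic above can be sketched as follows; the record shape (`tenure_months`, `left_voluntarily`) is an assumed structure for illustration, not a specific HRIS export format:

```python
# Tenure bands in months, mirroring 0-6mo, 6-12mo, 1-2yr, 2-5yr, 5+yr.
BANDS = [(0, 6), (6, 12), (12, 24), (24, 60), (60, None)]

def turnover_by_tenure(employees):
    """Voluntary turnover percentage per tenure band.

    `employees` is a list of dicts with `tenure_months` (int) and
    `left_voluntarily` (bool) for everyone in the measurement period.
    """
    rates = {}
    for low, high in BANDS:
        in_band = [e for e in employees
                   if e["tenure_months"] >= low
                   and (high is None or e["tenure_months"] < high)]
        if not in_band:
            continue  # no employees in this band this period
        departures = sum(e["left_voluntarily"] for e in in_band)
        label = f"{low}-{high}mo" if high else f"{low}+mo"
        rates[label] = round(100 * departures / len(in_band), 1)
    return rates

staff = [
    {"tenure_months": 3,  "left_voluntarily": True},
    {"tenure_months": 4,  "left_voluntarily": False},
    {"tenure_months": 30, "left_voluntarily": False},
]
print(turnover_by_tenure(staff))  # {'0-6mo': 50.0, '24-60mo': 0.0}
```

Run this per department as well as company-wide: the band with the highest rate is where the corrective action below should land.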
Benchmark: Less than 15% overall voluntary turnover annually. Less than 20% in the first year. First-year turnover above 25% is a red flag that demands investigation. SHRM benchmarking data provides industry-specific breakdowns for comparison.
Action if below benchmark: Target the tenure band with the highest turnover rate. If 0–6 months is the problem, focus on onboarding improvements (metrics 3–5). If 1–2 years is the problem, focus on development and internal mobility (metrics 9–12). If 2–5 years is the problem, conduct stay interviews and investigate whether compensation, management quality, or career ceilings are driving departures. Never treat overall turnover as a single number — always segment by tenure to find the root cause.
Metric 14: Stay Interview Feedback Score
Definition: The average score from structured one-on-one conversations with current employees about what keeps them at the organisation and what might cause them to leave. Unlike exit interviews, stay interviews capture feedback while there is still time to act on it.
How to measure: Conduct structured stay interviews with a random sample of 20–30% of employees per quarter, ensuring coverage across departments and tenure bands. Use a standardised set of five questions, including: “What is the primary reason you stay at this company?” and “What would cause you to consider leaving?” Ask employees to rate their overall satisfaction with their reasons to stay on a 1–5 scale. Average the ratings.
Benchmark: 4.0 or higher on a 5-point scale. Scores below 3.5 in any department correlate with increased turnover risk in the following 6–12 months.
Action if below benchmark: Train managers to conduct stay interviews themselves — this is not an HR exercise. When an employee says “I stay because of my team, but I’d leave if I don’t get promoted this year,” the manager needs to act on that information within 30 days. Aggregate the “what would cause you to leave” responses across the organisation to identify systemic patterns. The three most common themes become the CHRO’s priority list for the next quarter.
Metric 15: Glassdoor / Employer Review Rating
Definition: Your organisation’s average rating on public employer review platforms such as Glassdoor, Indeed, or Kununu. This is the only employee experience metric that is visible to candidates, making it both a retention indicator and an employer brand signal.
How to measure: Monitor your average rating monthly. Track the volume and sentiment of new reviews. Pay particular attention to the themes in negative reviews (1–2 stars) and whether they align with issues identified in internal surveys.
Benchmark: 3.8 or higher on a 5-point scale. Companies below 3.5 face measurable difficulty attracting candidates — research suggests a significant percentage of job seekers will not apply to a company with a rating below 3.0.
Action if below benchmark: Do not ask employees to post positive reviews to inflate the score — this is transparent and damages trust. Instead, respond to every review (positive and negative) with specifics about what you are doing to address concerns. Identify the systemic themes in negative reviews and address them through internal programmes. Then, encourage (not require) current employees to share their genuine experiences. A rising internal eNPS will eventually produce a rising external rating. Discrepancies between internal sentiment and external ratings indicate that departing employees had a worse experience than current employees — investigate the exit process.
External Ratings as a Leading Indicator
A consistent gap between your internal eNPS and your Glassdoor rating suggests that the employees who leave have a fundamentally different experience from those who stay. Investigate the exit experience — the last 30 days of employment often determine the tone of the review.
Building Your EX Scorecard: From Metrics to Action
Having 15 metrics does not help if they live in 15 different spreadsheets. The value of this framework depends on consolidating these EX metrics into a single scorecard that leadership reviews regularly. Here is how to build one.
Step 1: Assign Ownership
Each metric needs a named owner — not a team, a person. The talent acquisition lead owns application completion rate and career page conversion. The onboarding programme manager owns time-to-productivity, onboarding NPS, and 90-day retention. The HR business partner owns eNPS, pulse scores, and participation rates. The L&D lead owns training metrics. The CHRO owns the overall scorecard and presents it quarterly to the executive team.
Step 2: Set Cadence
Not every metric needs monthly tracking. Review application completion rate and career page conversion monthly; onboarding metrics per cohort (monthly or quarterly, depending on hiring volume); engagement metrics quarterly; development metrics semi-annually; and retention metrics monthly for turnover and quarterly for stay interviews and Glassdoor ratings.
Step 3: Establish Baselines Before Setting Targets
Measure every metric for at least two quarters before setting improvement targets. Arbitrary targets based on external benchmarks without understanding your baseline will produce frustration and gaming. A company with 55% application completion rate should target 65%, not 80%. Incremental improvement grounded in your own data is more sustainable than aspirational targets disconnected from reality. For guidance on building the dashboard itself, see our article on HR KPI dashboard design.
Step 4: Connect Metrics Across Stages
The power of lifecycle-stage measurement is the ability to diagnose problems precisely. High 90-day retention but high 1–2 year turnover means onboarding works but development fails. Low application completion but high career page conversion means your employer brand is strong but your application process creates friction. Low eNPS in a specific department but healthy company-wide scores means you have a local management problem, not a cultural one. Always read metrics in context of each other.
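The cross-stage readings above can be expressed as simple paired rules so that no number is ever interpreted in isolation. The sketch below is illustrative only — the metric keys, thresholds, and the 20-point eNPS gap are assumptions to be replaced with your own targets:

```python
def diagnose(metrics, targets):
    """Cross-stage diagnostic sketch. Each rule pairs two metrics so a
    single number is never read in isolation. Keys and thresholds are
    illustrative assumptions, not a standard."""
    findings = []
    # Onboarding works but development fails: new hires stay, 1-2 year staff leave.
    if (metrics["retention_90d"] >= targets["retention_90d"]
            and metrics["turnover_1_2yr"] > targets["turnover_1_2yr"]):
        findings.append("Onboarding is working; investigate the Develop stage.")
    # Strong brand, high-friction application: visitors convert but abandon the form.
    if (metrics["career_page_conversion"] >= targets["career_page_conversion"]
            and metrics["application_completion"] < targets["application_completion"]):
        findings.append("Employer brand is strong; simplify the application process.")
    # Local management problem: one department's eNPS far below the company score.
    for dept, score in metrics.get("enps_by_dept", {}).items():
        if metrics["enps_company"] - score > 20:
            findings.append(f"eNPS in {dept} lags the company score; review local management.")
    return findings

# Illustrative inputs: all three patterns fire at once.
for finding in diagnose(
    {"retention_90d": 92, "turnover_1_2yr": 25, "career_page_conversion": 12,
     "application_completion": 48, "enps_company": 25,
     "enps_by_dept": {"Engineering": -10}},
    {"retention_90d": 90, "turnover_1_2yr": 15, "career_page_conversion": 10,
     "application_completion": 60},
):
    print(finding)
```

Encoding the rules this way also forces the team to agree, in advance, on what each metric combination means — which is exactly the discipline Step 4 asks for.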
Step 5: Report to the Board in Business Terms
The CHRO who spent $200,000 on an experience platform could not answer the board’s question because the platform produced HR jargon, not business outcomes. Translate your scorecard into financial impact. First-year turnover cost the organisation $X in recruiting and lost productivity. Improving time-to-productivity by 15 days saved $Y in ramp-up costs. A 10-point eNPS increase correlated with a Z% reduction in voluntary turnover. When the board hears money, they fund programmes. When they hear sentiment scores, they move to the next agenda item.
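The translation into financial impact is simple arithmetic once you pick your cost assumptions. A minimal sketch — the 0.5× replacement multiple, salary, headcount, and daily output figures below are placeholder assumptions; substitute your own recruiting and ramp-up data:

```python
def turnover_cost(first_year_leavers, avg_salary, replacement_multiple=0.5):
    """Recruiting plus lost-productivity cost of first-year turnover.
    The 0.5x-of-salary replacement multiple is a placeholder assumption."""
    return first_year_leavers * avg_salary * replacement_multiple

def ramp_up_savings(annual_hires, days_saved, daily_output_value):
    """Value of shortening time-to-productivity (all inputs are assumptions)."""
    return annual_hires * days_saved * daily_output_value

# Illustrative figures only:
print(f"First-year turnover cost: ${turnover_cost(12, 80_000):,.0f}")
print(f"Savings from 15 fewer ramp-up days: ${ramp_up_savings(40, 15, 400):,.0f}")
```

The point is not precision — it is that "$480,000 in first-year turnover" holds the board's attention where "eNPS dipped three points" does not.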
The AI Advantage
AI-powered HR platforms can surface correlations across metrics that manual analysis misses — such as the relationship between training completion in quarter one and voluntary turnover in quarter three. Explore how Treegarden’s AI capabilities connect data points across the employee lifecycle.
Five Mistakes That Sabotage EX Measurement
Even organisations that adopt this framework can undermine their own efforts through common implementation errors. Avoid these five.
Mistake 1: Measuring Everything, Acting on Nothing
Having 15 metrics is useful. Having 50 is paralysing. If your team cannot explain what specific action each metric triggers when it drops below threshold, remove it from the scorecard. A metric without a corresponding action plan is decoration, not measurement.
Mistake 2: Aggregating Without Segmenting
Company-wide averages hide problems. An eNPS of +25 means nothing if the engineering department is at −10 and marketing is at +60. Always segment by department, tenure band, role level, and — where legally permissible and with appropriate privacy safeguards — demographics. The goal is to find the specific population with the specific problem, not to produce a single number for a slide deck.
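The averaging trap is easy to demonstrate. A minimal sketch, assuming 0–10 "likelihood to recommend" responses tagged with a department (the survey data below is invented to reproduce the +25 / −10 / +60 example above):

```python
from collections import defaultdict

def enps(scores):
    """eNPS = % promoters (9-10) minus % detractors (0-6), on a -100..+100 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def enps_by_segment(responses):
    """responses: iterable of (segment, score) pairs -> eNPS per segment."""
    by_segment = defaultdict(list)
    for segment, score in responses:
        by_segment[segment].append(score)
    return {seg: enps(scores) for seg, scores in by_segment.items()}

# Invented responses: a healthy +25 company-wide score hides a struggling team.
engineering = [("Engineering", s) for s in [9, 9, 8, 8, 8, 8, 8, 6, 6, 6]]
marketing = [("Marketing", s) for s in [10, 10, 9, 9, 9, 9, 9, 8, 8, 6]]
all_scores = [s for _, s in engineering + marketing]
print(enps(all_scores))                          # company-wide: 25
print(enps_by_segment(engineering + marketing))  # {'Engineering': -10, 'Marketing': 60}
```

The company-wide +25 looks healthy; the segmented view immediately surfaces the department that needs attention.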
Mistake 3: Ignoring the Attract Stage
Most EX measurement programmes start at onboarding. This misses the first impression. A candidate who struggles through a 30-minute application, receives no communication for two weeks, and then gets a generic rejection email has already formed a negative perception of your organisation. If they eventually get hired (perhaps for a different role months later), that perception lingers. Measure the attract stage or accept blind spots in your data.
Mistake 4: Surveying Without Closing the Loop
Distributing a pulse survey without publishing the results and the resulting action plan is worse than not surveying at all. It confirms the employee’s suspicion that their feedback does not matter. Before every survey launch, prepare a “You said, we did” communication that summarises what changed since the last survey. If nothing changed, explain why and set expectations for the next cycle.
Mistake 5: Treating Metrics as Scores Instead of Signals
A metric is not a grade — it is a signal. A low score is not a failure; it is a diagnosis. Organisations that penalise managers for low department scores create incentives to game the numbers rather than fix the problems. Use metrics to identify where to investigate, not whom to blame. The culture you create around measurement determines whether people report honestly or tell you what you want to hear.
Frequently Asked Questions
What is the most important employee experience metric?
There is no single most important metric. The value lies in tracking a balanced set across every lifecycle stage. That said, eNPS is the most widely used single indicator because it captures overall sentiment in one question. Pair it with at least two or three stage-specific metrics — such as 90-day retention for onboarding and voluntary turnover by tenure for retention — to get an accurate picture.
How often should we measure employee experience?
Annual surveys are insufficient. Best practice is to run quarterly pulse surveys for engagement metrics, trigger lifecycle surveys at key moments (first week, 30-day, 90-day, exit), and track operational metrics like turnover and internal mobility monthly. Continuous measurement prevents the stale-data problem that plagues annual survey cycles.
What is a good eNPS score?
An eNPS above +10 is considered acceptable. Scores between +20 and +40 indicate strong employee advocacy. Scores above +50 are exceptional and rare. Any negative eNPS is a warning signal that requires immediate investigation. The global median sits between +10 and +20 according to Qualtrics benchmarking data.
How do we measure employee experience without surveys?
Several EX metrics require no survey at all. Application completion rate, time-to-productivity, 90-day retention, internal mobility rate, promotion rate, voluntary turnover by tenure, and Glassdoor ratings are all derived from operational data in your HRIS, ATS, or public review platforms. Combining these passive metrics with targeted surveys gives you a full picture without over-surveying your workforce.
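Each of these operational metrics reduces to a query over records you already hold. A minimal sketch of one of them, 90-day retention, assuming each cohort record carries a hire date and an optional termination date (the cohort below is invented for illustration):

```python
from datetime import date

def retention_90d(cohort):
    """cohort: list of (hire_date, termination_date_or_None) tuples.
    Returns the percentage of the cohort still employed 90 days after hire."""
    retained = sum(
        1 for hired, left in cohort
        if left is None or (left - hired).days >= 90
    )
    return round(100 * retained / len(cohort), 1)

# Invented January cohort: two of three reach the 90-day mark.
january_cohort = [
    (date(2024, 1, 8), None),               # still employed
    (date(2024, 1, 8), date(2024, 2, 20)),  # left after 43 days
    (date(2024, 1, 8), date(2024, 7, 1)),   # left after 175 days (retained at 90)
]
print(retention_90d(january_cohort))  # 66.7
```

The same pattern — filter a cohort from HRIS or ATS records, count who crosses a threshold — yields internal mobility rate, promotion rate, and turnover by tenure with no survey fatigue at all.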
What is the difference between employee experience and employee engagement?
Employee engagement is one dimension of employee experience. EX encompasses every interaction an employee has with the organisation — from seeing a job posting to their last day. Engagement measures the emotional commitment and discretionary effort an employee gives at a specific point in time. You can have a great onboarding experience but low engagement six months later if growth opportunities are missing. Measuring both ensures you catch problems wherever they occur in the lifecycle.
How do we benchmark our EX metrics against industry standards?
Start with published benchmarks from organisations like SHRM, Gallup, and Willis Towers Watson. These provide industry-specific and company-size-specific baselines. However, your most useful benchmark is your own historical data. Track trends quarter over quarter and set improvement targets based on your starting point rather than chasing external averages that may not reflect your context.
Which teams should own employee experience metrics?
HR owns the measurement framework and reporting, but individual metrics should be co-owned by the functions that influence them. Talent acquisition owns application completion rate and career page conversion. L&D owns training completion and application rates. Line managers own pulse survey scores and stay interview feedback. The CHRO or VP of People should own the overall EX scorecard and present it to the executive team.
Can small companies (under 100 employees) use these metrics?
Yes, but prioritise. Small companies should start with five core metrics: 90-day retention rate, eNPS, voluntary turnover rate, time-to-productivity, and one survey-based measure like onboarding NPS. As the organisation grows past 100 employees, add lifecycle-stage metrics and begin segmenting data by department. The key is consistency — even tracking three metrics quarterly is better than tracking fifteen metrics once a year.
Measuring employee experience is not an academic exercise — it is the difference between knowing your organisation is healthy and hoping it is. The fifteen metrics in this guide give you a structured, lifecycle-based framework to replace guesswork with evidence. Start with the five metrics that matter most for your current challenges, build baselines, and expand from there.
Stop reporting sentiment scores that no one trusts. Start tracking the specific employee experience metrics that connect daily experience to business outcomes. Treegarden gives HR teams the data infrastructure to measure what matters across the full employee lifecycle — from first application to long-term retention.