The honest problem with review sites

Most "best ATS" lists are actually "most marketed ATS" lists: ranked by which vendors invest most heavily in review generation campaigns, which companies have the largest sales teams pushing customers toward review platforms, and which brands have the biggest marketing budgets for SEO dominance, not by which products actually deliver the best outcomes for companies in your specific situation.

That's a strong claim. Here's why it's accurate:

G2 and Capterra rankings reward volume of reviews. Volume of reviews correlates directly with: how large the vendor's customer base is, how aggressively the vendor's customer success team requests reviews, and how much budget the vendor invests in review generation campaigns. None of these correlate reliably with product quality for your specific use case. The ATS with 4,200 reviews from a 10,000-customer enterprise platform is not necessarily better than the ATS with 180 reviews from a 500-customer focused platform — it just has more reviews, for reasons unrelated to product quality.

Enterprise ATS vendors dominate rankings despite serving a completely different use case. iCIMS, Greenhouse, and Workday have thousands of customers and thousands of reviews. A significant proportion of those customers are mid-market or enterprise companies. A 50-person company evaluating ATS options reads "iCIMS: #1 ATS by market share" and interprets that as a recommendation, when in reality iCIMS was built, and is priced, for companies 10–100x their size.

SEO-driven "best ATS" articles are content marketing, not buyer research. The articles ranking highly for "best ATS 2026" on Google are almost uniformly produced by companies with affiliate relationships with the ATS vendors they recommend, or by the ATS vendors themselves. They are not written by people who have evaluated the tools; they are written by content teams optimising for keyword rankings. Reading them tells you which vendors invest in content SEO. It tells you very little about which ATS is right for your company.

What actually matters: the selection criteria review sites can't see

The meaningful differentiators in ATS selection are almost entirely invisible to review site rankings. They only surface through a deliberate evaluation process, and in particular through conversations with customers who match your profile.

Hiring volume is the primary segmentation criterion

An ATS built for a company hiring 500 people per year in retail and logistics (Paradox, Jobvite, iCIMS) is a completely different product than an ATS built for a company hiring 30–80 people per year in professional services (Greenhouse, Lever, Ashby). Both are "enterprise" on some review sites. Both have positive reviews. Neither is the right choice for the other's context.

The hiring volume ranges that map to genuinely different product categories (codified in the sketch after this list):

  • Under 20 hires per year: Simple, affordable, fast-to-implement ATS. Primary concern: ease of use, fast time-to-value. Ashby is too complex; iCIMS is architecturally and financially inappropriate.
  • 20–100 hires per year: The most contested market segment. Mid-market ATS platforms (Greenhouse, Lever, Workable, Teamtailor, Treegarden) compete here. Feature depth, integration capability, and pricing transparency differentiate.
  • 100–500 hires per year: The premium mid-market and lower-enterprise range. More complex pipeline management, more integration requirements, more reporting depth needed. Greenhouse and Ashby compete well; iCIMS is entering the frame.
  • 500+ hires per year in standardised roles: Enterprise territory, often with high-volume flavour. iCIMS, Workday, SAP SuccessFactors, Paradox (agentic AI).
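The bands above can be expressed as a simple decision rule. The sketch below is illustrative only: the thresholds and example vendors come straight from the list, while the function name, parameter names, and category labels are assumptions made for the example, not any vendor's API.

```python
def ats_segment(hires_per_year: int, standardised_roles: bool = False) -> str:
    """Map annual hiring volume to the ATS product category it implies.

    Thresholds mirror the bands above; treat them as rough guides,
    not hard cut-offs.
    """
    if hires_per_year < 20:
        return "simple/affordable: ease of use, fast time-to-value"
    if hires_per_year <= 100:
        return "mid-market: e.g. Greenhouse, Lever, Workable, Teamtailor, Treegarden"
    if hires_per_year <= 500:
        return "premium mid-market / lower enterprise: e.g. Greenhouse, Ashby"
    if standardised_roles:
        return "high-volume enterprise: e.g. iCIMS, Workday, SAP SuccessFactors, Paradox"
    return "enterprise: e.g. iCIMS, Workday, SAP SuccessFactors"


# A 60-hire-per-year professional services firm lands in the contested
# mid-market segment, not in enterprise territory.
print(ats_segment(60))
```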

The 3 must-have filter

Is it crazy to start an ATS evaluation with a 40-feature checklist? Not crazy — but consistently counterproductive. The checklist approach to ATS selection produces over-buying, implementation complexity, and eventual underutilisation of features the company paid for but doesn't actually need.

The more effective approach: identify the 3 features that are genuinely non-negotiable — without which the platform cannot serve your core hiring workflow. Not "nice to have." Not "we'd use it if it's there." The 3 features whose absence would make the platform non-functional for your primary use case.

For most mid-market companies, those 3 features are something like: reliable CV parsing and structured candidate records, calendar integration for interview scheduling, and job board posting to the specific platforms your candidates use. Everything else is secondary — it may influence your decision between two equally capable platforms, but it shouldn't drive the selection.

Starting from 3 must-haves rather than 40 nice-to-haves produces a shorter shortlist, faster evaluation cycles, and better decisions.
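The same discipline can be expressed as a shortlist filter: a platform survives only if it has all three must-haves, and everything else becomes a tie-breaker between survivors. A minimal sketch with entirely hypothetical vendor data; in practice the feature sets come from your own demo notes, not a marketing checklist.

```python
# Hypothetical feature data for illustration only.
vendors = {
    "Vendor A": {"cv_parsing", "calendar_integration", "job_board_posting", "crm"},
    "Vendor B": {"cv_parsing", "job_board_posting", "analytics"},
    "Vendor C": {"cv_parsing", "calendar_integration", "job_board_posting"},
}

# The 3 genuinely non-negotiable features for a typical mid-market company.
must_haves = {"cv_parsing", "calendar_integration", "job_board_posting"}

# A vendor stays on the shortlist only if every must-have is present.
shortlist = [name for name, features in vendors.items() if must_haves <= features]
print(shortlist)  # ['Vendor A', 'Vendor C']
```

Note that Vendor B's extra analytics feature doesn't save it: a missing must-have is disqualifying, which is the whole point of the filter.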

Pricing transparency as a filter, not a data point

If you can't see the price before you talk to a sales rep, move on. That's a stronger filter than it might sound, and here's why:

An ATS vendor with opaque pricing has made a deliberate strategic decision: they believe they can extract more revenue by quoting each customer individually (and adjusting the quote based on the customer's perceived willingness to pay) than by publishing a price that everyone can see. That decision reveals something about the vendor relationship you're entering. It means the price you're quoted is not the price your competitor in a comparable situation is paying. It means the renewal price is undiscoverable. It means the negotiation is structural and ongoing, not transparent and predictable.

The vendors with transparent, public pricing — where you can see what you'll pay before booking a demo — have made the opposite decision. They've chosen to compete on product quality and clear value rather than negotiation skill. That alignment of incentives tends to produce a better long-term vendor relationship.

Treegarden is an example of a platform in the 50–750 employee range with fully public pricing at $299–$899/month. It's not the only option with transparent pricing, but it's one example of what that looks like as a design principle.

The real test: your workflow, your data

The single most diagnostic evaluation step, and one review sites cannot replicate: run a real job description and two real CVs through the system during the vendor demo, rather than sitting through a guided tour of the vendor's prepared demonstration data.

Provide the vendor with a copy of your most common job description and two actual CV files (appropriately anonymised) before the demo. Ask them to start the demo by parsing those CVs and walking through how a recruiter would process those candidates in their system. The output will tell you:

  • How accurately the AI parses your actual CV format
  • What the recruiter experience actually looks like for a workflow you recognise
  • How the system handles the edge cases in your real data (unusual formatting, non-standard skills, etc.)
  • Whether the platform's assumptions about how hiring is done match your actual process

A vendor who declines to run your data and insists on a scripted demo is showing you something important about their confidence in product performance on real-world inputs.

The reference check that review sites replace — badly

Review sites exist, at least partially, because the alternative — talking to actual customers — takes time. But the time investment in 2–3 focused reference calls is consistently the most valuable evaluation step. Here's how to get the most from it.

Ask the vendor for two references with these specific characteristics: companies in your industry (or a similar one), in your size range (within 50% of your headcount), and that switched from your current ATS (if applicable) or started from a comparable alternative. These specifics matter because a reference from a 2,000-person enterprise company that loves the platform is not informative for a 100-person company's evaluation.

In the reference call, the most valuable questions are not "do you like the platform?" (the answer is always yes — the vendor has pre-selected the reference) but rather: "What did you underestimate about the implementation?" "What do you wish you'd known before signing?" "How has the vendor handled problems — give me a specific example of something that went wrong and how they responded." "How did the renewal conversation go — did pricing change?"

The last question is the one most buyers forget to ask, and it's the one that reveals the most about the vendor relationship you're entering.

Ask about renewal pricing before signing — always

The best time to understand your renewal pricing risk is before you sign the initial contract, not 11 months later when the renewal notice arrives.

Ask specifically: "What is your standard renewal pricing policy? Is there a cap on annual increases in the contract? Can you show me what a typical customer at our size paid at their last renewal compared to their initial contract price?"

Vendors who have a good renewal story will answer this readily. Vendors with aggressive renewal histories will redirect, qualify, and deflect. The quality of the answer is informative regardless of the specific number.
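A quick worked example shows why the renewal question matters financially. The figures are hypothetical: a $499/month starting price compared over three years, with uncapped 15% annual renewal increases versus a contractual 5% cap.

```python
def three_year_cost(monthly_price: float, annual_increase: float, years: int = 3) -> float:
    """Total spend over `years`, with the price compounding at each renewal."""
    total = 0.0
    for _ in range(years):
        total += monthly_price * 12
        monthly_price *= 1 + annual_increase  # applied at each renewal
    return total

base = 499.0  # hypothetical starting price per month
print(f"Uncapped 15% increases: ${three_year_cost(base, 0.15):,.0f}")  # ≈ $20,793
print(f"Capped at 5%:           ${three_year_cost(base, 0.05):,.0f}")  # ≈ $18,877
```

On these assumptions the cap is worth roughly $1,900 over three years, and the gap widens every year after that, which is why the cap belongs in the contract rather than in a verbal assurance.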

See exactly what Treegarden costs

All features included. Public pricing. No demo required to see the numbers. Startup: $299/mo · Growth: $499/mo · Scale: $899/mo.

View full pricing →

Frequently asked questions

How do I find the best ATS for my company?

Reject popularity as a selection criterion. Define your hiring volume, identify your 3 genuinely non-negotiable features, filter to platforms with transparent pricing, test with your own data during the demo, request references specifically from companies in your industry and size range, and ask about renewal pricing policy before signing. This process consistently produces better outcomes than starting with review site rankings.

Are G2 and Capterra reviews reliable for ATS?

Useful for identifying consistent complaints across many reviews, understanding the general customer relationship tone, and getting a sense of onboarding experience. Unreliable for comparing vendors across different company size segments, comparing vendors with very different review volumes, or making "best" determinations. Any "leader" badge is a volume-based or paid placement, not a quality assessment for your specific context.

What's the difference between a popular ATS and the right ATS?

A popular ATS has the most users and the most marketing. The right ATS fits your specific hiring volume, team structure, industry, must-have integrations, and 3-year budget. iCIMS is the world's most popular ATS by market share — it is right for some companies and completely wrong for others. Popularity tells you the vendor has successfully sold the product; fit tells you whether it will work for you specifically.

How do I evaluate ATS vendors without getting sold to?

Three practices:

  • Control the demo agenda: send a specific list of workflows you want to see, not "show me the ATS."
  • Test with your own data: provide a real job description and two anonymised CVs and ask the vendor to process them during the demo.
  • Ask the question they don't want to answer: "What do customers who cancel typically say as their reason for leaving?" A vendor with good retention answers this specifically. One without that confidence deflects.