In 2026, Checkr surveyed 3,000 workers—split evenly between managers and non-managers—to understand how AI is reshaping hiring, enabling workplace misrepresentation, widening manager-employee divides, and testing trust at work.
Introduction
AI has moved from boardroom buzzword to daily workplace reality. But its effects are not landing the same way for everyone.
Managers are adopting AI at higher rates, trusting its outputs, and viewing it as a competitive edge. Employees, meanwhile, are navigating a landscape where AI tools often feel imposed rather than embraced, where professional authenticity is increasingly hard to verify, and where questions about fairness and job security have become part of everyday work life.
That divide is already shaping real decisions. Consider that 78% of managers support using AI to detect AI-generated applications—highlighting a concern not yet shared by employees. The divide between those deploying AI and those subject to it runs through nearly every finding in this report.
At Checkr, we help organizations make smarter decisions throughout the hiring process. In our organization, everyone in every role is authentically embracing AI tools to solve problems and supercharge productivity. But that's not what's happening at every company. So it's important to understand how workers at every level think about AI.
This report maps a growing divide across four dimensions: hiring integrity, workplace misrepresentation, adoption pressure, and day-to-day trust in AI outputs.
Summary of key findings
- Trust gap in AI hiring: 70% of managers trust AI-driven hiring tools, compared to only 27% of employees.
- The hiring arms race is real: 77% of managers and 51% of employees agree that hiring has become an AI arms race.
- AI resumes are everywhere: 81% of managers regularly encounter AI-enhanced resumes.
- AI fraud is a shared concern: 70% of managers and 58% of employees believe AI is enabling a new form of workplace fraud.
- Identity misrepresentation is growing: 69% of managers have experienced or suspected a coworker misrepresented themselves in hiring.
- Managers feel greater adoption pressure: 64% feel pressure to adopt AI to stay competitive, compared with 38% of employees.
- AI is becoming an unspoken job requirement: 58% of managers agree, compared to only 29% of employees.
- Daily AI trust is deeply divided: 40% of managers trust AI outputs often or almost always; 59% of employees rarely or never do.
- ROI skepticism runs deep: Only 3% of employees believe their org's AI spending on hiring is clearly worthwhile, compared with 21% of managers.
- Widespread ambivalence: 49% of managers and 57% of employees describe AI at work as both helpful and harmful.
Who do you trust to make the hire?
Few areas of work have been more disrupted by AI than hiring. To understand how managers and employees feel about it, we started with a straightforward question: Do you actually trust AI to make fair hiring decisions?
The answer depends entirely on who you ask. 70% of managers say they trust AI-driven hiring tools completely or somewhat. Among employees, that number drops to 27%.
How much do managers and employees trust AI-driven hiring decisions?
Percentage of respondents who say they trust AI-driven tools to make fair and accurate hiring decisions
*Data from Checkr proprietary survey of 1,500 managers & 1,500 employees
More tellingly, 41% of employees actively distrust these tools, with 18% saying they distrust them completely. Only 16% of managers feel that way.
That said, the skepticism flows in multiple directions. While employees distrust the AI tools evaluating them, employers are grappling with their own AI problem on the other side of the hiring table—81% of managers say they regularly encounter resumes that appear significantly enhanced or rewritten by AI, with 27% saying it happens very often.
What managers are seeing across those applications is fueling a broader perception that hiring has become an arms race.
77% of managers agree that candidates use AI to appear more qualified, while companies use AI to catch it.
Among employees, 51% agree, but a notable 38% describe themselves as neutral, suggesting a large portion of the workforce has not yet felt the full weight of this shift.
So should companies fight AI with AI? On detection, managers and employees split sharply. 78% of managers support using AI tools to detect AI-generated applications, split evenly between universal use and use only in high-stakes roles.
But 34% of employees say AI detection creates unfair suspicion and bias, compared to just 17% of managers. The concern likely reflects a fear of being penalized for using tools that are increasingly standard, even expected, in professional life.
Despite their disagreements, both groups land in the same place on one question: humans should have the final say. 64% of managers and 66% of employees agree that a human should override a strong candidate rejected by AI.
Only 7% of managers and 1% of employees think AI's recommendations should be followed unconditionally. A telling 15% of managers even agreed that AI is sometimes used as a leadership shield, a view echoed by 12% of employees. Agreement here is meaningful, but it also signals an underlying unease about who, or what, is really making hiring decisions.
Is the person you hired really who they said they were?
Getting hired is one thing. Actually doing the job is another. AI is increasingly blurring that line, and both managers and employees are starting to feel it.
When asked whether AI is making it easier for people to misrepresent their skills, the concern is nearly universal. 82% of managers and 80% of employees are at least somewhat worried, with close to half of each group expressing elevated concern.
This is one of the few places in the survey where the manager-employee divide largely disappears. Misrepresentation anxiety does not appear to vary much by rank.
What does vary is how directly each group has experienced it. 54% of managers say they have worked alongside someone who seemed underqualified for their role, possibly because AI helped them get there, and 12% say this has happened frequently. Among employees, only 31% report observing something similar.
The gap reflects a visibility difference: managers evaluate performance and set expectations in ways most employees simply do not.
That lived experience is shaping how people talk about AI in the workplace more broadly. 70% of managers and 58% of employees agree that AI is creating a new kind of workplace fraud, where people appear more capable on paper than they are in the actual role.
Only 9% on either side disagree entirely, which means this concern has moved well beyond fringe thinking into mainstream perception.
The misrepresentation concern extends beyond polished resumes to something more fundamental: identity itself. 41% of managers are very or extremely concerned that someone at their organization may not be who they claimed to be during the hiring process.
Among employees, 28% share that level of concern, and 62% express at least some worry.
When we asked whether anyone had actually experienced this, 69% of managers said they had either directly encountered or suspected identity misrepresentation by a coworker, including 21% with direct experience. Among employees, 46% share some version of that story.
Have managers and employees experienced or suspected hiring misrepresentation?
Percentage of respondents who say they have directly experienced or suspected that a colleague misrepresented their identity during the hiring process
*Data from Checkr proprietary survey of 1,500 managers & 1,500 employees
When the concern becomes concrete, the question becomes: what do you actually do about it? Managers lean toward action. 36% say they would raise it formally with HR or compliance, and 26% would confront the person directly.
Employees take a more cautious approach: 27% would document it privately, and 14% say they would do nothing at all because AI over-reliance is just becoming normal. That last number is the one worth sitting with.
Are you expected to use AI—or just assumed to?
For many workers, using AI is no longer entirely optional. But the pressure to adopt it is not distributed evenly, and the gap between how managers and employees experience that pressure reveals a great deal about how AI is actually being introduced at work.
Start with who workers think is driving AI adoption in the first place. Managers most commonly point to leadership or executives, with 39% naming this group as the primary force. Employees are far less certain: 34% say they simply do not know where the push is coming from. Only 8% of employees cite managers as the driver, compared to 21% of managers who see themselves in that role.
When workers cannot identify where AI directives originate, adoption tends to feel like something happening to them rather than something they are part of.
That sense of top-down imposition is clear in the pressure data. 64% of managers feel at least some pressure to adopt AI tools to stay competitive. Among employees, only 38% feel meaningful pressure, and 36% say they feel none at all.
Is AI usage necessary for managers and employees to stay competitive and relevant at work?
Percentage of respondents who say they feel some sort of pressure to use AI tools to stay competitive or relevant in their roles
*Data from Checkr proprietary survey of 1,500 managers & 1,500 employees
The managers expected to build and enforce AI strategies are experiencing the most adoption anxiety, while employees who use these tools daily are comparatively insulated from that urgency.
That disconnect creates real friction between policy and practice.
Much of that friction is invisible because expectations are never made explicit. 58% of managers agree that AI use is becoming an unspoken performance requirement at work, but only 29% of employees agree, and 37% say they are genuinely unsure.
That middle group matters. If over a third of employees cannot tell whether AI competence is being quietly evaluated as part of their professional standing, organizations are creating anxiety without accountability.
The perception gap extends to how much AI is actually used day to day. 45% of managers believe their colleagues use AI frequently or constantly. Only 18% of employees believe the same about their peers, and 16% say they are not sure.
Performance expectations built on managerial assumptions about AI adoption will consistently miss the mark if they are not grounded in what employees are actually doing.
Is anyone getting true value from AI in the workplace?
Even as AI tools spread across the workplace, trust in their outputs remains deeply contested. The way managers and employees answer questions about daily AI use paints a picture of two groups operating in the same environment but experiencing it in entirely different ways.
When it comes to trusting AI-generated outputs in their daily work, 40% of managers say they trust them often or almost always. Among employees, that figure falls to just 9%. More than half of all employees, 59%, say they rarely or never trust AI outputs at work.
Adoption without trust tends to produce compliance rather than genuine use, raising real questions about what AI productivity gains are actually measuring.
Interestingly, even managers who personally rely on AI recognize the organizational risk of over-reliance. 80% of managers believe AI outputs are trusted too much in workplace decision-making at least sometimes, with 22% saying this happens very often.
Employees are similarly skeptical: 68% agree AI is over-relied upon. The broad consensus across both groups—that AI is being leaned on too heavily—is a useful signal for organizations trying to build more thoughtful governance.
On return on investment, the divide becomes most pronounced. 21% of managers believe their organization's AI spending on hiring is clearly worthwhile, with another 46% calling it somewhat worthwhile. Among employees, only 3% say the spending is clearly worthwhile, and 22% call it somewhat worthwhile. When workers do not see meaningful improvements in how hiring works, they will not assign value to the tools behind it.
Do managers and employees agree on AI investment for recruitment and hiring?
Percentage of respondents who believe their organization’s AI investment in recruiting and hiring is clearly or somewhat worthwhile
*Data from Checkr proprietary survey of 1,500 managers & 1,500 employees
Zooming out to the broadest question in the survey, both groups most often describe AI in the workplace as both helpful and harmful. 49% of managers and 57% of employees chose that framing.
But the optimists are concentrated in management: 25% of managers believe AI is building trust and improving work quality, compared to only 5% of employees. And 20% of employees describe AI as an experiment with unclear outcomes, versus just 9% of managers. The ambivalence is real, and it runs deep.
Moving forward with AI in the workplace in 2026 and beyond
Here is what three thousand workers just told us: managers believe in AI, employees are still finding their footing with it, and the gap between those two experiences is one of the most important conversations happening in the workplace right now.
The misrepresentation concerns are real. The ROI questions are fair. The pressure employees feel to adopt tools they were never properly introduced to is something every organization should take seriously. But none of that changes the bigger picture: AI is making workplaces faster, smarter, and more capable. The opportunity is real, and it is growing.
The companies that will get the most out of AI are not necessarily the ones moving the quickest.
They are the ones where managers and employees work from the same playbook, where the “why” and the “how” of AI adoption are explained rather than assumed, and where trust in these tools is built deliberately rather than taken for granted.
The divide this survey reveals is not a reason to pump the brakes. It is a reason to communicate better.
For more information on Checkr's research or to request graphics or commentary about this study, please contact press@checkr.com.
Methodology
All data found within this report is derived from a survey conducted by Checkr via the online survey platform Pollfish from February 9–12, 2026. In total, 3,000 employed adult Americans were surveyed: an equal number of managers and non-management level employees. The respondents were found via Pollfish’s age and organizational role filtering features. This survey was conducted over a four-day span, and all respondents were asked to answer all questions as truthfully as possible and to the best of their knowledge and abilities.
Disclaimer
The resources and information provided here are for educational and informational purposes only and do not constitute legal advice. Always consult your own counsel for up-to-date legal advice and guidance related to your practices, needs, and compliance with applicable laws.


About the author
As VP of Product & Customer Marketing at Checkr, Bryan is responsible for educating current and prospective customers about the value and use of our products. He is passionate about understanding the needs of companies and candidates who use Checkr, and helping them get the most from the platform.

