Hiring Behind the Veil: What Fairness Looks Like in Practice
The Veil of Ignorance and the Future of Recruitment
Imagine you’re designing a hiring process, but you don’t know whether you’ll be the one doing the hiring or the one applying for the job. Would you build a system that rewards résumés from elite schools, favors confident English speakers, and ghosts most candidates without explanation? Probably not.
This is the thought experiment that philosopher John Rawls posed when he introduced the concept of the “veil of ignorance.” Fair systems, he argued, are those we’d choose if we didn’t know our position in them. By that standard, much of today’s hiring fails the test.
Most candidates experience hiring as a black box: decisions made without transparency, interviews shaped by gut instinct, and outcomes delivered (if at all) with generic rejection emails. Biases creep in. Informal networks matter more than qualifications. And feedback? Rare to nonexistent. Rawls would likely see this for what it is: a process designed by and for those already on the inside.
Recruiters don’t mean to be unfair, but they’re often under-resourced and asked to make high-stakes decisions based on superficial signals. Nowhere is this more acute than in technical hiring, where take-home assessments are often unpaid, résumés favor the already privileged, and those who stumble once are rarely given another chance.
If Rawls were alive today, he might ask: how could we redesign this system from behind the veil?
What the Research Shows
Micro1 set out to answer that question. In partnership with researchers at Stanford, they ran a randomized experiment involving over 37,000 applicants for junior software roles. One group went through the traditional hiring process. The other was assessed by Zara, their AI recruiter, before advancing to the same final human interview. The results were surprising.
Candidates interviewed by Zara were more likely to come from non-traditional backgrounds. They described the experience as structured, respectful, and far less anxiety-inducing than traditional processes. Even rejected applicants received personalized feedback. Most strikingly, nearly 60% of AI-selected candidates passed the final human interview, compared to just 34% from the conventional track.
This research challenges the dominant fear that AI in hiring is inherently biased, dehumanizing, or unaccountable. Those risks are real, but so are the risks of the current system. I believe the opposite is not only possible but urgent: when designed for equity, AI can make hiring fairer, more transparent, and more humane.
Four Principles for Fairer Hiring
Organizations have tried to automate hiring for years, with mixed results. An integrated human-AI approach built on four principles offers a more promising path.
First, structure over gut feel. A fair process asks every candidate the same core questions and evaluates them against a consistent rubric (sketched in code after these principles).
Second, feedback is a form of respect. Most candidates invest significant time and emotional energy in interviews, only to be left in the dark. Offering clear, personalized feedback, even in rejection, helps people grow, builds goodwill, and holds the system accountable.
Third, candidate agency matters. Giving candidates control over timing, allowing interview retakes, and enabling clarifying questions during the process can level the playing field in meaningful ways.
Finally, human-AI collaboration is key. The goal is to design systems where AI removes noise and enforces consistency, while humans bring contextual awareness.
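To make the first and last of these principles concrete, here is a minimal sketch in Python of what a rubric-based screen might look like. Everything in it is a hypothetical illustration (the criteria, the 1-5 scale, the 3.5 threshold, the retake policy); it is not Micro1's or Zara's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical rubric: every candidate answers the same core questions,
# and every answer is scored against the same anchored criteria.
RUBRIC = {
    "problem_decomposition": "Breaks the task into clear, testable steps",
    "code_correctness": "Handles the stated requirements and obvious edge cases",
    "communication": "Explains trade-offs and reasoning clearly",
}

@dataclass
class Evaluation:
    candidate_id: str
    scores: dict = field(default_factory=dict)  # criterion -> score on a 1-5 scale

    def average(self) -> float:
        return sum(self.scores.values()) / len(self.scores)

    def feedback(self) -> list:
        # Personalized, criterion-level feedback, generated even on rejection.
        return [
            f"{RUBRIC[criterion]}: "
            f"{'a strength' if score >= 4 else 'an area to develop'} ({score}/5)"
            for criterion, score in self.scores.items()
        ]

def route(evaluation: Evaluation, threshold: float = 3.5) -> str:
    # The AI screen applies the shared rubric; a human makes the final call.
    if evaluation.average() >= threshold:
        return "advance to final human interview"
    # Candidate agency: a miss triggers feedback and an offered retake,
    # not a silent rejection.
    return "send feedback and offer a retake"

# A candidate judged on identical criteria rather than gut feel.
e = Evaluation("cand-042", {"problem_decomposition": 4,
                            "code_correctness": 3,
                            "communication": 5})
print(route(e))                  # -> advance to final human interview
print("\n".join(e.feedback()))
```

The point of the sketch is the shape, not the numbers: identical criteria for every candidate, feedback generated even in rejection, and a routing rule that leaves the final decision with a human.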
Rawls and Recruitment
Rawls’s “veil of ignorance” is, at its core, a fairness test. It asks us to design systems without knowing whether we’ll be the beneficiary or the overlooked. In recruitment, this means reimagining hiring from the perspective of the candidate, rather than that of the gatekeeper.
But Rawls’s test also offers a blueprint. The challenge isn’t whether we use AI in hiring. It’s how. Do we replicate the same opaque, biased systems at scale? Or do we use new tools to ask older, better questions — about dignity, equity, and access?
Rawls never wrote about algorithms. But his fairness test is as urgent as ever. Because in today’s job market, most of us, at some point, will be on the other side of the table. Behind the veil.