
The "AI Assessment Effect": When AI changes human behavior

Companies use AI-based tools to review application documents, record interviews, and analyze character traits. A recent study shows what this means for applicants and recruiting.


Automated CV analysis, chatbots, video interviews, and matching algorithms: artificial intelligence in recruiting is developing rapidly. But how does artificial intelligence affect candidates and, consequently, the recruiting process as a whole?

Researchers at the Universities of St. Gallen and Rotterdam investigated this in a study entitled "AI assessment changes human behavior." To do so, they developed various scenarios, including simulated job application processes and college admission procedures, and conducted additional studies in controlled online environments. The scenarios and experimental setups were designed so that participants believed they were being evaluated either by AI or by a human being.

The study, which involved over 13,000 participants, provides evidence for the so-called "AI assessment effect." The effect is based on the fact that people systematically change their behavior as soon as they believe that artificial intelligence is involved in their assessment. The result: even without evidence that AI actually rewards such behavior, applicants present themselves in a more analytical and less intuitive or emotional manner when artificial intelligence is used.

Why does this happen? The role of expectations toward AI

The most important driver of this effect is a widespread assumption: although modern AI is increasingly capable of classifying emotional cues, the prevailing belief is that AI primarily evaluates analytical characteristics and neglects intuition. In the study, the researchers refer to this as a lay understanding of AI, or "analytical priority lay belief." This understanding has a significant impact on the behavior of applicants and thus on recruiting in general.

What research shows – 7 important findings

The study reveals seven findings that are fundamental to how HR managers and recruiters should integrate artificial intelligence into application and selection processes in the future:

  1. People present themselves differently when AI is involved: In a field study on Upwork, applicants described themselves as more analytical when they thought that AI was reading their application.
  2. This behavior occurs consistently across different contexts: whether applying to college, simulating a job interview, or responding to a real freelance job posting—similar patterns emerged everywhere.
  3. AI influences behavioral decisions: In tasks where individuals had to rank characteristics by importance (e.g., analytical vs. intuitive), they ranked analytical characteristics higher when they expected an AI evaluation.
  4. The effect is independent of application scenarios or admission procedures: people generally deviated more from their "normal" self-image under AI evaluation than under human evaluation—even when they were not in an active admission or application scenario.
  5. Behavior influences the final selection: Simulations show that, depending on the selection criteria, up to 27% of applicants would only be selected if the evaluation were carried out by AI, and vice versa.
  6. Transparent communication reduces the effect: When applicants in the study were asked to question their own assumptions about AI or to assume the opposite, the effect disappeared or was partially reversed.
  7. When humans are involved in the selection process, the effect is diminished: if participants were informed that AI was involved but that a human would make the final decision, analytical self-presentation remained strong but was less pronounced than when the evaluation was carried out exclusively by AI.
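Finding 5 can be illustrated with a toy simulation. This is a hypothetical sketch, not the study's actual method or data: candidate scores, the strength of the self-presentation shift (`SHIFT`), the evaluator weights, and the per-candidate `sensitivity` are all assumed values. The idea is simply that when candidates shift their self-presentation toward analytical traits to different degrees, the set of selected candidates changes.

```python
import random

random.seed(42)

# Toy sketch with hypothetical numbers: each candidate has true analytical
# and intuitive scores. Under perceived AI assessment, candidates inflate
# analytical self-presentation, each to a different degree.
N, K = 1000, 100                       # applicant pool, number of hires
SHIFT = 0.5                            # assumed maximum self-presentation shift
W_ANALYTICAL, W_INTUITIVE = 0.6, 0.4   # assumed evaluator weights

candidates = [
    {
        "analytical": random.gauss(0, 1),
        "intuitive": random.gauss(0, 1),
        "sensitivity": random.random(),  # how strongly this person adapts
    }
    for _ in range(N)
]

def select(scores, k):
    """Return the indices of the k highest-scoring candidates."""
    return set(sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k])

# Human assessment: candidates present their traits as they are.
human_scores = [
    W_ANALYTICAL * c["analytical"] + W_INTUITIVE * c["intuitive"]
    for c in candidates
]

# Perceived AI assessment: analytical self-presentation is inflated and
# intuition downplayed, proportional to each candidate's sensitivity.
ai_scores = [
    W_ANALYTICAL * (c["analytical"] + SHIFT * c["sensitivity"])
    + W_INTUITIVE * (c["intuitive"] - SHIFT * c["sensitivity"])
    for c in candidates
]

hired_human = select(human_scores, K)
hired_ai = select(ai_scores, K)

# Share of AI-era hires who would NOT have been hired under human assessment.
only_under_ai = len(hired_ai - hired_human) / K
print(f"Hired only under AI assessment: {only_under_ai:.0%}")
```

Even in this simplified setup, the two selection regimes produce different shortlists from identical underlying candidates, which is the mechanism behind the study's "up to 27%" figure.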

What does this mean for companies?

This has negative consequences for companies that use artificial intelligence in selection processes without taking appropriate countermeasures. If applicants systematically adapt their answers and thus distort their patterns of thinking and behavior, this leads to distorted assessment data. The result: companies make the wrong decisions for or against candidates. If AI selects applicants who are pretending to be someone they are not, their actual strengths will not match the company's requirements – which in turn leads to potential mis-hires.

The change in applicants' behavior may also lead to unequal opportunities in AI-based assessments. This is because groups that try harder to meet expectations may change their behavior particularly significantly. These include, for example, younger candidates and people who are generally more afraid of negative assessments.

The good news is that companies can counteract this with simple measures.

What can companies change?
5 recommendations for action in recruiting

Based on the study results and the resulting implications for companies, the following strategies are suitable for limiting misleading behavior by applicants during AI-based selection processes:

1. Optimize expectations of AI

Instead of simply stating that artificial intelligence makes the decisions, companies should communicate transparently which criteria the AI evaluates, how analytical and intuitive characteristics are weighted, and that, if applicable, emotional aspects are also relevant. This information reduces false assumptions by applicants about the use of artificial intelligence. The study shows that specific information about the capabilities of AI reduces or even neutralizes analytical bias. Regulations such as the EU AI Act also require companies to communicate the use of AI in application processes as transparently as possible.

2. Involve people transparently in the selection process

Since combining human and AI-based decisions reduces applicants' systematic behavioral shifts, it makes sense to use AI in only one area, such as screening. The analysis and interpretation are then carried out by HR managers. Involving people in a transparent manner, for example in a final interview, can be helpful in reducing the "AI assessment effect." However, the key is transparent communication, where applicants know at all times where and how either artificial intelligence or people are evaluating and making decisions.

3. Adjust tasks

The study shows that applicants adapt their behavior primarily in tasks where they present or describe themselves. It can therefore be helpful to supplement selection procedures with behavioral tests, work samples, structured interviews, or simulations. These tasks are harder to game and convey a more authentic impression of how candidates really think, act, and work.

4. Review systems and selection processes

To ensure consistently good personnel decisions, companies should continuously review their systems and selection processes. This involves determining whether AI systems overemphasize certain self-presentation patterns, whether and how candidate profiles shift as a result, and how this subsequently affects hiring decisions.

5. Optimize selection processes with data-driven candidate personas

Candidate personas are an effective tool for AI-supported recruiting processes. They help companies understand the different personalities of applicants, their expectations, motivations, and possible reactions to AI-based selection processes. This enables recruiters to identify in advance which candidates are sensitive to automated processes, which tone of voice builds trust, and at which touchpoints human decision-makers should be more involved. On this basis, companies can design AI-based selection processes that are efficient, psychologically effective, fair, and target group-specific.

Conclusion: AI influences the behavior of the people it evaluates

The most important finding of the study is that AI not only changes how companies approach and select candidates, but also influences how candidates behave. Applicants therefore base their behavior on what they believe to be AI's evaluation criteria – often based on false assumptions. This results in distortions that companies must actively address and minimize in order to continue making sensible personnel decisions in the future. The two most important prerequisites for this are that AI should incorporate emotional signals into decisions in addition to technical and rational aspects, and that companies should communicate this transparently. With data-based candidate personas, the Persona Institute provides guidance on how companies can design AI-based, fair, and authentic selection processes.
