How to use AI responsibly
PCS Paruch Chruściel Stępień Kanclerz | Oct 14, 2025, 10:00

By Bartosz Wszeborowski, advocate, senior lawyer, and Julia Łuszczewska, lawyer, PCS Paruch Chruściel Stępień Kanclerz
In recent years, many companies have turned to artificial intelligence (AI) to help with recruitment: screening CVs, ranking candidates, predicting performance, even conducting initial interviews. The benefits are promising – faster hiring cycles, lower costs, improved objectivity, and potentially better decision-making. But with these advantages comes a significant challenge – making sure algorithms do not reinforce the very biases they are meant to eliminate.
Understanding algorithmic discrimination
Algorithmic discrimination refers to situations where AI systems lead to unjust or biased treatment of individuals – often unintentionally. This usually happens when the training data used to build the algorithm reflects existing social or historical inequalities. Even if the system does not explicitly use protected characteristics like gender, race, age, disability, or religion, it may rely on proxies that correlate with them. For example, a recruitment algorithm trained on past hiring data may learn to favour male candidates if the company historically hired more men. Legally speaking, this kind of discrimination can be just as serious – and unlawful – as if it had been done intentionally by a human recruiter.
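To make the proxy problem concrete, the sketch below checks how strongly a seemingly neutral feature correlates with a protected attribute in a candidate dataset. The feature names and figures are hypothetical, chosen purely for illustration – in practice such a check would run over real application data as part of a bias audit.

```python
# Illustrative only: checking whether a "neutral" feature acts as a proxy
# for a protected characteristic. Column names and data are hypothetical.
import pandas as pd

applicants = pd.DataFrame({
    # 1 = male, 0 = female (protected characteristic, not fed to the model)
    "is_male": [1, 1, 0, 0, 1, 0, 1, 0],
    # A seemingly neutral feature the model *is* allowed to use
    "years_of_continuous_employment": [10, 9, 4, 3, 8, 2, 11, 5],
})

# A strong correlation suggests the feature can stand in for gender,
# e.g. because career breaks are unevenly distributed between groups.
corr = applicants["is_male"].corr(applicants["years_of_continuous_employment"])
print(f"Correlation with protected attribute: {corr:.2f}")
```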
What regulations apply?
Polish Labour Code
Polish employment law clearly prohibits discrimination in recruitment and employment. According to article 183a of the Labour Code, both employees and job candidates must be treated equally regardless of their sex, age, disability, race, religion, nationality, political beliefs, trade union membership, ethnic origin, sexual orientation, etc. Equal treatment means ensuring no one is disadvantaged – directly or indirectly – due to any of the protected characteristics. Direct discrimination may involve, for instance, rejecting a candidate based solely on their age. Indirect discrimination occurs when seemingly neutral requirements disproportionately exclude a protected group without valid justification.
In the context of AI, if an automated hiring tool systematically disadvantages certain groups – even unintentionally – this could breach labour law and give rise to justified claims by candidates.
GDPR
When AI tools use personal data – and in recruitment, they almost always do – the General Data Protection Regulation (GDPR) applies. Employers must ensure transparency, lawfulness, and fairness in how personal data is processed.
Importantly, the GDPR also includes Article 22, which restricts decisions based solely on automated processing that produce legal or similarly significant effects – such as hiring decisions. Candidates must be informed when such processing occurs and have the right to obtain human intervention and to contest the decision.
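One practical way to respect the Article 22 restriction is to design the screening pipeline so that no adverse outcome is ever final without a human decision. The sketch below is a minimal illustration of that pattern; the class, threshold, and labels are assumptions made for the example, not requirements drawn from the Regulation itself.

```python
# A minimal human-in-the-loop sketch in the spirit of GDPR Article 22.
# Names and the 0.75 threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate_id: str
    model_score: float   # output of an AI screening tool
    decision: str        # "advance" or "human_review" – never "reject" alone

def route(candidate_id: str, model_score: float) -> ScreeningResult:
    # The algorithm may advance candidates on its own, but any negative
    # outcome is routed to a recruiter rather than decided automatically.
    if model_score >= 0.75:
        return ScreeningResult(candidate_id, model_score, "advance")
    return ScreeningResult(candidate_id, model_score, "human_review")

print(route("C-1042", 0.41))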
Additionally, special categories of personal data (e.g. health data, religious beliefs, biometric data) are subject to stricter protections and, in most cases, cannot be processed in recruitment unless specific conditions are met.
AI Act
Under the AI Act, systems used in recruitment are classified as ‘high-risk AI systems’. This means they will be subject to strict requirements around:
- Risk management
- Data governance
- Technical documentation
- Record-keeping
- Transparency and provision of information to users
- Human oversight
- Accuracy, robustness and cybersecurity
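What record-keeping and human oversight from the list above might look like in practice is sketched below: every automated screening outcome is logged with a timestamp, the version of the model that produced it, and the human reviewer involved. The field names are illustrative assumptions – the AI Act sets out the objectives, not a prescribed schema.

```python
# A minimal record-keeping sketch inspired by the AI Act's logging and
# human-oversight requirements. Field names are assumptions, not a
# schema prescribed by the Act.
import json
import datetime
from typing import Optional

def log_decision(candidate_id: str, model_version: str,
                 score: float, reviewed_by: Optional[str]) -> str:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "model_version": model_version,  # traceability of the system used
        "score": score,
        "human_reviewer": reviewed_by,   # evidence of human oversight
    }
    return json.dumps(record)

print(log_decision("C-1042", "screener-v2.3", 0.41, "recruiter_17"))
```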
Can AI-based systems be used in recruitment legally?
Using AI in recruitment is not illegal by default. In fact, when done thoughtfully, it can be a valuable support tool. The real challenge is not whether companies can use AI – it is how they do it.
When properly designed and implemented, AI can support recruiters by identifying qualified candidates more efficiently, reducing human error, and helping eliminate unconscious bias. For example, a recruitment tool that does not rely on protected characteristics, is audited regularly for bias, and produces clear, explainable outcomes can be considered lawful. This means, in particular, reviewing training data to ensure it does not replicate historical discrimination, giving candidates clear information about how their data is being used, and ensuring there is always a way to challenge or appeal decisions that significantly affect them.
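A regular bias audit can start with something as simple as comparing selection rates between groups. The sketch below applies the "four-fifths" impact-ratio heuristic – a US regulatory rule of thumb used here purely as an illustration, not a test drawn from Polish or EU law – to hypothetical screening figures.

```python
# A back-of-the-envelope bias audit: comparing selection rates between
# two groups. The 80% ("four-fifths") threshold is a US regulatory
# heuristic, used here only as an illustrative benchmark.

def selection_rate(selected: int, applied: int) -> float:
    return selected / applied

group_a = selection_rate(selected=40, applied=100)  # hypothetical figures
group_b = selection_rate(selected=24, applied=100)

ratio = min(group_a, group_b) / max(group_a, group_b)
print(f"Impact ratio: {ratio:.2f}")  # 0.60 here, well below 0.8
if ratio < 0.8:
    print("Potential adverse impact: investigate before relying on the tool.")
```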
But just as AI has the potential to improve recruitment, it also carries serious risks if misused. Some practices clearly cross legal boundaries. Systems that directly or indirectly factor in gender, age, ethnicity, or other protected characteristics without clear justification are a legal red flag. The same applies to tools that make hiring decisions autonomously, without any human oversight or possibility of review.
AI in everyday employment
The use of AI does not stop at hiring. Increasingly, employers are turning to algorithmic tools to monitor, evaluate, and even discipline employees. While the potential benefits – such as improved efficiency or objectivity – are frequently cited, these systems also raise serious legal concerns.
New challenges are emerging, especially around the monitoring of employees. These technologies raise significant concerns about privacy and excessive control. AI systems can track performance, measure productivity, and even attempt to predict behaviour. Unlike traditional supervision, AI tools can operate continuously and collect vast amounts of data about employees, which risks overstepping privacy boundaries.
Another important issue is how AI is used to evaluate employees. In theory, algorithms are supposed to be fairer because they remove human bias. In reality, they can create new, less visible biases or even entrench existing inequalities. When these systems are combined with constant monitoring, they may feel invasive or even harmful. Algorithms often aim for perfect performance, but people cannot increase their productivity endlessly. Expecting continuous improvement is unrealistic – and can put unhealthy pressure on employees.
The critical concern, however, arises when AI systems are used to decide whether to terminate an employment relationship. The idea that a machine – without human empathy or full understanding – could decide on dismissing someone is worrying. Polish labour law does offer some protection, such as the obligation to justify terminations and consult with trade unions. But these rules might not be strong enough if employers start relying too heavily on algorithms.
AI has the potential to transform recruitment and employee management for the better – by increasing efficiency, reducing human error, and supporting fairer decision-making. However, if not carefully designed and monitored, algorithmic tools can reinforce bias, undermine employee rights, and erode trust in the workplace. Employers must take steps to prevent discrimination, provide transparency and human involvement, and protect employee privacy. Algorithms must be tools – not decision-makers. And ultimately, it is humans who must remain accountable.

