By Piotr Maślak, senior director, head of Emerging Technologies, AstraZeneca


Most companies are vocal about AI adoption: 92% plan to invest in it, yet the results are far less impressive, with only 1% describing their AI efforts as mature[1]. These solutions have great potential to revolutionise how companies operate, but they come with significant risks related to data quality, employee adoption and regulatory compliance.

Pioneering innovation in a data-driven era
Despite widespread AI adoption in daily life, the pharmaceutical industry faces a stark reality: only 11% of companies have fully implemented AI solutions that deliver tangible organisational benefits, with many initiatives stalled at the proof-of-concept stage[2]. The most innovative implementations remain uncharted territory – adapting AI models prevalent in consumer applications to complex regulatory environments requires careful consideration of the risks and benefits. There is an unprecedented opportunity to transform our ways of working, from early research to daily productivity, while maintaining a commitment to sustainable and responsible technology development and implementation. Leveraging these technologies will be a competitive advantage across all industries, including pharmaceutical R&D.

Strategic advantages: how AI is revolutionising pharma and clinical development
The drug development process spans computer-based drug discovery, laboratories, and the clinics and hospitals where patients undergo clinical trials under the rigorous supervision of medical personnel and research experts who ensure their safety and wellbeing. Artificial intelligence can be applied across this entire value chain – from augmenting medicinal chemistry with computational methods, through algorithms analysing thousands of medical images, to rapid analysis of electronic health records[3] and more efficient regulatory submissions[4]. These plentiful opportunities, however, introduce risks that require careful mitigation.

Mitigating risks: navigating the complexities of AI implementation
Risk mitigation is a key consideration in research and development, and the novelty of AI implementation makes it a prime example of this complexity. The risks fall into three categories: data integrity and bias; regulatory and ethical; and implementation.

The first category concerns the quality of the underlying data and the bias AI models inherit during training. The more issues in the source and training data, the less reliable the resulting AI-powered solutions – which calls for robust data governance standards and model oversight to ensure reliability. A well-known example of an incomplete dataset introducing bias is heart-attack symptom detection: because most studies were conducted on men, with women making up less than 30% of the studied population, there is a gap in our understanding of heart-attack symptoms specific to women[5].

The second category concerns the complex regulatory landscape – a limited but growing set of requirements that AI solutions must adhere to – along with the ethical considerations of privacy and autonomy. Establishing human oversight is key to responsible deployment, and adherence to regulations is crucial for compliant adoption. The EU AI Act, particularly its provisions on 'high-risk AI systems', underscores the need for human oversight[6].

Lastly, implementation demands a completely new set of skills from the workforce and from users which, if not managed correctly, can discourage people or disrupt established processes. This requires a different approach to talent management, with increased demand for digital talent: data scientists, data engineers and AI integration specialists[7]. Such specialised, modern talent pools are highly sought after in the traditional technology hubs of the USA and Western Europe. There is, however, untapped potential: a wealth of highly educated specialists in dynamic economies such as Poland, which offer competitive salaries along with free access to the European market[8].

Leveraging clinical and technology ecosystem in Poland
Poland is among the fastest-growing economies in the region[9], making it fertile ground for innovation. The country ranks fifth in Europe by number of clinical trials[10] and has the largest tech talent pool in the region (607,000 in 2024)[11]. AstraZeneca recognises Poland's innovation landscape, having established a site in Warsaw with over 3,000 employees that holds R&D Centre status granted by the Ministry of Development[12]. The Warsaw site teams focus on the drug development process, including the design and execution of global clinical trials. The Warsaw AI Centre of Excellence was established in recognition of Polish talent, with the goal of promoting Warsaw as a tech hub and enabling internal talent to flourish through development support and access to global data science and AI projects. This includes data-science work across all phases of clinical research, productivity improvements across AstraZeneca's operations, and cutting-edge projects such as AI-powered assistants and custom agents enabling intelligent process automation. However, as more processes are automated and reshaped by advanced technology, robust governance and oversight are required to ensure sustainability – one of AstraZeneca's key commitments in its responsible approach to science.

Responsible AI deployment
While AI models promise to save time and accelerate drug development, deployment must be responsible and ethical. Establishing appropriate enterprise AI governance bodies safeguards adherence to regulations and sustains the trust of our partners and patients. The maturing regulatory landscape provides a set of laws and recommendations that indicate how this disruptive technology should be used: the EU AI Act[13], GDPR[14], HIPAA[15] and FDA frameworks[16], among others. Commitment to responsible AI is not just a necessity but a strategic imperative for sustainable innovation and long-term success.

A future defined by responsible innovation
The rapid acceleration of innovation, with emerging technologies becoming interwoven into the fabric of our daily lives, promises a revolution – but it also creates a strategic imperative of social, ethical and regulatory responsibility. Companies hoping to stay at the leading edge of digital transformation must embrace that responsibility and build proactive, sustainable and ethical frameworks. Fostering an organisational culture that champions continuous learning and awareness will be the competitive differentiator for future innovation.


[1] https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work

[2] https://www.appliedclinicaltrialsonline.com/view/new-insights-on-the-impact-of-ai-enabled-solutions

[3] https://www.astrazeneca.com/r-d/data-science-and-ai.html

[4] https://www.astrazeneca.com/what-science-can-do/topics/data-science-ai/generative-ai-drug-discovery-development.html

[5] https://www.health.harvard.edu/heart-health/the-heart-disease-gender-gap

[6] https://artificialintelligenceact.eu/article/6/

[7] https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work

[8] https://devsdata.com/polish-tech-boom/

[9] https://www.erstegroup.com/en/research/report/en/SR460488

[10] https://pharmaboardroom.com/articles/top-10-european-countries-for-clinical-trials-in-2025/

[11] https://polandweekly.com/2025/06/24/polands-hidden-superpower-ai/

[12] https://www.astrazeneca.pl/o-nas.html

[13] https://artificialintelligenceact.eu/

[14] https://gdpr-info.eu/

[15] https://www.hhs.gov/hipaa/index.html

[16] https://www.fda.gov/news-events/press-announcements/fda-proposes-framework-advance-credibility-ai-models-used-drug-and-biological-product-submissions