By Mikita Wojcieszonek, optimisation expert, Innovation & Optimisation Department, Forvis Mazars in Poland

The widespread use of AI has opened up numerous opportunities for individuals and legal entities. While easy access to AI offers significant time savings for everyday users across various applications, for businesses AI also brings new risks along the development path. Companies using AI can achieve more, and faster, than without it, so the desire to reap the benefits of large language models (LLMs) is understandable. Some companies, however, adopt restrictive internal AI policies, believing this will keep their resources safe. In reality, companies taking either approach must defend themselves against threats that have intensified with the widespread availability of AI.

AI-enabled cybercrime
Easy access to AI for everyone means that criminals, hackers, and fraudsters can also benefit. With AI, they are able to prepare increasingly sophisticated attacks with greater speed. AI significantly enhances cybercriminals’ capabilities in various areas:

  • Target identification – faster selection of attack targets, for example through automated screening for the most profitable businesses
  • Victim analysis – gathering information about the target to make a later attack more effective
  • Attack execution – automated and personalised execution of malicious actions

Next-generation phishing and spoofing
The best example of the attacker’s modernised arsenal is long-standing phishing and spoofing. Impersonation attacks have become easier than ever, even without in-depth knowledge of social engineering. In theory, anyone can use AI to generate highly personalised emails that effectively disguise a malicious link or attachment and encourage recipients to click. The attacker no longer needs to worry about the effectiveness of the message content – AI will choose the right words to manipulate the intended recipient and extract sensitive data. Previously, grammatical or spelling errors were red flags for users – now generative AI eliminates such defects from phishing emails. Another telltale sign was an awkward translation of the message into the victim’s native language. Now, a flawless translation can be requested with a single sentence in the prompt.

The most common element of phishing involves the criminal creating a website that imitates one the victim knows. Before the age of AI, creating such a copy was time-consuming and often ineffective. Users noticed visual differences from the familiar original, which made them check the domain or leave the site immediately, before providing any information. Today, AI tools can not only create a near-perfect visual copy of any website but also replicate its functionality, so that after a few clicks the user has no doubt they are where they should be. Given all this, these attacks are extremely difficult for humans to detect.

Despite deploying various types of security measures, which can also be based on AI, companies are still exposed to the risk that such systems may mistakenly classify a phishing email as legitimate – or vice versa.

Multimodal AI: a new dimension of threats
Can AI support phishing only via the internet (e.g. email)? Unfortunately, no. Multimodal AI, in addition to generating text, can ‘understand’ and generate sound (including voice), images, and video. Spoofing can now be carried out over the phone by an automated AI agent. AI can clone any person’s voice, mimicking its tone and timbre. At the same time, there are solutions that allow criminals to spoof any phone number. It’s easy to imagine that, by combining several tools, we could receive a phone call from our boss or a client, in a voice that sounds completely authentic, asking us to share confidential information or make a payment. The ability to automate the process gives fraudsters an additional advantage: they can target multiple victims simultaneously.

Internal threats: employees
What about companies that allow employees to use AI or decide to implement AI-based tools? Employees using LLMs may be unknowingly misled by so-called hallucinations, where the AI invents non-existent facts or distorts real data. A common issue is AI’s handling of numbers and simple arithmetic. Famous examples include asking a model to count the letter ‘r’ in the word “strawberry” (models often answer two, though there are three) or to compare the numbers 5.11 and 5.9 (a model may incorrectly say 5.11 is greater). It’s important to remember that an AI-generated answer can differ each time, even with identical prompts.
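Both of these questions are answered deterministically by a couple of lines of ordinary code, which is precisely why AI output should be verified rather than trusted. A minimal sketch in Python:

    # Deterministic checks for two well-known LLM stumbling blocks.
    word = "strawberry"
    print(word.count("r"))   # prints 3; models often answer 2

    print(5.11 > 5.9)        # prints False; 5.11 is numerically smaller than 5.9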

Such simple AI errors can be caught if users understand that AI outputs require verification. Another threat, less visible to the average employee, is the voluntary sharing of confidential data. It seems obvious that if we ask AI to summarise certain company data, we must provide that data as part of the prompt. However, such queries are often logged. Companies providing LLMs or other AI products strive to continuously improve their services and collect all possible feedback, including data sent in the chat. Unfortunately for the business, the employee who supplied this information has effectively caused a leak of confidential data. Such situations can be prevented through appropriate contractual obligations on AI providers or by running LLMs locally (on internal IT resources, without third-party services).
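Beyond contracts and self-hosting, a practical technical safeguard is to redact obviously sensitive values before a prompt ever leaves the company. Below is a minimal sketch assuming simple regex patterns for emails and IBANs; a real deployment would rely on proper data-classification tooling rather than hand-written rules:

    import re

    # Hypothetical, deliberately simplistic patterns for illustration only.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    }

    def redact(prompt: str) -> str:
        """Replace sensitive values with placeholders before sending a prompt externally."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    print(redact("Summarise: contact jan.kowalski@example.com, account PL61109010140000071219812874"))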

Confidential data as an AI knowledge source?
One of the most dangerous ideas a company might pursue is launching an AI-based tool with automatic access to confidential data. If such a solution is exposed outside the company network (for example, as a chatbot), it becomes an attractive target for criminals. There are already known cases where companies using trusted AI providers have attracted journalistic attention because of data leaks. Despite the many mechanisms intended to guard against insecure AI behaviour, attackers have achieved such levels of prompt engineering that LLMs disclosed full sets of the confidential data they had access to. This demonstrates that wariness about AI use is justified for many businesses.
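The safest architectural rule is that the model should never be given data the requesting user is not entitled to see: a model cannot leak what it never received. The sketch below illustrates the idea with a hypothetical in-memory store and access check standing in for a real retrieval layer and LLM call:

    from dataclasses import dataclass

    @dataclass
    class Document:
        text: str
        allowed_readers: set

    # Hypothetical in-memory store standing in for a real document index.
    STORE = [
        Document("Public price list", {"alice", "bob"}),
        Document("Confidential payroll data", {"alice"}),
    ]

    def answer(user_id: str, question: str) -> str:
        # Filter before the model sees anything: grant the LLM only the
        # documents the requesting user is authorised to read.
        context = [d.text for d in STORE if user_id in d.allowed_readers]
        return " | ".join(context)  # placeholder for an actual LLM call

    print(answer("bob", "Summarise what you know."))  # payroll data never reaches the model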

Conclusion
It’s understandable why some companies continue to delay implementing AI. Businesses must pursue innovation while minimising the risk of errors – potential financial or reputational losses can be incomparably greater than the anticipated benefits. This caution is driven by threats, only a small portion of which have been discussed in this article. To make use of AI, businesses should focus first on cybersecurity: analyse the risks and put mechanisms in place to prevent losses. It’s essential not only to choose AI service providers consciously and select appropriate security technologies, but also to train employees and build awareness of potential threats throughout the company.