What is meant by ‘AI Hardening’?

Many AI applications are great because they increase the efficiency of companies. However, they can also pose a threat to information security. Find out here how AI Hardening makes your systems more secure.

What is AI hardening?

The term ‘AI Hardening’ has not yet been precisely defined. We understand it as the Secure Configuration of AI systems so that they have fewer security vulnerabilities. In this case, it is a specific form of Application Hardening.

We also see AI Hardening as a measure to harden operating systems – with a focus on significantly limiting or disabling tools such as MS Copilot. AI Hardening then belongs in the realm of OS Hardening or specific Windows 11 Hardening.

What are the goals of AI Hardening?

As with all System Hardening, AI Hardening is also about …

✅ … the individual applications process, send and store as little sensitive data as possible on external servers. This restricts ‘calling home’ so that ‘data octopuses’ cannot obtain or even misuse the information.

✅ … attackers cannot use installed AI applications or online AI tools to penetrate company systems. In other words, hardening ensures that the attack surfaces and thus the risk of compromise are reduced.

How do you carry out AI Hardening?

AI Hardening consists of various tasks. These include, among others:

➡ Optimising the individual AI parameters so that they can be considered as ‘safe’ as possible.

➡ Restricting, deactivating and/or uninstalling AI applications if they are not absolutely necessary.

Note: There are currently no specific recommendations from CIS, DISA or BSI for AI Hardening, but they will certainly follow soon. One exception is Microsoft, which recommends deactivating Copilot in the MS Server 2025 Baselines.

Specialty: LLM Hardening

LLM Hardening focuses on securing Large Language Models (LLMs). These AI models, which can understand and generate natural language, are vulnerable to manipulation attempts – for example, through prompt injections or jailbreaks.

➡ LLM System Hardening follows a structured approach to minimize these risks. It primarily focuses on protection against evasion attacks, which attempt to circumvent the security mechanisms or behavioral rules of an AI model.

➡ The concept of LLM Hardening is based on established IT security principles and applies them to AI systems. The motto is: As much functionality as necessary, as little attack surface as possible.

➡ The German Federal Office for Information Security (BSI) has published several papers on this topic, which include recommendations for LLM Hardening.

Would you like to find out more?

Are you interested in the topic of ‘System Hardening’? Feel free to contact us if you need practical support with the implementation of (automated) hardening based on current standards.

💬 Contact us now!

 

Image: Freepik Pikaso

Leave a Reply