Product Thinking & How to Balance Security Risks When Working with LLMs
Developing an application that leverages large language model (LLM) functionality can be both exciting and daunting. Language models have the potential to provide incredible functionality for both language processing and system interactions. However, like any powerful tool, they come with a caveat: before embarking on this app development journey, you must understand the technology's limitations and take the necessary precautions to assess and address cybersecurity and related risks.
The Limitations of LLMs
First, let’s break down the fundamental limitations of LLMs:
- Need for External Accuracy Validation: LLMs have no built-in notion of truth. While retrieval-augmented generation (RAG) architectures can draw on a (trusted) corpus of documents as the basis of a generation task, the veracity of the output is never evaluated. Using an LLM for accuracy-critical applications therefore requires additional, likely quite complex, validation logic and, in some scenarios, a human in the loop (a minimal validation sketch follows this list).
- Sensitivity to Minor Discrepancies: LLMs are sensitive to minor input discrepancies. They are also sensitive to minor discrepancies in floating-point operations performed by graphics processing units (GPUs), which directly affect token probability calculations. As a result, LLMs generally cannot guarantee a predictable, repeatable response even with the same prompt and the temperature parameter set to zero, which forces the model to always choose the most probable tokens (a simple repeatability probe is sketched after this list).
- Untrusted & Potentially Malicious Responses: User input is a primary driver of the responses generated by LLMs because, by design, an LLM cannot reliably separate instructions from user data. Since that input may be untrusted and malicious, it is crucial to treat LLM responses as untrusted and potentially malicious as well (see the output-handling sketch after this list).
- Training Data Genealogy Issues: As with many AI applications that rely on distilling emergent patterns from curated data sources, generative models may be susceptible to data poisoning attacks. LLMs can be polluted by nefarious data ingested during training. This is very hard to detect and would systematically produce consistent, but also potentially consistently biased, results. New challenges are therefore emerging around monitoring and detecting compromised behavior in conversational output. As a starting point, data lineage must be treated as an inherent part of productizing generative AI solutions.
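To make the accuracy-validation point concrete, here is a minimal sketch of gating a RAG answer before it reaches the user. The lexical-overlap heuristic is a deliberately simple stand-in for a real grounding check, and the example data is invented; accuracy-critical products would combine stronger checks with a human reviewer.

```python
# Minimal sketch: gate a RAG answer before it reaches the user.
# The overlap heuristic is a simple stand-in for a real grounding check;
# unsupported answers are flagged for human review instead of auto-published.
import re

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def supported_by_sources(answer: str, sources: list[str], threshold: float = 0.6) -> bool:
    """Crude grounding check: fraction of answer tokens found in the retrieved text."""
    answer_tokens = _tokens(answer)
    if not answer_tokens:
        return False
    source_tokens = _tokens(" ".join(sources))
    return len(answer_tokens & source_tokens) / len(answer_tokens) >= threshold

def route_answer(answer: str, sources: list[str]) -> dict:
    """Auto-publish only answers the heuristic considers grounded; flag the rest."""
    return {"answer": answer,
            "needs_human_review": not supported_by_sources(answer, sources)}

if __name__ == "__main__":
    sources = ["The refund window is 30 days from the date of purchase."]
    print(route_answer("The refund window is 30 days from purchase.", sources))
    print(route_answer("Refunds are accepted within 90 days, no receipt needed.", sources))
```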
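The repeatability caveat is easy to probe empirically. The sketch below uses the OpenAI Python SDK purely as an illustrative client; the model name is an assumption, and other providers can be substituted. It sends the same prompt several times at temperature 0 and reports whether the completions are identical.

```python
# Sketch: probe response repeatability at temperature 0.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the
# environment; the model name is an assumption.
from openai import OpenAI

client = OpenAI()
PROMPT = "List three risks of building applications on top of LLMs."

def sample(n: int = 5) -> list[str]:
    outputs = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": PROMPT}],
            temperature=0,  # most-probable-token decoding, not a determinism guarantee
        )
        outputs.append(resp.choices[0].message.content)
    return outputs

if __name__ == "__main__":
    runs = sample()
    print("identical across runs:", len(set(runs)) == 1)
```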
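Because responses must be treated as untrusted, it helps to validate them exactly as you would validate user input before they touch downstream systems. The sketch below assumes a hypothetical ticket-triage feature in which the model is asked to return JSON; the expected fields and the allow-listed actions are illustrative assumptions, and anything that does not validate fails closed.

```python
# Sketch: treat LLM output as untrusted input.
# The JSON shape and allow-listed actions are assumptions for a hypothetical
# ticket-triage feature; validate structure and values strictly, fail closed.
import json

ALLOWED_ACTIONS = {"close_ticket", "escalate_ticket", "request_more_info"}

def parse_triage_response(raw: str) -> dict:
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("LLM response is not valid JSON") from exc

    action = data.get("action")
    ticket_id = data.get("ticket_id")

    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"Action {action!r} is not allow-listed")
    if not (isinstance(ticket_id, str) and ticket_id.isdigit()):
        raise ValueError("ticket_id must be a numeric string")

    # Only validated, allow-listed values ever reach downstream systems.
    return {"action": action, "ticket_id": ticket_id}

if __name__ == "__main__":
    print(parse_triage_response('{"action": "close_ticket", "ticket_id": "4821"}'))
```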
These limitations tangibly shape the security threat landscape of applications built on top of LLMs and require a calibrated concept of risk to be established. To this end, many organizations look to OWASP for an initial, unbiased view of applicable threats and mitigations. At EPAM, for example, we adopt and contribute to OWASP best practices to help other organizations establish an objective and trustworthy perspective on LLM risks.
Having contextualized the risk landscape, you can proceed with building applications on top of an LLM, which requires balancing serendipity and security in line with the anticipated product experience. On the one hand, the freedom of natural language enhances an application's capabilities and user experience. On the other, it carries the risk of losing control over your data and systems.
How to Find the Perfect Balance
Achieving the right balance is challenging, but it is possible with proper preparation. Here are our tips:
1. Understand the Limitations & Evaluate the Risks: Before starting development, accept the limitations of LLMs. Next, identify the risks applicable to your product. For example: Does your product handle private data? Must it provide accurate legal information? Does it affect human safety? If your required level of security and accuracy cannot tolerate these risks, do not build your application on an LLM.
2. Implement Secure Architecture: Design your applications to mitigate the identified risks. For example, we use a comprehensive framework for LLM security assessments that helps tackle the major threats introduced by solutions that leverage LLMs. It covers the following categories and the controls within them (one such control is sketched after these tips):
- Interaction with users
- Interaction with downstream systems (APIs, etc.)
- Interaction with the LLM
- Security of data storage used by the app on top of the LLM
- Model and LLM provider security
- Secure SDLC
3. Ensure Constant Vigilance: Even after deployment, continuously monitor and review both the data generated by the LLM and the application's behavior to identify potential issues. Since foundation models and attack techniques are continuously evolving, it is crucial to detect problems promptly and to retain enough information to investigate and mitigate them (see the logging sketch below).
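As one concrete illustration of the "interaction with downstream systems" category above, here is a minimal sketch of a gateway between the model's tool calls and your APIs. The tool names, permission model, and handlers are hypothetical; the design point is that authorization is enforced by the application against the end user's permissions and is never delegated to the model.

```python
# Sketch: a gateway between LLM tool calls and downstream APIs.
# Tool names, permissions, and handlers are hypothetical; the key control is
# that the end user's permissions, not the model's output, decide execution.
from dataclasses import dataclass
from typing import Callable

@dataclass
class User:
    user_id: str
    permissions: set[str]

# Each tool the model may request maps to the permission the end user must
# hold and the handler that actually performs the call.
TOOL_REGISTRY: dict[str, tuple[str, Callable[[dict], str]]] = {
    "get_order_status": ("orders:read", lambda args: f"status of {args['order_id']}"),
    "cancel_order": ("orders:write", lambda args: f"cancelled {args['order_id']}"),
}

def execute_tool_call(user: User, tool_name: str, args: dict) -> str:
    if tool_name not in TOOL_REGISTRY:
        raise PermissionError(f"Tool {tool_name!r} is not registered")
    required_permission, handler = TOOL_REGISTRY[tool_name]
    if required_permission not in user.permissions:
        raise PermissionError(f"User lacks {required_permission!r}")
    return handler(args)

if __name__ == "__main__":
    reader = User(user_id="u-1", permissions={"orders:read"})
    print(execute_tool_call(reader, "get_order_status", {"order_id": "A-100"}))
```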
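For the vigilance point, the sketch below shows the kind of structured interaction logging that makes post-deployment investigation possible. The field names and the JSON-lines sink are assumptions; in production this would feed your monitoring stack and respect data-retention and privacy requirements.

```python
# Sketch: structured logging of LLM interactions for later investigation.
# Field names and the JSON-lines file are assumptions; retain enough context
# (prompt, response, model version, guardrail outcomes) to investigate anomalies.
import json
import time
import uuid

def log_interaction(path: str, *, user_id: str, prompt: str, response: str,
                    model: str, guardrail_flags: list[str]) -> str:
    record = {
        "interaction_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,
        "model": model,
        "prompt": prompt,
        "response": response,
        "guardrail_flags": guardrail_flags,  # e.g. ["needs_human_review"]
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["interaction_id"]

if __name__ == "__main__":
    log_interaction("llm_interactions.jsonl",
                    user_id="u-1",
                    prompt="Summarize my contract.",
                    response="Here is a summary...",
                    model="example-model-v1",
                    guardrail_flags=[])
```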
Conclusion
LLMs are an incredibly powerful tool when used correctly, and understanding the risks of building applications with this technology is essential. By being aware of the potential security pitfalls and adopting proactive measures, developers can strike a delicate balance between the freedom of natural language and security. Ultimately, be prepared and cautious when entering the game of building applications on top of LLMs; you will need both skill and awareness to emerge victorious.