third-party risk management

Large Language Models and third-party risk management: building trust when using new technologies

Organizations of all shapes and sizes are racing to deploy artificial intelligence (AI) to streamline their operations, cut costs, and improve customer service.

At the core of this revolution are Large Language Models (LLMs) like OpenAI’s ChatGPT, Google’s Bard, and others. While these new AI technologies are rapidly shifting the business landscape, they bring with them new risks. To help manage them, the U.S. National Institute of Standards and Technology (NIST) recently released an AI Risk Management Framework (RMF), which we discussed previously.

One of the RMF’s key considerations is how to govern, measure, and map third-party risk. LLMs present unique cybersecurity, privacy, and compliance challenges that need to be addressed throughout the supply chain. So in this post, we will look at these as well as talk about how enterprises can examine their vendors to identify potential problem areas. And in closing, we’ll also look at how AI companies can best prepare themselves for security scrutiny from their customers.

Risks of LLM use

1. Cybersecurity

With new technologies, innovators often move quickly and think about security afterward. However unfortunate, it is a fact of life that LLMs have security vulnerabilities whereby data confidentiality can be impacted by:

  • Attackers chaining ChatGPT  plugins  together to interact with other systems in unexpected ways.
  •  Prompt  injection, either directly on the part of an attacker or indirectly by browsing to a malicious website.
  • Data  leakage  through unintended model training.

Data integrity can also be at risk due to threats like data poisoning or attackers taking advantage of LLM hallucinations to typosquat on open source library names.

Whatever the vector, organizations need to protect sensitive information – both in their networks and that of their suppliers – when innovating with AI.

2. Privacy

Rules like the California Consumer Privacy Act (CCPA) and General Data Protection Regulation (GDPR) might be relatively new, but their drafters still don’t appear to have anticipated the rapid and broad adoption of AI tools. As a result, there remain gray areas as to how privacy requirements apply to technologies like LLMs.

Especially when using 3rd party AI tools, risks can involve:

  • Needing to ensure data subject access (DSAR) and erasure requests are fulfilled throughout your entire software supply chain.
  • Having to deal with situations where LLMs can generate accurate personal information about people even without having access to an underlying database containing it.

It’s sad but true that a lot of these novel questions are going to be decided by “enforcement through regulation,” whereby companies only learn they have run afoul of the law when they are punished. In the face of this uncertainty, though, it is possible to take steps – both internally and with your vendors – to reduce risk both to your customers and to your organization.

3. Compliance

Audit frameworks such as SOC 2 and ISO 27001 were conceived prior to the widespread deployment of LLMs, and thus don’t necessarily account for using them in their requirements. With that said, these standards have relatively high-level guidance and auditors have latitude to apply them to specific situations and technologies. And integrating specific guidance from the AI RMF can help to fill out areas where existing standards don’t have clear answers.

Using the example of SOC 2, Common Criteria 1.1 requires that entities protect their confidential information, so having a documented plan to address some of the risks related to inadvertent training and prompt injection when using third-party models will be key. ISO 27001 section 5.5. specifies that organizations must have a data labeling program, which is also a critical underpinning for any system for ensuring only the appropriate data is provided to LLMs hosted outside of your organization.

Adapting your compliance program to reflect emerging technologies like AI will take some work. And it will require both vendors and customers to take additional measures to document and prove they have conducted due diligence.

How to manage third-party AI risk

Every business in the modern economy is going to rely on others to get their job done. And when it comes to information, organizations are as interdependent as ever. 

Managing vendor risk is a key discipline for any organization, and it becomes even more important when you add AI to the mix. Not to mention it is a requirement of frameworks like the AI RMF, SOC 2, and 27001.

Some key steps you can take specifically in this regard are:

  • Understanding vendor’s data retention policies. Will your information be kept for 30 days? Indefinitely? Somewhere in between?
  • Limiting LLM training using your data. Do you need to opt in or opt out for an external model to train on the inputs you provide?
  • Adding contractual requirements or guarantees related to AI use.

You’ll want to verify that your vendors are abiding by their word when it comes to these and other security guarantees. So developing an effective method for tracking their attestations and security reviews in a single place will be vital.

Building trust with customers as an AI company

Just as managing your own supply chain to ensure it is secure and compliant is vital, companies using LLMs as a core part of their business proposition will need to reassure their own customers about their governance program. Taking a proactive approach is important not just from a security perspective, but projecting an image of confidence can help you to close deals more effectively. Some key steps you can take involve:

  • Documenting an internal AI security policy.
  • Launching a coordinated vulnerability disclosure or even bug bounty program to incentivize security researchers to inspect your LLMs for flaws.
  • Building and populating a Trust Vault to allow for customer self-service of security-related inquiries.
  • Proactively sharing methods through which you implement the AI RMF specifically for your company and its products.

If it isn’t the case already, AI companies are going to eventually need to rely on each other to deliver highly specialized products and services. Thus, they will find themselves on both sides of the supplier/customer relationship. Having an effective trust and security program – tailored to incorporate AI considerations – can strengthen both these relationships and your underlying security posture.

Conclusion

No company is going to survive without embracing artificial intelligence in some way. Whether or not your core value proposition revolves around developing or deploying LLMs, this technology is certain to form at least part of your digital supply chain. Building trust throughout it by using these best practices can improve relationships and streamline sales processes.

Want to see how Scrut Automation can help you manage third-party AI risk and build your customers’ confidence that you can deal with yours? Please reach out today!

Stay up to date

Get the latest content and updates in information security and compliance delivered to straight to your inbox.

Book Your Free Consultation Call

Stay up to date

Get the latest content and updates in information security and compliance delivered to straight to your inbox.

Book Your Free Consultation Call

Related Posts

We are entering the Spring of 2024 with fresh new capital – […]

Get ready to explore the crunchy and soft side of GRC in […]

Scrut recently organized a conference with some of the brightest cyber minds […]

The increase in the use of software technology is directly proportional to […]

Organizations of all shapes and sizes are racing to deploy artificial intelligence[...]

Organizations of all shapes and sizes are racing to deploy artificial intelligence[...]

Organizations of all shapes and sizes are racing to deploy artificial intelligence[...]

See Scrut in action!