Generative AI systems like OpenAI's ChatGPT have revolutionized how we interact with technology, but they come with a significant risk: the inadvertent exposure of sensitive information (OWASP LLM06). Without proper safeguards, these AI platforms may receive, process, and potentially retain confidential data, including:
Personally Identifiable Information (PII)
Protected Health Information (PHI)
Financial details (e.g., credit card numbers, bank account information)
Intellectual property
Real-world scenarios highlight the urgency of this issue:
Support Chatbots: Imagine a customer service AI powered by OpenAI. Users, in their quest for help, might unknowingly share credit card numbers or Social Security information. Without content filtering, this sensitive data could be transmitted to OpenAI and logged in your support system.
Healthcare Applications: Consider an AI-moderated health app that processes patient and doctor communications. These exchanges may contain protected health information (PHI), which, if not filtered, could be unnecessarily exposed to the AI system.
Content filtering is a crucial safeguard, removing sensitive data before it reaches the AI system. This ensures that only necessary, non-sensitive information is used for content generation, effectively preventing the spread of confidential data to AI platforms.
Let's examine this in a Python example using the LangChain, Anthropic, and Nightfall Python SDKs. You can download this sample code here.
Step 1: Set Up Nightfall
If you don't yet have a Nightfall account, sign up here.
Create a Nightfall key. Here are the instructions.
Install the necessary packages using the command line:
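Something like the following should cover the dependencies (the exact package names are assumptions based on current PyPI naming):

```bash
pip install langchain langchain-anthropic nightfall python-dotenv
```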
Set up environment variables. Create a .env file in your project directory:
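For example (the variable names here are assumptions; use whatever names your code reads):

```
NIGHTFALL_API_KEY=your-nightfall-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
```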
Step 2: Configure Detection
Create an inline detection rule with the Nightfall API or SDK client, or use a pre-configured detection rule in the Nightfall account. In this example, we will do the former.
If you specify a redaction config, you can automatically get de-identified data back, including a reconstructed, redacted copy of your original payload. Learn more about redaction here.
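Here's a minimal sketch of this step with the Nightfall Python SDK, assuming a rule that detects credit card numbers and masks each finding (the confidence threshold and masking character are arbitrary choices):

```python
from dotenv import load_dotenv
from nightfall import Confidence, DetectionRule, Detector, MaskConfig, Nightfall, RedactionConfig

load_dotenv()  # pulls NIGHTFALL_API_KEY (and ANTHROPIC_API_KEY) from .env

# By default, the client reads NIGHTFALL_API_KEY from the environment.
nightfall = Nightfall()

# Inline detection rule: flag likely credit card numbers and mask each finding with "X".
detection_rule = DetectionRule([
    Detector(
        min_confidence=Confidence.LIKELY,
        nightfall_detector="CREDIT_CARD_NUMBER",
        display_name="Credit Card Number",
        redaction_config=RedactionConfig(
            remove_finding=False,
            mask_config=MaskConfig(masking_char="X"),
        ),
    )
])
```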
Step 3: Classify, Redact, Filter Your User Input
Next, we'll create a custom LangChain component for Nightfall sanitization, which lets us integrate content filtering into our LangChain pipeline seamlessly. The full pipeline is sketched below.
We start by importing necessary modules and loading environment variables.
We initialize the Nightfall client and define detection rules for credit card numbers.
The NightfallSanitizationChain class is a custom LangChain component that handles content sanitization using Nightfall.
We set up the Anthropic LLM and create a prompt template for customer service responses.
We create separate chains for sanitization and response generation, then combine them using SimpleSequentialChain.
The process_customer_input function provides an easy-to-use interface for our chain.
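Putting those pieces together, the whole pipeline might look like the following. This is a sketch, assuming the classic LangChain chain API (Chain, LLMChain, SimpleSequentialChain); the model name, prompt wording, and example message are placeholders:

```python
from typing import Dict, List

from dotenv import load_dotenv
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.chains.base import Chain
from langchain.prompts import PromptTemplate
from langchain_anthropic import ChatAnthropic
from nightfall import Confidence, DetectionRule, Detector, MaskConfig, Nightfall, RedactionConfig

load_dotenv()  # NIGHTFALL_API_KEY and ANTHROPIC_API_KEY come from .env

# Nightfall client and the credit card detection rule from Step 2.
nightfall = Nightfall()
detection_rule = DetectionRule([
    Detector(
        min_confidence=Confidence.LIKELY,
        nightfall_detector="CREDIT_CARD_NUMBER",
        display_name="Credit Card Number",
        redaction_config=RedactionConfig(
            remove_finding=False,
            mask_config=MaskConfig(masking_char="X"),
        ),
    )
])


class NightfallSanitizationChain(Chain):
    """Custom LangChain component: scan text with Nightfall, pass on the redacted copy."""

    @property
    def input_keys(self) -> List[str]:
        return ["customer_input"]

    @property
    def output_keys(self) -> List[str]:
        return ["sanitized_input"]

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        text = inputs["customer_input"]
        findings, redacted = nightfall.scan_text([text], detection_rules=[detection_rule])
        # Nightfall only returns a redacted payload when something was found.
        sanitized = redacted[0] if redacted and redacted[0] else text
        return {"sanitized_input": sanitized}


# The Anthropic LLM and a simple customer service prompt template.
llm = ChatAnthropic(model="claude-3-haiku-20240307")  # model name is a placeholder
prompt = PromptTemplate(
    input_variables=["sanitized_input"],
    template="You are a helpful customer service agent. Respond to:\n\n{sanitized_input}",
)

# Chain 1 sanitizes the input; chain 2 generates the response from the sanitized text.
sanitization_chain = NightfallSanitizationChain()
response_chain = LLMChain(llm=llm, prompt=prompt)
full_chain = SimpleSequentialChain(chains=[sanitization_chain, response_chain], verbose=True)


def process_customer_input(customer_input: str) -> str:
    """Sanitize the input with Nightfall, then answer it with Claude."""
    return full_chain.run(customer_input)


if __name__ == "__main__":
    example = "My card isn't working, the number is 4242-4242-4242-4242."
    print(process_customer_input(example))
```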
In a production environment, you might want to add more robust error handling and logging. For example:
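Here's one possible sketch of a more defensive _call method for the sanitization chain; the logging policy and the fail-closed choice are assumptions, not part of the sample:

```python
import logging
from typing import Dict

logger = logging.getLogger(__name__)


class NightfallSanitizationChain(Chain):
    # ... input_keys / output_keys as before ...

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        text = inputs["customer_input"]
        try:
            findings, redacted = nightfall.scan_text(
                [text], detection_rules=[detection_rule]
            )
        except Exception:
            # Fail closed: if scanning fails, don't forward the raw text to the LLM.
            logger.exception("Nightfall scan failed; not sending input to the LLM")
            raise
        if findings and findings[0]:
            logger.info("Redacted %d sensitive finding(s) from customer input", len(findings[0]))
        sanitized = redacted[0] if redacted and redacted[0] else text
        return {"sanitized_input": sanitized}
```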
To use this script, you can either run it directly or import the process_customer_input function in another script.
Simply run the script:
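Assuming you saved it as, say, customer_service.py (a hypothetical filename):

```bash
python customer_service.py
```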
This will process the example customer input and print the sanitized input and final response.
Alternatively, you can import the process_customer_input function in another script:
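For instance (assuming the hypothetical filename from above):

```python
from customer_service import process_customer_input

response = process_customer_input(
    "My card isn't working, the number is 4242-4242-4242-4242."
)
print(response)
```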
If the example runs properly, you should expect to see an output demonstrating the sanitization process and the final response from Claude. Here's what the output might look like:
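(Illustrative only; the exact masking depends on your redaction config, and Claude's wording will vary.)

```
> Entering new SimpleSequentialChain chain...
My card isn't working, the number is XXXXXXXXXXXXXXXXXXX.
I'm sorry to hear your card isn't working. Could you tell me what happens when
you try to use it, such as any error message you see? ...
> Finished chain.
```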
This section contains guides that show how to apply this kind of content filtering to popular SaaS GenAI services and frameworks using the Nightfall APIs. We'll start with Anthropic's Claude.
A typical pattern for leveraging Claude is as follows:
Get an API key and set environment variables.
Initialize the Anthropic SDK client (e.g., the Anthropic Python client), or use the API directly to construct a request.
Construct your prompt and decide which endpoint and model are most applicable.
Send the request to Anthropic.
Let's look at a simple example in Python. We’ll ask a Claude model for an auto-generated response we can send to a customer who is asking our customer support team about an issue with their payment method. Note how easy it is to send sensitive data, in this case, a credit card number, to Claude.
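A minimal sketch with the Anthropic Python SDK; the model name and customer message are placeholders:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The raw customer message, credit card number and all, goes straight to Claude.
customer_message = (
    "My payment isn't going through. My card number is 4242-4242-4242-4242. "
    "Can you check what's wrong?"
)

message = client.messages.create(
    model="claude-3-haiku-20240307",  # placeholder model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Draft a reply to this customer support request:\n\n{customer_message}",
    }],
)
print(message.content[0].text)
```

Note that the raw card number travels to Anthropic inside the prompt.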
This is a risky practice because now we are sending sensitive customer information to Anthropic. Next, let’s explore how we can prevent this while still benefitting from Claude.
It's straightforward to update this pattern with Nightfall to check for sensitive findings and ensure sensitive data isn't sent out. Here's how:
Step 1: Set Up Nightfall
Get an API key for Nightfall and set environment variables. Learn more about creating a Nightfall API key here. In this example, we’ll use the Nightfall Python SDK.
Step 2: Configure Detection
Create a pre-configured detection rule in the Nightfall dashboard or an inline detection rule with the Nightfall API or SDK client. In this example, we'll create an inline rule.
Consider Using Redaction
Note that if you specify a redaction config, you can automatically get de-identified data back, including a reconstructed, redacted copy of your original payload. Learn more about redaction here.
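For example, here's a sketch of an inline rule for credit card numbers with a mask-style redaction config attached (the confidence threshold and masking character are arbitrary choices):

```python
from nightfall import Confidence, DetectionRule, Detector, MaskConfig, Nightfall, RedactionConfig

nightfall = Nightfall()  # reads the NIGHTFALL_API_KEY environment variable

detection_rule = DetectionRule([
    Detector(
        min_confidence=Confidence.VERY_LIKELY,
        nightfall_detector="CREDIT_CARD_NUMBER",
        display_name="Credit Card Number",
        redaction_config=RedactionConfig(
            remove_finding=False,
            mask_config=MaskConfig(masking_char="X"),
        ),
    )
])
```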
Step 3: Classify, Redact, Filter
Send your outgoing prompt text in a request payload to the Nightfall API text scan endpoint. The Nightfall API will respond with the detected findings and a redacted copy of the payload. For example, if we send Nightfall a message containing a credit card number, we get back the same message with the number masked.
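Here's a minimal sketch of that round trip, reusing the client and rule from Step 2 (the message text is illustrative, and the exact masked output depends on your redaction config):

```python
prompt = "My payment failed. My card number is 4916-6734-7572-5015, please help."

findings, redacted = nightfall.scan_text([prompt], detection_rules=[detection_rule])

print(redacted[0])
# e.g. "My payment failed. My card number is XXXXXXXXXXXXXXXXXXX, please help."
```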
Step 4: Send Redacted Prompt to Anthropic
Review the response to see if Nightfall has returned sensitive findings:
If there are sensitive findings:
You can specify a redaction config in your request so that sensitive findings are redacted automatically.
Without a redaction config, you can break out of the conditional statement, throw an exception, etc.
If there are no sensitive findings, or you chose to redact findings with a redaction config:
Initialize the Anthropic SDK client (e.g., Anthropic Python client), or use the API directly to construct a request.
Construct your outgoing prompt.
If you specified a redaction config and want to replace raw sensitive findings with redacted ones, use the redacted payload that Nightfall returns to you.
Use the Anthropic API or SDK client to send the prompt to the AI model.
Let's look at a Python example using Anthropic Claude and Nightfall's Python SDK. You can download this sample code here.
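The sample walks through the same four steps. Here's a condensed, self-contained sketch of its shape (the model name and sample message are placeholders, not the sample's actual contents):

```python
import anthropic
from nightfall import Confidence, DetectionRule, Detector, MaskConfig, Nightfall, RedactionConfig

# Step 1: both clients read their API keys (NIGHTFALL_API_KEY, ANTHROPIC_API_KEY)
# from the environment.
nightfall = Nightfall()
claude = anthropic.Anthropic()

# Step 2: inline detection rule with a mask-style redaction config.
detection_rule = DetectionRule([
    Detector(
        min_confidence=Confidence.VERY_LIKELY,
        nightfall_detector="CREDIT_CARD_NUMBER",
        display_name="Credit Card Number",
        redaction_config=RedactionConfig(
            remove_finding=False,
            mask_config=MaskConfig(masking_char="X"),
        ),
    )
])

# Step 3: classify and redact the outgoing prompt.
prompt = "My payment failed. My card number is 4916-6734-7572-5015, please help."
findings, redacted = nightfall.scan_text([prompt], detection_rules=[detection_rule])

# Step 4: if Nightfall found anything sensitive, send the redacted copy instead.
safe_prompt = redacted[0] if findings and findings[0] else prompt

message = claude.messages.create(
    model="claude-3-haiku-20240307",  # placeholder model name
    max_tokens=1024,
    messages=[{"role": "user", "content": f"Draft a reply to this customer:\n\n{safe_prompt}"}],
)

print("Original message:", prompt)
print("Message sent:    ", safe_prompt)
print("Claude's reply:  ", message.content[0].text)
```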
You'll see that the message we originally intended to send contained sensitive data, while the message we ultimately sent to Anthropic was redacted. Anthropic sends us the same response either way because it doesn't need to receive sensitive data to generate a cogent response. This means we were able to leverage Claude just as easily, but we didn't risk sending Anthropic any unnecessary sensitive data. Now, you are one step closer to leveraging generative AI safely in an enterprise setting.
The same pattern applies to OpenAI. Let's look at a Python example using OpenAI and Nightfall's Python SDK. You can download this sample code here.
Step 1: Set Up Nightfall
Get an API key for Nightfall and set environment variables. Learn more about creating an API key here.
Step 2: Configure Detection
Create an inline detection rule with the Nightfall API or SDK client, or use a pre-configured detection rule in the Nightfall account. In this example, we will do the former.
If you specify a redaction config, you can automatically get de-identified data back, including a reconstructed, redacted copy of your original payload. Learn more about redaction here.
Step 3: Classify, Redact, Filter Your User Input
Send your outgoing prompt text in a request payload to the Nightfall API text scan endpoint. The Nightfall API will respond with the detected findings and a redacted copy of the payload; as before, a message containing a credit card number comes back with the number masked.
Step 4: Send Redacted Prompt to OpenAI
Review the response to see if Nightfall has returned sensitive findings (a sketch implementing this logic follows this list):
If there are sensitive findings:
You can choose to specify a redaction config in your request so that sensitive findings are redacted automatically.
Without a redaction config, you can simply break out of the conditional statement, throw an exception, etc.
If there are no sensitive findings, or you chose to redact findings with a redaction config:
Initialize the OpenAI SDK client (e.g., the OpenAI Python client), or use the API directly to construct a request.
Construct your outgoing prompt.
If you specified a redaction config and want to replace raw sensitive findings with redacted ones, use the redacted payload that Nightfall returns to you.
Use the OpenAI API or SDK client to send the prompt to the AI model.
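Here's a condensed sketch of that flow with the OpenAI Python SDK (the model name and sample message are placeholders; the Nightfall rule mirrors the one from the Anthropic example):

```python
from nightfall import Confidence, DetectionRule, Detector, MaskConfig, Nightfall, RedactionConfig
from openai import OpenAI

# Both clients read their API keys (NIGHTFALL_API_KEY, OPENAI_API_KEY) from the environment.
nightfall = Nightfall()
client = OpenAI()

detection_rule = DetectionRule([
    Detector(
        min_confidence=Confidence.VERY_LIKELY,
        nightfall_detector="CREDIT_CARD_NUMBER",
        display_name="Credit Card Number",
        redaction_config=RedactionConfig(
            remove_finding=False,
            mask_config=MaskConfig(masking_char="X"),
        ),
    )
])

prompt = "My payment failed. My card number is 4916-6734-7572-5015, please help."
findings, redacted = nightfall.scan_text([prompt], detection_rules=[detection_rule])

# Send the redacted copy if Nightfall found anything sensitive.
safe_prompt = redacted[0] if findings and findings[0] else prompt

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": f"Draft a reply to this customer:\n\n{safe_prompt}"}],
)
print(completion.choices[0].message.content)
```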
You'll see that the message we originally intended to send contained sensitive data, while the message we ultimately sent to OpenAI was redacted. OpenAI sends us the same response either way because it doesn't need to receive sensitive data to generate a cogent response. This means we were able to leverage ChatGPT just as easily, but we didn't risk sending OpenAI any unnecessary sensitive data. Now, you are one step closer to leveraging generative AI safely in an enterprise setting.