AI Generated Case Summaries
This document outlines key aspects of our AI implementation, including data handling, privacy, usage, and opt-out options.
AI Generated Case Summaries: How It Works
Overview
SenseOn’s AI Generated Case Summaries are powered by OpenAI's GPT-4 model through their Enterprise API.
Data Usage and Retention
Your data will not be used by OpenAI to train their models. Our agreement with OpenAI as a data sub-processor prohibits them from using any of your data to train their public models. Full details are available in OpenAI’s Enterprise privacy documentation.
- The data sent to the AI model is used solely to generate responses relating to your environment.
- SenseOn adheres to strict data privacy practices to protect your information.
- SenseOn's data retention policies for case information and related data are separate from the AI processing and follow our standard retention schedules.
Customer Data Sent to the Large Language Model (LLM)
When generating AI Case Summaries, the following data may be sent to the LLM:
- Case Information:
  - Case ID
  - Case Score
  - Case Description
  - Case Status
  - Timestamps
  - Observation details
  - MITRE techniques
- Device Information:
  - Host name
  - Host Description
  - User
  - Endpoint ID
  - IP Address (including country and city)
- Device Process Information:
  - Process name
  - Process PID
  - Process executable path
- Observation evidence (associated with the case):
  - A subset of telemetry that is relevant to the observation may also be sent to the LLM for summarisation. This serves as the observation's evidence.
  - Telemetry from the “endpoint process” and “network process” tables may be sent to the LLM depending on the observation type.
  - The complete list of telemetry available to the LLM can be found within HuntLab’s Table references for “network process” and “endpoint process”.
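The fields above can be pictured as a single structured payload sent to the LLM. The sketch below is illustrative only: the field names, example values, and JSON shape are assumptions made for this example and do not represent SenseOn's actual schema or API.

```python
import json

# Illustrative payload mirroring the categories listed above.
# All names and values are hypothetical examples.
example_payload = {
    "case": {
        "case_id": "CASE-1234",
        "score": 72,
        "description": "Suspicious PowerShell execution",
        "status": "open",
        "timestamps": {"created": "2024-01-15T09:30:00Z"},
        "observations": ["Encoded command line detected"],
        "mitre_techniques": ["T1059.001"],
    },
    "device": {
        "host_name": "FINANCE-LAPTOP-01",
        "host_description": "Finance team laptop",
        "user": "jsmith",
        "endpoint_id": "ep-0042",
        "ip_address": {"address": "203.0.113.7", "country": "GB", "city": "London"},
    },
    "process": {
        "name": "powershell.exe",
        "pid": 4312,
        "executable_path": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
    },
}

# The payload would be serialised (e.g. to JSON) before being sent to the LLM.
serialised = json.dumps(example_payload, indent=2)
```

In practice, observation evidence (the relevant telemetry subset) would accompany a payload of this kind, varying by observation type.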
Potential Changes in Data Processing
As we introduce new AI capabilities, the types and amount of data processed by the LLM may change:
- Data Types: New features may require additional types of data to be processed by the LLM. For example, future enhancements might involve analysing new forms of telemetry or threat intelligence data.
- Data Volume: The amount of data sent to the LLM for processing may increase or decrease as we optimise our AI integration.
- Processing Frequency: Depending on the feature, the frequency of AI data processing may change, potentially becoming more real-time or batch-oriented based on specific use cases.
- Model & Provider Updates: We may update or change the LLM models we use, which could alter how data is processed and analysed.
Our Commitment to Transparency and Privacy
As we evolve our AI capabilities, we remain committed to:
- Keeping you informed about significant changes in AI features and data processing.
- Notifying you if a new data sub-processor (e.g. new model provider) is added to the list of agreed sub-processors.
- Maintaining strict data privacy and security standards, regardless of AI enhancements.
- Providing clear documentation on what data is being processed by AI models.
- Offering granular control over AI feature usage, including opt-out options for specific AI functionalities.
AI Training and Improvement
- You can provide feedback on the AI-generated case summary within the SenseOn UI by selecting thumbs up/down for each summary. Additional free-text feedback can be provided when a thumbs-down rating is selected.
- This feedback is used by our development team to improve the quality of the AI’s output.
- Any improvements or updates to the AI's capabilities are made through controlled processes.
Ethical Use and Transparency
- We are committed to the responsible and ethical use of AI technology.
- Our AI is designed to assist and augment human decision-making, not to replace it.
- We maintain transparency about the AI's capabilities and limitations.
Opt-Out Option
We understand that some users or organisations may prefer not to use AI-assisted features. SenseOn provides an opt-out option for this reason. Your decision will be respected, and it will not affect your current services with us.
- To opt out of AI-generated case summaries and other AI-assisted features, please contact your SenseOn account manager or our support team.
- Opting out will result in Case Summaries being unavailable in your SenseOn platform.
- Your case data and other information will still be processed by our standard, non-AI systems to maintain threat detection and response capabilities.
- You can choose to opt back in at any time by contacting us.
Please note that opting out of AI features may impact certain functionalities and the timeliness of threat response. Our support team can provide more details on how opting out might affect your specific use case.
Limitations of AI Case Summary
While our AI Case Summaries are designed to enhance your experience with SenseOn, it's important to understand their limitations. Like all AI systems based on large language models, our AI Case Summaries have certain constraints:
Potential Inaccuracies and Hallucinations
- The AI may occasionally produce inaccurate or inconsistent information.
- It may misinterpret context or nuances in complex cybersecurity scenarios.
- The AI can sometimes generate "hallucinations" - plausible-sounding but entirely fabricated information.
Our development team minimises the likelihood of inaccuracies and hallucinations by continuously monitoring the quality of the model’s output and making adjustments.
Data Freshness
The summary does not update automatically when new observations are added to the case or when actions are taken (e.g. device isolated, case closed), so the information in a summary may not always be up to date. You can generate a new summary at any time to reflect the latest case information.
Lack of Real-Time Knowledge
- The AI's knowledge is based on its training data, which has a cutoff date.
- It may not have information about the very latest threats or vulnerabilities.
Contextual Misunderstandings
The AI might misunderstand the full context of a situation, especially in complex or unusual cybersecurity scenarios.
Mitigating These Limitations
To address these limitations, we recommend:
- Always verifying critical information provided by the AI.
- Using AI Case Summaries as a supportive tool alongside human expertise, not as a replacement.
- Regularly providing feedback to help us improve the system.
- Staying informed about the latest updates to SenseOn’s AI capabilities.
For more detailed information about our data handling practices, opt-out process, or if you have any concerns, please contact our support team or reach out directly to [email protected] (Product Manager).