propvivo-logo
Snappr
| Generative AI Trust Center
Last updated Friday March 15th
Last updated Friday March 15th
Snappr
Our organization prioritizes the security of Generative AI, actively addressing and mitigating the associated risks. We aim to remain at the forefront of tackling Generative AI security challenges, constantly updating our strategies to counter emerging threats. This page offers a clear outline of our methods to safeguard against risks related to Large Language Models (LLMs), serving as a resource for vendors and customers alike. Through our continuous efforts, we ensure the integrity and reliability of our Generative AI systems, maintaining a secure environment for all stakeholders.
Compliance
image 54
LSOC Type I
Model Provider
image 54
Stable Diffusion
Data Sent to LLMs
Controls
Customer Data Privacy
Direct querying data exfiltration risk mitigated
Untrusted sources checked for risks
Images validated before LLM input
RAG systems checked for data exfiltration risk
End Customer Security
Profane content risk mitigated
National security risk mitigated
User phishing risk mitigated
Attacker driven misinformation risk mitigated
LLM Application Risk Assessment
Security review conducted
Production outputs evaluated on regular basis
LLM Application Security Monitoring
Metrics collected on LLM performance and outputs
Reporting system in place for suspicious events
Controlling IP
System prompt exfiltration risk mitigated
Model architecture exfiltration risk mitigated
Training data sources kept confidential
Access Controls
Arbitrary action prevented
Consent protocols for customers established
LLM Application access controls well maintained
Risk Communication
Procedure in place to communicate LLM Security risks to customers
Customers and end useres can access LLM Security posture easily
LLM Model Security
Base models vetted before use for privacy considerations
Training data access controlled and policies established
propvivo-logo