
General Analysis x Together AI
TLDR: We are excited to announce our partnership with Together AI to stress-test the safety of open-source (and closed) language models.
We provide a repository of stress-testing, jailbreaking, and red-teaming methods: a knowledge base for understanding and improving the performance and safety of AI models.
We offer comprehensive security assessments to identify exploitable vulnerabilities and OWASP Top 10, NIST, and MITRE ATLAS compliance gaps in your AI systems and application layers.
We have created a comprehensive overview of the most influential LLM jailbreaking methods.
TLDR: We used LegalBench as a source of diversity when generating red-teaming questions, showing that diversity transfer from a domain-specific knowledge base is a simple and practical way to build a solid red-teaming benchmark.