• What are LLMs?
• Interactive Interfaces and Use Cases;
• Security Considerations;
• Protecting Against LLM Attacks;
• Exploiting LLM APIs with Excessive Agency;
• Exploiting Vulnerabilities in LLM APIs;
• Indirect Prompt Injection;
• Exploiting Insecure Output Handling in LLMs;
• LLM Zero-Shot Learning Attacks;
• LLM Homograph Attacks;
• LLM Model Poisoning with Code Injection;
• Chained Prompt Injection;
• Conclusion;
• References;
• Security Researchers.
#web #LLM
- Uncovering Hidden Information in AI Security;
- Model Extraction: A Red Teamer's Guide;
- Model Fingerprinting;
- Prompt Injection;
- Restricted Prompting;
- Tabular Attack;
- Tree of Attacks with Pruning (TAP) Jailbreaking;
- Data Augmentation and Model Training in NLP.
#LLM #Red_Team