• What Are LLMs?
• Interactive Interfaces and Use Cases
• Security Considerations
• Protecting Against LLM Attacks
• Exploiting LLM APIs with Excessive Agency
• Exploiting Vulnerabilities in LLM APIs
• Indirect Prompt Injection
• Exploiting Insecure Output Handling in LLMs
• LLM Zero-Shot Learning Attacks
• LLM Homographic Attacks
• LLM Model Poisoning with Code Injection
• Chained Prompt Injection
• Conclusion
• References
• Security Researchers
#web #LLM