Posts

  • Real Threats of Artificial Intelligence – AI Security Newsletter #1 (July 2023)

    Welcome to Real Threats of Artificial Intelligence – AI Security Newsletter. This is the first issue of the newsletter, which I plan to deliver bi-weekly. If you want to receive it via email, you can sign up here: https://hackstery.com/newsletter/. This week there’s some reading about poisoning LLM datasets and the supply chain, and Federal Trade…

    Read more

  • OWASP Top 10 for Large Language Model Applications

    OWASP has released a new Top 10 list for Large Language Model applications. This article explores the details of these Top 10 vulnerabilities, including prompt injection, insecure output handling, training data poisoning, denial of service, supply chain issues, permission issues, data leakage, excessive agency, overreliance, and insecure plugins.

    Read more

  • LLM causing self-XSS

    Introduction: A few weeks ago I got this stupid idea: what would happen if an AI language model tried to hack itself? For obvious reasons, hacking the “backend” would be nearly impossible, but when it comes to the frontend…  I tried asking Chatsonic to simply “exploit” itself, but it responded with a properly…

    Read more