- LLM causing self-XSS. A few weeks ago I had this stupid idea: what would happen if an AI language model tried to hack itself? For obvious reasons, hacking the “backend” would be nearly impossible, but when it comes to the frontend… I tried asking Chatsonic to simply “exploit” itself, but it responded with a properly…
- One model to rule them all. As AI tools like ChatGPT gain popularity, I explored the potential of GPT-4 as an automated offensive prompt engineer. Using GPT-4, I attacked a “vulnerable” LLM named “Gandalf”.