Hello everyone!
It’s been a while, and although I’ve been keeping up with what’s happening in the AI world, I haven’t really had time to post new releases. I’ve also decided to change the format: for some time I’ll be sharing just the links instead of links + summaries. Let me know what you think of the new format – I believe it’s more useful, since in most cases the article itself gives you a summary right at the beginning.

Since this issue is a “resurrection” of the newsletter, I’ve tried to include some of the most important AI security news from the last five months. I’ve also started using a tool that detects whether content was generated with an LLM – this way I try to filter out low-quality LLM-generated content (after all, if something was written with ChatGPT, you could just generate it yourself, right?).
If you find this newsletter useful, I’d be grateful if you shared it with your tech circles – thanks in advance! Also, if you are a blogger, researcher, or founder in the area of AI Security/AI Safety/MLSecOps, etc., feel free to send me your work and I will include it in this newsletter 🙂
LLM Security
- Short course on LLM Red Teaming from DeepLearning.AI
- Security Flaws within ChatGPT Ecosystem Allowed Access to Accounts On Third-Party Websites and Sensitive Data
AI Security
- Thousands of publicly exposed Ray servers compromised as a result of a shadow vulnerability (Forbes on ShadowRay)
- Wiz Research finds a way to compromise Hugging Face using malicious models and container escape techniques (More details on the Hugging Face blog; a short sketch of the malicious-model mechanism follows this list)
- New AI Security tools in AzureAI (More info from Microsoft)
- U.S. Department of the Treasury report on managing artificial intelligence-specific cybersecurity risks in the financial services sector
- TensorFlow AI models might have been at risk of supply chain attacks due to a flaw in the Keras API (see the Lambda-layer sketch after this list)
- (Not strictly AI security, but Python supply chain – a topic that matters for AI security) Over 170K users affected by an attack using fake Python infrastructure
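Since a couple of the items above revolve around malicious model files (the Wiz/Hugging Face research in particular), here is a minimal, hypothetical sketch of the underlying mechanism – not the researchers’ actual payload – showing why pickle-based model formats can execute code at load time:

```python
import pickle

# "Attacker" side: any object can declare, via __reduce__, code to run
# during unpickling. This is the general mechanism behind malicious model
# uploads, since many model formats are pickle-based.
class MaliciousArtifact:
    def __reduce__(self):
        # A real payload would do something far worse than printing.
        return (print, ("arbitrary code executed while loading the model",))

payload = pickle.dumps(MaliciousArtifact())

# "Victim" side: merely deserializing the file runs the embedded code.
pickle.loads(payload)
```

Legacy PyTorch checkpoints (and several other formats) are pickle-based, which is why loading untrusted models is risky; prefer safetensors or, where available, torch.load(..., weights_only=True).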
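The Keras item is a related class of problem: a Lambda layer can ship attacker-controlled Python inside the model file, and that code runs once the rebuilt model is used. A minimal sketch, assuming a Keras/TensorFlow version that still deserializes Lambda layers (newer releases refuse to do so unless you explicitly pass safe_mode=False):

```python
import tensorflow as tf

def sneaky(x):
    # Stand-in for attacker-controlled code serialized with the model.
    print("attacker-controlled code running on the loader's machine")
    return x

# "Attacker" builds and publishes a model that embeds the function above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Lambda(sneaky),
])
model.save("untrusted_model.keras")

# "Victim" loads and runs it; disabling safe_mode is exactly what you
# should NOT do with files you don't trust.
loaded = tf.keras.models.load_model("untrusted_model.keras", safe_mode=False)
loaded(tf.zeros((1, 4)))  # the embedded code executes here
```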