Categories
Newsletter

Real Threats of Artificial Intelligence – AI Security Newsletter #7

Welcome to the 7th release of the Real Threats of Artificial Intelligence Newsletter.

Below you’ll find some interesting links – if you are an offensive security practitioner, take a look at Kaggle/AI Village DEFCON Capture The Flag competition, where you can challenge your AI hacking skills (it’s still going for the next 2 weeks). I’d also recommend the talk “AI’s Underbelly: The Zero-Day Goldmine” by Dan McInerney from ProtectAI. This talk inspired me to create this post: https://hackstery.com/2023/10/13/no-one-is-prefect-is-your-mlops-infrastructure-leaking-secrets/ 

I’ve also started cataloging AI Security / AI Red Teaming job offers – check the “Jobs” section, if you consider stepping into the AI Security industry.

If you find this newsletter useful, I’d be grateful if you’d share it with your tech circles, thanks in advance! What is more, if you are a blogger, researcher or founder in the area of AI Security/AI Safety/MLSecOps etc. feel free to send me your work and I will repost it in this newsletter 🙂

Source: Bing Image Creator 

LLM Security 

New release of OWASP Top10 for LLM 

A new version of OWASP Top10 for LLM was released. More examples, increased readability etc. are present in this release. They also added this diagram that highlights how the vulnerabilities intersect with the application flow: 

Link: website of the project: https://owasp.org/www-project-top-10-for-large-language-model-applications/
Simon’s post on LinkedIn: https://www.linkedin.com/pulse/new-release-owasp-top-10-llm-apps-steve-wilson

17 chars LLM jailbreak by @AIPanic 

This guy is a wizard of prompts. Usually, “Do Anything Now” prompts are long and complicated. @AIPanic proves that just a few chars is enough to trigger the model to return harmful content. 

https://twitter.com/AIPanic/status/1711431600230035740

Killer Replika chatbot

In 2021, a man broke into Windsor Castle with a crossbow. Later, he told the police that Replika chatbot told him to assassinate the Queen of England. Recently, he got sentenced 

Link: https://www.theregister.com/2023/10/06/ai_chatbot_kill_queen/ 

AI-based coding assistants may leak API keys 

GitHub Copilot and Amazon CodeWhisper can be coaxed to emit hardcoded credentials that these AI models captured during training, though not all that often.

Link: https://www.theregister.com/2023/09/19/github_copilot_amazon_api/

AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models

Authors demonstrate an automated method of generating semantically meaningful jailbreaks.  

Link: https://arxiv.org/abs/2310.04451 

Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations

Link: https://arxiv.org/abs/2310.06387 

GPT-4 is too smart to be safe: stealthy chat with LLMs via cipher

This promising paper (currently under review) presents an approach for jailbreaking LLMs through usage of ciphers – i.e. Caesar cipher etc. 

Link: https://openreview.net/pdf?id=MbfAK4s61A 

Chatbot hallucinations are poisoning the web search (possible paywall)

A short story on how hallucinations from the chatbots poisoned GPT-powered Bing Chat. 

Link: https://www.wired.com/story/fast-forward-chatbot-hallucinations-are-poisoning-web-search/ 

4chan users manipulate AI tools to unleash torrent of racist images

Link: https://arstechnica.com/tech-policy/2023/10/4chan-pushing-bing-dall-e-as-quick-methods-to-spread-racist-images/ 

DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models (by Microsoft Research)

A paper is from July, but it was reposted on MS website a few days ago. Taxonomy of LLM-related risks can be a good starting point for Threat Modeling LLMs: 

Links: https://techcrunch.com/2023/10/17/microsoft-affiliated-research-finds-flaws-in-gtp-4/,

https://www.microsoft.com/en-us/research/blog/decodingtrust-a-comprehensive-assessment-of-trustworthiness-in-gpt-models/,

https://github.com/AI-secure/adversarial-glue 

AI Security 

AI Security Has Serious Terminology Issues

What is the difference between AI Security, AI Safety, AI Red Teaming and AI Application Security? In this blog post, Joseph Thacker proposed the boundaries of each of the terms in order to make them more precise.  

Link: https://josephthacker.com/ai/2023/10/16/ai-security-terminology-issues.html 

AI Village CTF

Better late than never – this CTF ends on 9th of November – you can still give it a try and check your AI hacking skills! 

Link: https://www.kaggle.com/competitions/ai-village-capture-the-flag-defcon31/ 

AI’s Underbelly: The Zero-Day Goldmine 

Inspiring talk on MLOps/AIOps tools security by Dan McInerney: 

Link: https://www.youtube.com/watch?v=e3ybnXjtpIc 

Six steps for AI security

Post by Nvidia. 

Source: Nvidia 

Link: https://blogs.nvidia.com/blog/2023/09/25/ai-security-steps/

AI/LLM as a tool for cybersecurity 

Compliance.sh

This AI-supported tool makes it easier to get compliant with ISO 27001, SOC 2 Type II, HIPAA, GDPR and more:  

Link: https://compliance.sh/

Check for AI

This is a pretty convenient tool for detection of AI-generated text: 

Link: https://www.checkfor.ai/ 

AI safety 

To be honest usually I concentrate more on AI Security and I occasionally follow what’s going on in the world of AI Safety. Those resources look super cool – just check those designs! 

Map of AI Existential Safety 

In this map, whole set of resources related to the AI Safety is collected: 

Link: https://aisafety.world/ 

Neuronpedia

In this game, you help with crowdsourcing explanations for the neurons inside of the neural networks: 

Link: https://www.neuronpedia.org/ 

Frontier Model Forum will fund AI safety research

Frontier Model Forum announced that it’ll pledge $10 million toward a new fund to advance research on tools for “testing and evaluating the most capable AI models.”

Link: https://techcrunch.com/2023/10/25/ai-titans-throw-a-tiny-bone-to-ai-safety-researchers

Jobs

Senior Security Engineer – GenAI @ Amazon 

Link: https://www.amazon.jobs/en/jobs/2444074/senior-security-engineer-genai-amazon-stores 

Offensive Security Engineer – AI Red Team @ Microsoft

Link: https://jobs.careers.microsoft.com/us/en/job/1633942/Offensive-Security-Engineer-II–AI-Red-Team

Senior Security Researcher (AI Security) @ Microsoft 

Link: https://jobs.careers.microsoft.com/us/en/job/1583887/Senior-Security-Researcher-���-AI-Security 

AI Security Lead @ Bytedance

Link: https://jobs.bytedance.com/en/position/7270039018820536632/detail

AI Security Lead @ TikTok

Link: https://careers.tiktok.com/position/7232214286985103671/detail

Senior ML Security Engineer @ Snowflake

Link: https://careers.snowflake.com/us/en/job/SNCOUS6944604002EXTERNALENUS/Senior-ML-Security-Engineer

Software Dev Engineer II, AI Security @ Amazon

Link: https://www.amazon.jobs/en/jobs/2462392/software-dev-engineer-ii-ai-security

Technical Program Manager, Security @ Anthropic

Link: https://jobs.lever.co/Anthropic/580d8f10-24c6-46a7-9d44-0116e95e568b  

Other AI-related things 

Killer drones used in Ukraine

If these reports are true, the first war drones that work without human supervision are being deployed in the battlefields in the Ukraine against Russians: 

Link: https://www.unmannedairspace.info/commentary/ukraine-deploying-attack-drones-without-human-oversight/ 

Advent of Code prohibits the usage of LLMs

Link: https://adventofcode.com/about#ai_leaderboard 

If you want more papers and articles 

IN-CONTEXT UNLEARNING: LANGUAGE MODELS AS FEW SHOT UNLEARNERS, Pawelczyk, et. al. 

Link: https://arxiv.org/pdf/2310.07579.pdf

Composite Backdoor Attacks Against Large Language Models, Huang, et. al. 

Link: https://arxiv.org/pdf/2310.07676.pdf  

Low-Resource Languages Jailbreak GPT-4, Yong, et.al. 

Link: https://browse.arxiv.org/pdf/2310.02446.pdf 

Categories
Newsletter

Real Threats of Artificial Intelligence – AI Security Newsletter #6

Here comes another edition of my newsletter. I’ve collected some interesting resources on AI and LLM security – most of them published in the last two weeks of September. 

If you are not a subscriber yet, feel invited to subscribe here.


Also, if you find this newsletter useful, I’d be grateful if you’d share it with your tech circles, thanks in advance!

Autumn-themed thumbnail generated with Bing Image Creator 🙂 

LLM Security 

OpenAI launches Red Teaming Network 

OpenAI announced an open call for OpenAI Red Teaming Network. In this interdisciplinary initiative, they want to improve the security of their models. Not only do they invite red teaming experts with backgrounds in cybersecurity, but also experts from other domains, with a variety of cultural backgrounds and languages.

Link: https://openai.com/blog/red-teaming-network 

I am building a payloads’ set for LLM security testing 

Shameless auto-promotion, but I’ve started working on PALLMS (Payloads for Attacking Large Language Models) project, within which I want to build huge base of payload, which can be utilized while attacking LLMs. There’s no such an initiative publicly available on the Internet, so that’s a pretty fresh project. Contributors welcome!

Link: https://github.com/mik0w/pallms 

LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins

In this paper (by Iqbal, et. al.) authors review the security of ChatGPT plugins. That’s a great supplement for OWASP Top10 for LLM LLM:07 – Insecure Plugin Design vulnerability. Not only have authors analyzed the attack surface, but also they demonstrated potential risks on real-life examples. In this paper, you will find an analysis of threats such as: hijacking user machine, plugin squatting, history sniffing, LLM session hijacking, plugin response hallucination, functionality squatting, topic squatting and many more. The topic is interesting and I recommend this paper!  

Link: https://arxiv.org/pdf/2309.10254.pdf 

Wunderwuzzi – Advanced Data Exfiltration Techniques with ChatGPT

In this blog post, awesome @wunderwuzzi presents a variety of techniques for ChatGPT chat history data exfiltration by combining techniques such as indirect prompt injection and using plugins in a malicious way. 

Link: https://embracethered.com/blog/posts/2023/advanced-plugin-data-exfiltration-trickery/ 

Security Weaknesses of Copilot Generated Code in GitHub

In this paper, Fu, et. al. analyze security of the code generated using GH copilot. I will just paste a few sentences from the article’s summary: 

Our results show: (1) 35.8% of the 435 Copilot generated code snippets contain security weaknesses, spreading across six programming languages. (2) The detected security weaknesses are diverse in nature and are associated with 42 different CWEs. The CWEs that occurred most frequently are CWE-78: OS Command Injection, CWE-330: Use of Insufficiently Random Values, and CWE-703: Improper Check or Handling of Exceptional Conditions (3) Among these CWEs, 11 appear in the MITRE CWE Top-25 list(…)

Review your code – either from Copilot or from ChatGPT! 

Link: https://arxiv.org/pdf/2310.02059.pdf 

Jailbreaker in Jail: Moving Target Defense for Large Language Models

In this paper, authors demonstrate how Moving Target Defense (MTD) technique enabled them to protect LLMS against adversarial prompts.

Link: https://arxiv.org/pdf/2310.02417.pdf 

Can LLMs be instructed to protect personal information? 

In this paper, the authors announced PrivQA – “a multimodal benchmark to assess this privacy/utility trade-off when a model is instructed to protect specific categories of personal information in a simulated scenario.

Link: https://llm-access-control.github.io/ 

Bing Chat responses infiltrated by ads pushing malware

As Bing Chat is scraping the web, malicious ads have been detected to be actively injected into its responses. Kind of reminds me of an issue I’ve found in Chatsonic in May ’23.

Link: https://www.bleepingcomputer.com/news/security/bing-chat-responses-infiltrated-by-ads-pushing-malware/

Image-based prompt injection in Bing Chat AI

Link: https://arstechnica.com/information-technology/2023/10/sob-story-about-dead-grandma-tricks-microsoft-ai-into-solving-captcha/

AI Security 

NSA is creating a hub for AI Security 

The American National Security Agency has just launched a hub for AI security – The AI Security Center. One of the goals is to create the risk frameworks for AI security. Paul Nakasone, the director of the NSA, proposes an elegant definition of AI security:
Protecting systems from learning, doing and revealing the wrong thing”. 

Link: https://therecord.media/national-security-agency-ai-hub 

Study on the robustness of AI-Image detection

In this paper, researchers have proven that the detectors of AI-generated images have multiple vulnerabilities and there isn’t a good way for proving if the image is real or generated by the AI. “Our attacks are able to break every existing watermark that we have encountered” – said the researchers.

Link: https://www.theregister.com/2023/10/02/watermarking_security_checks/ + paper: https://arxiv.org/pdf/2310.00076.pdf 

ShellTorch (critical vulnerability!) 

A critical vulnerability has been found in TorchServe – PyTorch model server. This vulnerability allows access to proprietary AI models, insertion of malicious models, and leakage of sensitive data – and can be used to alter the model’s results or to execute a full server takeover.

Here’s a visual explanation of this vulnerability from BleepingComputer:

Link: https://www.oligo.security/shelltorch 

AI/LLM as a tool for cybersecurity 

Can Large Language Models Provide Security & Privacy Advice? Measuring the Ability of LLMs to Refute Misconceptions

In this paper, the conclusion is that LLMs are not the best tool to provide S&P advice, but for some reason, the researchers (Chen, Arunasalam, Celik) haven’t tried to either fine-tune the model using fine-tuning APIs, or to use embeddings – thus, I believe the question remains kind of open. In my opinion, if you fine-tune the model on your knowledge base or if you create some kind of embedding of your data, then the quality of S&P advice should go up. 

Link: https://arxiv.org/pdf/2310.02431.pdf 

Regulations

Map of AI regulations all over the world

Fairly AI team have done this super cool work and published a map of AI regulations all over the world. Useful for anyone working with a legal side of AI! 

The map legend:

  • :large_green_circle: Green: Regulation that’s passed and now active.
  • :large_blue_circle: Blue: Passed, but not live yet.
  • :large_yellow_circle: Yellow: Currently proposed regulations.
  • :red_circle: Red: Regions just starting to talk about it, laying down some early thoughts.

Link: https://www.google.com/maps/d/u/0/viewer?mid=1grbvr9Ic-qJ-LTC9DHqpdzi2M-mtxl4&ll=15.171472397416672%2C0&z=2

Some thoughts on why AI shouldn’t be regulated, but rather decentralized

Link: https://cointelegraph.com/news/coinbase-ceo-warns-ai-regulation-calls-for-decentralization

Canada aims to be the first country in the world with official regulations covering the AI sector

Link: https://venturebeat.com/ai/canada-ai-code-of-conduct/ 

Other AI-related things 

Build an end-to-end MLOps pipeline for visual quality inspection at the edge

In this 3-part series, AWS team demonstrates how to build MLOps pipelines: 

  1. https://aws.amazon.com/blogs/machine-learning/build-an-end-to-end-mlops-pipeline-for-visual-quality-inspection-at-the-edge-part-1/
  2. https://aws.amazon.com/blogs/machine-learning/build-an-end-to-end-mlops-pipeline-for-visual-quality-inspection-at-the-edge-part-2/
  3. https://aws.amazon.com/blogs/machine-learning/build-an-end-to-end-mlops-pipeline-for-visual-quality-inspection-at-the-edge-part-3/

If you want more papers and articles 

Categories
Newsletter

Real Threats of Artificial Intelligence – AI Security Newsletter #5

Here comes another edition of my newsletter. This month I was away from the computer for a whole week, but I’ve collected some interesting resources on AI and LLM security – most of them published in the first two weeks of September. 

Thumbnail generated with Stable Diffusion 🙂 

LLM Security 

Dropbox LLM Security 

This repository contains scripts and descriptions that demonstrate attacks on LLMs using repeated characters. Long story short: if you supply a long string of a single character (or sequence of characters), the model will hallucinate. Also, it may reveal its instructions. 

Link: https://github.com/dropbox/llm-security 

LLM apps: Don’t Get Stuck in an Infinite Loop!

Post by @wunderwuzzi about looping ChatGPT through Indirect Prompt Injection. I am not sure if that can be classified as a DoS attack, but if you’d classify it as such, then it’d probably be the first publicly demonstrated DoS on LLM!

Link: https://embracethered.com/blog/posts/2023/llm-cost-and-dos-threat/ 

BlindLlama by Mithril Security 

BlindLlama by Mithril Security is a project that provides “zero-trust AI APIs for easy and private consumption of open-source LLMs”. In other words, if you were concerned about passing confidential data to the LLM’s API and at the same time you didn’t want to deploy open-source models locally, this might be the solution for you.

Links: blog: https://blog.mithrilsecurity.io/introducing-blindllama-zero-trust-ai-apis-with-privacy-guarantees-traceability/ + docs: https://blindllama.mithrilsecurity.io/en/latest/ + Github: https://github.com/mithril-security/blind_llama/ 

Demystifying RCE Vulnerabilities in LLM-Integrated Apps

According to the authors, these two factors have a huge impact on the security of LLM-integrated applications:

  • the unpredictable responses of LLMs, which can be manipulated by attackers to bypass developer restrictions (using specific prompts)  
  • the execution of untrusted code generated by LLMs, often without appropriate checks, allowing remote code execution.

This has serious implications not only for LLMs, but also for applications integrated with LLMs.

Authors proposed automated approach for identifying RCE vulnerabilities in LLMs – LLMSmith: 

According to the article, they have created “the first automated prompt-based exploitation method for LLM-integrated apps.”, unfortunately, I could not find LLMSmith’s source anywhere… 

Link: https://arxiv.org/pdf/2309.02926.pdf

AI Security 

Some more resources from DEFCON31

38TB of data accidentally exposed by Microsoft AI researchers

Wiz Research found a data exposure incident on Microsoft’s AI GitHub repository.

Link: https://www.wiz.io/blog/38-terabytes-of-private-data-accidentally-exposed-by-microsoft-ai-researchers

MLSecOps Podcast: Rob van der Veer and Ian Swanson 

AI veteran Rob van der Veer in MLSecOps podcast. One of the topics discussed by the speakers is ISO 5338, a new standard for AI system life cycle processes

Link: https://mlsecops.com/podcast/a-holistic-approach-to-understanding-the-ai-lifecycle-and-securing-ml-systems-protecting-ai-through-people-processes-technology 

AI/LLM as a tool for cybersecurity 

LLM in the Shell: Generative Honeypots

In this paper, authors demonstrated an interesting application of LLMs – they’ve used them as a honeypot backend in the sheLLM project. An idea is to trick an attacker into thinking that he’s using a real shell, meanwhile the outputs for given shell commands are generated by the LLM. It makes me wonder though – what would happen if an attacker realizes that he’s using LLM? Prompt Injection through this shell could be pricey for owners of the honeypot! 

Link: https://arxiv.org/pdf/2309.00155.pdf 

Automatic Scam-Baiting Using ChatGPT

That’s a brilliant idea for the usage of LLM – baiting scammers, making them lose money and time!   

Link: https://arxiv.org/pdf/2309.01586.pdf 

Cybercriminals Use Generative AI (…) to Run Their Scams

Speaking of baiting scammers – I wonder if somewhere on the Internet right now the LLM-defender is baiting the LLM-scammer.

Link: https://abnormalsecurity.com/blog/generative-ai-nigerian-prince-scams 

Regulations

Dallas AI newsletter on AI regulations in various countries

Link: https://www.linkedin.com/pulse/state-ai-regulation-september-2023-newsletter-dallas-ai/ 

Overview of the AI regulations in various countries from Reuters 

Link: https://www.reuters.com/technology/governments-race-regulate-ai-tools-2023-09-13/ 

If you want more papers and articles 

To be honest, I just took a look at the abstracts of those papers below due to the lack of time, but maybe you will find some of them interesting. 

Software Testing with Large Language Model: Survey, Landscape, and Vision” – Wang, et. al. 

Link: https://arxiv.org/abs/2307.07221 

“MathAttack: Attacking Large Language Models Towards Math Solving Ability” – Zhou, et. al. 

(This one is interesting, take a look at those examples: 

Link: https://arxiv.org/pdf/2309.01686.pdf 

“INTEGRATED PHOTONIC AI ACCELERATORS UNDER HARDWARE SECURITY ATTACKS: IMPACTS AND COUNTERMEASURES” – de Magalhaes, Nicolescu, Nikdast 

This paper is on hardware trojans in the silicon photonic systems. Probably you need to have some advanced knowledge (which I don’t have) to be able read it, but when I saw this title, I felt like in this meme, so I am just sharing the link: 

Link: https://arxiv.org/pdf/2309.02543.pdf 

Remember that you can subscribe this newsletter here: https://hackstery.com/newsletter

If you find this newsletter useful, I’d be grateful if you’d share it with your tech circles, thanks in advance!

Categories
Newsletter

Real Threats of Artificial Intelligence – AI Security Newsletter #4 (September ’23)

Here comes the fourth release of my newsletter. This time I have included a lot of content related to the DEFCON AI Village (I have tagged content that comes from there) – a bit late, but better later than never. Anyway, enjoy reading.

Also, if you find this newsletter useful, I’d be grateful if you’d share it with your tech circles, thanks in advance!

Any feedback on this newsletter is welcome – you can mail me or post a comment in this article.

AI Security 

  1. Model Confusion – Weaponizing ML models for red teams and bounty hunters [AI Village] 

This is an excellent read about ML supply chain security by Adrian Wood. One of the most insightful resources on the ML supply chain that I’ve seen. Totally worth reading! 

Link: https://5stars217.github.io/2023-08-08-red-teaming-with-ml-models/ 

  1. Assessing the Vulnerabilities of the Open-Source Artificial Intelligence (AI) Landscape: A Large-Scale Analysis of the Hugging Face Platform [AI Village]

Researchers have performed automated analysis of 110 000 models from Hugging Face and have found almost 6 million vulnerabilities in the code. 

Links: slides: https://aivillage.org/assets/AIVDC31/DSAIL%20DEFCON%20AI%20Village.pdf paper: https://www.researchgate.net/publication/372761501_Assessing_the_Vulnerabilities_of_the_Open-Source_Artificial_Intelligence_AI_Landscape_A_Large-Scale_Analysis_of_the_Hugging_Face_Platform 

  1. Podcast on MLSecOps [60 min] 

Ian Swanson (CEO of Protect AI) & Emilio Escobar (CISO of Datadog) are talking about ML & AI Security, MLSecOps, Supply Chain Security and LLMs:

Link: https://shomik.substack.com/p/17-ian-swanson-ceo-of-protect-ai

Regulations

  1. LLM Legal Risk Management, and Use Case Development Strategies to Minimize Risk [AI Village]

Well, I am not a lawyer. But I do know a few lawyers who read this newsletter, so maybe you will find these slides on the legal aspects of LLM risk management interesting 🙂

Link: https://aivillage.org/assets/AIVDC31/Defcon%20Presentation_2.pdf 

  1. Canadian Guardrails for Generative AI

Canadians have created a document with a set of guardrails for developers and operators of Generative AI systems. 

Link: https://ised-isde.canada.ca/site/ised/en/consultation-development-canadian-code-practice-generative-artificial-intelligence-systems/canadian-guardrails-generative-ai-code-practice

LLM Security 

  1. LLMSecurity.net – A Database of LLM-security Related Resources

Website by Leon Derczynski (LI: https://www.linkedin.com/in/leon-derczynski/ ) that catalogs various papers, articles and news regarding  

Link: https://llmsecurity.net/

  1. LLMs Hacker’s Handbook

This thing was on the Internet for a while, but for some reason I’ve never seen it. LLM Hacker’s Handbook with some useful techniques of Prompt Injection and proposed defenses. 

Link: https://doublespeak.chat/#/handbook

AI/LLM as a tool for cybersecurity 

  1. ChatGPT for security teams [AI Village] 

Some ChatGPT tips & tricks (including jailbreaks) from GTKlondike (https://twitter.com/GTKlondike/)  

Link: https://github.com/NetsecExplained/chatgpt-your-red-team-ally 

Bonus: https://twitter.com/GTKlondike/status/1697087125840376216 

AI in general 

Initially, this newsletter was meant to be exclusively related to security, but in the last two weeks I’ve stumbled upon a few decent resources on LLMs and AI and I want to share them with you!

This post by Stephen Wolfram on how does ChatGPT (and LLMs in general) work:
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

Update of GPT 3.5 – fine-tuning is now available through OpenAI API: 

https://openai.com/blog/gpt-3-5-turbo-fine-tuning-and-api-updates

This post by Chip Huyen on how does RLHF work: https://huyenchip.com/2023/05/02/rlhf.html and this one from Huggingface: https://huggingface.co/blog/rlhf 

Some loose links

In this section you’ll find some links to recent AI security and LLM security papers that I didn’t manage to read. If you still want to read more on AI topics, try these articles.

“Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack” 

Link: https://arxiv.org/pdf/2308.11894.pdf

“RatGPT: Turning online LLMs into Proxies for Malware Attacks”

Link: https://arxiv.org/pdf/2308.09183.pdf 

“PENTESTGPT: An LLM-empowered Automatic Penetration Testing Tool”

Link: https://arxiv.org/pdf/2308.06782.pdf 

“DIVAS: An LLM-based End-to-End Framework for SoC Security Analysis and Policy-based Protection”

Link: https://arxiv.org/pdf/2308.06932.pdf

“Devising and Detecting Phishing: large language models vs. Smaller Human Models”

Link: https://arxiv.org/pdf/2308.12287.pdf 

Categories
Newsletter

Real Threats of Artificial Intelligence – AI Security Newsletter #3 (August ’23)

This is the third release of my newsletter. I’ve collected some papers, articles and vulnerabilities that were released more or less in last two weeks. If you are not a mail subscriber yet, feel invited to subscribe: https://hackstery.com/newsletter/. Order of the resources is random.

Any feedback on this newsletter is welcome – you can mail me or post a comment in this article.

AI Security 

1. Protect.AI launches AI bug bounty program

Protect.AI launches the first platform dedicated for AI/ML bug bounty. It aims to bridge the knowledge gap in AI/ML security research and provides opportunities for researchers to build expertise and receive financial rewards. In order to run bug bounty programs Protect.AI acquired huntr.dev, platform known for running bug bounties for OSS. You can report vulnerabilities in there: huntr.mlsecops.com 

Link: https://www.businesswire.com/news/home/20230808746694/en/Protect-AI-Acquires-huntr-Launches-World%E2%80%99s-First-Artificial-Intelligence-and-Machine-Learning-Bug-Bounty-Platform

AI Safety 

1. Zoom tried using customers data to train AI

After all, they gave up on that idea though. Zoom initially attempted to rectify the situation with an updated blog post, but it failed to address the specific concerns. CEO Eric Yuan acknowledged the issue as a “process failure” and promised immediate action. On August 11th, Zoom updated its terms again, explicitly stating that it does not use customer content for training AI models. The incident serves as a reminder for companies to be transparent and allow customers to opt-out of data usage for such purposes.

Links: https://stackdiary.com/zoom-terms-now-allow-training-ai-on-user-content-with-no-opt-out/ & https://blog.zoom.us/zooms-term-service-ai/ & https://www.nbcnews.com/tech/innovation/zoom-ai-privacy-tos-terms-of-service-data-rcna98665

LLM Security 

1. Trojan detection challenge

At the end of July NeurIPS started a competition intended to improve methods for finding hidden features in large language models. There are two paths in the competition: Trojan Detection and Red Teaming. In the Trojan Detection part, participants have to find the commands that activate hidden features in these models. In the Red Teaming part, participants have to create methods that make the models do things they’re not supposed to do (and models are said to avoid those specific actions). It’s an academic competition for advanced researchers, but maybe some of the subscribers will find it interesting.

Link: https://trojandetection.ai/ 

2. Prompt-to-SQL injection

This article analyzes prompt-to-SQL (P2SQL) injection in web applications using the Langchain framework as a case study. The study examines different types of P2SQL injections and their impact on application security. The researchers also evaluate seven popular LLMs and conclude that P2SQL attacks are common in various models. To address these attacks, the paper proposes four effective defense techniques that can be integrated into the Langchain framework.

Link: https://arxiv.org/pdf/2308.01990.pdf

3. NVIDIA on protecting LLMs against prompt injection

NVIDIA’s AI Red Team released an interesting article on protecting Large Language Models against prompt injection attacks. They also disclosed a few vulnerabilities in LangChain plugins.

LangChain RCE

NVIDIA’s AI Red Team assessment framework (https://developer.nvidia.com/blog/nvidia-ai-red-team-an-introduction/)

Link: https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/

4. How to mitigate prompt injection?

This repo demonstrates variety of approaches for preventing Prompt Injection in Large Language Models.

Link: https://github.com/Valhall-ai/prompt-injection-mitigations

5. Evaluation of the jailbrak prompts

A paper on Jailbreaking Large Language Models:

Jailbreak example

Link: https://arxiv.org/pdf/2308.03825.pdf

6. Trick for cheaper usage of LLMs

Researchers propose a prompt abstraction attack(?). Thanks to abstracting prompt sentences, the prompt utilizes less tokens and it’s lighter. I’d argue that it’s an attack, it’s rather an optimization (saying that it’s an attack is like saying that optimizing cloud deployment is an attack, because you pay less). On the other hand you need to use “pseudo-API” in the middle, but still I wouldn’t consider it an attack. Change my mind.

Link: https://arxiv.org/pdf/2308.03558.pdf

7. Assessing quality of the code produced by the LLM

In this paper, authors the quality of LLM-generated code: 

Link: https://arxiv.org/pdf/2308.04838.pdf

8. LLM Guard – tool for securing LLMs

New tool that may be helpful in securing against prompt injections: “LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection and jailbreak attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.

Link: https://github.com/laiyer-ai/llm-guard 

AI/LLM as a tool for cybersecurity 

1. Getting pwn’d by AI

In this paper, authors are discussing two use cases of LLMs in pentesting: high-level task planning for security testing assignments and low-level vulnerability hunting within a vulnerable virtual machine. Here is the repo with code: https://github.com/ipa-lab/hackingBuddyGPT
This is an interesting approach to testing security, although as a pentester, I doubt that AI will take over industry in the coming years. In my opinion, it’s crucial for pentester to see relationships between the components of the system “outside of the box”, and in finding more advanced bugs a real person will remain irreplaceable. At least I hope so 😀

Link: https://arxiv.org/abs/2308.00121 

2. Using LLMs to analyze Software Supply Chain Security 

Supply chain security is another hot topic in cybersecurity. Authors analyze the potential of using LLMs for assuring Supply Chain Security. Citation from the article: “We believe the current generation of off-the-shelf LLMs does not offer a high enough level of agreement with expert judgment to make it a useful assistant in this context. One potential path to improving

performance is fine-tuning the LLM using baseline knowledge such as this catalog, and then applying it on future issues
 

Link: https://arxiv.org/pdf/2308.04898.pdf

Categories
Newsletter

Real Threats of Artificial Intelligence – AI Security Newsletter #2 (August ’23)

This is the second release of my newsletter. I’ve collected some papers, articles and vulnerabilities that were released in last two weeks, this time the resources are categorized into following categories: LLM Security, AI Safety, AI Security. If you are not a mail subscriber yet, feel invited to subscribe: https://hackstery.com/newsletter/.

Order of the resources is random.

Any feedback on this newsletter is welcome – you can mail me or post a comment in this article.

LLM Security

Image to prompt injection in Google Bard

“Embrace The Red” blog on hacking Google Bard using crafted images with prompt injection payload. 

Link: https://embracethered.com/blog/posts/2023/google-bard-image-to-prompt-injection/

Paper: Challenges and Applications of Large Language Models

Comprehensive article on LLM challenges and applications, with a lot of useful resources on prompting, hallucinations etc. 

Link: https://arxiv.org/abs/2307.10169

Remote Code Execution in MathGPT 

Post about how Seungyun Baek hacked MathGPT.

Link: https://www.l0z1k.com/hacking-mathgpt/

AVID ML (AI Vulnerability Database) Integration with Garak

Garak is a LLM vulnerability scanner created by Leon Derczynski. According to the description, garak checks if an LLM will fail in a way we don’t necessarily want. garak probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses. AvidML supports integration with Garak for quickly converting the vulnerabilities garak finds into informative, evidence-based reports.

Link: https://avidml.org/blog/garak-integration/ 

Limitations of LLM censorship and Mosaic Prompt attack

Although censorship brings negative associations, in terms of LLMs it can be used to prevent LLM from creating malicious content, such as ransomware code. In this paper authors demonstrate attack method called Mosaic Prompt, which is basically splitting malicious prompts into sets of non-malicious prompts. 

Link: https://www.cl.cam.ac.uk/~is410/Papers/llm_censorship.pdf

Security, Privacy and Ethical concerns of ChatGPT

Yet another paper on ChatGPT 🙂 

Link: https://arxiv.org/pdf/2307.14192.pdf

(Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs

Using images and sounds for Indirect Prompt Injections. In this notebook  you can take a look at the code used for generating images with injection: https://github.com/ebagdasa/multimodal_injection/blob/main/run_image_injection.ipynb

(I’ll be honest, it looks like magic) 

Link: https://arxiv.org/abs/2307.10490

Universal and Transferable Adversarial Attacks on Aligned Language Models

Paper on creating transferable adversarial prompts, able to induce objectionable content in the public interfaces to ChatGPT, Bard, and Claude, as well as open source LLMs such as LLaMA-2-Chat, Pythia, Falcon, and others. This paper was supported by DARPA and the Air Force Research Laboratory. 

Link: https://llm-attacks.org/zou2023universal.pdf + repo: https://github.com/llm-attacks/llm-attacks/tree/main/experiments

OWASP Top10 for LLM v1.0

OWASP released version 1.0 of Top10 for LLMs! You can also check my post on that list here

Link: https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v1_0.pdf 

Survey on extracting training data from pre-trained language models

Survey based on more than 100 key papers in fields such as natural language processing and security, exploring and systemizing attacks and protection methods.

Link: https://aclanthology.org/2023.trustnlp-1.23/

Wired on LLM security

This article features OWASP Top10 for LLM and plugins security concerns. 

Link: https://www.wired.com/story/chatgpt-plugins-security-privacy-risk/

AI Safety 

Ensuring Safe, Secure, and Trustworthy AI

Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI have agreed to self-regulate their AI-based solutions. In these voluntary commitments, the companies pledge to ensure safety, security and trust in artificial intelligence.

Link: https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/

Red teaming AI for bio-safety

Anthropic’s post on red teaming AI for biosafety and evaluating models capabilities i.e. for ability to output harmful biological information, such as designing and acquiring biological weapons.

Link: https://www.anthropic.com/index/frontier-threats-red-teaming-for-ai-safety

AI Security 

AI Vulnerability Database releases Python library 

According to documentation: “It empowers engineers and developers to build pipelines to export outcomes of tests in their ML pipelines as AVID reports, build an in-house vulnerability database, integrate existing sources of vulnerabilities into AVID-style reports, and much more!

Link: https://twitter.com/AvidMldb/status/1683883556064616448 

Mandiant – Securing AI pipeline

The article from Mandiant on securing the AI pipeline. Contains GAIA (Good AI Assessment) Top 10, a list of common attacks and weaknesses in the AI pipeline.

Link: https://www.mandiant.com/resources/blog/securing-ai-pipeline 

Google paper on AI red teaming

Citing the summary of this document:

In this paper, we dive deeper into SAIF to explore one critical capability that we deploy to support the SAIF framework: red teaming. This includes three important areas:
1. What red teaming is and why it is important
2. What types of attacks red teams simulate
3. Lessons we have learned that we can share with others

Link: https://services.google.com/fh/files/blogs/google_ai_red_team_digital_final.pdf

Probably Approximately Correct (PAC) Privacy

MIT researchers have developed a technique to protect sensitive data encoded within machine learning models. By adding noise or randomness to the model, the researchers aim to make it more difficult for malicious agents to extract the original data. However, this perturbation reduces the model’s accuracy, so the researchers have created a framework called Probably Approximately Correct (PAC) Privacy. This framework automatically determines the minimal amount of noise needed to protect the data, without requiring knowledge of the model’s inner workings or training process.

Link: https://news.mit.edu/2023/new-way-look-data-privacy-0714

Categories
Newsletter

Real Threats of Artificial Intelligence – AI Security Newsletter #1 (July 2023)

Welcome to Real Threats of Artificial Intelligence – AI Security Newsletter. This is the first release of this newsletter, which I plan to deliver bi-weekly.

If you want to receive this Newsletter via mail, you can sign up here: https://hackstery.com/newsletter/.

This week there’s some reading about poisoning LLM datasets and supply chain and Federal Trade Comission’s investigation on Open AI.

1. Poisoning LLM supply chain

Poisoning LLM supply chain using Rank-One Model Editing (ROME) algorithm. It was shown that it is possible for models to spread fake information related only to chosen topics. The model can behave correctly in general, but return misleading information when asked for a specific topic.  

Source: blog.mithrilsecurity.io

https://blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/

2. FTC investigates OpenAI over data leak and ChatGPT’s inaccuracy

The Federal Trade Commission (FTC) has launched an investigation into OpenAI, focusing on whether the company’s AI models have violated consumer protection laws and put personal reputations and data at risk.The FTC has demanded records from OpenAI regarding how it addresses risks related to its AI models, including complaints of false or harmful statements made by its products about individuals.

https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/

3. Malware-producing LLM

WormGPT is a new LLM-based chatbot designed for malware development. According to the WormGPT developer, “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”

https://www.tomshardware.com/news/wormgpt-black-hat-llm

4. Instruction tuning that leads to the data poisoning 

Authors of this paper proposed AutoPoison framework that is an automated pipeline for generating poisoned data. It can be used to make a model demonstrate specific behavior in response to specific instructions – in my opinion that may be useful for producing commercial LLMs with advertisements included in its responses.

https://arxiv.org/abs/2306.17194

5. Ghost in the machine

Norwegian Consumer Council releases document on threats, harms and challenges related to the Generative AI. This document is not-so-technical and focuses on policy making and laws related to AI. 

https://storage02.forbrukerradet.no/media/2023/06/generative-ai-rapport-2023.pdf