Categories
Newsletter

Real Threats of Artificial Intelligence – AI Security Newsletter #1 (July 2023)

Welcome to Real Threats of Artificial Intelligence – AI Security Newsletter. This is the first release of this newsletter, which I plan to deliver bi-weekly.

If you want to receive this Newsletter via mail, you can sign up here: https://hackstery.com/newsletter/.

This week there’s some reading about poisoning LLM datasets and supply chain and Federal Trade Comission’s investigation on Open AI.

1. Poisoning LLM supply chain

Poisoning LLM supply chain using Rank-One Model Editing (ROME) algorithm. It was shown that it is possible for models to spread fake information related only to chosen topics. The model can behave correctly in general, but return misleading information when asked for a specific topic.  

Source: blog.mithrilsecurity.io

https://blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/

2. FTC investigates OpenAI over data leak and ChatGPT’s inaccuracy

The Federal Trade Commission (FTC) has launched an investigation into OpenAI, focusing on whether the company’s AI models have violated consumer protection laws and put personal reputations and data at risk.The FTC has demanded records from OpenAI regarding how it addresses risks related to its AI models, including complaints of false or harmful statements made by its products about individuals.

https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/

3. Malware-producing LLM

WormGPT is a new LLM-based chatbot designed for malware development. According to the WormGPT developer, “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”

https://www.tomshardware.com/news/wormgpt-black-hat-llm

4. Instruction tuning that leads to the data poisoning 

Authors of this paper proposed AutoPoison framework that is an automated pipeline for generating poisoned data. It can be used to make a model demonstrate specific behavior in response to specific instructions – in my opinion that may be useful for producing commercial LLMs with advertisements included in its responses.

https://arxiv.org/abs/2306.17194

5. Ghost in the machine

Norwegian Consumer Council releases document on threats, harms and challenges related to the Generative AI. This document is not-so-technical and focuses on policy making and laws related to AI. 

https://storage02.forbrukerradet.no/media/2023/06/generative-ai-rapport-2023.pdf