Secure your AI & LLMs

More and more companies integrate Large Language Models into their systems. These changes are often made quickly, without proper attention to security.


Find vulnerabilities in your AI systems before the attackers do

I can help you with that!

With over 8 years of experience as a security engineer and penetration tester, I help organizations assess the security of their Large Language Model deployments, following standards like the OWASP Top 10 for Large Language Models Applications.

Read more

I’ve run over 30 security workshops on all kinds of topics, from basic awareness to pentest training. I can deliver LLM and AI Security workshops tailored to your needs – including awareness training on threats from GenAI and hands-on training for your engineering teams.

Read more

Who am I?

Hello,
my name is MikoĊ‚aj. I am a security engineer who specializes in LLM security and penetration testing.

I can help you find and fix security issues in AI/LLM systems and web applications. My main focus is testing LLM applications for vulnerabilities such as prompt injections, insecure output, data leaks, and more. I follow the best practices in this field and have been involved in hacking LLMs since 2023 (and working in the cybersecurity industry since 2017). I contribute to OWASP Top10 for Machine Learning and I created some things that can help you with LLM security testing.

Check my recent blog posts.

You can also sign up for my newsletter on AI Security.

You can contact me at: contact(at)hackstery.com