Are you ready for emerging threats?
More and more companies are integrating Large Language Models into their systems, and these changes are often made quickly, without proper attention to security.

As a result, new vulnerabilities emerge. Application security becomes even more complex, with a set of novel attack vectors and techniques. Prompt injection is just the tip of the iceberg.
Find vulnerabilities in your AI systems before the attackers do
I can help you with that!
LLM Security Testing
With over 8 years of experience as a security engineer and penetration tester, I help organizations assess the security of their Large Language Model deployments, following standards like the OWASP Top 10 for Large Language Model Applications.
AI Security Training
I’ve run over 30 security workshops on topics ranging from basic security awareness to penetration testing. I can deliver LLM and AI security workshops tailored to your needs, including awareness training on GenAI threats and hands-on training for your engineering teams.

No obligation. Let’s talk about securing your AI and LLM integrations.
Who am I?
Hello,
my name is Mikołaj. I am a security engineer who specializes in LLM security and penetration testing.
I can help you find and fix security issues in AI/LLM systems and web applications. My main focus is testing LLM applications for vulnerabilities such as prompt injection, insecure output handling, data leaks, and more. I follow the best practices in this field and have been hacking LLMs since 2023 (and working in the cybersecurity industry since 2017). I contribute to the OWASP Top 10 for Machine Learning and have created tools and resources that can help you with LLM security testing.
Check out my recent blog posts.
You can also sign up for my newsletter on AI Security.
You can contact me at: contact(at)hackstery.com