This one is not about AI vulnerabilities or some AI paper.
It’s about something more boring – and in my opinion very, very important:
threat modeling
A process that is fundamentally broken at most organizations, yet – if done correctly – can deliver great results and significantly improve the security of your codebase.
Disclaimer: This newsletter includes some shameless self-promotion – I’ll show a tool I’ve been building around this idea. If you’re in software engineering or AI security, it should be relevant.

The uncomfortable truth
At many organizations, threat modeling is not really done correctly. It happens once, during the design phase, maybe as a whiteboard workshop, and then it… dies.
The code changes every day, the architecture evolves, and the threat model just sits somewhere in Confluence, untouched for the last six months.
Engineering teams treat threat modeling like documentation. Everything else in the modern SDLC has already moved on:
– code is versioned with Git
– infra is defined as code
– pipelines are automated
But threat modeling is often still done like it's 2015.
The shift
As a fan of threat modeling, I see that with the adoption of LLMs and coding agents, people are already experimenting with things such as:
– analyzing codebase security with a coding agent (like Claude Code or Cursor)
– automatically mapping existing threats to the STRIDE framework
– analyzing architecture security automatically.
You can see this in OSS tools such as https://github.com/vartulzeroshieldai/Threat-Shield or AI extensions for https://github.com/owasp/threat-dragon,
but these approaches hit the same wall: threat modeling is still done next to the codebase. You end up with a separate UI and separate artifacts that get outdated just as quickly – the same old process with AI-based extra steps.
I’ve been exploring an interesting topic recently: Continuous Threat Modeling. You’re already using SAST and DAST tools, run on every pull request – what if the threat model behaved the same way?
With every new feature or architecture change, new threats arise. Each PR could include an updated threat model.
So I am basically sending out this newsletter to showcase a new little tool that I came up with (with a little help from coding agents):
https://github.com/attasec/tmdd
It tries to take this idea seriously instead of just adding “AI” on top of the same old process.
TMDD (Threat Modeling Driven Development) is based on a simple assumption: threat model should live inside your repository and evolve with your code, not in Confluence or diagrams that nobody updates. Instead of treating it like documentation, you treat it like code (or to be more precise – YAML files) – versioned, reviewed, and updated together with every change.
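To make that concrete, here is a hypothetical sketch of what one versioned threat entry might look like in the repo. The file path, field names, and layout below are my own illustration, not the actual TMDD schema – check the repo for the real format:

```yaml
# threat-model/payments-service.yaml
# Hypothetical layout for illustration only, not the official TMDD schema.
component: payments-service
entrypoints:
  - POST /api/v1/refunds
threats:
  - id: T-014
    title: Refund issued against another customer's order
    stride: Elevation of Privilege
    status: mitigated
    mitigation: Ownership check on order_id in the refund handler
    reviewed_by: human   # AI suggested the threat, a human validated it
```

Because it is just a YAML file in the repo, a change to the refund flow shows up as a diff to this file in the same pull request, and reviewers can challenge it like any other code change.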
AI here is not the main point, it’s just an enabler: it helps read the codebase, infer architecture, and suggest threats, but it still needs a human to validate what actually matters. The real shift is making threat modeling continuous, so it runs alongside development instead of lagging behind it.
How it works
– You run TMDD on your repo (Claude Code or Cursor with the appropriate skill helps a lot).
– The AI agent analyzes the codebase and saves the threat model in a TMDD-compliant format.
– Threat model files are generated and/or updated.
– Changes are reviewed like any other code in your repo.
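To give a feel for the "continuous" part, here is a hypothetical CI sketch that re-runs the check on every pull request. The workflow, the `tmdd` command, and its flags are all assumptions made for illustration – the actual invocation may differ, so treat this as a pattern, not a recipe:

```yaml
# .github/workflows/threat-model.yml
# Hypothetical workflow – the tmdd command and flags are illustrative only.
name: threat-model-check
on: pull_request
jobs:
  tmdd:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, so the diff against main works
      # Flag the PR if code changed but the threat model files did not
      - run: tmdd check --model-dir threat-model/ --diff origin/main
```

The point of the pattern is the gate: a PR that touches an entrypoint without touching the threat model gets flagged, the same way a failing SAST scan would.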
This is still early and imperfect, but I believe it makes more sense than static diagrams that die after one workshop. If you’re building anything non-trivial (especially with AI), you’ll feel pretty quickly how fast your understanding of the system drifts – and that’s exactly where this approach starts to make sense.
If you want to test this idea in practice, try it on your repo: https://github.com/attasec/tmdd – and if you think this direction is worth exploring, drop a star on GitHub, it genuinely helps push this further. From my testing, it’s already surprisingly good at finding authorization and business logic issues – the exact class of bugs that SAST and DAST usually miss.
Let me know what you think about this approach!



[Map legend: green – regulation passed and now active; blue – passed but not yet in force; yellow – currently proposed; red – regions just starting early discussions.]









