Researchers at the University of Florida have developed ThreatLens, an LLM-driven multi-agent framework for automating hardware security threat modeling and test plan generation.
Current hardware security verification processes rely on labor-intensive manual effort, which struggles to scale with increasing design complexity and evolving attack methodologies.
ThreatLens integrates retrieval-augmented generation (RAG), LLM-powered reasoning, and user feedback to automate threat assessment and generate practical test plans.
The framework reduces manual verification effort, improves coverage, and provides a structured, adaptable approach to hardware security verification.
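To make the retrieval-plus-feedback loop concrete, the sketch below shows one way such a pipeline could be wired together. It is an illustrative assumption, not ThreatLens itself: the knowledge base, the keyword-overlap retriever (standing in for a real embedding-based RAG store), and the `draft_test_plan` / `retrieve` helpers are all hypothetical, and the LLM reasoning step is reduced to template-based plan assembly.

```python
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    # One entry in a hypothetical threat knowledge base.
    name: str
    keywords: set
    test_hint: str

# Stand-in for the RAG document store; ThreatLens's actual corpus
# and retrieval method are not described in this summary.
KNOWLEDGE_BASE = [
    ThreatEntry("fault injection", {"voltage", "glitch", "clock"},
                "Sweep clock/voltage glitches around security-critical FSM states."),
    ThreatEntry("side channel", {"power", "timing", "leakage"},
                "Collect power traces during key operations and test for correlation."),
    ThreatEntry("debug access", {"jtag", "scan", "debug"},
                "Verify that debug/scan ports are locked after secure boot."),
]

def retrieve(design_notes, top_k=2):
    """Rank threat entries by keyword overlap with the design description."""
    words = set(design_notes.lower().split())
    scored = [(len(e.keywords & words), e) for e in KNOWLEDGE_BASE]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [e for score, e in scored[:top_k] if score > 0]

def draft_test_plan(design_notes, feedback=()):
    """Assemble a test plan from retrieved threats; `feedback` models the
    user-in-the-loop step by forcing inclusion of analyst-named threats."""
    threats = retrieve(design_notes)
    forced = [e for e in KNOWLEDGE_BASE
              if e.name in feedback and e not in threats]
    return [f"[{e.name}] {e.test_hint}" for e in threats + forced]

plan = draft_test_plan("AES core with external jtag debug port and power rails",
                       feedback=["side channel"])
for step in plan:
    print(step)
```

In a full system, `retrieve` would be replaced by vector search over security documentation and `draft_test_plan` by LLM reasoning over the retrieved context; the feedback parameter shows where analyst corrections would re-enter the loop.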