CTIBench: Evaluating LLMs in Cyber Threat Intelligence with Nidhi Rastogi - #729

30/04/2025

Today, we're joined by Nidhi Rastogi, assistant professor at Rochester Institute of Technology, to discuss Cyber Threat Intelligence (CTI), focusing on her recent project CTIBench—a benchmark for evaluating LLMs on real-world CTI tasks. Nidhi explains the evolution of AI in cybersecurity, from rule-based systems to LLMs that accelerate analysis by providing critical context for threat detection and defense. We dig into the advantages and challenges of using LLMs in CTI, how techniques like Retrieval-Augmented Generation (RAG) are essential for keeping LLMs up to date with emerging threats, and how CTIBench measures LLMs' ability to perform a set of real-world tasks of the cybersecurity analyst. We unpack the process of building the benchmark, the tasks it covers, and key findings from benchmarking various LLMs. Finally, Nidhi shares the importance of benchmarks in exposing model limitations and blind spots, the challenges of large-scale benchmarking, and the future directions of her AI4Sec Research Lab, including developing reliable mitigation techniques, monitoring "concept drift" in threat detection models, improving explainability in cybersecurity, and more.
The complete show notes for this episode can be found at https://twimlai.com/go/729.
