Title
Benchmarking LLMs for Secure Code Generation (Yizheng Chen / Assistant Professor at the University of Maryland)
Author
첨단컴퓨팅학부
Date Posted
2025.02.07
Last Modified
2025.02.07
Category
Seminar
Link URL
https://surrealyz.github.io/
Post Content

Date & Time: Monday, February 17, 2025, 13:00
Location: Engineering Building 4 (제4공학관), Room D504


Title: Benchmarking LLMs for Secure Code Generation


Presenter: Dr. Yizheng Chen

Assistant Professor of Computer Science, University of Maryland

Abstract: Large Language Models (LLMs) have demonstrated promising capabilities in discovering and patching real-world security vulnerabilities. But how do we determine which LLM-based system performs best? In this talk, I will explore the challenges of benchmarking LLMs for cyberdefense.

I will begin by presenting our work on evaluating LLMs’ ability to generate secure code. Notably, we find that results from prior code-generation benchmarks do not translate to LLMs’ secure coding performance in real-world software projects. Next, I will discuss a key issue: memorization. LLMs may not be solving security problems from first principles but rather recalling secure solutions they have already seen. Finally, I will outline future research directions for effectively evaluating and improving LLMs for cybersecurity applications.

Bio: Yizheng Chen is an Assistant Professor of Computer Science at the University of Maryland. Her research focuses on Large Language Models for Code Generation and AI for Security. Her recent work PrimeVul has been used by Gemini 1.5 Pro for vulnerability detection evaluation. Previously, she received her Ph.D. in Computer Science from the Georgia Institute of Technology and was a postdoctoral researcher at the University of California, Berkeley, and at Columbia University. Her work has received an ACM CCS Best Paper Award Runner-up, a Google ASPIRE Award, and a Top 10 Finalist placement in the CSAW Applied Research Competition. She is a recipient of the Anita Borg Memorial Scholarship.

Homepage: https://surrealyz.github.io/