Using LLMs to Evaluate Code



Finding and fixing weaknesses and vulnerabilities in source code has been an ongoing challenge. There is a lot of excitement about the ability of large language models (LLMs), a form of generative AI, to produce and evaluate programs. One question related to this ability is: Do these systems help in practice? We ran experiments with various LLMs to see whether they could correctly identify problems in source code or determine that there were no problems. This webcast will provide background on our methods and a summary of our results.

 

What Will Attendees Learn?

  • how well LLMs can evaluate source code
  • evolution of capability as new LLMs are released
  • how to address potential gaps in capability

Speaker and Presenter Information

Dr. Mark Sherman is the Technical Director of the Cybersecurity Foundations directorate in the CERT Division of the Carnegie Mellon University Software Engineering Institute (CMU SEI). Sherman leads a diverse team of researchers and engineers on projects that focus on foundational research on the lifecycle for building secure software, data-driven analysis of cybersecurity, cybersecurity of quantum computers, cybersecurity for and enabled by machine learning applications, and detecting fake media. Prior to his tenure at the SEI, Sherman worked on mobile systems, integrated hardware-software appliances, transaction processing, languages and compilers, virtualization, network protocols, and databases at IBM and various startups.

Relevant Government Agencies

DOD & Military






Event Type
Webcast




When
Wed, Oct 1, 2025, 1:30pm - 2:30pm ET


Cost
Complimentary: $0.00




Organizer
CMU - SEI




