Team leaders
Xiangzhe Xu (2025-26)
PhD student focusing on code language models and program analysis. Research formulates program semantics via static and dynamic program analysis and incorporates program domain knowledge into CodeLMs via data augmentation and post-training. These techniques help CodeLMs understand programs with limited symbol information.
Jinyao Guo (2026)
PhD student focusing on LLM agents, code language models, and program analysis. Extensive experience in neural–symbolic code auditing. LLM-based auditing systems (RepoAudit and BugScope) have uncovered 500+ bugs across real-world open-source repositories and industrial codebases from Uber and Microsoft.
Team members
Zian Su (2025-26)
PhD student focusing on code language models (2025: knowledge editing of LLMs; 2026: architecture design, post-training, and continual learning). Experience in fine-tuning large multi-modal models and designing transformer architectures. Recent research: knowledge editing of language models, multi-agent design, and reinforcement learning for diffusion language models.
Guangyu Shen (2025-26)
PhD student focusing on AI security. Current lead of the Purdue team in the multi-round TrojAI competitions organized by IARPA and NIST. The team has won most competitions over the past 3.5 years. Research focuses on securing AI systems against backdoor and adversarial attacks.
Siyuan Cheng (2025-26)
PhD student specializing in trustworthy machine learning, with a focus on adversarial/backdoor attacks and defenses. Team lead in the TDC 2023 competition (LLM backdoor scanning); the team achieved top rankings in the development phase.
Hanxi Guo (2025-26)
PhD student focusing on AI security. Current lead of the Purdue team in the NIST GenAI competition. The team secured second place among 15+ teams from renowned research institutions and corporations in the blue-teaming track, while ranking in the top three in the red-teaming track.
Jiasheng Jiang (2025-26)
PhD student specializing in software vulnerability detection. Vulnerability detector has found hundreds of vulnerabilities in the Linux kernel, OpenSSL, FFmpeg, and the Apache HTTP Server.
Yunshu Mao (2026)
PhD student focusing on improving logical reasoning and mathematical problem-solving abilities of large language models. Research explores methods for enhancing structured reasoning, reliability, and robustness, including adversarial attacks and data poisoning.
Yaoyang Ye (2026)
PhD student focusing on automated vulnerability analysis and security-oriented program synthesis. Drawing from international CTF experience, develops agentic frameworks leveraging LLMs to automate vulnerability reproduction and PoC generation.
Yifei Gao (2026)
Third-year PhD student focusing on LLM-based program translation and program repair. Experience in CTF security competitions, including vulnerability discovery, exploitation, and reverse engineering.
Tiantai Zhang (2026)
PhD student focusing on binary analysis and bug detection. Research builds practical foundations for understanding real-world binaries, from disassembly correctness and function-entry recovery to LLM-assisted vulnerability discovery and exploit generation for IoT firmware.
Xiaolong Jin (2025)
PhD student in Computer Science at Purdue, focusing on AI safety and efficiency, especially jailbreaking. Developed MULTIVERSE, an innovative jailbreaking tool that probes model weaknesses across varied contexts.
Lu Yan (2025)
PhD student focusing on AI security with particular interest in applying software security techniques to enhance AI system robustness. Experience in fine-tuning LLMs with QLoRA. Developed ParaFuzz, a prompt paraphrasing/fuzzing technique to detect and remove backdoors.
Xuan Chen (2025)
PhD student focusing on developing RL-driven solutions to understand and mitigate AI vulnerabilities. Developed RLbreaker, a novel technique that uses reinforcement learning to construct jailbreaking prompts.
Faculty advisors
Xiangyu Zhang (2025-26)
Expert in AI security, program analysis, vulnerability detection, and malware analysis. Led successful projects with DARPA, IARPA, and ONR, notably winning rounds of the IARPA TrojAI competition. Research group of 20 PhD students and 2 post-docs has developed tools for detecting AI backdoors and software vulnerabilities.
Chengpeng Wang (2025-26)
Specializes in static analysis to improve software reliability and performance. Bug detection methods have been adopted in industry; current work integrates neurosymbolic static analysis techniques such as LLMDFA and LLMSAN, which combine LLMs with symbolic analysis to suppress hallucinations and enable customization without compilation.
Zhuo Zhang (2025)
Renowned hacker with significant real-world impact, including preventing $2 million loss. Led teams to victory in DEFCON CTF 2020 and Paradigm CTF 2023. Award-winning research in software and malware analysis earned 2024 ACM SIGSAC Distinguished Dissertation Award.