Large Language Models (LLMs) have emerged as transformative tools across research and industry,
revolutionizing how we interact with information. However, their immense capabilities bring critical
security challenges—the same features that drive innovation can be exploited for malicious purposes
through unauthorized distillation, fine-tuning, compression, or editing. These vulnerabilities pose severe threats, including intellectual property theft, generation of sophisticated disinformation, circumvention of safety alignment, and erosion of user trust in AI systems.
This workshop aims to bring together researchers and practitioners from academia
and industry who are advancing the frontiers of LLM security and protection. We seek to confront the
unauthorized use of LLMs head-on by exploring novel and robust mechanisms designed to make these
models inherently resistant to exploitation while maintaining their beneficial capabilities.
The workshop also hosts the 2025 TrustAI Rising Star Award.
Topics of interest include, but are not limited to:
1. Un-Distillable LLMs: Preventing unauthorized model replication and intellectual property theft
2. Un-Finetunable LLMs: Resisting malicious parameter updates and behavior alterations
3. Un-Compressible LLMs: Maintaining model integrity against unauthorized compression
4. Un-Editable LLMs: Safeguarding against knowledge tampering and misinformation injection
5. Un-Usable LLMs: Ensuring traceability and preventing misuse through watermarking and verification (see the detection sketch after this list)
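To make topic 5 concrete: a text watermarking scheme biases generation toward a pseudo-random "green" subset of the vocabulary, so a verifier who knows the seeding rule can test whether a given text is statistically over-green. Below is a minimal detection sketch in the spirit of the green-list watermark of Kirchenbauer et al. (2023); the hash-based seeding, the GAMMA parameter, and all function names are illustrative assumptions of ours, not a reference implementation.

```python
import hashlib
import math

# Toy sketch of "green-list" watermark detection in the style of
# Kirchenbauer et al. (2023). The hashing scheme, GAMMA, and all
# names below are illustrative assumptions, not any system's API.

GAMMA = 0.5  # assumed fraction of the vocabulary treated as "green"

def is_green(prev_token: int, token: int) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GAMMA

def watermark_z_score(token_ids: list[int]) -> float:
    """z-statistic for the green-token count; large values suggest watermarked text."""
    n = len(token_ids) - 1  # number of scored (previous, current) pairs
    hits = sum(is_green(p, t) for p, t in zip(token_ids, token_ids[1:]))
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

if __name__ == "__main__":
    # Token IDs would normally come from the model's tokenizer.
    print(watermark_z_score([17, 4242, 99, 314, 2718, 161, 803, 5]))
```

Under the null hypothesis of unwatermarked text, the green-token count is roughly Binomial(n, GAMMA), so a z-score above about 4 is strong evidence that the text carries the watermark.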
We invite high-quality submissions that advance our understanding of LLM protection mechanisms and address the dual-use nature of these technologies. Submissions on the topics listed above are encouraged, as are related directions not covered by that list.
Papers may be up to 6 pages, excluding references and supplementary material. All submissions must use the NeurIPS 2025 LaTeX style file and be submitted through OpenReview. Review is double-blind, and authors must ensure their submissions are properly anonymized. Submissions currently under review at other venues are welcome. Papers accepted at the main NeurIPS 2025 conference will receive only a light-touch relevance check rather than a second round of peer review.
| Milestone | Date |
| --- | --- |
| Paper Submission Deadline | August 22, 2025 (AoE) |
| Notification of Acceptance | September 22, 2025 (AoE) |
| Camera-Ready Deadline | October 15, 2025 (AoE) |
| Workshop Date | December 6 or 7 (TBD), 2025 |
Outstanding submissions will be recognized with Best Paper and Runner-up Best Paper awards. Winners will present their work in dedicated oral sessions.
The TrustAI Rising Star Award was established to honor early-career researchers (senior Ph.D. students and postdoctoral fellows) who have made significant contributions to the trustworthiness, security, and responsible use of large language models. In 2025, the award will be hosted by the Lock-LLM Workshop at NeurIPS 2025, and two researchers will be selected as awardees. The awardees will receive certificates and give oral presentations at the workshop to showcase their research, share insights, and connect with other researchers in the field.
Awards will be announced in early December. Award talks and the ceremony will take place at The 1st Lock-LLM Workshop: Prevent Unauthorized Knowledge Use from LLMs, co-located with NeurIPS 2025.
Candidate materials due: September 2, 2025
Reference letters due: September 9, 2025
At the Lock-LLM Workshop at NeurIPS 2025, the TrustAI Rising Star Award will be presented to two early-career researchers for their contributions to the trustworthiness, security, and responsible use of large language models. Each awardee will give a presentation on their research at the workshop. We strongly encourage researchers from minority or underrepresented groups to apply.
We welcome applicants working on, but not limited to, the workshop's topics of interest listed above.
Applicants are required to submit all application materials via the application form, except for recommendation letters, which are submitted separately.
| Sessions | Title | Host/Speaker |
| --- | --- | --- |
| Registration / Poster Setup | - | - |
| Opening Remarks | - | - |
| Keynote 1 | - | Atlas Wang (UT Austin / XTX Markets) |
| Keynote 2 | - | Yu Cheng (CUHK) |
| Keynote 3 | - | Dawn Song (University of California, Berkeley) |
| Lunch + Poster Session 1 | - | - |
| Keynote 4 | - | Huan Liu (Arizona State University) |
| Keynote 5 | - | Bhavya Kailkhura (Lawrence Livermore National Laboratory) |
| Oral Presentation | Best Paper Award | - |
| Oral Presentation | Runner-up Best Paper Award | - |
| Keynote 6 | - | [Tentative] Zico Kolter (Carnegie Mellon University) |
| Keynote 7 | - | [Tentative] Charles Fleming (Cisco Research) |
| Poster Session 2 | - | - |
| Rising Star Intro and Award | - | Yu Cheng (CUHK) |
| Rising Star Talk 1 | - | Speaker A |
| Rising Star Talk 2 | - | Speaker B |
| Closing Remarks | - | - |
| Social Event (TBD) | - | - |
Contact the Organizing Committee: tianlong@cs.unc.edu, angliece@umd.edu