🔒 Lock-LLM: Prevent Unauthorized Knowledge Use from LLMs

Sat., Dec. 6, time TBD (p.m. PST)

Location: TBD

NeurIPS 2025 Workshop

Workshop Summary

Large Language Models (LLMs) have emerged as transformative tools across research and industry, revolutionizing how we interact with information. However, their immense capabilities bring critical security challenges: the same features that drive innovation can be exploited for malicious purposes through unauthorized distillation, fine-tuning, compression, or editing. These vulnerabilities pose severe threats, including intellectual property theft, generation of sophisticated disinformation, circumvention of safety alignment, and erosion of user trust in AI systems.

This workshop aims to bring together researchers and practitioners from academia and industry who are advancing the frontiers of LLM security and protection. We seek to confront the unauthorized use of LLMs head-on by exploring novel and robust mechanisms designed to make these models inherently resistant to exploitation while maintaining their beneficial capabilities. The workshop also hosts the 2025 TrustAI Rising Star Award.

Topics of interest include, but are not limited to:
1. Un-Distillable LLMs: Preventing unauthorized model replication and intellectual property theft
2. Un-Finetunable LLMs: Resisting malicious parameter updates and behavior alterations
3. Un-Compressible LLMs: Maintaining model integrity against unauthorized compression
4. Un-Editable LLMs: Safeguarding against knowledge tampering and misinformation injection
5. Un-Usable LLMs: Ensuring traceability and preventing misuse through watermarking and verification

Call for Applications: 2025 TrustAI Rising Star Award - Apply by September 2, 2025!

Keynote Speakers (A-Z order)

Yu Cheng

The Chinese University of Hong Kong

Charles Fleming

Cisco Research

Bhavya Kailkhura

Lawrence Livermore National Laboratory

Zico Kolter

Carnegie Mellon University

Huan Liu

Arizona State University

Dawn Song

University of California, Berkeley

Atlas Wang

University of Texas at Austin/XTX Markets

Call for Papers

The rapid advancement of Large Language Models (LLMs) has brought unprecedented capabilities alongside critical security challenges. The Lock-LLM workshop aims to confront the unauthorized use of LLMs head-on by exploring novel and robust mechanisms designed to make these models inherently resistant to exploitation while maintaining their beneficial capabilities. We invite high-quality submissions that advance our understanding of LLM protection mechanisms and address the dual-use nature of these technologies.

Topics of Interest

We encourage submissions on topics including, but not limited to:

  • Un-Distillable LLMs: Methods to prevent unauthorized model replication through distillation, including watermarking approaches, noise injection, adversarial perturbations, and system-level controls for intellectual property protection
  • Un-Finetunable LLMs: Techniques to prevent unauthorized parameter updates, including gradient-blocking mechanisms, non-differentiable components, and domain-specific fine-tuning restrictions
  • Un-Compressible LLMs: Approaches to maintain model integrity against unauthorized compression, quantization, and pruning, including degradation mechanisms and post-hoc identification methods
  • Un-Editable LLMs: Safeguards against malicious knowledge editing, misinformation injection, and fact tampering, including detection-based approaches and tamper-resistant behaviors
  • Un-Usable LLMs: Watermarking, fingerprinting, and verification mechanisms for ownership verification, usage tracking, and preventing model substitution attacks
  • Theoretical Foundations: Information-theoretic limits, cryptographic protocols, and formal guarantees for LLM protection
  • Evaluation Frameworks: Benchmarks, metrics, and methodologies for assessing LLM security, robustness, and protection effectiveness
  • Real-world Applications: Case studies, deployment strategies, and practical considerations for secure LLM systems in production environments
  • Ethical and Societal Implications: Addressing dual-use concerns, access equity, and responsible innovation in LLM security

Submission Guidelines

Papers can be up to 6 pages (excluding references and supplementary material). All submissions must use the NeurIPS 2025 LaTeX style file. Papers should be submitted through OpenReview. All submissions will undergo double-blind peer review. Authors must ensure their submissions are properly anonymized. We welcome submissions currently under review at other venues. Papers accepted at the main NeurIPS 2025 conference will go through a light-touch review solely to assess their relevance, without a second peer review process.

Important Dates

Paper Submission Deadline: August 22, 2025 (AoE)
Notification of Acceptance: September 22, 2025 (AoE)
Camera-Ready Deadline: October 15, 2025 (AoE)
Workshop Date: December 6 or 7 (TBD), 2025

Awards

Outstanding submissions will be recognized with Best Paper and Runner-up Best Paper awards. Winners will present their work in dedicated oral sessions.

Submission link: coming soon.

2025 TrustAI Rising Star Award Announcement

The TrustAI Rising Star Award was established to honor early-career researchers (senior Ph.D. students and postdoctoral fellows) who have made significant contributions to the trustworthiness, security, and responsible use of large language models. In 2025, the award will be hosted by the Lock-LLM Workshop at NeurIPS 2025, and two researchers will be selected as awardees. The awardees will receive certificates and give oral presentations at the workshop to showcase their research, share insights, and connect with other researchers in the field.

Call for 2025 TrustAI Rising Star Award Applications

Awards will be announced in early December.
Award talks and ceremonies will take place at the 1st Lock-LLM Workshop: Prevent Unauthorized Knowledge Use from LLMs, co-located with NeurIPS 2025.

Candidate materials due: September 2, 2025
Reference letters due: September 9, 2025

Objective

At the 2025 NeurIPS Lock-LLM Workshop, the TrustAI Rising Star Award will be presented to two young researchers who have made significant contributions to the trustworthiness, security, and responsible use of large language models. The awardees will give a presentation about their research at the workshop. We strongly encourage researchers from minority or underrepresented groups to apply.

Domains of Interest

We welcome applicants working on, but not limited to, the following topics:

  • Security, integrity, and robustness of LLMs
  • Defenses against unauthorized distillation, fine-tuning, compression, editing, or misuse
  • Watermarking, ownership verification, and intellectual property protection
  • Dual-use risk mitigation for generative models
  • Trustworthy AI and ethics in the context of LLM deployment
  • Robustness certification and verification for LLMs
  • Privacy and security in LLM-based systems
  • Novel applications or methods advancing trustworthy and secure LLMs

Eligibility and Requirements

  • Senior Ph.D. students who enrolled in a Ph.D. program before December 2022, or
  • Postdoctoral researchers who obtained their Ph.D. after April 2023

Application Materials

Applicants are required to submit the following via this form (except for recommendation letters):

  1. CV, including a list of publications
  2. Research statement (up to 2 pages, single column, excluding references) describing your research accomplishments and future directions
  3. Two recommendation letters, to be uploaded by your referees before September 9, 2025 (AoE) via this form

Schedule (PST)

Time | Session | Host/Speaker
8:30-9:00 | Registration / Poster Setup | -
9:00-9:10 | Opening Remarks | -
9:10-10:10 | Keynote 1 | Atlas Wang (UT Austin / XTX Markets)
10:10-11:10 | Keynote 2 | Yu Cheng (CUHK)
11:10-12:10 | Keynote 3 | Dawn Song (University of California, Berkeley)
12:10-13:10 | Lunch + Poster Session 1 | -
13:10-14:10 | Keynote 4 | Huan Liu (Arizona State University)
14:10-15:10 | Keynote 5 | Bhavya Kailkhura (Lawrence Livermore National Laboratory)
15:10-15:20 | Oral Presentation | Best Paper Award
15:20-15:30 | Oral Presentation | Runner-up Best Paper Award
15:30-16:30 | Keynote 6 | [Tentative] Zico Kolter (Carnegie Mellon University)
16:30-17:30 | Keynote 7 | [Tentative] Charles Fleming (Cisco Research)
17:30-17:50 | Poster Session 2 | -
17:50-18:00 | Rising Star Intro and Award | Yu Cheng (CUHK)
18:00-18:10 | Rising Star Talk 1 | Speaker A
18:10-18:20 | Rising Star Talk 2 | Speaker B
18:20-18:25 | Closing Remarks | -
18:25- | Social Event (TBD) | -

Organizers

Tianlong Chen

University of North Carolina at Chapel Hill

Ang Li

University of Maryland, College Park

Furong Huang

University of Maryland, College Park

Avi Schwarzschild

University of North Carolina at Chapel Hill

Neil Zhenqiang Gong

Duke University

Bo Li

University of Chicago

Yuxiong He

Snowflake

Student Organizers

Pingzhi Li

The University of North Carolina at Chapel Hill

Guoheng Sun

University of Maryland, College Park

Zhen Tan

Arizona State University

Ziyao Wang

University of Maryland, College Park

Song Wang

University of Virginia

Contacts

Contact the Organizing Committee: tianlong@cs.unc.edu, angliece@umd.edu