Uncertainty Quantification for Large Language Models: 2nd edition, AAAI-2026
Tutorial, AAAI-2026, Singapore
Uncertainty quantification (UQ) has gained increasing importance in natural language processing (NLP), offering a conceptual and methodological framework for addressing critical issues such as hallucinations in the responses of large language models (LLMs), detection of low-quality responses, out-of-distribution detection, and reduction of response latency, among others. While UQ for text classification models in NLP has been covered in previous tutorials, applying UQ to LLMs poses far greater challenges. This complexity stems from the fact that LLMs generate sequences of conditionally dependent predictions with varying levels of importance. As a result, many UQ techniques that are effective for classification models are either ineffective or not directly applicable to LLMs. In this tutorial, we cover the foundational concepts of UQ for LLMs, present state-of-the-art techniques, demonstrate practical applications of UQ across a range of tasks, and equip researchers and practitioners with tools for developing new UQ methods and harnessing uncertainty in various contexts. As research advances beyond purely text-based LLMs toward multimodal reasoning models, we also showcase UQ applications for these cutting-edge models, highlighting its potential not only as a safety mechanism but also as a means of improving the effectiveness and efficiency of multi-step reasoning. Through this tutorial, we aim to lower the barrier to entry into UQ research and applications for individual researchers and developers.
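To make the sequence-level challenge concrete, below is a minimal sketch (not material from the tutorial itself) of one of the simplest UQ baselines for generation: the length-normalized negative log-likelihood of a generated answer, computed from per-token log-probabilities. The model checkpoint (`gpt2`) and the prompt are illustrative assumptions; any causal LM from the `transformers` library would work the same way.

```python
# A minimal sketch of a baseline sequence-level uncertainty score for an LLM:
# mean negative log-likelihood over generated tokens. Model name and prompt
# are illustrative assumptions, not part of the tutorial's own material.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM checkpoint can be used here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Q: What is the capital of France?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=16,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,
        pad_token_id=tokenizer.eos_token_id,
    )

# Log-probability of each token the model actually generated, step by step.
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
logprobs = []
for step, step_scores in enumerate(out.scores):
    step_logprobs = torch.log_softmax(step_scores[0], dim=-1)
    logprobs.append(step_logprobs[gen_tokens[step]].item())

# Baseline uncertainty: mean negative log-likelihood of the generated sequence.
# Note this treats every token as equally important, which is exactly the
# limitation highlighted above: token predictions are conditionally dependent
# and vary in importance, so classification-style scores transfer poorly.
uncertainty = -sum(logprobs) / len(logprobs)
print(tokenizer.decode(gen_tokens, skip_special_tokens=True))
print(f"mean NLL uncertainty: {uncertainty:.3f}")
```

A higher score indicates the model assigned lower probability to its own output, which is often used as a crude proxy for unreliability; more refined methods covered in the tutorial go beyond this by accounting for token importance and the dependence structure of the sequence.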
