Reasoning with Large Language Models

Time: 08:00, Monday, March 31, 2025

Venue: Room P104, Building D3, ĐH Bách Khoa Hà Nội

Seminar: Machine Learning and Data Mining

Speaker: Nguyễn Tùng

Affiliation: Faculty of Mathematics and Informatics, Đại học Bách Khoa Hà Nội

Abstract

Large Language Models (LLMs) such as OpenAI o1, DeepSeek, and Gemini have demonstrated remarkable capabilities in a wide range of tasks—from natural language understanding to code generation and problem solving. Yet, the extent to which these models can perform reasoning, both symbolic and commonsense, remains an open and evolving question. This seminar explores the current landscape of reasoning with LLMs, covering core paradigms such as chain-of-thought prompting, tool use, retrieval-augmented generation, and self-refinement techniques. I will examine how LLMs simulate reasoning through statistical associations versus structured logical processes, and discuss recent benchmarks, case studies, and limitations.
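To make the first of the paradigms listed above concrete, the sketch below shows the basic shape of chain-of-thought prompting: the same question is posed once directly and once with an instruction to write out intermediate steps. The function llm_generate is a hypothetical placeholder for any LLM completion call (it is not part of a specific library), and the example question is illustrative only.

# Minimal sketch of chain-of-thought prompting versus direct prompting.
# llm_generate is a hypothetical placeholder; plug in any LLM client.

def llm_generate(prompt: str) -> str:
    """Placeholder for a call to an LLM API; returns the model's text output."""
    raise NotImplementedError("plug in your preferred LLM client here")

QUESTION = "A train travels 120 km in 1.5 hours. What is its average speed?"

# Direct prompting: ask for the answer only.
direct_prompt = f"Question: {QUESTION}\nAnswer:"

# Chain-of-thought prompting: ask the model to lay out intermediate
# reasoning steps before committing to a final answer.
cot_prompt = (
    f"Question: {QUESTION}\n"
    "Let's think step by step, then give the final answer on the last line."
)

if __name__ == "__main__":
    for name, prompt in [("direct", direct_prompt), ("chain-of-thought", cot_prompt)]:
        print(f"--- {name} prompt ---")
        print(prompt)
        # answer = llm_generate(prompt)  # uncomment once llm_generate is implemented

The only difference between the two prompts is the request for explicit intermediate steps; benchmarks discussed in the seminar compare how much this changes model accuracy on multi-step problems.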

