Publication Activity
Record type:
conference proceedings paper (D)
Home Department:
Institute for Research and Applications of Fuzzy Modeling (94410)
Title:
Generalization of LLMs in SAT Reasoning via Structured Scratchpad Interaction
Citation
Dušek, F., Hyner, P. and Hůla, J. Generalization of LLMs in SAT Reasoning via Structured Scratchpad Interaction.
In:
Conference on Artificial Intelligence and Theorem Proving: AITP 2025 Book of Abstracts, 2025-08-31, Aussois.
Subtitle
Publication year:
2025
Field:
Number of pages:
Page from:
not specified
Page to:
not specified
Form of publication:
Electronic version
ISBN code:
not specified
ISSN code:
Proceedings title:
AITP 2025 Book of Abstracts
Proceedings:
International
Publisher name:
not specified
Place of publishing:
not specified
Country of publication:
Conference name:
Conference on Artificial Intelligence and Theorem Proving
Conference venue:
Aussois
Conference start date:
Event type by nationality of participants:
European event
WoS code:
EID:
Key words in English:
Large Language Models; SAT Solving; CDCL; Algorithmic Reasoning; Neural Symbolic Learning; Scratchpad Memory; Out-of-Distribution Generalization; Tool-Augmented Inference
Annotation in original language:
Large Language Models (LLMs) have recently demonstrated promising multi-step reasoning capabilities through techniques like chain-of-thought prompting and scratchpads. However, they struggle with tasks requiring complex search and backtracking, such as SAT solving. Our goal is to examine whether LLMs can learn and generalize the Conflict-Driven Clause Learning (CDCL) algorithm from supervised solver traces.
Annotation in English language:
Large Language Models (LLMs) have recently demonstrated promising multi-step reasoning capabilities through techniques like chain-of-thought prompting and scratchpads. However, they struggle with tasks requiring complex search and backtracking, such as SAT solving. Our goal is to examine whether LLMs can learn and generalize the Conflict-Driven Clause Learning (CDCL) algorithm from supervised solver traces.
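The annotation describes training LLMs on supervised solver traces of the CDCL algorithm, emitted step by step into a scratchpad. A minimal sketch of how such traces might be produced is shown below; this is an illustrative assumption, not the authors' actual pipeline, and it uses plain DPLL with chronological backtracking (the clause-learning step of full CDCL is omitted). Each solver action is logged as a text line of the kind an LLM could be trained to generate.

```python
# Illustrative sketch (NOT the paper's method): produce a step-by-step
# "scratchpad" trace from a toy SAT search on a CNF formula.
# Clauses are lists of nonzero ints; a positive literal l means variable l
# is true, -l means it is false.

def unit_propagate(clauses, assignment, trace):
    """Assign literals forced by unit clauses, logging each step.
    Returns False on a conflict (an empty, unsatisfied clause)."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause
                          if l not in assignment and -l not in assignment]
            if not unassigned:
                trace.append(f"CONFLICT in clause {clause}")
                return False
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment.add(lit)
                trace.append(f"UNIT {lit} from clause {clause}")
                changed = True
    return True

def dpll(clauses, assignment, trace):
    """DPLL with chronological backtracking; logs decisions and backtracks."""
    if not unit_propagate(clauses, assignment, trace):
        return False
    variables = {abs(l) for c in clauses for l in c}
    free = sorted(v for v in variables
                  if v not in assignment and -v not in assignment)
    if not free:
        trace.append("SAT")
        return True
    v = free[0]
    for lit in (v, -v):
        trace.append(f"DECIDE {lit}")
        if dpll(clauses, set(assignment) | {lit}, trace):
            return True
        trace.append(f"BACKTRACK {lit}")
    return False

# Example formula: (x1 or x2) and (not x1 or x2) and (not x2 or x3)
formula = [[1, 2], [-1, 2], [-2, 3]]
trace = []
result = dpll(formula, set(), trace)
print("SAT" if result else "UNSAT")
for step in trace:
    print(step)
```

In a supervised setup along these lines, the formula would be the prompt and the trace lines the target scratchpad tokens, so generalization can be measured by running the trained model on formulas larger than those seen in training.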