LAL: Library-Aided Language Model
Authors: Victor Chen
Abstract:
Prompting methods such as Chain of Thought (CoT) improve the performance of Large Language Models (LLMs) by decomposing tasks into steps. To solve each step accurately, approaches such as Program-Aided Language (PAL) models prompt LLMs to generate Python code and execute it with an external interpreter. In these approaches, LLMs attempt to map problem descriptions onto low-level instructions, which can be challenging for complicated domain-specific applications. In this paper, we present Library-Aided Language (LAL) models, which prompt LLMs with domain-specific function libraries. The LLM generates function calls at each step and receives intermediate feedback from each function execution. This allows LLMs to simplify problem solving by reasoning at the abstract function level instead of in atomic operations. We demonstrate the effectiveness of LAL on mathematical, symbolic, and algorithmic reasoning tasks from popular benchmarks. Experiments show satisfactory results from reasoning at the function level via domain-specific libraries. Various extensions of LAL, including solving problems with multiple or blended libraries, as well as the automatic generation of domain-specific libraries, are also studied in this research.
Keywords: chain of thought, few-shot prompting, large language model, prompt engineering
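The execute-and-feed-back loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy library, the scripted "LLM" steps, and the dispatch function are all hypothetical, and a real system would have the LLM condition each new call on the accumulated feedback.

```python
# Minimal sketch of a Library-Aided Language (LAL) execution loop.
# Hypothetical toy domain-specific function library.
LIBRARY = {
    "add": lambda x, y: x + y,
    "multiply": lambda x, y: x * y,
}

def run_lal(steps):
    """Execute LLM-generated function calls one step at a time,
    collecting intermediate feedback after each execution."""
    feedback = []
    for name, args in steps:
        result = LIBRARY[name](*args)          # execute the library call
        feedback.append((name, args, result))  # returned to the LLM as feedback
    return feedback

# Scripted stand-in for LLM output when solving (3 + 4) * 5:
steps = [("add", (3, 4)), ("multiply", (7, 5))]
trace = run_lal(steps)
print(trace[-1][-1])  # final answer: 35
```

Because the LLM emits calls against named library functions rather than raw Python, each step operates at the abstraction level of the domain rather than at the level of atomic operations.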