A Multi-Attribute Mixture Expert Reasoning Approach Based on Large Language Models
29 Pages Posted: 6 Mar 2025
Abstract
Large language models (LLMs) face significant challenges in knowledge-intensive and complex logical reasoning tasks due to limitations in knowledge timeliness, logical coherence, and error accumulation. To address these issues, this paper proposes a Multi-Attribute Mixture Expert (MAME) framework that integrates group decision theory with multi-attribute prompting (MAP). MAME enhances reasoning through iterative feedback among agents endowed with diverse social attributes, reducing logical biases and improving answer diversity. Extensive experiments on four datasets demonstrate that MAME outperforms state-of-the-art methods on both mathematical and logical reasoning tasks. Additionally, we reveal a trade-off between agent quantity and reasoning efficiency: increasing the number of agents improves diversity but introduces coordination challenges. This work provides a novel paradigm for enhancing LLMs' reasoning capabilities through structured multi-agent collaboration. The code is open source: https://github.com/ioio0614/MAME
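The multi-agent scheme the abstract describes can be sketched in miniature. The following is an illustrative toy only, not the paper's implementation: real MAME agents would be LLMs conditioned on attribute-specific prompts, whereas here each "expert" is a stub function, the attribute names (`careful`, `hasty`, `cautious`) are hypothetical, and the task (summing a list) stands in for a reasoning problem. It shows the two mechanisms the abstract names: iterative feedback (agents see peers' previous answers) and group decision making (majority-vote aggregation), which together mask one systematically biased agent.

```python
from collections import Counter

def expert(attribute, question, peer_answers):
    """Stub agent. A real MAME agent would query an LLM with an
    attribute-conditioned prompt plus the peers' prior answers."""
    if attribute == "hasty":
        return sum(question) + 1  # simulates a systematically biased agent
    if attribute == "cautious" and peer_answers:
        # feedback step: this agent defers to the current group majority
        return Counter(peer_answers).most_common(1)[0][0]
    return sum(question)  # "ground truth" for the toy task

def mame_answer(attributes, question, rounds=2):
    """Run several feedback rounds, then aggregate by majority vote."""
    answers = []
    for _ in range(rounds):
        answers = [expert(a, question, answers) for a in attributes]
    return Counter(answers).most_common(1)[0][0]

print(mame_answer(["careful", "hasty", "cautious"], [2, 3, 5]))  # → 10
```

Despite the biased agent returning 11, the group decision converges on 10, the pattern of bias reduction through attribute diversity and aggregation that the abstract claims at LLM scale.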
Keywords: Large Language Model, Multi-Agent, Group Decision Making