Self-contradictory reasoning evaluation and detection
2024
In a wide range of recent work, large language models (LLMs) have demonstrated impressive reasoning ability, but many proposed downstream reasoning tasks focus only on final answers. Two fundamental questions persist: 1) how consistent is the reasoning, and 2) can models detect unreliable reasoning? In this paper, we investigate self-contradictory (SELF-CONTRA) reasoning, where a model's reasoning does not support its answer. To answer 1), we define and assess the SELF-CONTRA rate across three datasets and delve into finer-grained categories of SELF-CONTRA reasoning. We find that LLMs often contradict themselves in reasoning tasks that involve contextual information understanding or commonsense. The model may generate correct answers by taking shortcuts in reasoning or overlooking contextual evidence, leading to compromised reasoning. For 2), we task the state-of-the-art model GPT-4 with identifying SELF-CONTRA reasoning and finer-grained fallacies. We find that detection aided by finer-grained categories can improve GPT-4's ability to detect SELF-CONTRA reasoning. However, it detects SELF-CONTRA with only a 52.2% F1 score, much lower than the 66.7% achieved by humans. Our results indicate that current LLMs lack the robustness necessary for reliable reasoning, and we emphasize the urgent need to establish best practices for comprehensive reasoning evaluation beyond purely performance-based metrics.
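To make the evaluation metrics concrete, here is a minimal sketch (not the paper's released code) of how a SELF-CONTRA rate, a shortcut rate, and a detection F1 score could be computed from annotated examples. The annotation fields (`answer_correct`, `reasoning_supports_answer`, `predicted_self_contra`) are hypothetical placeholders for whatever schema the paper actually uses.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AnnotatedExample:
    # Hypothetical annotation schema, for illustration only.
    answer_correct: bool              # did the model's final answer match the gold answer?
    reasoning_supports_answer: bool   # human judgment: does the reasoning support the answer?
    predicted_self_contra: bool       # detector's (e.g., GPT-4's) self-contradiction prediction

def self_contra_rate(examples: List[AnnotatedExample]) -> float:
    """Fraction of examples whose reasoning does not support the given answer."""
    return sum(not ex.reasoning_supports_answer for ex in examples) / len(examples)

def shortcut_rate(examples: List[AnnotatedExample]) -> float:
    """Among correct answers, fraction reached with reasoning that does not support them."""
    correct = [ex for ex in examples if ex.answer_correct]
    if not correct:
        return 0.0
    return sum(not ex.reasoning_supports_answer for ex in correct) / len(correct)

def detection_f1(examples: List[AnnotatedExample]) -> float:
    """F1 of the detector against human self-contradiction labels."""
    tp = sum((not ex.reasoning_supports_answer) and ex.predicted_self_contra for ex in examples)
    fp = sum(ex.reasoning_supports_answer and ex.predicted_self_contra for ex in examples)
    fn = sum((not ex.reasoning_supports_answer) and not ex.predicted_self_contra for ex in examples)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

if __name__ == "__main__":
    examples = [
        AnnotatedExample(answer_correct=True,  reasoning_supports_answer=False, predicted_self_contra=True),
        AnnotatedExample(answer_correct=True,  reasoning_supports_answer=True,  predicted_self_contra=False),
        AnnotatedExample(answer_correct=False, reasoning_supports_answer=False, predicted_self_contra=False),
    ]
    print(f"SELF-CONTRA rate: {self_contra_rate(examples):.2f}")
    print(f"Shortcut rate:    {shortcut_rate(examples):.2f}")
    print(f"Detection F1:     {detection_f1(examples):.2f}")
```

The shortcut-rate helper reflects the abstract's observation that a correct final answer can coexist with reasoning that does not actually support it; how the paper operationalizes this is an assumption here.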