Saturday 29 November 2025
Researchers have long been fascinated by how humans evaluate the plausibility of answers to everyday questions. Our brains make quick judgments about whether a response feels right or wrong, often without our realizing it. To better understand this process, a team of scientists has developed a novel approach that compares human judgments with those of large language models (LLMs).
The researchers created a series of multiple-choice questions testing common-sense reasoning, then asked both humans and LLMs to rate the plausibility of each answer choice. But here's the twist: they didn't stop at presenting the questions alone. Alongside each question, they also gave participants rationales, short arguments for or against each answer choice.
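To make the setup concrete, here is a minimal sketch of how such a rating prompt might be assembled. The function name, wording, and 1–7 scale are illustrative assumptions, not details taken from the study:

```python
# Hypothetical sketch of the study's rating setup. All names, prompt
# wording, and the 1-7 scale are assumptions for illustration only.

def build_prompt(question, choice, rationale=None, stance=None):
    """Assemble the text shown to a rater (human or LLM).

    stance is "PRO" or "CON" when a rationale is attached; with no
    rationale, the prompt is the baseline (question-only) condition.
    """
    lines = [f"Question: {question}", f"Candidate answer: {choice}"]
    if rationale is not None:
        label = "Argument for" if stance == "PRO" else "Argument against"
        lines.append(f"{label} this answer: {rationale}")
    lines.append(
        "Rate the plausibility of this answer from 1 (implausible) to 7 (plausible)."
    )
    return "\n".join(lines)

# Example item (invented for illustration):
question = "Where would you most likely find a penguin?"
choice = "Antarctica"
pro = "Most wild penguin species live in the Southern Hemisphere, many on Antarctic coasts."

baseline = build_prompt(question, choice)
with_pro = build_prompt(question, choice, rationale=pro, stance="PRO")
```

Comparing ratings collected from the baseline prompt against the PRO and CON variants is what lets the researchers measure how much each rationale shifts a rater's judgment.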
When humans were presented with PRO rationales (arguments in favor of an answer), their ratings of plausibility increased significantly. This suggests that our brains are highly influenced by persuasive reasoning, and that we’re more likely to accept an answer if it’s backed up by a logical argument. Conversely, when CON rationales (arguments against an answer) were presented, human ratings decreased accordingly.
The LLMs, meanwhile, showed a similar pattern of influence, but with some interesting differences. OpenAI's model, for example, shifted more in response to PRO rationales than the non-OpenAI model tested. One possible explanation is a difference in training data, which may have exposed the model to more persuasive argumentation than other forms of reasoning.
But what happens when both PRO and CON rationales are presented together? That’s where things get really interesting. In this scenario, humans’ ratings of plausibility became even more nuanced, taking into account both the strengths and weaknesses of each answer choice. The LLMs, however, showed a more binary response – often tilting towards one side or the other based on the dominant rationale.
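The contrast described above can be caricatured with a toy model. The numbers and functions below are invented purely to illustrate the qualitative difference the article reports: graded integration of both rationales versus a near-binary tilt toward whichever rationale dominates.

```python
# Purely illustrative model of the reported pattern. The weights (0.5)
# and tilt size (1.5) are invented numbers, not results from the study.

def human_rating(base, pro_strength, con_strength):
    # Graded integration: each rationale shifts the rating proportionally
    # to its strength, so both sides of the argument are weighed.
    return base + 0.5 * pro_strength - 0.5 * con_strength

def llm_rating(base, pro_strength, con_strength):
    # Near-binary response: the rating tilts a fixed amount toward
    # whichever rationale is stronger, regardless of the margin.
    if pro_strength > con_strength:
        return base + 1.5
    if con_strength > pro_strength:
        return base - 1.5
    return base
```

Under this toy model, a slightly stronger PRO rationale nudges the human rating a little but swings the LLM rating by the full fixed amount, mirroring the nuanced-versus-binary contrast the researchers observed.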
These findings have significant implications for our understanding of human cognition and language processing. They suggest that our brains are constantly weighing evidence and making judgments about the plausibility of different answers – a process that’s shaped by both logical reasoning and emotional biases.
The use of LLMs in this study also highlights the potential of artificial intelligence to inform our understanding of human thought patterns. By analyzing how these language models respond to different types of rationales, researchers may be able to develop more sophisticated AI systems that better mimic human decision-making processes.
Cite this article: “The Influence of Rationales on Human and Artificial Intelligence Judgments”, The Science Archive, 2025.
Human Cognition, Language Models, Plausibility, Reasoning, Arguments, Persuasion, Logic, Biases, Artificial Intelligence, Decision-Making, Multiple-Choice Questions
