Do humans recalibrate the confidence of advisers or take confidence at face value?
Keywords
adviser preference
calibration
metacognition
overconfidence
Artificial Intelligence
Computer Science Applications
Human-Computer Interaction
Cognitive Neuroscience
URI
https://hdl.handle.net/20.500.14018/27046
Abstract
Who we choose to learn from is influenced by the relative confidence of potential informants (Birch, Akmal, & Frampton, 2010). More confident advisers are preferred on the assumption that confidence is a good indicator of accuracy. However, accuracy and confidence are often not calibrated, whether because of strategic manipulation of confidence or unintentional failures of metacognition. When accuracy information is readily available, people are additionally vigilant about informants' calibration, penalizing confident yet incorrect advisers (Tenney, MacCoun, Spellman, & Hastie, 2007). The current experiment tested whether participants can leverage inferences about two advisers' calibration profiles to make optimal trial-by-trial decisions. We predicted that choice of advisers would reflect relative differences in the advisers' probability of being correct given their stated confidence (recalibrated confidence), rather than differences in stated confidence. The data did not support this prediction, but calibration had a modulating effect on choices: more confident advisers were more influential only when they were also calibrated. Further, participants' decision confidence was informed only by the confidence of the adviser whose advice was chosen, disregarding the confidence of the second adviser.
Type
Conference paper
Date
2022