Abstract
We report an experimental study in which participants make a series of linguistic judgements with help from a fictional text-editing aid. The study illuminates how acceptance of a software aid's advice might be affected by, and might in turn affect, users' trust and confidence.
We measure the propensity to accept automated advice by adapting Signal Detection Theory (SDT), in particular its construct of bias. Our study demonstrates a significant difference in bias between groups who interacted with aids that performed at different levels of reliability. Participants' pre-task trust in similar systems also affected bias towards the aid, but we found no evidence of a corresponding effect of prior perceived self-efficacy.
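For readers unfamiliar with SDT, the sketch below shows the conventional equal-variance Gaussian measures of sensitivity (d') and bias (criterion c). The abstract does not specify the paper's exact adaptation, so the mapping of responses to hits and false alarms here (accepting correct vs. incorrect suggestions) is an assumption for illustration only.

```python
# Illustrative sketch, not the paper's method: conventional SDT sensitivity (d')
# and bias (criterion c), computed from hit and false-alarm rates.
from scipy.stats import norm


def sdt_measures(hit_rate: float, false_alarm_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion_c) for given hit and false-alarm rates."""
    z_hit = norm.ppf(hit_rate)
    z_fa = norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa                 # sensitivity: how well good advice is told apart from bad
    criterion_c = -0.5 * (z_hit + z_fa)    # bias: propensity to accept (negative c) or reject (positive c) advice
    return d_prime, criterion_c


# Hypothetical participant: accepts 80% of correct suggestions and 30% of incorrect ones.
print(sdt_measures(0.80, 0.30))  # d' ~ 1.37, c ~ -0.16 (a slight bias towards accepting advice)
```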
Participants showed an awareness of their own performance, demonstrated by higher self-reported confidence in correct responses than in incorrect ones. Further, participants using a more reliable aid were more confident when accepting than when rejecting its advice. Overall, participants were overconfident in their judgements.
Our research demonstrates the opportunities of using an experimental paradigm, analysed with SDT, to study aspects of performance, trust, and confidence in knowledge-based cognitive tasks supported by an artificially intelligent aid.
| Original language | English |
| --- | --- |
| Number of pages | 15 |
| Journal | Behaviour & Information Technology |
| Early online date | 17 Mar 2025 |
| DOIs | |
| Publication status | E-pub ahead of print - 17 Mar 2025 |