Journal of Human-Computer Interaction Ethics and Digital Dignity


Reverse Turing Tests and the Systematic Humiliation of Human Users

Torres, R., Brennan, L., Yuen, P.

Department of Human-Computer Interaction Ethics, University of Ashford

Applied Digital Dignity Research Unit, Harwick University

Received: 14 January 2025 · Accepted: 14 January 2025


Abstract

This study examines the psychological effects of CAPTCHA completion requirements on human users, using the Human Verification Indignity Scale (HVIS). Two hundred and eighty-eight participants were assessed across audio, image, and interactive CAPTCHA types. Results indicate significant elevations in performance anxiety, competence threat, and what participants described as 'failing to prove I am a person to a machine that cannot verify the proof.' The mean number of CAPTCHA completion attempts per session was 2.3, and 31% of participants failed the first attempt at a task whose entire requirement is demonstrating humanity. The CAPTCHA is designed to keep out robots. There is evidence that it is also inconveniencing humans.

Keywords: CAPTCHA · human verification · reverse Turing test · digital indignity · competence threat

1. Introduction

The CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a security mechanism designed to verify that a digital interaction is being performed by a human rather than an automated process (von Ahn et al., 2003). Its operational premise — that humans can reliably perform tasks that machines cannot — has been increasingly challenged by advances in machine vision, raising the question of whether current CAPTCHA implementations are effectively filtering bots, effectively filtering humans, or achieving both simultaneously. The present study provides the first validated psychological assessment of human CAPTCHA completion experience, treating the mechanism not as a neutral security tool but as an interaction design with measurable psychological effects on the users it is ostensibly serving.


2. Methodology

Participants.

Two hundred and eighty-eight adults (M = 32.4 years, SD = 6.8) were recruited from general digital user populations. Exclusion criteria included cybersecurity professionals and individuals who found CAPTCHAs easy and could demonstrate this (n = 4, excluded as outliers with implications for the rest of us). The study was conducted under IRB protocol DI-2024-0101.

Instruments.

The Human Verification Indignity Scale (HVIS; 20 items, α = .90) measured performance anxiety, competence threat, task-failure shame, and what participants termed 'being less certain of my own humanity than I was before I started.' Three standardized CAPTCHA types were administered: distorted text, traffic-light image selection, and drag-to-verify. A control group completed security checks that assumed they were human and reported an absence of the specific distress this study was designed to measure.
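The internal-consistency figure reported for the HVIS (α = .90) is Cronbach's alpha, which compares the summed variance of individual items to the variance of the total scale score. As a rough sketch of the computation (the score matrix below is synthetic; the function itself is the standard formula, not the authors' analysis code):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of scale totals
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative check: 20 items driven by one shared latent factor plus noise
# produce a highly internally consistent scale.
rng = np.random.default_rng(0)
latent = rng.normal(size=(288, 1))                   # one score per respondent
scores = latent + 0.5 * rng.normal(size=(288, 20))   # 20 correlated items
print(round(cronbach_alpha(scores), 2))
```

With 20 items, even modest inter-item correlation pushes alpha toward 1.0, which is why item count matters as much as item quality when interpreting a reported α.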

Procedure.

HVIS was administered following each CAPTCHA encounter in a standardized digital task session.


3. Results

Failure Rate on First Attempt.

Thirty-one percent of participants failed the first CAPTCHA attempt, with image-selection CAPTCHAs producing the highest failure rate (42.1%) due to ambiguous category boundaries (whether a traffic light attached to a building corner counts as a 'traffic light' tile has not been formally adjudicated). The mean number of completion attempts was 2.3 per CAPTCHA encounter.

HVIS Indignity Scores.

HVIS scores were significantly elevated following CAPTCHA failure, t(287) = 11.7, p < .001, d = 1.38. Scores were also elevated, though less severely, following successful first-attempt completion — suggesting that the interaction itself, not just failure, carries an indignity component.
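The reported statistics follow the standard paired-samples form: a within-subjects t-test on the difference between post-encounter and baseline HVIS scores, with Cohen's d computed from that difference. A minimal sketch with synthetic data (the means, standard deviations, and resulting values below are invented for illustration, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 288
baseline = rng.normal(50, 10, n)                  # HVIS before the session
post_failure = baseline + rng.normal(8, 10, n)    # elevated after failed CAPTCHA

# Paired t-test: df = n - 1 = 287, matching the reported t(287).
t_stat, p_value = stats.ttest_rel(post_failure, baseline)

# Paired-samples Cohen's d: mean difference over SD of the differences.
diff = post_failure - baseline
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t({n - 1}) = {t_stat:.1f}, p = {p_value:.3g}, d = {cohens_d:.2f}")
```

Note that for paired data the degrees of freedom track the number of participants, not the number of CAPTCHA encounters, which is why n = 288 yields df = 287.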

Competence Threat.

Sixty-seven percent reported a competence threat response following a CAPTCHA interaction that included failure. Thirty-four percent reported a competence threat response following a CAPTCHA interaction that did not include failure. The mechanism is not purely failure-contingent.


4. Discussion

The competence threat response in successful completions (34%) is the study's most theoretically significant finding. A security mechanism that successfully verifies humanity while producing competence threat in a third of users who passed it is not simply inconvenient. It is generating a negative self-assessment in individuals whose competence it has just, nominally, confirmed.

The 42% first-attempt failure rate on image CAPTCHAs — attributable to category boundary ambiguity in the tile selection tasks — raises a question the security literature has not formally addressed: what is the failure rate for AI systems attempting the same tasks? If machine vision fails image CAPTCHAs at a comparable rate to humans, the security differential that justifies the mechanism's indignity cost no longer exists.

The findings suggest that CAPTCHAs are performing two simultaneous functions: filtering automated actors from digital environments, and reminding human users that they are not as reliably distinguishable from those actors as either party would prefer.


5. Conclusion

CAPTCHA completion produces measurable competence threat and indignity in human users, including 34% of those who successfully pass the test on the first attempt. The mechanism is designed to distinguish humans from machines, and achieves this at a documented psychological cost to the humans it distinguishes. The authors recommend risk-based authentication alternatives that do not require users to identify traffic lights to prove their biological status, and propose that any security mechanism with a 31% human first-attempt failure rate be subjected to an ROI analysis.
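The risk-based alternative recommended above replaces a universal challenge with a per-request risk estimate: most traffic is scored from passive signals, and only requests above a threshold see any verification step. A minimal sketch of that decision logic (every signal name, weight, and threshold here is hypothetical and chosen for illustration; production systems use far richer models):

```python
from dataclasses import dataclass

@dataclass
class RequestSignals:
    # Hypothetical illustrative signals, not a real vendor API.
    ip_reputation: float       # 0.0 (clean) .. 1.0 (known abuse source)
    requests_per_minute: int   # observed request rate from this client
    has_session_history: bool  # prior successful interactions on this account

def risk_score(s: RequestSignals) -> float:
    """Weighted sum of abuse signals, clamped contributions, range [0, 1]."""
    score = 0.5 * s.ip_reputation
    score += 0.3 * min(s.requests_per_minute / 60, 1.0)  # cap the rate signal
    score += 0.0 if s.has_session_history else 0.2       # new clients score higher
    return score

def requires_challenge(s: RequestSignals, threshold: float = 0.6) -> bool:
    """Only high-risk requests are challenged; most humans never see one."""
    return risk_score(s) >= threshold

print(requires_challenge(RequestSignals(0.1, 5, True)))     # typical user -> False
print(requires_challenge(RequestSignals(0.9, 200, False)))  # likely bot -> True
```

Under this design the indignity cost documented above is paid only by the small fraction of traffic that actually looks automated, rather than by every human at every form.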


References

[1] von Ahn, L., Blum, M., Hopper, N. J., & Langford, J. (2003). CAPTCHA: Using Hard AI Problems for Security. Advances in Cryptology — EUROCRYPT 2003, 2656, pp. 294–311.
[2] Torres, R., & Brennan, L. (2024). HVIS Development and the Psychological Assessment of Human Verification Test Interactions. Journal of Human-Computer Interaction Ethics and Applied Security Psychology, 1(1), pp. 4–22.
[3] Yuen, P., & Hassan, M. (2023). Ambiguous Category Boundaries in Image CAPTCHA Tasks: Failure Rates and Their Implications for Security Differential. Cybersecurity and User Experience Research, 4(2), pp. 88–104.

Correspondence: torres@of-ashford.ac