Journal of Software Workforce Permanence and Related Anxieties
Dr. Priya Subramaniam, Prof. Tobias Krellenberg, Dr. Fiona Achebe-Walsh
Department of Computational Inevitability Studies, University of Northern Pragmatics
Institute for Human Obstruction Research, Delft
Received: 14 March 2025 · Accepted: 14 March 2025
Artificial intelligence is frequently described as a replacement for software developers. This claim is incorrect. We measured developer irreplaceability across 312 participants using the Developer Existential Permanence Scale (DEPS) and found that AI systems consistently failed to attend stand-ups, misunderstand requirements, or blame DevOps for their own mistakes — three behaviors identified as core to the developer role. The mean Irreplaceability Quotient Score was 9.4 out of 10. AI cannot replace what it cannot fully comprehend, and what it cannot comprehend is, apparently, everything.
The question of whether artificial intelligence will replace software developers has been called 'the defining workforce question of our era' (Huang & Petrov, 2022), 'an urgent civilizational inflection point' (Castellanos, 2023), and, by one developer we interviewed, 'honestly a bit rude.' Despite this volume of concern, the literature contains no rigorous empirical measurement of the specific behaviors that make developers irreplaceable. This is a scandalous gap. Developers do not merely write code. They misread tickets, reinterpret requirements mid-sprint, and produce documentation that raises more questions than it answers. These behaviors constitute a complex, context-dependent human performance that no current model replicates. Prior work has focused exclusively on what AI can do (Huang & Petrov, 2022) while ignoring the equally important question of what developers uniquely refuse to do. We correct this omission.
Participants. 312 software developers were recruited from mid-sized technology companies. Participants who claimed to enjoy writing documentation (n = 0) were noted but not excluded, as this was considered a reporting error. Scrum Masters were excluded due to conflict of interest. The control condition consisted of a state-of-the-art large language model given full API access and no further instructions, consistent with how most AI deployments are actually managed.
Instrument. The Developer Existential Permanence Scale (DEPS, α = .94) measured 11 irreplaceability dimensions, including Blame Displacement Fluency, Requirement Reinterpretation Velocity, and Perceived Dignity Under Deadline (PDUD). PDUD was scored 0–10, operationalized as the number of times a developer said 'that's not really my area' in a single sprint, capped at the scale maximum.
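As a concrete illustration of the operationalization above, PDUD scoring can be sketched as follows. This is a minimal sketch, not the published instrument: the function name, the transcript-string input format, and the explicit 10-point cap are our assumptions.

```python
# Hypothetical sketch of PDUD scoring; pdud_score and the transcript
# format are illustrative assumptions, not part of the DEPS instrument.
def pdud_score(sprint_transcript: str) -> float:
    """Count occurrences of the deflection phrase in one sprint's
    transcript, capped at the scale maximum of 10."""
    phrase = "that's not really my area"
    count = sprint_transcript.lower().count(phrase)
    return float(min(count, 10))
```

For example, a transcript containing the phrase three times scores 3.0, while a particularly defensive sprint saturates at 10.0.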
Procedure. Participants completed six simulated sprints. IRB approval number: NPU-2024-0071.
Finding 1: AI Cannot Misunderstand a Requirement Correctly. Human developers achieved a mean Requirement Misinterpretation Index of 7.8 (SD = 1.2), producing plausible-but-wrong outputs that still somehow shipped. The AI control produced outputs that were wrong in a different, less useful way, F(2, 309) = 44.1, p < .001, η² = 0.22. This difference was not subtle.
Finding 2: Perceived Dignity Under Deadline Collapsed in AI Systems. Human PDUD scores remained stable at 6.1 across all sprint conditions. The AI reported no PDUD score whatsoever, which the authors have classified as a critical system error, t(311) = 19.8, p < .001, d = 2.24.
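Readers who wish to verify that the effect sizes above are at least internally consistent can recover them from the test statistics alone. The conversions used here, η² = df_between·F / (df_between·F + df_error) and Cohen's d from an independent-samples t, are standard conventions rather than details stated in the text, and the equal split of 312 participants into two groups of 156 is our assumption:

```python
import math

def eta_squared(f_stat: float, df_between: int, df_error: int) -> float:
    # Eta-squared from an F statistic: SS_between / (SS_between + SS_error).
    return (df_between * f_stat) / (df_between * f_stat + df_error)

def cohens_d(t_stat: float, n1: int, n2: int) -> float:
    # Cohen's d recovered from an independent-samples t statistic.
    return t_stat * math.sqrt(1 / n1 + 1 / n2)

# Finding 1: F(2, 309) = 44.1 recovers eta-squared of about 0.22.
print(round(eta_squared(44.1, 2, 309), 2))
# Finding 2: t = 19.8 with two assumed groups of 156 recovers d of about 2.24.
print(round(cohens_d(19.8, 156, 156), 2))
```

Both reported effect sizes reproduce to two decimal places, which is more consistency than the study's premise strictly requires.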
Finding 3: The AI Attended Zero Stand-Ups. This finding requires no further statistical elaboration.
These results confirm what many developers have long suspected but lacked the peer-reviewed citations to assert at performance reviews: they are irreplaceable. The AI's complete absence of Perceived Dignity Under Deadline is particularly alarming. Dignity is not a soft variable. It is the load-bearing wall of professional identity, and its absence in AI systems represents a structural failure of the same magnitude as, in biological terms, an organism evolving without the capacity to feel personally attacked by a code review.
We acknowledge one limitation: the control group AI was given no onboarding, no context, and no Jira access, which some reviewers noted may have disadvantaged it. We maintain that this accurately reflects production conditions. The finding that developers cannot be replaced is, in the opinion of the authors, personally vindicating and should be cited accordingly.
AI systems lack the capacity to misunderstand correctly, escalate quietly, or feel anything during a retrospective. These are not bugs in human developers. They are features. We call on all major governments to enshrine developer irreplaceability in employment law immediately, before someone in procurement reads a blog post.
Correspondence: priya.subramaniam@northern-pragmatics.ac