A new randomised controlled trial from METR found that when 16 experienced open‑source developers were allowed to use AI tools, they took 19% longer to complete 246 tasks on their own code repositories, despite expecting a 24% speedup.
If AI slows expert engineers, imagine the friction points inside compliance, where accuracy, auditability, and explainability are non‑negotiable.
What this study suggests we should watch for in compliance functions:
• Overconfidence & confirmation bias: Users feel faster even as review cycles balloon
• Low acceptance rates = extra review time: Devs accepted <44% of AI suggestions; in compliance, every untrusted output is another round of checks
• Context gaps: AI struggles with tacit knowledge and edge cases—exactly where regulatory nuance lives
• Documentation debt: More prompting and correcting means more to capture for audit trails and model governance
Takeaway: As AI becomes embedded in KYC reviews, client screening, monitoring, policy drafting, and reporting, we need to design workflows that measure real impact (not just perceived speed), keep humans in the loop, and build controls around validation, versioning, and accountability.
Curious: Where has AI actually saved you time in compliance, and where has it quietly added hidden work?