Trust but Verify (O'Reilly)

We often say AIs "understand" code, but they don't truly understand your problem or your codebase the way a human does. They mimic patterns from text and code they've seen before, whether built into the model or supplied by you, aiming to produce something that looks right: a plausible answer. That answer is very often correct, which is why vibe coding (repeatedly feeding the output from one prompt back to the AI without reading the code it generated) works as well as it does, but it's never guaranteed to be correct. And because of the limitations of how LLMs work and how we prompt them, the solutions rarely account for overall architecture, long-term strategy, or often even good code design principles.

The principle I've found most effective for managing these risks is borrowed from another domain entirely: trust but verify. While the phrase has been used in everything from international relations to systems administration, it perfectly captures the relationship we need with AI-generated code. We trust the AI enough to use its output as a starting point, but we verify everything before we commit it.

Trust but verify is the cornerstone of an effective approach: trust the AI enough to use its output as a starting point, but verify that the design supports change, testability, and clarity. That means applying the same critical review patterns you'd use for any code: checking assumptions, understanding what the code is really doing, and making sure it fits your design and standards.
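As a concrete sketch of what "checking assumptions" looks like in practice, consider a hypothetical helper an AI might suggest (this example is illustrative, not taken from any particular model). The code reads cleanly and passes the obvious cases, which is exactly why it needs verification rather than trust:

```python
# Hypothetical AI-suggested helper: plausible-looking, and correct
# for the common case, but it misses the century rules for leap years.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0

# Verification means exercising the cases a careful reviewer would
# check, not just the happy path. The full Gregorian rule:
def is_leap_year_verified(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Happy path: both versions agree, so casual testing looks fine.
assert is_leap_year(2024) == is_leap_year_verified(2024) == True

# Edge case: 1900 was not a leap year. The plausible version gets it
# wrong, and only a deliberate check of assumptions catches it.
assert is_leap_year(1900) is True          # the bug
assert is_leap_year_verified(1900) is False  # the truth
```

The point isn't that AI code is usually this wrong; it's that "looks right" and "is right" diverge precisely at the edge cases you won't notice unless you go looking for them.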