Discussion about this post

Jose v2:

I’ve seen the same pattern: probabilistic models don’t fail at the technical layer, they fail at the trust layer. A score between 0 and 1 isn’t “truth” without calibration, operational cutoffs, and continuous drift/performance checks. Once users lose trust in a model, recovery is nearly impossible: every unverified or opaque output accelerates the decay. A BFT-style human-compute layer feels inevitable anywhere correctness, not plausibility, is the product.
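As a rough sketch of what “calibration plus an operational cutoff” means in practice (the function names, the 0.8 threshold, and the toy data below are illustrative assumptions, not from the post or the comment):

```python
# A score in [0, 1] only becomes actionable once you measure how well it is
# calibrated and pick an explicit decision threshold. Everything here is a
# toy illustration; the threshold and bin count are arbitrary assumptions.
import numpy as np

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray, n_bins: int = 10) -> float:
    """Average gap between predicted confidence and observed accuracy, per score bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            ece += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return ece

def decide(probs: np.ndarray, cutoff: float = 0.8) -> np.ndarray:
    """Operational cutoff: only scores at or above the threshold count as positive calls."""
    return probs >= cutoff

# Toy usage: scores that are roughly calibrated by construction.
rng = np.random.default_rng(0)
probs = rng.uniform(size=1000)
labels = (rng.uniform(size=1000) < probs).astype(float)
print("ECE:", round(expected_calibration_error(probs, labels), 3))
print("Positive rate at 0.8 cutoff:", decide(probs).mean())
```

Recomputing the calibration error on fresh batches over time is one simple form of the drift/performance check the comment mentions.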

Rainbow Roxy:

Regarding the topic of the article, it's a brilliant distinction you make about LLMs being relevance engines, and honestly, sometimes it feels like we, as humans, are just trying to outsource our own responsibility for truth to the biggest, fastest auto-completer available.

