Discussion about this post

Alain Di Chiappari:

Really good framing of the problem.

What I've been seeing is that the "Individual to Team Sport" shift is the hardest part of this transition because it's a moving target problem.

I'm seeing more and more adoption asynchronicity, where "autocomplete-only" engineers work alongside early adopters who have already moved from juggling many MCP servers to self-evolving skills via Claude Code hooks.
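For readers who haven't touched them, the "hooks" mentioned here are shell commands Claude Code runs automatically on lifecycle events, configured in a settings file. A minimal sketch of the idea is below; the matcher pattern and lint command are illustrative, not prescriptive:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npm run lint --silent"
          }
        ]
      }
    ]
  }
}
```

The point of the "self-evolving skills" framing is that a team can keep tightening hooks like this as its human-agent workflow changes, instead of freezing guidance in a static doc.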

The real difficulty I've been observing isn't just sharing docs or knowledge in general; it's that any rigid guideline or "best practice" becomes technical debt within months. We're no longer managing a static stack but a co-evolution of human-agent workflows.

BTW, "What's good for humans is good for AI" is absolutely true!

Pawel Jozefiak:

The bottleneck shift point is the most honest thing I've read about AI coding in months. Code generation gets fast; everything downstream stays the same speed. What your data can't fully capture is that the bottleneck moves again when you switch models: a weaker model generates cheaper code faster but pushes more broken assumptions downstream into review and CI. The cost isn't visible until you're 30 hours into a build and realize you've been debugging model-confidence artifacts rather than logic errors. I tested this directly at the Mistral EU Hackathon, building a real app under time pressure.

The productivity delta between frontier and near-frontier models is bigger than the marketing numbers suggest: https://thoughts.jock.pl/p/mistral-ai-honest-review-eu-hackathon-2026
