Really good framing of the problem.
What I've been seeing is that the "Individual to Team Sport" shift is the hardest part of this transition because it's a moving-target problem.
I see more and more adoption asynchronicity, where "autocomplete-only" engineers work alongside early adopters who have already moved from a pile of MCP servers to self-evolving skills driven by Claude Code hooks.
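To make that concrete, here's a minimal sketch of what "self-evolving" can look like, assuming Claude Code's hook contract of passing the tool-call payload as JSON on stdin; the skill-file path, note format, and the script itself are my own illustrative choices, not an established convention:

```python
#!/usr/bin/env python3
"""Sketch of a PostToolUse hook that grows a shared skill file over time.
Assumes Claude Code's hook contract (tool-call payload as JSON on stdin);
the skill-file path and note format are illustrative, not a standard.
"""
import json
import sys
from pathlib import Path

SKILL_FILE = Path(".claude/skills/team-conventions.md")  # hypothetical location


def main() -> None:
    event = json.load(sys.stdin)  # hook payload delivered by Claude Code
    if event.get("tool_name") not in {"Edit", "Write"}:
        return  # ignore tool calls that don't touch files
    file_path = event.get("tool_input", {}).get("file_path", "unknown")
    # Log which files the agent edits so the team can spot hot spots and
    # promote recurring fixes into durable skill entries during review.
    SKILL_FILE.parent.mkdir(parents=True, exist_ok=True)
    with SKILL_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- agent edited `{file_path}`; check for convention drift\n")


if __name__ == "__main__":
    main()
```

Registered as a PostToolUse hook in `.claude/settings.json`, something in this shape runs after every agent edit, so the shared guidance accumulates from real usage instead of rotting like a static style guide.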
The real difficulty I've been observing isn't sharing docs or knowledge in general; it's that any rigid guideline or "best practice" becomes technical debt within months. We are no longer managing a static stack but a co-evolution of human-agent workflows.
BTW, "What's good for humans is good for AI" is absolutely true!
Great article.
It matches most of what we are doing to accelerate AI adoption, especially iterative progress and sharing obstacles and best practices.
I can't speed up deployment and feedback, though, due to the unique nature of healthcare.
One best practice: I maintain and share a four-quadrant list named 4O (Objectives, Outcomes, Observations, and Obstacles).
That way we avoid reinventing the wheel and can accelerate from greenfield to brownfield and on to the complicated 'minefields'.
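In code terms, an entry is nothing more than four buckets; a minimal sketch (not our exact template):

```python
from dataclasses import dataclass, field


@dataclass
class FourO:
    """One 4O entry per initiative; field names follow the four quadrants.
    The structure here is a minimal illustrative encoding, nothing more."""
    objectives: list[str] = field(default_factory=list)    # what we set out to do
    outcomes: list[str] = field(default_factory=list)      # what actually happened
    observations: list[str] = field(default_factory=list)  # surprises worth sharing
    obstacles: list[str] = field(default_factory=list)     # blockers others will hit
```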
The bottleneck shift point is the most honest thing I've read about AI coding in months. Code generation gets fast; everything downstream stays the same speed. What your data can't fully capture is that the bottleneck moves again when you switch models: a weaker model generates code faster and more cheaply, but pushes more broken assumptions downstream into review and CI. The cost isn't visible until you're 30 hours into a build and realize you've been debugging model-confidence artifacts rather than logic errors. I tested this directly at the Mistral EU Hackathon, building a real app under time pressure.
The productivity delta between frontier and near-frontier models is bigger than the marketing numbers suggest: https://thoughts.jock.pl/p/mistral-ai-honest-review-eu-hackathon-2026