MCP challenges, atomic habits, platform teams vs central teams, and Accelerate capabilities💡
Monday Ideas — Edition #161
1) Does MCP address AI API challenges? 📈
This idea is brought to you by today’s sponsor — Postman!
Last week we published a long piece on how LLMs are changing how we design APIs.
MCP is obviously trying to address this, but how much does it really help? To be fair, it tackles some of the challenges:
🔍 Discovery and capabilities — MCP servers explicitly declare what they can do, solving the "how does AI know what's available" problem.
❌ Standardized errors — the protocol defines consistent error formats that AI can interpret.
⬜ Stateless operations — each tool call is self-contained, which fits the unpredictable order in which an LLM may invoke tools.
But it doesn’t address others:
💸 Token economy — MCP doesn't specify (or recommend) token-efficient formats; verbose (and thus expensive) responses remain verbose.
⏱️ Latency optimization — no built-in batching or streaming optimizations for slow LLM workflows.
🩹 Self-healing hints — while errors are standardized, MCP doesn't provide standard avenues for recovery suggestions or alternative endpoints.
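To make the discovery point concrete, here is a minimal sketch of what an MCP `tools/list` response looks like on the wire — the server declares each tool's name, description, and a JSON Schema for its arguments, so the model can find out what's available. The tool name and schema below are invented for illustration, not from a real server.

```python
import json

# Illustrative shape of an MCP "tools/list" JSON-RPC response.
# The "search_orders" tool and its schema are hypothetical.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_orders",
                "description": "Search customer orders by status and date range.",
                "inputSchema": {  # JSON Schema describing the tool's arguments
                    "type": "object",
                    "properties": {
                        "status": {"type": "string", "enum": ["open", "shipped"]},
                        "since": {"type": "string", "format": "date"},
                    },
                    "required": ["status"],
                },
            }
        ]
    },
}

# The client (and, through it, the model) can enumerate what the server offers:
for tool in tools_list_response["result"]["tools"]:
    print(tool["name"], "-", tool["description"])
```

Note that nothing in this declaration hints at how verbose the tool's *responses* will be — which is exactly the token-economy gap described above.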
To me, this is largely fine, because 1) it’s unclear whether all (or any) of these belong in MCP’s scope, and 2) at this stage, converging on a standard matters more than designing a perfect one.
We explored more ideas about AI-first APIs in the full article 👇
Also, Postman recently launched an MCP catalog, making it easier for developers to find and share MCP servers. They also debuted their own MCP generator, to easily create MCP servers.
2) Atomic Habits Video! 📙
Our book reviews have always been among the most popular Refactoring articles (check out our latest one, on The Manager’s Path!), so I tried something new and turned one into a YouTube video!
I picked Atomic Habits, because 1) it is one of my favorite books, and 2) I believe it is useful to just about everyone, regardless of their job. Here is the video 👇
You can also find the full newsletter article below 👇
3) 🧱 Platform teams should never lose sight of their customers
Earlier this year Camille Fournier published a full article on Refactoring on how to create good platform engineering teams.
She wrote at length about how platform teams are fundamentally different from the classic central teams, and one of the core differences is that central teams often lose sight of their customers.
Instead of thinking about the people who use their systems as their customers, they view them as those clueless application engineers who just don’t get it. They don’t read the docs, they don’t know how to use systems in the right way, they don’t want to try the new stuff and give feedback on it.
Treating your customers as an inconvenience to be managed is one of the main contributors to the bad reputation of central teams.
We need to view our platforms as products, not just because we want them to be thoughtful abstractions that are easy to use, but also because we want to make sure that we are building things that the customer actually wants and needs.
Your team will have lots of good ideas for products you could be building, but in order for those products to be successful they need to be evaluated for product market fit: will the application engineers at your company actually use this thing once you build it?
You can make something that seems great on paper, with easy onboarding, great docs, and widespread customer awareness, but still get no adoption because it just doesn’t meet a pressing need for the application teams.
This is more than just hiring some product managers, making a product roadmap, setting some adoption metrics, and calling it a day. Your whole platform team needs to develop customer empathy and connections with customer teams who can give you feedback on what is important to them and where their pain points lie.
Your best products may even come from application teams who have built something useful for themselves that turns out to be something you could expand for the rest of the company.
You can find the full article by Camille below 👇
4) 📊 Accelerate is more than DORA
Ask anyone about Accelerate, and chances are they will mention the DORA metrics.
These four KPIs measure how well teams deliver software, and they became instantly famous:
🚀 Deployment Frequency — how often you release to production.
⏱️ Lead Time for Changes — the amount of time it takes a commit to get to production.
📉 Change Failure Rate — the percentage of deployments causing a failure.
🛠️ Time to Restore Service (MTTR) — how long it takes to recover from a failure.
One of the reasons why the metrics caught on is because they provided, for the first time, a research-backed way to evaluate software delivery across two dimensions:
Throughput → via Deployment Frequency + Lead Time for Changes.
Stability → via Change Failure Rate + MTTR.
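To see how the two dimensions fit together, here is a minimal sketch of computing all four metrics from a list of deployment records. The data and field layout are made up for illustration; real tooling would pull this from your CI/CD and incident systems.

```python
from datetime import datetime, timedelta

# Made-up deployment records: (commit_time, deploy_time, failed, minutes_to_restore)
deployments = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 12), False, 0),
    (datetime(2024, 1, 2, 9), datetime(2024, 1, 2, 10), True, 45),
    (datetime(2024, 1, 3, 9), datetime(2024, 1, 3, 11), False, 0),
    (datetime(2024, 1, 4, 9), datetime(2024, 1, 4, 13), True, 30),
]
days_observed = 4

# Throughput
deployment_frequency = len(deployments) / days_observed            # deploys per day
lead_times = [deploy - commit for commit, deploy, _, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)     # commit -> prod

# Stability
failures = [d for d in deployments if d[2]]
change_failure_rate = len(failures) / len(deployments)             # share of deploys
mttr_minutes = sum(d[3] for d in failures) / len(failures)         # mean restore time

print(f"Deployment frequency: {deployment_frequency:.1f}/day")
print(f"Avg lead time:        {avg_lead_time}")
print(f"Change failure rate:  {change_failure_rate:.0%}")
print(f"MTTR:                 {mttr_minutes:.0f} min")
```

With this toy data, throughput looks great (a deploy every day, hours of lead time) while stability is terrible (half the deploys fail) — a good reminder that the metrics only make sense as a pair of dimensions, never in isolation.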
But here’s the thing: if you think Accelerate is only about metrics, you're missing 90% of the picture. The core of Accelerate is not the metrics: it's the engine that enables them.
The book meticulously identifies and validates 24 key capabilities that have been statistically shown to improve software delivery performance. The metrics are the outcome, while the capabilities are the drivers. And the research proves this connection with extreme rigor. It moves the conversation from "what good looks like" to "what specific actions demonstrably lead to good."
With some degree of simplification, we can organize these capabilities into three buckets: cultural, process, and technical.

These buckets work as levels of a pyramid, each one supporting the health of the ones above:
Good culture is what makes people work well together and feel good about their work environment. It keeps retention high, stress low, and enables the creation of good process 👇
Process exists to make work flow well through the system. Good process is about tight feedback loops and minimizing waste.
Good culture and good process naturally lead to the technical practices that enable elite software delivery, like continuous deployment and empowered teams.
We reviewed Accelerate in our book club two months ago, and we reviewed all the 24 capabilities in our full book review 👇
And that’s it for today! If you are finding this newsletter valuable, consider doing any of these:
1) 🔒 Subscribe to the full version — if you aren’t already, consider becoming a paid subscriber. 1700+ engineers and managers have joined already! Learn more about the benefits of the paid plan here.
2) 📣 Advertise with us — we are always looking for great products that we can recommend to our readers. If you are interested in reaching an audience of tech executives, decision-makers, and engineers, you may want to advertise with us 👇
If you have any comments or feedback, just respond to this email!
I wish you a great week! ☀️
Luca