How to make your tech stack AI-friendly 📏
A deep dive into strategies and mental models with Jamie Turner, founder of Convex.
The dominant conversation in software engineering today is obviously about AI.
One of the reasons it is so discussed, beyond the tech itself, is that figuring out the value, or even just the potential, of AI remains incredibly hard.
While it is normal for new tech to lack consensus on what to expect, the magnitude of the disagreement around AI is just… dumbfounding.
Just a couple of weeks ago, new research from METR dropped and challenged the very assumption that developers were getting any help from AI. Pretty much the opposite: they concluded that models slow down humans on realistic coding tasks.
Nor is it the only research that questions AI's impact: DORA said much the same in their latest State of DevOps report.
But of course I can point to plenty of counterexamples.
To name one: when I interviewed Salvatore Sanfilippo, aka Antirez, the creator of Redis, he reported being 5x faster thanks to AI. Salvatore is extremely methodical and far from an enthusiast by default, so his report carries weight. He is also the prototype of a 10x developer: he writes some of the world's most complex systems code, largely in C. This is exactly the kind of work where you wouldn't expect AI to make a dent, and yet it does.
We could be skeptical about either Salvatore’s or METR’s findings, but the gulf between being slowed down and being 5x faster is just incredible, so we need to reflect on it.
It seems obvious to me that, right now, there are teams and individuals who are able to get a lot more out of AI than others, so we should understand why. A lot of this conversation focuses on workflows: prompting strategies, using this or that tool, connecting to MCPs, and so on. But there is an angle that I see rarely discussed, and that is about tech choices.
How does AI productivity depend on your tech stack? Should you factor AI into your choice of language or framework? How? What makes some tech a good fit for AI, and how should tech be designed to become one?
These questions fascinate me because they don't just affect our choices today; they will shape how tech is designed in the future.
So, is there any difference between how tech stacks should be designed for humans, vs for AI?
To explore this, I had a long chat last month with Jamie Turner, co-founder of Convex.
Convex is one of the very few companies today ambitious enough to rethink our entire tech stack from the ground up, starting with the database. They designed a new open-source database from scratch, built an entire backend framework around it, and went all the way up to a full vibe-coding platform, Chef.
And they did so thanks to an enviable track record: Jamie and James, the two co-founders, were respectively Senior Director and Principal Engineer at Dropbox, where they designed from scratch one of the most daunting storage systems in the history of the Internet.
So I sat down with Jamie and we explored tech design principles to amplify AI strengths, while mitigating its weaknesses.
Here's the agenda:
🧠 AI succeeds where humans succeed — why the best AI-optimized systems double down on good design principles.
🔧 Creating constraints — how static typing, conventions, and guardrails make AI more effective.
💻 Everything as code — why configuration-as-code and language unity reduce AI confusion.
🎯 Limiting context — how component architecture and unified platforms help AI focus.
🚀 Designing for AI adoption — building evals, feedback loops, and constraints that scale with AI.
Let's dive in!
Disclaimer: I am a fan of what Convex is building and I am grateful to them for partnering on this piece. However, I will only write my unbiased opinion about the practices and tools covered here, Convex included.
You can learn more about Convex below 👇
🧠 AI succeeds where humans succeed
Before getting to specific ideas that make AI perform better, we should ask ourselves: why is this even necessary? How is AI different from a human developer?
Jamie gave me a great answer on this which works as an anchor to almost everything we will cover:
"LLMs work well or badly with the same things people do. If something confuses a human, it's going to confuse an AI even more. But if something is well-designed for humans, chances are AI will thrive in it"
So, rather than assuming AI is different and therefore needs different approaches, the healthier mental model is to see AI as an amplifier of existing patterns, both good and bad.
If your API is confusing to a new human engineer, it's going to be really confusing to an AI. If your codebase has inconsistent patterns, AI will struggle to pick the right one. If your docs are scattered across five different places, AI won't know where to look.
But the flip side is equally true: if your system is well-designed for humans, AI will thrive in it.
Jamie has a privileged vantage point on this: he can continuously test how well AI uses the Convex tech, and tweak it accordingly. For his team, it's like fast-forwarding user research.
What he usually finds is that AI immediately exposes the same friction points in the system that humans eventually discover, but much faster.
A human developer might spend weeks working with a suboptimal API before complaining about it, if they ever do. AI will hit that wall within a few interactions. A human might gradually learn to navigate an inconsistent framework, while AI gets confused and starts hallucinating.
So, AI is like having thousands of new developers join your team simultaneously, all with zero context. To get the best out of them you need to get your sh*t together, or they are immediately going to expose every sharp edge, every inconsistency and every piece of implicit knowledge your existing team takes for granted.
So, by and large, Jamie believes the conversation we should have is not about what design principles change with AI (there are a few, and we will see them later), but rather about doubling down on the good old ones we have always known.
Many teams have been getting away with all kinds of design shortcuts and tech debt because their human developers have learned to work around them. But AI doesn't learn to work around bad design—it just fails.
This sets up pretty much everything else we'll discuss today: the constraints, the language choices, the architecture decisions. They're all applications of this core idea that AI succeeds where humans succeed, but with less tolerance for ambiguity and inconsistency.
Let’s go 👇
🔧 Create constraints
So what are timeless design ideas that also perform particularly well with AI?
The first one to me is to create smart constraints in your system. AI (just like humans) performs better when it has to make fewer choices.
Constraints are useful because they allow you to intercept errors earlier in the process, or prevent them altogether. Historically, a lot of shift-left strategies posed a simple trade-off: making coding a little more cumbersome in exchange for more safety. But with AI taking care of much of the coding, the cost side of that equation has become far less relevant, and you are mostly left with the benefits.
An obvious example is statically typed languages. Jamie argues that using, say, TypeScript instead of vanilla JavaScript has always been a smart choice for teams, but by now it's just a no-brainer.
With TypeScript, AI gets immediate feedback about whether the code works. The type system acts like guardrails, catching mistakes before they become runtime bugs. Conversely, the number of assumptions AI has to make to generate bug-free JavaScript is just too high. Statically typed code is also easier for developers to understand semantically, which is an enormous benefit for AI-written code.
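To make those guardrails concrete, here is a tiny illustration (the names are mine, not from Convex): a plausible AI slip that plain JavaScript would only surface as a runtime TypeError, but that TypeScript rejects before the code ever runs.

```typescript
// Hypothetical User type; the point is the fixed, checked shape.
interface User {
  id: string;
  signupDate: Date;
}

function daysSinceSignup(user: User): number {
  const ms = Date.now() - user.signupDate.getTime();
  return Math.floor(ms / (1000 * 60 * 60 * 24));
}

// Two plausible hallucinated call sites. Plain JavaScript happily runs
// both and crashes with a TypeError deep inside the function; TypeScript
// flags them at the call site, before anything runs:
//
// daysSinceSignup({ id: "u1", createdAt: new Date() }); // 'createdAt' does not exist in type 'User'
// daysSinceSignup("u1");                                // 'string' is not assignable to 'User'
```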
This approach extends to every level of your system. Jamie spends a lot of engineering effort on what he calls forcing functions — constraints that make it impossible to use their system incorrectly.
Examples include all kinds of convention-over-configuration ideas, as well as strong linting. The more decision surface you can eliminate, the more reliably AI can work within your system.
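Jamie didn't show me code, but a classic pattern in this spirit is a type that can only be constructed through validation, so misuse is impossible by construction. A minimal sketch, with made-up names:

```typescript
// One common shape of a forcing function: a branded type that can only
// be obtained through a validating constructor. Downstream code can
// never receive an unchecked string, no matter who (or what) wrote it.
type ValidatedEmail = string & { readonly __brand: "ValidatedEmail" };

function parseEmail(raw: string): ValidatedEmail | null {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(raw)
    ? (raw as ValidatedEmail)
    : null;
}

function sendWelcome(to: ValidatedEmail): void {
  console.log(`Sending welcome email to ${to}`);
}

// sendWelcome("not-an-email"); // compile error: a plain string won't do
const email = parseEmail("ada@example.com");
if (email) sendWelcome(email);  // only the validated path type-checks
```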
💻 Everything as code
One way AI is remarkably different from humans is that it is no good at using GUIs or navigating dashboards.
We have already discussed this a couple of weeks ago, when we noted how Shopify bet the farm on configuration as code, and Jamie shares the same sentiment. By now, you should be expressing everything you can through code. This brings the classic code benefits — reproducibility, version control, self-documentation, automation — but now largely without the added cognitive load on humans, because AI can write the config.
For this reason, Jamie insists that everything at Convex is expressed through code: deployment config, database schemas, queries, and more. And in a further step towards simplicity, Convex allows devs to do all of this without ever leaving TypeScript: no YAML, JSON, SQL, or other scripting languages.
Jamie argues that this is perfect for AI because of 1) the consistency of having the same language across everything, and 2) the benefits of the type safety you get for free.
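For flavor, here is roughly what schema-as-code and a query look like in Convex, all in plain TypeScript (simplified; Convex's docs are the authoritative reference):

```typescript
// convex/schema.ts: the database schema is just TypeScript.
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  messages: defineTable({
    author: v.string(),
    body: v.string(),
  }),
});
```

```typescript
// convex/messages.ts: a typed query over that schema.
import { query } from "./_generated/server";

export const list = query({
  args: {},
  handler: async (ctx) => {
    return await ctx.db.query("messages").collect();
  },
});
```

Note there is no YAML or SQL anywhere: the schema, the validators, and the query all live in the same type system as the rest of the app.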
🎯 Limiting context
Another aspect where AI differs from humans is context management.
I wouldn’t say AI is better or worse than humans at context — it’s rather that it excels at different things.
Humans are amazing at keeping broad context in their heads. A senior developer can work on a feature that touches auth, database, backend + frontend components, all while keeping the big picture in mind and navigating the right business tradeoffs.
For AI this is hard (for now). Whenever you have a deep web of dependencies, AI struggles to keep everything together.
Conversely, AI is extraordinary at what we may call deep context. As long as everything is in the same place, AI can make sense of extremely complex codebases in seconds, or perfectly summarize hundreds of pages of information.
In other words, AI excels at local reasoning, but struggles with global reasoning.
So how should we design systems to maximize local reasoning opportunities?
One of the biggest factors behind the success of React was its component model.
React made complex UIs manageable by breaking them into isolated, composable pieces. By now this is an obvious idea, but at the time it was absolutely not. Each component can be designed to have a clear interface, minimal dependencies, and can be reasoned about independently.
Now, compartmentalization and low coupling have been a staple of good engineering since pretty much forever, but the current generation of frontend frameworks has brought this topic even more front and center.
Modern frontend is often criticized for being over-engineered and for putting an unbearable cognitive load on developers. If you ask me, having started my career on Ruby on Rails and lived through everything up to today's React, such criticism is 100% valid. But I also suspect the strong engineering ideas behind React and its friends are going to pay big dividends with AI.
Jamie argues for applying the same componentization approach to the entire stack. Database functions designed to be self-contained. API endpoints with minimal cross-dependencies. Business logic broken into small, independent modules.
The result is that AI can work on individual pieces without needing to understand the entire system. And this is exactly what you want to maximize the power of local reasoning and minimize errors.
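As a toy illustration of what "designed for local reasoning" means (the names are mine, not Convex's), compare a self-contained function with its global-state twin:

```typescript
// A self-contained unit: everything the function needs arrives through
// its typed interface, so it can be understood, tested, and modified
// in isolation, without loading the rest of the codebase into context.
interface PriceQuote {
  subtotalCents: number;
  taxRate: number;        // e.g. 0.08 for 8%
  discountCents: number;
}

export function totalCents(q: PriceQuote): number {
  const discounted = Math.max(0, q.subtotalCents - q.discountCents);
  return Math.round(discounted * (1 + q.taxRate));
}

// The global-reasoning version of the same logic forces the reader (or
// the model) to chase state across the whole codebase:
//
//   function totalCents() {
//     return (cart.subtotal - session.coupon) * (1 + config.taxRate);
//   }
```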
🚀 Design for AI adoption
So far we have addressed this topic from the perspective of a team that needs to choose tech or design apps in a way that is AI-friendly.
But what if you are building technology — a language, a framework, a database — and you want AI to use it well?
For many observers, this is a dreadful task: AI works best with things it has seen a lot during its training, so how can new tech catch up? When I interviewed Guillermo Rauch, CEO of Vercel, he told me he believes React will be the last framework — new stuff has zero chance of getting any traction.
So Jamie knows they are in for an uphill battle, but they have an advantage: they can build with AI in mind from the start.
1) 🔍 Build evals from day one
The first thing Jamie told me about this was: "We treat AI as a first-class user of our platform. That means we build evaluation systems for AI interactions just like we build tests for human interactions."
Most teams build their product, then bolt on AI support later. But if you're serious about AI adoption, you need to start accumulating evals from day one.
Here's how this works in practice: every time they add a new feature to Convex, they also add tests that verify AI can use that feature correctly. They test against multiple models, different prompt styles, and various levels of context.
These evals accumulate over time, just like regular tests, and are able to capture AI regressions when either the tech or the AI models change.
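Jamie didn't share their harness, but the shape of the idea is simple enough to sketch (all names below are hypothetical, not Convex's actual code):

```typescript
// A minimal eval loop: for each feature, ask each model to use it,
// then check the generated code with an automated pass/fail gate.
type ModelId = string;

interface EvalCase {
  feature: string;                          // the API under test
  prompt: string;                           // the task given to the model
  passes: (generated: string) => boolean;   // e.g. compiles + tests pass
}

async function runEvals(
  cases: EvalCase[],
  models: ModelId[],
  generate: (model: ModelId, prompt: string) => Promise<string>,
): Promise<void> {
  for (const model of models) {
    for (const c of cases) {
      const code = await generate(model, c.prompt);
      console.log(`${model} / ${c.feature}: ${c.passes(code) ? "PASS" : "FAIL"}`);
    }
  }
}
```

Run against every popular model on every release, this catches regressions from either side: a change to your tech, or a change in the models.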
The key insight is that AI adoption patterns change fast. What works with GPT-4 might not work with Claude. What works with one prompting style might fail with another. If you're not continuously testing these interactions — across all the most popular models — you're basically flying blind.
2) ⏳ AI as early adopters with infinite time
Jamie had a great way of thinking about this: "AI models are like early adopters who have infinite time and zero ego. They'll try your new APIs immediately, and they'll tell you exactly where they break down."
This is actually a massive advantage if you lean into it. New APIs that might take months to get real-world feedback from human developers get “tested” by AI in hours.
Still, AI picks up new things slightly differently from humans. Humans read docs, understand context, and can work around rough edges. AI just tries to pattern-match against what it has seen before.
This means if you're building something genuinely new, you need to be extra careful about how you introduce it. AI won't give you the benefit of the doubt that human early adopters might.
3) ⚡ Fast and cheap beats slow and expensive
Finally, one of the most practical insights from Jamie was about model selection.
Everyone obsesses over using the most powerful model, but for development workflows, Jamie believes speed often matters more than raw capability:
"[When choosing AI models] we optimize for fast feedback loops over perfect output. It's better to get 80% correct code in 2 seconds than 95% correct code in 2 minutes."
Why? Because developers can quickly spot and fix the 20% of issues in fast feedback, but they lose their train of thought waiting for slow responses.
This influences how they design their entire system. Instead of trying to give AI perfect context about everything, they focus on making it easy for AI to get "good enough" results quickly, then iterate.
Also, if you do a good job making your system understandable to the fast models, chances are you will do great with the expensive ones too.
📌 Bottom line
And that's it! Here are the key takeaways from our deep dive into AI tech stacks:
🧠 AI amplifies existing patterns — don't reinvent design principles for AI: double down on the good ones you already know. AI succeeds where humans succeed, but with less tolerance for ambiguity.
🔧 Constraints are your friend — static typing, conventions, and forcing functions make AI more reliable. The tradeoff between safety and convenience disappears when AI handles the heavy lifting. Bet on safety!
💻 Stay in one language — every context switch between languages, configs, and formats creates friction for AI. Configuration-as-code and a single language (e.g., TypeScript) everywhere isn't just cleaner—it's faster for AI.
🎯 Design for local reasoning — AI excels at deep, focused context but struggles with broad, interconnected systems. Component architectures and unified platforms reduce cognitive load.
🔍 Test AI interactions like user interactions — if you're building technology for others to use, treat AI as a first-class user with dedicated evals that accumulate over time.
⚡ Fast feedback beats perfect output — optimize for quick iterations over flawless results. Developers can fix the remaining 20% of issues faster than they can wait for 100% correct code.
Thanks again to Jamie and the Convex folks for partnering on this piece and for making themselves available for all my questions! If you want to learn more about Convex, you can check it out below 👇