Vibe-coding workflows 🪄
A top-down process for doing AI coding right, plus a full cast of supporting techniques.
Hey everyone! Our most popular article over the last few months is—maybe unsurprisingly—about how to use AI in engineering teams.
It got an awesome reception, but it was mostly a qualitative piece, giving you direction and trying to predict what comes next.
So, after that, many of you reached out via email asking for more. In particular, you asked for real-world use cases, complete with prompts and everything!
Today we are doing just that, by bringing in Justin Reock, deputy CTO at DX, who analyzed data from DX customers about GenAI tool usage, use cases, and impact. Justin and his team also collected insights from interviews with S-level leaders who have successfully rolled out AI code assistants to thousands of developers.
The result is a list of the ten most impactful AI use cases in engineering, based on a survey of developers who have self-reported at least an hour of time savings a week using AI assistants 👇

Today we are covering all these techniques, including examples and prompts. We are also going beyond the mere list: we are using this as a starting point to synthesize a basic workflow for vibe-coding successfully.
Here is the agenda:
🗺️ Vibe-coding workflow — how to do it right by working top-down, optimizing cognitive load, and intercepting errors at the right level.
🛠️ Supporting techniques — a tool belt of ideas to leverage AI across the whole coding spectrum.
Let’s dive in!
🗺️ Vibe-coding workflow
Using generative AI can be incredibly effective for planning out and implementing entire projects, but it can also make you run in circles and ultimately lose time compared to doing the work yourself.
The difference between a successful and an unsuccessful output often lies in good prompting, and, even more, in your prompting workflow.
For new developments, like new features or entire small projects, we believe that the most effective workflow is a top-down interaction that goes through several steps.
You can begin with a discussion of your requirements and, step by step, move on to high-level system design, a full code spec, class and function scaffolding, and finally the fully coded version.
Working top-down (as we will see in the example) is incredibly helpful in keeping us humans engaged in the work. It optimizes our cognitive load and allows us to intercept errors at the right level. In that respect, there is little difference between designing software with or without AI: you don't want to be fixing system design flaws or requirements misunderstandings once the full code is already written, regardless of whether that code came from a human or an AI.
Still, many vibe-coding examples online dive code-first into AI implementation, which is easily overwhelming for humans and more prone to errors.
So let’s see how this works in a real-world example.
1) 🧠 Brainstorming & Planning
For an initial requirements brainstorming, you can use a simple prompt like this:
Example prompt
I am a product manager and you are a senior software architect. Let's have a brainstorming session where you ask me one question at a time about these requirements, and come up with a specification that I can hand to a software developer.
I want to design an application that will create an ElasticSearch index for a large table stored in Cassandra. Help me design a bulletproof, zero-loss system to do this.
Example output
Great! Let’s break this down systematically. I’ll start with foundational questions, then dive deeper into specific requirements and constraints.
Question 1: What is the primary use case for this ElasticSearch index?
Are we optimizing for full-text search, fast lookups, analytics, or something else?
Will the data in Cassandra be static, append-only, or frequently updated?
How large is the dataset (number of rows, data volume, expected growth)?
This will guide our approach to indexing, data modeling, and ensuring a zero-loss system.
These initial steps benefit from a number of back-and-forth questions in which you get to spell out what you have in mind. On that note, remember that voice prompting is faster than typing and less cognitively demanding; you can lean on it especially at this stage.
Once you've worked through enough questions, the AI will let you know that you are moving into the final ones, and may produce a summary similar to this:
Your system is now:
Auto-Scaling (Kafka, ElasticSearch, Cassandra)
Fault-Tolerant with Multi-Region Replication
Zero-Loss with At-Least-Once Processing & Dead Letter Queue
Real-Time Indexing with Bulk Writes
Enterprise-Grade Security (RBAC, Encryption, SIEM Logging)
Disaster Recovery Across Regions with Failover Handling
Given a prompt similar to the one above, the assistant will then ask whether you are ready to create a code spec based on these answers, and whether there are any final requirements.
It will produce a spec similar to the following truncated example output:
Software Specification — Cassandra to ElasticSearch Indexing System
1. Overview
This system is designed to index data from a large Cassandra table into ElasticSearch for full-text search. The architecture ensures zero data loss, near real-time processing (≤30s delay), and enterprise-grade security while supporting auto-scaling and multi-region disaster recovery.
2. Data Characteristics
Source Database: Cassandra
Record Structure: 8 columns, mostly text fields
Primary Key: Single primary key per row
Row Size: Up to 50MB
Insert Rate: ~50,000 new rows per minute
Data Retention: 6 months
No Updates/Deletes: Append-only dataset
3. Architecture Overview
Cassandra → Kafka (Streaming data pipeline)
Kafka → ElasticSearch (Near real-time indexing with bulk writes)
Failure Handling: Retry queue & Dead Letter Queue (DLQ)
Auto-Scaling for all components
Multi-Region Disaster Recovery
Enterprise-Grade Security & Monitoring
[…]
10. Summary
Auto-Scaling for Kafka, ElasticSearch, and Cassandra
Fault-Tolerant Multi-Region Replication
Zero-Loss Processing with Retry Queues & DLQ
Enterprise-Grade Security & Compliance
Advanced Monitoring & Alerting (Slack Notifications)
Near Real-Time Indexing with Bulk Writes
As powerful as this process already is, we can take the workflow even further by scaffolding code from this spec, and then using mid-loop code generation on that scaffolding to produce a draft of the application code.
By working top-down it is easier for a developer to retain control over what is being produced, and correct mistakes at the right altitude. Design mishaps can get rectified before any code is generated, and structure corrections can happen when the code is still at scaffolding stage.
This is still vibe-coding, but it arguably generates better vibes.
Finally, for large batches of work, you can copy the entire transcript of the conversation into a reasoning model such as OpenAI's o3 with a prompt like:
Create a detailed blueprint to implement a project described by the following specification. Break this blueprint down into single units of work that build on one another. Ensure that each unit of work is testable and feasible to implement. Output a series of prompts that can be fed into a code-generating LLM that will guide a developer to implementing each unit of work. Make sure that each unit of work stays within the same scope of work, and that there's no overlap, and no modules that are missing. Here is the spec: [copy full spec]
This will produce a stepwise series of work units that are easy to follow and prompt for, such as:
Unit of Work #6: Dead Letter Queue Handling
Goal: Implement the infrastructure to handle failures that exceed retry limits.
Tasks:
Create a separate Kafka topic (cassandra-to-es-dlq).
Implement a consumer or archiving process that logs or stores these failing messages for further inspection.
Provide a way to manually reprocess or diagnose these messages if needed.
Testable Outcome: Any records that cannot be indexed after retries end up in a DLQ where they can be inspected.
Prompt for Unit of Work #6 "Create a Dead Letter Queue using a new Kafka topic cassandra-to-es-dlq. Implement a consumer or scheduled job that reads from this topic and logs the failing records, including error details. Demonstrate how an operator could reprocess or export these dead-letter records for manual investigation."
You can then simply follow the prompts to scaffold your code (see below). If you use an agentic tool such as Cursor, you can even instruct it to create repositories and images to support the work.
2) 🗃️ Initial Scaffolding
Once requirements and system design are in place, you can move on to shaping the outline of your application and coming up with an initial set of classes or functions.
With minimal prompting, a code assistant can help you frame your work, before you get to the full implementation.
Example prompt
Create the code outline for a Java application that will listen on a Kafka topic and create a multicast pattern to a Postgres endpoint, a RESTful POST endpoint, and an SMTP endpoint.
Example output
kafka-multicast-app/
├── src/main/java/com/example/kafkamulticast/
│   ├── KafkaConsumerService.java
│   ├── PostgresService.java
│   ├── RestService.java
│   ├── EmailService.java
│   ├── AppConfig.java
│   └── MainApplication.java
├── src/main/resources/
│   └── application.properties
├── pom.xml (if using Maven)
└── build.gradle (if using Gradle)
Class and interface outline (just for one class)
public class KafkaConsumerService {
    // Configure and initialize Kafka consumer
    // Listen to a specified Kafka topic
    // Deserialize incoming messages
    // Pass the message to multicast services
}
Again, starting with basic scaffolding makes it easier for developers to spot mistakes or opportunities for improvement, rather than having to comb through the full code across multiple files.
3) 🔄 Mid-Loop Code Generation
Once you have scaffolding in place, you can generate scoped blocks of code by providing a code outline or function description and asking the AI to finish the work.
This may look obvious, and it has been among the first uses of AI in coding (since the first Copilot release), but it takes on new meaning as part of your wider top-down strategy.
Example scaffolding
private static int generateFibonacci(int n) {
    // This function should generate Fibonacci numbers
}
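Given the stub above, the assistant fills in the body. A plausible completion might look like this (illustrative only; the assistant's exact output will vary):
private static int generateFibonacci(int n) {
    // Iteratively compute the n-th Fibonacci number (0, 1, 1, 2, 3, 5, ...)
    if (n <= 1) {
        return n;
    }
    int previous = 0;
    int current = 1;
    for (int i = 2; i <= n; i++) {
        int next = previous + current;
        previous = current;
        current = next;
    }
    return current;
}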
4) 🔍 Test Case Generation
The final step in this workflow might be to generate test cases to ensure robust code coverage, reducing manual effort.
For TDD fans, you can take the opposite approach as well! You can design your tests first, and then feed those tests into a similar prompt:
Example prompt
Create a code outline for a Java class that will pass the following JUnit tests: [paste in tests]
Some IDEs have an inline “generate tests” option; otherwise, you can use a basic prompt asking to generate test cases for the selected class.
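As a sketch of what the forward direction can produce, here are JUnit 5 tests for the generateFibonacci function from the previous step (illustrative only; FibonacciGenerator is a hypothetical enclosing class, and the method is assumed to be accessible to the test):
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class FibonacciGeneratorTest {

    @Test
    void returnsBaseCases() {
        // The sequence starts 0, 1
        assertEquals(0, FibonacciGenerator.generateFibonacci(0));
        assertEquals(1, FibonacciGenerator.generateFibonacci(1));
    }

    @Test
    void returnsLaterValues() {
        // 0, 1, 1, 2, 3, 5, 8, 13
        assertEquals(8, FibonacciGenerator.generateFibonacci(6));
        assertEquals(13, FibonacciGenerator.generateFibonacci(7));
    }
}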
🛠️ Supporting Techniques
You can consider the above a basic workflow for interacting with AI coding agents, but it is far from the only way AI helps in the dev process.
Here is a series of supporting techniques that come in handy in your daily work:
1) 📜 Stack Trace Analysis
AI-powered stack trace interpretation quickly identifies root causes of errors, saving time diagnosing and fixing runtime issues.
Make it a reflex: any time an error produces a stack trace, ask the code assistant to explain it rather than parsing it manually.
Example prompt
Analyze this Java stack trace and suggest the root cause and potential fixes: [Insert stack trace]
Example output
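As an illustration (the trace and class names below are hypothetical), given a trace ending in:
java.lang.NullPointerException
    at com.example.OrderService.calculateTotal(OrderService.java:42)
the assistant will typically point to the first frame in your own code, explain that something dereferenced on that line is null (for example, an order with no line items or a dependency that was never injected), and suggest concrete fixes such as a null check, returning early, or correcting the initialization.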
2) 🔧 Refactoring Existing Code
AI-assisted restructuring of legacy code improves readability and efficiency, enabling large-scale, consistent refactoring.
Before manually refactoring code, see what suggestions are made by a code assistant.
Example prompt
Refactor this Java function to improve readability and efficiency: [Insert longer explanation, context, and code]
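As a minimal sketch of what this can look like (the function name and thresholds below are invented for illustration), a nested conditional like this:
// Before: nested conditionals with a mutable local
public static String discountLabel(int quantity) {
    String label;
    if (quantity >= 100) {
        label = "bulk";
    } else {
        if (quantity >= 10) {
            label = "volume";
        } else {
            label = "standard";
        }
    }
    return label;
}
might come back as a flatter, side-effect-free version:
// After: guard clauses, no mutable state
public static String discountLabel(int quantity) {
    if (quantity >= 100) return "bulk";
    if (quantity >= 10) return "volume";
    return "standard";
}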
3) 📖 Learning New Techniques
An underrated use case for AI is accelerating learning and onboarding onto new frameworks and libraries. You can ask the code assistant to explain various aspects of the code and get friendly, approachable explanations.
Example prompt
I have five years of experience writing Java and Spring. Show me how to create a Java 24 virtual thread in Spring.
Example output
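A typical answer covers both the plain Java API and the Spring Boot configuration switch. A condensed sketch, assuming Java 21+ and Spring Boot 3.2+ (where virtual threads and the property below are available):
// Start a virtual thread directly with the core Java API
Thread.ofVirtual()
        .name("indexing-worker")
        .start(() -> System.out.println("Running on " + Thread.currentThread()));

// Or use a virtual-thread-per-task executor wherever an ExecutorService is expected
try (var executor = java.util.concurrent.Executors.newVirtualThreadPerTaskExecutor()) {
    executor.submit(() -> System.out.println("Task on " + Thread.currentThread()));
}
Within Spring Boot itself, setting spring.threads.virtual.enabled=true in application.properties switches the embedded web server and @Async task execution over to virtual threads.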
4) 🔬 Complex Query Writing
Sometimes you need to write complex or tedious expressions, such as regexes or SQL queries. AI can generate many of these code-native expressions for you.
Example data to parse
[2025-03-14 12:35:29] INFO User: jdoe | IP: 192.168.1.10 | Action: LOGIN_SUCCESS | TXN: A12B3C
[2025-03-14 12:36:10] WARN User: asmith | IP: 10.0.0.25 | Action: PASSWORD_ATTEMPT_FAILED | TXN: X9Y8Z7
[2025-03-14 12:37:45] ERROR User: mwong | IP: 172.16.5.3 | Action: DATABASE_TIMEOUT | TXN: KLMNOP
[2025-03-14 12:40:12] INFO User: kthomas | IP: 203.0.113.99 | Action: DATA_EXPORT | TXN: QWERTY
Example prompt
Step 1: Create a regular expression that will parse timestamps, usernames, IP addresses, user actions, and transaction ids from log data in this format:
[2025-03-14 12:35:29] INFO User: jdoe | IP: 192.168.1.10 | Action: LOGIN_SUCCESS | TXN: A12B3C
Step 2: Generate Java code that will integrate this RegEx
Example output
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Regular expression pattern
String regex = "\\[(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2})\\] \\w+\\s+User: (\\w+) \\| IP: ([\\d.]+) \\| Action: (\\w+) \\| TXN: (\\w+)";

// Compile the pattern and match it against a single log entry
Pattern pattern = Pattern.compile(regex);
Matcher matcher = pattern.matcher(logEntry);
5) 📑 Code Documentation
Code assistants can help you comment code for better understandability and readability. They can also generate actual docs in markup formats like AsciiDoc and LaTeX.
Example prompt
Generate documentation for this code in AsciiDoc markup format, meant to be read by other developers.
The output will resolve into AsciiDoc markup similar to this:
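(Illustrative sketch; the class and method names assume the KafkaConsumerService scaffolded earlier.)
= KafkaConsumerService
:toc:

== Overview
Consumes messages from the configured Kafka topic and forwards each record to the multicast services (Postgres, REST, SMTP).

== Methods

=== consume(String message)
Deserializes the incoming payload and dispatches it to each downstream service.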
6) 💡 Code Explanation
Code assistants are amazing at providing insights into the functionality of existing code. This can be especially helpful when trying to interpret code in frameworks you may be less familiar with.
Example code
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.*;

@SpringBootApplication
@RestController
public class SpringHelloWorld {

    @GetMapping("/")
    public String hello() {
        return "Hello, World!";
    }

    public static void main(String[] args) {
        SpringApplication.run(SpringHelloWorld.class, args);
    }
}
Example prompt
Explain the purpose of each annotation in this Spring Boot controller class: [Insert code]
Example output
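A typical answer will walk through each annotation along these lines:
@SpringBootApplication marks this class as the application's entry point; it combines @Configuration, @EnableAutoConfiguration, and @ComponentScan, so Spring Boot configures itself and scans for components automatically.
@RestController combines @Controller and @ResponseBody, so return values from handler methods are written directly to the HTTP response body instead of being resolved as view names.
@GetMapping("/") maps HTTP GET requests for the root path to the hello() method; it is a shorthand for @RequestMapping(method = RequestMethod.GET).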
📌 Bottom line
And that’s it for today! Here are the key takeaways on using AI effectively in your coding process:
🗺️ Adopt a top-down workflow — Start AI interactions with requirements and high-level design before generating code, to maintain control and catch errors early.
🏗️ Scaffold first, then generate — Use AI to create code outlines (classes, functions) from specifications before asking it to fill in the implementation details.
🧪 Leverage AI for testing — Generate test cases automatically for your code, or use AI to implement code based on your pre-written tests (TDD style).
🔍 Automate tedious diagnostics — Instantly analyze stack traces or generate complex queries (SQL, Regex) by feeding them to your AI assistant.
🔧 Refactor and document smarter — Use AI suggestions for refactoring existing code and generating comments or basic documentation, saving significant manual effort.
📖 Accelerate learning on the fly — Ask AI to explain unfamiliar code snippets, new libraries, or programming techniques directly within your workflow.
Finally, I want to thank DX again for graciously partnering on this piece. You can find their full guide on AI-assisted engineering below 👇
It includes the top use cases for AI, prompting techniques, and leadership strategies for encouraging AI use.
I wish you a great week!
Sincerely 👋
Luca
I love your guidance on generating the spec doc. I tried it out this morning and was surprised at how far I could get with Cursor in the first hour. Your guidance also made it easier to know when to create agentic rules because each step of the prompt instructions made it easy to "eject" prompts for cursor rules.
One tweak my GPT made was to incorporate more of the test writing into the process rather than treating it as something after the fact. I imagine that spending more effort on the initial prompts could optimize this.
The use cases you list have a surprisingly low adoption rate. The best use case (stack trace analysis) was only reported by 30% of developers, and the range drops all the way to 10%. It's almost like the chart should be flipped to show use cases developers haven't adopted yet because it shows how 70-90% of developers still need help adopting AI for these situations. How were these use cases chosen?
We did some related research on Dev Interrupted recently. Here are some of the early results:
- The most common use cases where devs are working with AI are writing code (75%), writing tests (68%), and writing docs (53%).
- The most commonly reported use cases where AI does the entire task autonomously are PR descriptions (25%), writing tests (22%), and writing docs (18%).
- 85% of devs are still managing project management tasks entirely themselves (creating tasks, prioritizing work, defining task requirements). There is still a lot of room to grow upstream from writing code.
You can read about our experiment here: https://devinterrupted.substack.com/p/the-matrix-that-makes-your-ai-strategy