I Built 90+ Production Lambdas in 120 Days — Here's the Operating Model That Made It Possible
I've written before about building a serverless platform on AWS as a solo operator. That post covered the what: the architecture, the services, the cost. This post is about the how. Specifically, the operating model that made it possible for one person to ship what normally requires a team of ten and months of sprints.
The Problem With "Vibe Coding"
Let's get something out of the way. Yes, AI can generate code fast. Open ChatGPT, describe what you want, paste the output into your editor. Congratulations, you're vibe coding. And about 80% of what you just generated will need to be refactored, rewritten, or thrown out entirely, because the model knows nothing about your codebase, your patterns, or your constraints.
I know because I tried it that way first. Early on I was prompting Claude and Copilot the way everyone does, describing what I wanted and hoping for the best. The code looked right. It ran. But it didn't follow my DynamoDB key conventions. It didn't match my error handling patterns. It imported libraries I'd explicitly decided against. Every PR was a negotiation between what the AI thought was correct and what my architecture actually required. I was spending more time fixing AI output than I would have spent writing it myself.
ADRs Changed Everything
The turning point came when I started treating my Architecture Decision Records not as documentation for humans but as instructions for AI. Every decision I'd made about the platform went into ADRs that became a searchable knowledge base: why DynamoDB over Aurora, why single-table design, why Lambda over ECS, how to structure error responses, what the naming conventions are.
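Concretely, each ADR is a short, structured record that an AI assistant can retrieve and follow. A minimal sketch of what one might look like (the number, key shapes, and wording here are illustrative assumptions, not the author's actual records):

```markdown
# ADR-007: Single-table DynamoDB key conventions

Status: Accepted

## Context
Multiple entity types (users, content, payouts) share one table, and
generated code kept inventing ad-hoc key shapes.

## Decision
Every item uses PK = "TENANT#<tenantId>" and SK = "<ENTITY>#<id>".
Queries never scan; access patterns are listed in the ADR before code
is written.

## Consequences
Any generated code whose keys don't match this shape fails review
by default.
```

The value is less in any single record than in the corpus: once the decisions are written down in one place, they can be fed to the AI as context on every task.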
This is what Context Engineering actually means. Not prompt engineering. Not fine-tuning. Giving AI the same onboarding material you'd give a senior engineer joining your team, except the AI reads all of it instantly and never forgets. When I asked for a new Lambda function, the AI already knew my patterns. It already knew my DynamoDB access conventions. It wrote code that passed review on the first try, because it had the same context I had.
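To make "it already knew my patterns" concrete, here is a minimal sketch of the kind of convention-following code this setup produces: a Lambda handler that builds single-table keys and returns a uniform error envelope. The key format, field names, and error codes are hypothetical examples, not the author's actual conventions.

```python
import json


def item_keys(tenant_id: str, entity: str, entity_id: str) -> dict:
    """Single-table keys: PK partitions by tenant, SK namespaces the entity."""
    return {"PK": f"TENANT#{tenant_id}", "SK": f"{entity.upper()}#{entity_id}"}


def error_response(status: int, code: str, message: str) -> dict:
    """Uniform error envelope every handler returns on failure."""
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"error": {"code": code, "message": message}}),
    }


def handler(event, context):
    """API Gateway proxy-style handler following the (assumed) ADR conventions."""
    body = json.loads(event.get("body") or "{}")
    if "tenantId" not in body:
        return error_response(400, "MISSING_TENANT", "tenantId is required")
    keys = item_keys(body["tenantId"], "user", body.get("userId", "unknown"))
    # A real handler would read/write DynamoDB here; this sketch just
    # echoes the keys to show the convention.
    return {"statusCode": 200, "body": json.dumps({"keys": keys})}
```

The point isn't this particular code; it's that when the conventions live in ADRs, every generated handler comes back already shaped like this, instead of each one inventing its own key scheme and error format.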
The Self-Correcting Loop
The real multiplier wasn't just front-loading context. It was the feedback loop. When generated code drifted from my standards, it wasn't a failure; it was a signal that my documentation had a gap. I'd fix the code, write a new ADR capturing the missing pattern, and that mistake never happened again. Not just for me: for every future task the AI would ever run against that codebase.
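The loop itself is almost mechanical: every review finding that reveals a documentation gap becomes a new numbered ADR file, which the next context-retrieval pass picks up automatically. A hypothetical sketch of that capture step (the directory layout and naming scheme are assumptions, not the author's tooling):

```python
from pathlib import Path

ADR_DIR = Path("docs/adr")


def next_adr_number() -> int:
    """Next sequential ADR number, based on existing numbered files."""
    existing = sorted(ADR_DIR.glob("[0-9]*.md"))
    return int(existing[-1].stem.split("-")[0]) + 1 if existing else 1


def capture_gap(title: str, context: str, decision: str) -> Path:
    """Turn a review finding into a new ADR the AI will see on future tasks."""
    ADR_DIR.mkdir(parents=True, exist_ok=True)
    n = next_adr_number()
    path = ADR_DIR / f"{n:03d}-{title.lower().replace(' ', '-')}.md"
    path.write_text(
        f"# ADR-{n:03d}: {title}\n\n"
        f"Status: Accepted\n\n"
        f"## Context\n{context}\n\n"
        f"## Decision\n{decision}\n"
    )
    return path
```

Whether the capture step is a script or a habit matters less than that it happens every time: the mistake gets written down once, and the context base grows monotonically.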
After a few weeks the system was meaningfully smarter than when I started. After a few months, a 90% first-time approval rate wasn't aspirational; it was just the baseline. I wasn't prompting anymore; I was co-engineering.
What 90+ Lambdas Actually Looks Like
User auth, content moderation, payment processing, creator payouts, vector memory, analytics pipelines, notification systems, admin tooling. Not toy functions. Production services handling real traffic, real money, real compliance requirements. A consultancy quoted $50K for just the analytics piece. I built it in 8 hours. The entire platform runs for roughly $5/month.
When one of those Lambdas grew to 1,348 lines and became untouchable, I didn't spend a sprint planning the refactor. I fed the ADRs and code-map into the context, and the AI decomposed it into clean, testable modules that matched my existing patterns. Because it already knew what "good" looked like in my codebase.
The Operating Model, Not the Engineer
I want to be clear about something. This isn't a story about one fast engineer. It's a story about an operating model that makes any engineer faster. The 90+ Lambdas, the 120-day timeline, the $5/month operating cost: those are outputs. The input is OutcomeOps: Context Engineering, ADR-driven development, and a self-correcting loop that gets smarter with every task.
If you're still prompting AI like a search engine and wondering why the output doesn't match your standards, build skills, not agents. Give the AI your context. Let it learn your architecture the same way a great engineer would. The results will speak for themselves.