AI Didn't Write My Code. It Let Me Design It.

5 min read
AI · Software Design · Career · Decision Making

AI gives us leverage, but it doesn't free us from responsibility. We still need to know what good looks like — and care enough to insist on it.

Part 3 of a series on building with the modern web platform — and letting AI handle the learning curve.

Throughout this series I've shared how I built a blog with a framework I'd never used, removed JavaScript the browser didn't need, and replaced a custom menu script with a single HTML attribute. In every case, AI was in the room. But it wasn't writing the code. It was giving me the space to make decisions.

This last article is about that shift — and why leverage without responsibility is the fastest way to build something you'll regret.

The senior advisor

The best way I can describe how I work with AI is as a senior advisor. Not a junior developer who writes what I tell it. Not a magic box that produces features. More like having a senior engineer who's also a teacher — someone who knows the technical landscape, can explain tradeoffs, and lets me decide.

When I started building the blog, the very first architecture the AI suggested was far from ideal. Too many layers, unnecessary abstractions, the kind of structure that looks impressive on a diagram but creates friction everywhere. I didn't accept it. I used the conversation to explore my own thinking — what felt right, what felt heavy, where I wanted simplicity. The AI adapted. After a few iterations, we landed on something clean.

That's the dynamic. I don't take the first answer. I use the AI to think out loud, test ideas, and get to a design that matches what I believe good looks like. The AI doesn't care whether it builds something simple or something overengineered. It solves whatever problem you put in front of it. It's up to me to define the problem well.

Leverage doesn't mean freedom from responsibility

This is the part I think most people miss. AI gives us extraordinary leverage. I can explore a new framework, evaluate an unfamiliar API, and ship a working feature — all in a fraction of the time it used to take. But leverage is neutral. It amplifies whatever direction you point it in. Good decisions become great outcomes. Bad decisions become expensive mistakes, faster.

I've seen it in my own work. When I'm clear about what I want — remove complexity, use the platform, keep it simple — the AI helps me get there efficiently. When I'm vague or lazy about the direction, the AI will happily produce something that works but that I wouldn't want to maintain. It doesn't push back. It doesn't say "this is going to be a mess in six months." That's my job.

It's not unlike how contractors sometimes work. They deliver what the client asks for. They make the client happy in the short term. But they're not invested in the long-term health of the codebase. If you don't design the system, someone — or something — will design it for you. And it won't optimize for what you care about.

We still need to know what good looks like. We still need conviction about what's possible when we refuse to accept "good enough" as the final answer. AI doesn't replace that. It tests it.

From coding to deciding

In previous companies, I advocated for coding guidelines to keep things simple. No complex design patterns when a function will do. No heavy libraries when the platform suffices. No "we may need this in the future." Those conversations were hard to win. There's always pressure to over-build, to follow the hype, to add layers "just in case."

Working with AI on my own projects, I finally have the leverage to practice what I always preached. I spend less time on the mechanics of writing code and more time on the decisions that shape it. Which pattern fits? What can I remove? Does the platform handle this already? What's the simplest thing that works?

That's what I mean by "AI let me design it." Not design in the visual sense — design as in the architecture of decisions. I have more time for the ins (the structure, the tradeoffs, the simplification) because AI handles more of the outs (the syntax, the boilerplate, the "how do I do X in this framework").

Even this series was built that way

This series itself is an example. The Popover API article? That started as a conversation where I asked "can we use this?" The AI researched, proposed an approach, implemented it. It broke. We debugged together. I made the calls — no resize handler, simpler backdrop, gentler transition. The AI executed. I designed.
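To make those calls concrete: the result of that kind of decision looks roughly like the sketch below. This is not the actual markup from the blog — the ids, links, and timings here are invented for illustration — but it shows the shape of "one attribute instead of a script":

```html
<!-- One attribute replaces the custom menu script: the browser
     handles toggling, light-dismiss, and the Escape key. -->
<button popovertarget="site-menu">Menu</button>

<nav id="site-menu" popover>
  <a href="/">Home</a>
  <a href="/articles">Articles</a>
</nav>

<style>
  /* The calls named above: a simpler backdrop, a gentler transition. */
  #site-menu::backdrop {
    background: rgb(0 0 0 / 0.2);
  }
  #site-menu {
    opacity: 0;
    transition: opacity 0.15s ease, display 0.15s allow-discrete;
  }
  #site-menu:popover-open {
    opacity: 1;
    @starting-style { opacity: 0; }
  }
</style>
```

No resize handler is needed because the browser positions and dismisses the popover itself; the CSS only decides how it looks while doing so.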

Then we planned the articles. I shared context about my background, my philosophy, what I wanted to say. The AI interviewed me, proposed a structure, wrote drafts. I rewrote sections, rejected what didn't sound like me, added angles it hadn't considered. The collaboration worked because I knew what I wanted the series to be. The AI made it faster to get there.

Where this is going

AI has already changed how I work. Not in some theoretical future — right now. The tools improve after every iteration. What took a full conversation six months ago now takes a single prompt. The feedback loop keeps shrinking.

I don't know where this is heading. Nobody does. But I know it will look completely different from anything we've seen before. And I know that the developers who treat AI as leverage for better decisions — not as a replacement for having them — will be the ones who thrive.

The ones who ignore it may be left behind. Not because AI will take their jobs, but because others will use it to do the same work with more intention, more speed, and less complexity.

That's what this series was about, really. Not Astro, not the Popover API, not even AI. It was about paying attention to what's possible now — and having the conviction to use it well.