This Blog Was Made by AI
The point of this article is not the act of building a blog website with AI itself, but a reflection on how vibe coding is changing the engineering field.
I've been coding for 20 years, and there is a pretty big mental shift needed to go from 'working backwards from the desired outcome to the necessary architecture and implementing it' to 'describing the desired outcome and guiding the implementation'.
This blog post is my reflection on the counter-intuitive findings from moving between those two mental models.
I rebuilt my entire blog without writing a single line of code. Here's how:
I gave Claude Code access to my old gajus.com codebase and the following instructions:
- Migrate to react-router using experimental RSC features
- Generate a Skill that uses Playwright to fetch articles, download images, and convert them to markdown (see the sketch after this list)
- Use Exa to search for all my articles across the Internet and create a migration.md checklist
- Deploy background agents to migrate articles one by one
- Adopt the visual style of a typical Vercel website
- Fix all SEO and accessibility issues
- Test everything using Playwright
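To make the Skill item concrete, here is a minimal sketch of the fetch-and-convert step, assuming the playwright and turndown npm packages. The URL, selector, and output path are illustrative assumptions, not the code Claude Code actually generated, and image downloading is omitted for brevity.

```typescript
// Hypothetical sketch of the fetch-and-convert step (not the generated Skill).
import { chromium } from 'playwright';
import TurndownService from 'turndown';
import { writeFile } from 'node:fs/promises';

const migrateArticle = async (url: string, slug: string): Promise<void> => {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  await page.goto(url, { waitUntil: 'networkidle' });

  // Grab the rendered article body; the selector depends on the source site.
  const html = await page.locator('article').innerHTML();

  // Convert the HTML to markdown for the new content directory.
  const markdown = new TurndownService().turndown(html);
  await writeFile(`content/${slug}.md`, markdown);

  await browser.close();
};

await migrateArticle('https://example.com/my-old-article', 'my-old-article');
```

Turndown handles the HTML-to-markdown conversion, so the Skill only has to worry about locating the article body on each source page.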
I woke up to a new website. This website.
This experiment led me to a few realizations:
I would never migrate frameworks without a commercial reason or genuine interest in the new stack. For larger applications, migrations were too expensive to prioritize.
Watching this happen autonomously changes everything. The cost of switching has collapsed.
This extends beyond frameworks. Every software company will soon have an agent helping new customers migrate data from competitors. Vendor lock-in becomes a problem of the past.
I pushed back against vibe coding early on. LLMs didn't have enough context: how the website looks, how users interact with it, the visual details.
But now the LLM opens a browser, takes screenshots, and reflects on padding and layout. I watched it identify a hyperlink where an incorrect display: block was breaking the layout. It saw the problem visually and fixed the styles.
MCP integrations made end-to-end testing seamless.
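For illustration, here is the kind of end-to-end check an agent can run against a local build. This is a hedged sketch, not the actual test it wrote: the local URL, route, selector, and screenshot path are all assumptions.

```typescript
// Sketch of an agent-runnable visual check (assumed route and selectors).
import { test, expect } from '@playwright/test';

test('article links render inline', async ({ page }) => {
  await page.goto('http://localhost:3000/blog/some-article');

  // A full-page screenshot gives the agent a visual artifact to reason about.
  await page.screenshot({ path: 'screenshots/some-article.png', fullPage: true });

  // The bug described above: a link incorrectly rendered as display: block.
  const display = await page
    .locator('article a')
    .first()
    .evaluate((el) => getComputedStyle(el).display);

  expect(display).not.toBe('block');
});
```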
One deliberate constraint: don't touch the code. I used to obsess over structure: frameworks, code style, organization. This experiment tested whether I could achieve outcomes as a user without ever looking at the code.
To enforce this, I didn't use my keyboard. Every input came through voice dictation via Wispr Flow.
At first, this felt like a handicap: constant frustration that I couldn't just edit a line. But as I opened more tabs with Claude Code instances, it became a superpower. Less obsession with details, more focus on the bigger picture, more velocity.
LLMs make mistakes. They also fix them quickly when you isolate the scope. I saw bad choices along the way: questionable markdown parsing, the wrong libraries. Pausing the agent, giving it context, and letting it correct itself was smooth.
This is what the engineering role becomes: orchestrator. That's exciting for leaders who want to iterate faster and see their vision realized.
I only catch mistakes because of years of coding experience. I know what to look for.
What about the next wave of engineers? Unless AI stops making mistakes, people without that foundation won't be able to navigate the gaps. What used to require research and trial-and-error is now a process with little opportunity to learn. That's a real problem: we'll see a shortage of talent capable of using these tools sustainably.
The tools are powerful. Experience to wield them still matters.
Building software is no longer about writing code. Engineering isn't gone; it's evolving into orchestration rather than playing the instrument yourself.