How to Ship Fast Without Breaking Things
There's a false dichotomy that floats around startup culture: move fast and break things, or move carefully and ship slowly.
The best teams do neither. They ship frequently and rarely break things. The two aren't in tension — they're achieved with the same disciplines.
Small changes ship safely
The biggest predictor of a broken deployment is the size of the change being deployed.
A 50-line change is easy to review, easy to test, and easy to roll back. A 2,000-line change is none of those things. When something breaks after a large deployment, you have a lot of code to look through. The root cause could be anywhere.
Habit: prefer small, complete changes.
"Complete" is the key word. Small changes should still be meaningful — a complete feature, a complete fix, a complete refactor of a single function. Not half a feature checked in because the day ended.
Feature flags: ship before you're ready to release
A feature flag lets you deploy code to production without it being visible to users. The code is live; the switch is off.
This decouples two things that shouldn't be coupled: deploying (getting code to the server) and releasing (making it available to users).
With feature flags:
- You can deploy any time, without coordinating with marketing or waiting for QA
- You can release to a subset of users first (% rollout)
- You can turn off a feature instantly if something goes wrong, without a deployment
The overhead of implementing feature flags is small. A basic implementation in Nuxt is a useFeatureFlags composable that checks an environment variable or a remote config.
Pre-deployment checks that run automatically
Manual review catches some things. Automated checks catch everything they're configured to catch, every time, with no effort from the developer.
At minimum:
- TypeScript — catches a whole class of would-be runtime errors at compile time
- Linting — catches common code quality issues before they reach review
- Automated tests — even a small suite of tests for your core flows catches regressions
These run in CI (GitHub Actions, for most teams) on every pull request. Nothing merges until they pass.
The investment: a few hours to set up. The return: you stop spending time on bugs that automated checks would have caught immediately.
The smoke test habit
After every deployment, spend three minutes manually testing the critical paths:
- Can a new user sign up?
- Can an existing user log in?
- Can a user complete the core action (create a project, send a message, make a payment)?
These three checks take less time than a coffee break and catch the class of regressions that are most visible and most damaging: broken authentication, broken payments, broken core flow.
Write these down as a literal checklist. Run it every time.
Monitoring that tells you before users tell you
Error tracking (Sentry, Bugsnag, etc.) is table stakes. If an exception is thrown in your application, you should know about it before a user files a support ticket.
Beyond errors, consider:
- Uptime monitoring — if your app goes down, you want to know in minutes, not hours
- Performance monitoring — if an API call that used to take 200ms is suddenly taking 2000ms, something has changed
- Business metric monitoring — if sign-ups drop 80% overnight, you want to know immediately (and this is different from a technical error)
The configuration time for these is low. The value the first time they catch something is high.
Write postmortems for production incidents
When something does break, the default response is: fix it and move on.
Better: fix it, write down what happened in two paragraphs, and add one process improvement that would have prevented it.
Over time, this produces a shared understanding of where your system is fragile, and a track record of improvements that make it less so. The discipline doesn't need to be formal — a Notion doc or a Slack message works.
Rollback as a first-class option
Sometimes the fastest fix for a broken deployment is to revert to the previous version while you diagnose the root cause.
Vercel makes this trivial: one click redeploys a previous version. Most hosting platforms and CI/CD systems offer an equivalent.
Make sure your team knows how to roll back. It shouldn't be a 30-minute investigation the first time you need it.
The compounding effect
Each of these disciplines, applied consistently, compounds. A team with fast CI, small PRs, feature flags, and automated testing ships faster than a team without them — not despite the quality practices, but because of them.
Fewer regressions means less time firefighting. Less time firefighting means more time shipping. More frequent shipping means problems surface smaller and get fixed faster.
Speed and quality aren't in tension. The practices that produce quality are also the practices that produce speed.