Ad Engine started as a sketch on a whiteboard in late February: a unified cockpit for launching, optimizing, and measuring programmatic ad campaigns across Meta and Google, with budget pacing and creative rotation baked in. Six weeks later it was in production. This is the build log: the architecture, the tradeoffs, and the two mistakes that cost us real time.
The shape of the problem
Most ad tooling is either a thin wrapper around one network’s dashboard or a bloated enterprise suite that assumes a dedicated ops team. The gap we set out to fill was narrower: a single surface where a founder or a two-person marketing team could orchestrate creatives, pace budgets, and read clean, unified performance data, without tab-switching between Ads Manager, Google Ads, and a spreadsheet named FINAL_v7.xlsx.
That meant three non-negotiable capabilities: unified campaign creation with channel-specific validation, automated budget pacing and creative rotation rules, and real-time performance analytics that actually reconcile with what the networks report.
The stack
Nothing exotic. Every choice was made for boringness, a word we use as a compliment:
- Next.js 15 with the App Router, server components, and Server Actions for mutations
- TypeScript, strict mode, end-to-end
- Postgres for everything (Supabase-hosted for speed of setup)
- Node workers coordinating the ad-network API calls
- Tailwind + Recharts for the cockpit UI
There is no Redis. There is no message queue. There is no Kubernetes. When you have six weeks, you earn every new piece of infrastructure by proving you cannot ship without it. We never hit that bar.
Week 1: building the foundation
We spent the first week almost entirely on the data model. If you get this part wrong, every week afterward feels like renovation work. The key insight: ad network reporting is not reliable. Events arrive out of order, numbers get resent, and conversions are sometimes backfilled days later. The data model has to absorb all of that without ever double counting.
We store every metric update as a new entry in an append-only ledger, never overwriting old ones. Current totals are a sum over all entries. This way, when Meta resends a day of conversions at 2am, nothing gets counted twice, and if a number changes three weeks later, we have the full history of how it got that way.
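To make the idea concrete, here is a minimal in-memory sketch of that ledger. The names (`LedgerEntry`, `applyBatch`) and the dedup key are illustrative, not our real schema; in production the uniqueness check is a database constraint, not a `Set`.

```typescript
type LedgerEntry = {
  campaignId: string;
  metric: "spend" | "conversions";
  day: string;     // YYYY-MM-DD reporting day
  batchId: string; // network-provided batch/report identifier
  delta: number;   // signed change; restatements arrive as corrections
};

const ledger: LedgerEntry[] = [];
const seen = new Set<string>(); // stands in for a DB unique constraint

// Appends entries; replays of the same batch are no-ops, so a network
// resending a day of conversions at 2am cannot double-count.
function applyBatch(entries: LedgerEntry[]): void {
  for (const e of entries) {
    const key = `${e.campaignId}:${e.metric}:${e.day}:${e.batchId}`;
    if (seen.has(key)) continue; // idempotent: already recorded
    seen.add(key);
    ledger.push(e);
  }
}

// Current totals are just a sum over the ledger.
function total(campaignId: string, metric: LedgerEntry["metric"]): number {
  return ledger
    .filter((e) => e.campaignId === campaignId && e.metric === metric)
    .reduce((sum, e) => sum + e.delta, 0);
}
```

A replayed batch changes nothing, and a late correction lands as a new signed entry rather than a mutation, which is exactly the property the 2am resend scenario needs.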
Weeks 2 and 3: the dashboard
The campaign creation UI was the piece we knew users would touch most, so we gave it disproportionate design attention. The central idea: one form, multiple channels, live validation. The form collects channel-agnostic fields (objective, budget, audience hints, creatives) and the orchestration layer splits those into network-specific API calls under the hood.
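The orchestration split can be sketched as one adapter per network translating the shared draft into that network's payload. The interface and field names below are illustrative assumptions, not our actual API surface.

```typescript
// One channel-agnostic draft in, one payload per selected network out.
type CampaignDraft = {
  objective: "conversions" | "traffic";
  dailyBudgetCents: number;
  audienceHints: string[];
  creativeIds: string[];
};

interface ChannelAdapter {
  channel: string;
  // Translates the shared draft into this network's API payload shape.
  toPayload(draft: CampaignDraft): Record<string, unknown>;
}

const metaAdapter: ChannelAdapter = {
  channel: "meta",
  toPayload: (d) => ({
    objective: d.objective.toUpperCase(),
    daily_budget: d.dailyBudgetCents,
    ad_ids: d.creativeIds,
  }),
};

function buildPayloads(draft: CampaignDraft, adapters: ChannelAdapter[]) {
  return adapters.map((a) => ({
    channel: a.channel,
    payload: a.toPayload(draft),
  }));
}
```

Adding a new network then means adding a new adapter, which is what makes the TikTok and LinkedIn plan later in this post cheap.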
Every field validates against every selected channel in real time. Try to ship a 10-second video to a placement that caps at 6 seconds? The form knows before you hit save. This is the kind of polish that doesn't sell itself in a demo but earns your users' trust in the first hour they use the product.
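The validation idea, sketched: each channel declares a spec, and a draft creative is checked against every selected channel at once. The channel names and the duration caps here are made up for the example.

```typescript
type Creative = { kind: "image" | "video"; durationSec?: number };

type ChannelSpec = { name: string; maxVideoSec: number };

// Illustrative specs; real caps come from each network's placement docs.
const CHANNELS: ChannelSpec[] = [
  { name: "meta-stories", maxVideoSec: 15 },
  { name: "google-bumper", maxVideoSec: 6 },
];

// Returns one human-readable error per violated channel spec, so the
// form can flag problems before the user hits save.
function validate(creative: Creative, selected: string[]): string[] {
  const errors: string[] = [];
  for (const spec of CHANNELS.filter((c) => selected.includes(c.name))) {
    if (creative.kind === "video" && (creative.durationSec ?? 0) > spec.maxVideoSec) {
      errors.push(
        `${spec.name}: video is ${creative.durationSec}s, cap is ${spec.maxVideoSec}s`
      );
    }
  }
  return errors;
}
```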
Week 4: pacing and optimization
This is the engine in Ad Engine. A scheduled worker runs every 15 minutes and asks, for every active campaign: are we pacing toward the budget goal, and are we respecting the performance rules? If spend is under-pacing, the worker nudges bids up. If a creative's ROAS falls below threshold for two consecutive windows, the worker pauses it and reallocates budget to the top performer.
The rules are user-configurable (ROAS, CPA, CTR, custom metrics), but the defaults ship tuned for the common case. Nine out of ten users never touch them, which is the point. Automation should be opinionated by default and configurable when needed.
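The worker's decision step can be condensed into a pure function, which also makes it trivial to test. The thresholds, the 10% pacing tolerance, and the 5% bid nudge below are illustrative defaults, not the shipped values.

```typescript
type CreativeStats = { id: string; roas: number[] }; // most recent windows last

type Decision =
  | { action: "nudge-bid-up"; factor: number }
  | { action: "pause-creative"; creativeId: string }
  | { action: "hold" };

function decide(
  spentSoFar: number,
  expectedSpend: number, // linear pace toward the budget goal
  creatives: CreativeStats[],
  roasFloor = 1.0
): Decision {
  // Pause any creative whose ROAS is below the floor for two consecutive
  // windows; budget reallocation to the top performer happens downstream.
  for (const c of creatives) {
    const recent = c.roas.slice(-2);
    if (recent.length === 2 && recent.every((r) => r < roasFloor)) {
      return { action: "pause-creative", creativeId: c.id };
    }
  }
  // Under-pacing by more than 10%? Nudge bids up slightly.
  if (spentSoFar < expectedSpend * 0.9) {
    return { action: "nudge-bid-up", factor: 1.05 };
  }
  return { action: "hold" };
}
```

The scheduled worker just calls this every 15 minutes per active campaign and translates the decision into network API calls.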
Week 5: analytics
Because of the ledger schema, analytics was almost free. Every question ("what was my ROAS on this creative last Tuesday?", "which audience is converting best on Google but tanking on Meta?") is a query over the same ledger. Recharts on the frontend for visualization. Drill-downs are just filter predicates.
The one piece of real engineering: materialized views for the default dashboard queries, refreshed every 60 seconds. The raw ledger stays authoritative, but the view makes the "at a glance" screen instant.
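The refresh loop is small enough to sketch in full. The view names and the injected `exec` function are assumptions; in production this runs against Postgres through the worker's connection pool on a 60-second timer.

```typescript
// Hypothetical names for the default dashboard's materialized views.
const DASHBOARD_VIEWS = ["campaign_daily_rollup", "creative_daily_rollup"];

// CONCURRENTLY keeps the "at a glance" screen readable mid-refresh
// (Postgres requires a unique index on the view for this to work).
// `exec` is kept synchronous here purely for the sake of the sketch.
function refreshDashboardViews(exec: (sql: string) => void): void {
  for (const view of DASHBOARD_VIEWS) {
    exec(`REFRESH MATERIALIZED VIEW CONCURRENTLY ${view}`);
  }
}
```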
Week 6: launch, and the two mistakes
We launched on schedule. It worked. But there were two things we'd do differently.
Mistake 1: Putting off credential rotation
We stored encrypted OAuth refresh tokens per workspace from day one. Good. What we didn't build until week 5 was graceful handling of token expiry. When Meta invalidated a token on a long-running sync, the worker crashed loudly, and the user got a cryptic error in the UI. A small amount of foresight here (automatic token refresh with a fallback re-auth prompt) would have saved us a frantic afternoon the day before launch.
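What that foresight might have looked like, as a sketch: attempt the call, refresh the token once on an auth failure, and only surface a re-auth prompt when the refresh itself fails. The error shape (`status: 401`) and all names here are illustrative.

```typescript
// Raised when the refresh token itself is dead and the user must re-connect.
class ReauthRequired extends Error {}

async function withTokenRefresh<T>(
  call: () => Promise<T>,
  refreshToken: () => Promise<boolean> // false: refresh token is also invalid
): Promise<T> {
  try {
    return await call();
  } catch (err) {
    // Only an auth failure warrants a refresh; anything else propagates.
    if ((err as { status?: number }).status !== 401) throw err;
    if (!(await refreshToken())) {
      throw new ReauthRequired("workspace must re-connect the ad account");
    }
    return await call(); // one retry with the fresh token
  }
}
```

The UI then catches `ReauthRequired` and shows a re-connect prompt instead of a cryptic worker error.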
Mistake 2: Over-engineering the first version of rules
Our initial rules engine could express arbitrary boolean combinations of conditions. Beautiful. Unused. Every real user wanted to say "if ROAS drops below X for two days, pause." We shipped with that one pattern plus three presets, and the full expression engine is still sitting in a feature flag waiting for its first real use case. Build what you can see people reaching for, not what you think they might want eventually.
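For contrast, the entire shipped rule surface fits in a few lines. The metric names, preset values, and `shouldPause` helper are illustrative, not the production code.

```typescript
// The single pattern users actually asked for:
// "if <metric> drops below X for N consecutive days, pause."
type Rule = {
  metric: "roas" | "cpa" | "ctr";
  below: number;          // pause triggers when the metric is below this
  consecutiveDays: number;
  action: "pause";
};

// Hypothetical presets; values here are made up for the example.
const PRESETS: Record<string, Rule> = {
  conservative: { metric: "roas", below: 0.8, consecutiveDays: 3, action: "pause" },
  default:      { metric: "roas", below: 1.0, consecutiveDays: 2, action: "pause" },
  aggressive:   { metric: "roas", below: 1.2, consecutiveDays: 2, action: "pause" },
};

// True only when the last N daily values are all under the threshold.
function shouldPause(rule: Rule, dailyValues: number[]): boolean {
  const tail = dailyValues.slice(-rule.consecutiveDays);
  return tail.length === rule.consecutiveDays && tail.every((v) => v < rule.below);
}
```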
What we'd build next
Ad Engine today covers Meta and Google. The architecture is deliberately channel-agnostic, so adding TikTok and LinkedIn is a matter of writing two new adapters and a bit of creative-spec validation. That's the next lap.
The deeper bet is on AI-assisted creative generation and optimization: not novelty for novelty's sake, but genuinely embedding models into the rules engine so pacing can account for creative fatigue and audience saturation in ways a handwritten rule can't.
If you're trying to ship something similar (a platform that needs to feel inevitable, in six weeks), we'd love to talk. Drop us a line.