Engineering · March 28, 2026 · 7 min read

The tools we reach for when building software in 2026

Every team has its favorite tools. These are ours: why we pick them, where they run into limits, and the backup choices we keep ready when a project needs something different.

Every studio has defaults. The stack we reach for when a new project starts isn't chosen for novelty or resume-building; it's chosen because we know how these tools behave at 2am when something breaks. Here's what we pick, why we pick it, and the escape hatches we keep ready.

The frontend: Next.js + TypeScript + Tailwind

Next.js has been our frontend default for four years and nothing has come close to replacing it. The App Router, React Server Components, and Server Actions collapse a lot of boilerplate that used to live in a separate API layer. A form now posts to a server function in the same file as the component, with full type safety. That's not nothing; it's a meaningful reduction in the number of places a bug can hide.
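A sketch of what that collapse looks like with the framework details stripped away; `updateDisplayName` and its input shape are invented for illustration, not from a real codebase:

```typescript
// A server mutation and its input type live side by side, so the compiler
// checks the form payload and the handler against the same schema.
type ProfileInput = { displayName: string };

type ActionResult =
  | { ok: true; displayName: string }
  | { ok: false; error: string };

// In Next.js this would carry a "use server" directive; here it is a plain
// async function so the sketch stands alone.
async function updateDisplayName(input: ProfileInput): Promise<ActionResult> {
  const trimmed = input.displayName.trim();
  if (trimmed.length === 0) {
    return { ok: false, error: "Display name is required" };
  }
  // Persisting to the database would happen here.
  return { ok: true, displayName: trimmed };
}
```

The point is the single source of truth: the component, the validation, and the mutation all see one `ProfileInput` type, so a renamed field fails to compile rather than failing at runtime.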

TypeScript on strict mode is non-negotiable. Not because it catches every bug, but because it gives us fearless refactoring. When a product ships and we're four months in, the confidence to rename a domain concept across 400 files comes entirely from the compiler. Without that, you get ossified code: engineers who won't touch the model because nobody knows what'll break.
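A tiny illustration of the kind of bug `strict` surfaces at compile time rather than in production; the `User` shape is hypothetical:

```typescript
// With "strict": true, strictNullChecks forces the undefined case to be
// handled before the optional property is touched.
type User = { id: string; nickname?: string };

function greeting(user: User): string {
  // Without the fallback, `user.nickname.toUpperCase()` would fail to
  // compile: "Object is possibly 'undefined'".
  const name = user.nickname ?? user.id;
  return `Hello, ${name}`;
}
```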

Tailwind handles styling. We used to roll design systems by hand. We used to think Tailwind was verbose. We were wrong on both counts; the trade is a little HTML density for a massive win in maintainability and speed of iteration.

Escape hatch: if a project is mostly static content and SEO is the whole ballgame, we reach for Astro instead. Next.js earns its weight when you have real application state; Astro wins for content-first sites where shipping a 3KB page matters.

The database: Postgres, almost always

Postgres remains the correct answer for 95% of what we build. It gives us JSON when we need it, relational integrity when we need that, full-text search that's good enough for most products, and a rich extension ecosystem (pgvector, pg_trgm, timescaledb) for specialized workloads.

The mistake teams make is reaching for specialized stores too early. You rarely need Elasticsearch on day one. You rarely need a dedicated vector database when pgvector will do. Add complexity when the product earns it, not because you read a blog post about how Company X scaled to a billion users.

The BaaS layer: Supabase

For the majority of our builds, Supabase provides authentication, row-level security, storage, and realtime on top of Postgres. It lets us ship a secure multi-tenant app in days instead of weeks. The tradeoff: you're buying into their APIs and their conception of RLS. For products where auth and data access are the hard parts of the problem, that's a massive head start. For products where they're incidental, the lock-in costs little.

When Supabase isn't a fit, usually because the auth model is genuinely bespoke, or because the product needs to run in a customer's VPC, we drop down to vanilla Postgres plus Auth.js (formerly NextAuth) and handle RLS ourselves.
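When we hand-roll the row-level checks, the core of it is a thin guard that refuses to run a tenant-scoped query without a tenant id. The names below are illustrative, and the in-memory filter stands in for what would be a `WHERE tenant_id = $1` clause in production:

```typescript
// Every data-access call goes through a scope object carrying the tenant id,
// so "forgot the WHERE tenant_id" becomes an explicit failure, not a leak.
type TenantScope = { tenantId: string };

type Row = { tenantId: string; [key: string]: unknown };

function scopedFilter(scope: TenantScope, rows: Row[]): Row[] {
  if (!scope.tenantId) {
    throw new Error("Refusing to run a tenant-scoped query without a tenant");
  }
  // In production this becomes a WHERE clause; filtering in memory keeps
  // the sketch self-contained.
  return rows.filter((row) => row.tenantId === scope.tenantId);
}
```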

The backend: FastAPI for Python, Node / Go for everything else

When we need a dedicated backend service (AI workloads, heavy document processing, integrations with vendor SDKs that only ship Python), FastAPI is the pick. Type hints, automatic OpenAPI schemas, Pydantic for request/response validation. It feels like Next.js's API routes, but in Python.

For services where Python isn't required, Node is the default (because the team already lives in TypeScript), with Go reserved for performance-sensitive orchestrators, the kind of thing that needs to fan out a thousand API calls efficiently. SynapSynk uses Go for exactly this reason: concurrent sync across multiple store APIs, and Go's goroutines make the code beautifully simple.
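The Go version leans on goroutines; in our TypeScript services the same bounded fan-out can be sketched as a small worker pool. The task shape and the limit are made up for the example:

```typescript
// Run `tasks` with at most `limit` in flight at once, preserving result
// order. A minimal stand-in for a goroutine-style fan-out.
async function fanOut<T>(
  tasks: Array<() => Promise<T>>,
  limit: number,
): Promise<T[]> {
  const results = new Array<T>(tasks.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const index = next++; // synchronous claim, safe in a single-threaded runtime
      results[index] = await tasks[index]();
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```

Each worker pulls the next unclaimed task as soon as its current one resolves, so a thousand API calls run at a controlled concurrency instead of all at once.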

AI: Claude, with pragmatism about everything else

Claude is our default model for everything from content extraction to agentic workflows. It's the one that most reliably does what we ask without elaborate prompt scaffolding. For specialized extraction (tax documents, receipts, structured fields on invoices), we pair Claude with Azure Document Intelligence or a purpose-trained ONNX model, in that order of preference, depending on the accuracy bar.

The pattern that works: specialized tool first, Claude as the safety net. A model trained on W-2s will beat a general LLM on a W-2 every time, until it hits an edge case, and then Claude fills the gap. Building it the other way around is how you end up with hallucinated tax data.
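In code, that ordering is just a confidence-gated fallback. The confidence floor and both extractor signatures here are hypothetical, not from a shipped system:

```typescript
// Try the purpose-trained model first; only fall back to the general model
// when the specialized one is not confident enough in its own output.
type Extraction = { fields: Record<string, string>; confidence: number };

const SPECIALIZED_CONFIDENCE_FLOOR = 0.9; // assumed threshold for illustration

async function extractDocument(
  specialized: (doc: string) => Promise<Extraction>,
  general: (doc: string) => Promise<Extraction>,
  doc: string,
): Promise<Extraction> {
  const first = await specialized(doc);
  if (first.confidence >= SPECIALIZED_CONFIDENCE_FLOOR) {
    return first;
  }
  // Safety net: the general model handles the edge case the specialized
  // model was never trained on.
  return general(doc);
}
```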

Deployment: Netlify, Vercel, or Docker. In that order.

For most Next.js projects, Netlify or Vercel is the answer. Both have matured into genuinely boring production platforms. Push to main, CI runs, a preview URL gets posted in the PR, and the deploy goes out automatically when you merge. We don't need more than that for 80% of what we ship.

When we do need more (customer-hosted deployments, specialized infra, long-running workers), we drop to Docker containers on Fly, Railway, or AWS ECS. But we reach for that when the product requires it, not when we imagine it might someday.

Monitoring: Sentry and structured logging

Sentry on every project from day one. Not because we expect bugs to make it to production (we don't), but because the ones that do should reach us before they reach a user. Structured logs go to whatever the platform gives us natively; we avoid coupling to specific logging vendors unless a project really needs it.

The rule: every bug report should include a Sentry link, not a "can you reproduce this?" loop. The time we save not playing detective pays for the entire tool a hundred times over.
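The structured half of that is nothing exotic: one JSON object per log line, with enough fields to correlate against the error tracker. The field names here are our illustration, not a standard:

```typescript
// Emit one JSON object per log line so the platform's log search can filter
// on fields instead of regexing free text.
type LogFields = Record<string, string | number | boolean>;

function logLine(
  level: "info" | "warn" | "error",
  message: string,
  fields: LogFields = {},
): string {
  const entry = { level, message, ts: new Date().toISOString(), ...fields };
  const line = JSON.stringify(entry);
  console.log(line);
  return line;
}
```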

What we deliberately don't use

A stack is as much about what you exclude as what you include. We avoid:

  • GraphQL on small teams: the resolver tax isn't worth it when three people own every endpoint and REST + TypeScript already give you full type safety.
  • Microservices before product-market fit: distributing a monolith you don't yet understand is a great way to ship bugs across three networks instead of one.
  • Custom CSS methodologies: Tailwind won; re-litigating it wastes everyone's time.
  • ORMs for complex queries: we use Drizzle or Prisma for CRUD and drop to raw SQL the moment the query gets interesting. Every team ends up here eventually; we just start there.
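"Drop to raw SQL" doesn't mean string concatenation. A tagged template keeps the query parameterized on the way out of the ORM; this `sql` helper is a sketch of the idea, not a library we ship:

```typescript
// A tagged template that turns interpolations into $1, $2, ... placeholders,
// so raw SQL stays injection-safe when we step outside the ORM.
type SqlQuery = { text: string; values: unknown[] };

function sql(strings: TemplateStringsArray, ...values: unknown[]): SqlQuery {
  // Each interpolated value becomes a numbered placeholder in the text and
  // an entry in the values array, ready for a parameterized driver call.
  const text = strings.reduce(
    (acc, part, i) => (i === 0 ? part : `${acc}$${i}${part}`),
    "",
  );
  return { text, values };
}

const query = sql`SELECT * FROM orders WHERE status = ${"open"} AND total > ${100}`;
// query.text:   "SELECT * FROM orders WHERE status = $1 AND total > $2"
// query.values: ["open", 100]
```

The resulting `{ text, values }` pair matches the parameterized-query shape most Postgres drivers accept, so the escape hatch costs no safety.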

The meta-principle

A stack is not an aesthetic. It's a set of bets about what will and won't break. The best stack is the one your team already knows cold, with the smallest number of novel pieces for this specific project. Our default stack exists because we ship with it weekly, not because we've decided it's universally optimal. When a project needs something else, we reach for it. When it doesn't, we don't add it just to feel modern.

The second-best stack you know how to operate beats the best stack you've never debugged at 3am.

Got a project that needs a studio that knows its tools? We'd love to hear about it.

#engineering #stack #architecture #tooling