My 2026 Stack & Tooling of Choice
The tooling is constantly evolving, and so is the ecosystem. You might feel like chasing every new tool, but I don’t believe you have to. Find the tool that fits your needs and gives you an amazing DX, and stop there.
This is less “the definitive 2026 stack” and more “what I reach for today and why”. Some choices are deeply researched and experimented with through projects on a daily basis. Others are AI suggestions I stuck with because they just worked. A few more are pragmatic picks driven by reputation: I needed something in a category, I scanned the well-maintained contenders, and I moved on. All three paths are valid.
Editor & environment
- WebStorm as my daily editor. I’ve been in the JetBrains ecosystem for years; the refactoring tools, type-aware navigation, and debugger are hard to give up. I pair it with Claude Code in a terminal alongside it, so the IDE handles writing and Claude handles everything else (multi-file refactors, boilerplate, investigations).
- Linux on my desktop, macOS on the MacBook Pro when I’m mobile. I’ve spent about a decade on Linux as my daily driver, and the MBP is my laptop whenever the desktop isn’t an option. Same shell, same tools, same muscle memory across both.
Runtime: Bun & Node.js
On side projects, Bun has become my default runtime. Professionally I’m still on Node.js; that’s the pragmatic choice when you’re on a team with production constraints you inherit rather than pick. But on my own stack, I reach for Bun first, and I haven’t hit the limitations a lot of people warn about.
Bun runs your code fast, bun:test runs your tests fast, and the ecosystem is solid enough that you can ship binaries straight from it. That’s what I did when I built the Varsafe CLI: a Bun + NestJS + Commander app that does an amazing job. Yes, SEA exists (Single Executable Applications in Node), but when I asked myself what I’d actually be using, every piece of my codebase (API, landing page, everything else) was already running on Bun. So why bother with SEA just for the CLI? I stuck with what Bun already gave me.
One runtime for everything (API, CLI, scripts, tests), native TypeScript support, built-in test runner, fast install, workspaces, and binary output. Fewer moving parts, less config, same tool everywhere.
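As a sketch of how little ceremony the single-binary path needs (file name, output name, and flags below are hypothetical, not Varsafe’s actual CLI):

```typescript
// cli.ts — a hypothetical minimal entry point. Nothing Bun-specific is needed
// in the code itself, only in the build step:
//
//   bun build ./cli.ts --compile --outfile mycli
//
// which emits one self-contained binary with the runtime baked in.
export function parseArgs(argv: string[]): { command: string; flags: Record<string, string> } {
  const [command = "help", ...rest] = argv;
  const flags: Record<string, string> = {};
  for (const arg of rest) {
    const match = arg.match(/^--([^=]+)=(.*)$/);
    if (match) flags[match[1]] = match[2];
  }
  return { command, flags };
}

const { command, flags } = parseArgs(process.argv.slice(2));
console.log(`running "${command}" with`, flags);
```

The same file runs under `bun run cli.ts` during development and ships as a binary for release; there’s no separate packaging toolchain to maintain.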
Testing
- Vitest in the Node.js world. Fast, Vite-native, watch mode that just works, Jest-compatible API so migration is painless.
- `bun:test` in the Bun world. Zero setup, ships with the runtime, understands TypeScript and Bun APIs out of the box. No Jest, no config file.
- Playwright for e2e and dashboard visual audits. Real browsers, real network, traces and screenshots when things break. The only e2e tool I trust not to flake for dumb reasons.
- Stryker for mutation testing. Coverage tells you which lines your tests touched; Stryker tells you whether those tests actually caught a bug. The difference is night and day, and it’s the fastest way to find “green but useless” tests.
Git hooks: Lefthook
Pre-commit runs lint + format + typecheck. Pre-push runs tests + build. Lefthook is written in Go: parallel execution, simple YAML config, no Node runtime dependency. Coming from Husky, the jump in speed and ergonomics is what sold me.
Linting & formatting: the oxc suite
Rust-based, 50–100x faster than ESLint + Prettier. On a large monorepo, lint goes from “grab a coffee” to “already done”. The rule set covers the issues I actually care about without the endless plugin configuration.
Dead code & unused deps: knip
Wired into Lefthook so I can’t merge code that leaves unused exports or dependencies behind. Monorepos rot fast. Unused files, orphaned exports, stale devDependencies pile up. Knip catches it before it compounds.
Once the foundation (runtime, tests, hooks, lint) is solid, everything downstream gets easier. Here’s what sits on top.
Language & typechecking
- TypeScript 6 with `@typescript/native-preview` (tsgo), the Go-rewritten TypeScript compiler. Typecheck is an order of magnitude faster than `tsc`, which means it can actually run in pre-commit.
- Zod 4 at every boundary (HTTP, env, external APIs). One schema → runtime validation + inferred TypeScript type. No more drift between “what the type says” and “what the payload actually is”.
- AJV + JSON Schema when raw throughput matters (hot validation paths, large payloads). AJV pre-compiles the schema into a native-speed validator and beats Zod on pure perf. The trick I use: keep authoring in Zod for DX, then convert Zod → JSON Schema and feed it to AJV at startup. You get Zod’s ergonomics at dev time and AJV’s speed at runtime, with no duplicate source of truth.
Monorepo
- Turborepo for task orchestration. Honestly, it came up as an AI suggestion early on and I stuck with it. I haven’t done a deep comparison with Nx or the alternatives. Turbo just does the job, caches what I’d expect, and has never been a bottleneck. The day it becomes one, I’ll reevaluate.
- Bun workspaces for the package layout itself. Native to the runtime I’m already using: one lockfile, one install, `workspace:*` links resolve out of the box. Worth noting: workspaces and Turbo solve different problems. Workspaces handle install + linking (and `bun run --filter` can walk the dep graph), Turbo handles task caching and orchestration on top. Not redundant, complementary.
Dependency hygiene
- Renovate on a self-hosted GitLab runner. Stays current without me doing it manually: group minors, isolate majors, schedule batches.
- SonarQube in the CI check stage, self-hosted. I’d heard about it, figured it was a solid addition on top of linting, and spun up my own instance. It runs in CI on every pipeline and produces a detailed report, which I then feed back to Claude to iterate on and clean the codebase up progressively. Static analysis + AI cleanup loop is a surprisingly strong combo.
Backend stack
- NestJS + Fastify adapter (not Express). Fastify is faster and has a better plugin model; NestJS gives me DI, module boundaries, and testability.
- Kysely as SQL query builder. I kept asking myself, Prisma? Drizzle? Professionally I’ve always reached for Knex, and Kysely is basically Knex with real type safety. In an AI-assisted world, the ORM debate feels over to me: don’t overcook your code with extra sugar when a typesafe query builder does the job and outputs clean raw SQL with zero runtime overhead. I know exactly what hits the DB.
- dbmate for migrations. Hadn’t used it before, found it through AI lookups while scoping the stack, and it matched my expectations from day one: plain SQL up/down files, language-agnostic, no DSL to learn. Very efficient, zero friction.
- BullMQ on Valkey for jobs and queues. Battle-tested, good observability, scales past single-process.
- Kafka is what I’d reach for if I needed a proper event streaming backbone (via KafkaJS or the newer Platformatic Kafka client). Not in my stack today, but worth knowing where I’d start.
- `cacheable` for L1 (memory) + L2 (Valkey) tiered cache. I’ve been a huge `keyv` user for years, and `cacheable` is built by the same team, so adopting it was an easy call. In practice, hot data stays in-process, warm data stays in Valkey, and event-driven invalidation keeps both layers coherent.
- Pino + `pino-pretty` for structured logging. Fastest logger in the Node ecosystem, structured JSON in prod, human-readable in dev.
- BetterAuth with passkey + SSO plugins. Handles the parts of auth I don’t want to reinvent (sessions, device management, SSO, WebAuthn), stays out of the way for the rest.
- MJML for transactional emails. Writing raw HTML email is a punishment I refuse to accept.
- cockatiel for resiliency patterns. Retries, timeouts, circuit breakers, bulkheads, fallbacks: the toolkit you reach for once you stop pretending the network is reliable. Cockatiel lets me declare the policies once and compose them, instead of hand-rolling ad-hoc try/catch everywhere.
- Go when the job calls for it: tiny static binary, native concurrency, predictable performance, fast startup, minimal memory footprint. Glowo’s uptime checker is written in Go because that’s exactly what a probe fleet needs. Bun is my default; Go is what I reach for when the workload itself asks for those properties.
Frontend stack
- React 19 as the view layer. `ref` as a regular prop, `use()`, `useEffectEvent`, `<Activity>`. The API finally feels like it was designed together.
- Vite for dev + bundling. HMR that actually works, first-class TypeScript, ecosystem everyone’s already on.
- TanStack Query (server state) + TanStack Router + TanStack Start (for Glowo’s dashboard). The one library family I never have to fight: typed routing, cache invalidation, SSR, all designed as one system.
- Tailwind CSS v4 + Radix UI primitives + shadcn/ui-style composition (`cn`, CVA, `tailwind-merge`). Utility-first scales, Radix handles the accessibility I’d get wrong, shadcn patterns let me own the components instead of fighting a design library.
- Zustand for cross-component client state. Boring by design: pure state + setters, no side effects inside the store.
- React Hook Form + Zod resolvers. Uncontrolled by default (performant), validation and types come from the same schema.
- Lucide icons, Framer Motion animations, Sonner toasts. Each one is the quietly-correct default in its category.
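The `cn` glue is tiny. Here’s a dependency-free sketch of the idea; the real shadcn version delegates to `clsx` + `tailwind-merge`, which additionally resolves conflicting Tailwind classes, while this sketch only handles conditional joining:

```typescript
type ClassValue = string | false | null | undefined | Record<string, boolean>;

// Minimal cn: join truthy class names and expand { "class": condition } maps.
// Unlike tailwind-merge, it does NOT dedupe conflicts like "p-2 p-4".
function cn(...inputs: ClassValue[]): string {
  const out: string[] = [];
  for (const input of inputs) {
    if (!input) continue;
    if (typeof input === "string") {
      out.push(input);
    } else {
      for (const [cls, on] of Object.entries(input)) if (on) out.push(cls);
    }
  }
  return out.join(" ");
}

console.log(cn("btn", { "btn-primary": true, hidden: false }, null));
// → "btn btn-primary"
```

Owning a helper this small is the whole shadcn philosophy in miniature: copy the pattern in, then adapt it instead of configuring around it.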
CLI craftsmanship
- Ink (React for terminals) + `ink-gradient` / `ink-spinner`. If I already know React, I can build a rich TUI without learning a new paradigm.
- nest-commander to wire the CLI into NestJS. The CLI reuses the same DI container as the API: same services, same tests, zero duplication.
- @inquirer/prompts, ora, picocolors, figlet. The small stuff that makes a CLI feel crafted instead of stitched together.
Desktop app stack
When I need a real cross-platform desktop app, this is the combo I reach for, validated on Yhtua, my open-source 2FA manager.
- Tauri 2 as the shell. Native menus, auto-updater, code signing, and a single toolchain that cross-compiles to AppImage / deb / rpm / dmg / msi. Electron is heavier, slower to start, and ships a full Chromium runtime per app; Tauri is the sane alternative in 2026.
- Rust is the backend, by default (Tauri is Rust). That’s where everything that talks to the OS lives: filesystem, clipboard, keychain, native dialogs, auto-update, and anything sensitive like crypto. On Yhtua that meant the `ring` crate for AES-256-GCM, PBKDF2-SHA256 key derivation, HMAC-SHA256 backup integrity, and the `keyring` crate for platform-native secret storage. The kind of code I do not want to write in JS.
- Bun as the runtime for the JS side, same as everywhere else in my stack. Install, dev, build, all fast.
- Nuxt 4 + Vue 3 for the webview UI. Yhtua predates my React-first phase and Vue + Nuxt was a great DX then. Honestly, any modern frontend framework works here, Tauri doesn’t care. Pick the one you’re fastest in.
Same philosophy as the server side: use the best tool for each concern, let Tauri bridge them.
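For illustration only, here’s the shape of that scheme (PBKDF2-SHA256 key derivation feeding AES-256-GCM), sketched with Node’s built-in WebCrypto. The parameters and layout are made up and are not Yhtua’s actual format; in the app this lives on the Rust side, with `ring` playing the role WebCrypto plays here:

```typescript
import { webcrypto as crypto } from "node:crypto";

const enc = new TextEncoder();

// Derive a 256-bit AES-GCM key from a password with PBKDF2-SHA256.
async function deriveKey(password: string, salt: Uint8Array): Promise<CryptoKey> {
  const material = await crypto.subtle.importKey(
    "raw", enc.encode(password), "PBKDF2", false, ["deriveKey"],
  );
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", hash: "SHA-256", salt, iterations: 600_000 }, // iteration count illustrative
    material,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"],
  );
}

async function seal(plaintext: string, password: string) {
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit GCM nonce
  const key = await deriveKey(password, salt);
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, key, enc.encode(plaintext),
  );
  return { salt, iv, ciphertext };
}

async function open(box: Awaited<ReturnType<typeof seal>>, password: string): Promise<string> {
  const key = await deriveKey(password, box.salt);
  const plain = await crypto.subtle.decrypt({ name: "AES-GCM", iv: box.iv }, key, box.ciphertext);
  return new TextDecoder().decode(plain);
}
```

The point of pushing this to Rust isn’t that JS can’t express it; it’s that the sensitive path ends up in one small, auditable native module next to the keychain access.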
Landing & docs
- Astro 6 for landing pages. Islands architecture, zero JS by default, ships the fastest marketing sites I’ve built.
- VitePress + Mermaid for docs. Markdown-first, Vue under the hood, fast search, diagrams inline without external tooling.
Data layer
- PostgreSQL as default, TimescaleDB extension for time-series (Glowo’s live checks). One database, one backup story. I’d rather push Postgres hard than add a second data store.
- Valkey as cache + queue. Redis-compatible, open-source governance, drop-in replacement.
Infra & deploy
- Docker + docker-compose for local + prod compose stacks. One `docker compose up` = full stack, identical between my laptop and prod.
- k3s + Helm for Kubernetes. Full Kubernetes API without the operational weight of upstream k8s. Helm charts keep deploys declarative.
- Traefik as edge proxy, HAProxy when I need L4. Traefik auto-configures from labels, HAProxy handles the raw TCP cases Traefik isn’t built for.
- pgBackRest for Postgres backups, PgBouncer for pooling. pgBackRest handles PITR, incremental backups, and remote retention properly; PgBouncer keeps connection counts sane under burst traffic.
- WireGuard for private networking. Fast, simple config, kernel-level on Linux.
Observability
- Grafana Alloy as the collector. OTel-friendly, replaces the Promtail + Grafana Agent zoo with one binary.
- Prometheus + Grafana for metrics and dashboards. The boring default that still wins.
- Loki for logs. Index by labels, not by content. Cheap to run, fast to query the cases I actually care about.
- ntfy for push alerts. Self-hosted, dumb-simple HTTP API, works on my phone without a SaaS subscription.
CI/CD
- GitLab CI self-hosted on my own runners. No per-minute billing, full control over the runner environment, and I already self-host GitLab.
- Pipeline shape: check → build → deploy, with SonarQube scanning in the check stage.
Secrets & monitoring of my own stack
- Varsafe for secrets in CI and prod. I built it, I dogfood it. Ephemeral injection via `varsafe run`, nothing persisted to disk.
- Glowo for uptime monitoring. Same dogfooding loop. Every service I deploy is monitored by the platform I also maintain. Real-world feedback is immediate.
The AI layer sitting on top
- Claude Code as daily driver. Slash commands, hooks, sub-agents, MCP servers, plan mode. The interface for delegating real engineering work has finally matured.
- Self-hosted GitLab runner with Claude for PR-level review. Catches issues before I merge, enforces conventions I’d otherwise have to nag about.
Closing Thoughts
None of this is impressive in isolation. Bun is fast, Postgres is Postgres, Tailwind is Tailwind. The unlock is the combination: a stack where every piece is boring, predictable, and well-documented. That’s precisely what makes AI useful on top of it. Claude can write a Kysely query, a NestJS provider, a Zod schema, a Helm chart, because these tools have clear conventions and years of public answers behind them. If I were on an exotic in-house framework, the same AI would be far less helpful.
The other pattern I keep noticing: the stack barely changed between side projects. Varsafe, Glowo, Yhtua, the Qobuz automation, a handful of landing pages, all lean on the same foundations. That’s not a failure of curiosity; that’s compounding. Each new project starts with the same muscle memory, the same lint config, the same deploy shape, so almost all the time goes to the part that’s actually new.
I don’t believe there’s a “best” stack. I believe there’s a stack you know well enough that AI feels like a teammate instead of a guesser, and everything else is negotiable. Find your version, boring on purpose, and let the AI do the heavy lifting on top.