Deployment

How Eziseller ships to production: Next.js frontend on Vercel, Express backend on Azure App Service, and a managed PostgreSQL database. Audience: a new developer running their first deploy or debugging a failed one.

1. Overview

Eziseller is deployed as two independently-shipped artifacts against one shared database. The frontend (Next.js 14) builds and runs on Vercel, which rewrites /api/* to the backend URL. The backend (Express + Prisma) builds on GitHub Actions and is published to an Azure App Service Windows plan running under IIS via iisnode. The database is a managed PostgreSQL instance reachable from both. There is no shared infrastructure beyond the DB — the two halves can be deployed, rolled back, or scaled independently.

2. Architecture

The frontend is public-facing; the backend is only hit via Vercel's rewrite and by third-party webhooks (Razorpay, Meta). Azure runs a single slot (Production) — there is no staging slot configured.

3. Build processes

Frontend (Vercel):

  • npm ci at repo root, next build from package.json:L6.
  • NEXT_PUBLIC_* vars are inlined at build time; changing them requires a redeploy.
  • /api/:path* rewrite target is read from NEXT_PUBLIC_API_URL at runtime (next.config.js:L10-L17).

Backend (GitHub Actions → Azure):

  • backend-deploy.yml type-checks and builds the backend into dist/, then publishes the package (with web.config at its root) to the Eziseller App Service using the publish profile stored in GitHub secrets.

4. Deploy flow

4.1 Backend (master → Azure)

  • ci always runs (PR + push). deploy only runs on push to master or workflow_dispatch.
  • PRs get type-check only — no preview env is provisioned for the backend.
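In workflow terms, the trigger split above corresponds to roughly the following. This is a sketch, not the contents of backend-deploy.yml; job names, script names, and action versions are illustrative:

```yaml
on:
  push:
    branches: [master]
  pull_request:
  workflow_dispatch:

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run type-check   # PRs stop here
  deploy:
    needs: ci
    if: github.event_name != 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - uses: azure/webapps-deploy@v3
        with:
          app-name: Eziseller
          publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_A7031B9CEA0D413CA0F1D46F237500CB }}
```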

4.2 Frontend (any push → Vercel)

Vercel's GitHub integration triggers on every push. PRs get a preview URL; pushes to master promote to the production domain. No custom workflow file — configuration lives entirely in the Vercel dashboard.

4.3 Database migrations

There is no automatic prisma migrate deploy step in either workflow. Migrations are applied manually before merging the PR that depends on them. See Gotchas.
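Concretely, the manual step looks like this. PROD_DATABASE_URL is a placeholder for wherever you keep the production connection string:

```shell
# From backend/, with the prod connection string scoped to this one command.
# `migrate deploy` applies pending migrations; it never generates new ones.
DATABASE_URL="$PROD_DATABASE_URL" npx prisma migrate deploy
```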

5. Environments & env var split

| Surface | Where it lives | Examples |
| --- | --- | --- |
| Frontend build-time | Vercel project env | NEXT_PUBLIC_API_URL, NEXT_PUBLIC_MAINTENANCE_MODE |
| Backend runtime | Azure App Service → Configuration → Application settings | DATABASE_URL, JWT_SECRET, FRONTEND_URL, CLOUDINARY_*, WHATSAPP_*, INSTAGRAM_*, RAZORPAY_*, MAINTENANCE_MODE |
| CI secrets | GitHub repo → Settings → Secrets | AZUREAPPSERVICE_PUBLISHPROFILE_A7031B9CEA0D413CA0F1D46F237500CB |

Full env surface: .env.example. Anything prefixed NEXT_PUBLIC_ is embedded in the JS bundle and public — never put secrets there.

6. Key files

  • next.config.js: /api/* rewrite to the backend (reads NEXT_PUBLIC_API_URL).
  • backend/web.config: iisnode routing to dist/server.js; must be at the deployed package root.
  • backend-deploy.yml: backend CI + deploy from master.
  • testcode_Eziseller.yml: legacy workflow from testCode that publishes to the same Azure app; treat as disabled.
  • .env.example: full env var surface.

7. Env vars & config

| Var | Required | Purpose | What breaks |
| --- | --- | --- | --- |
| DATABASE_URL | yes (BE) | Postgres connection string | Backend crashes on boot |
| JWT_SECRET | yes (BE) | Signs access + refresh tokens | All auth fails; existing tokens invalidated on change |
| FRONTEND_URL | yes (BE) | CORS allowlist | Browser requests blocked by CORS |
| NEXT_PUBLIC_API_URL | yes (FE build) | Target of /api/* rewrite | Frontend talks to localhost:3001 in prod |
| MAINTENANCE_MODE | no (BE) | "true" returns 503 on all /api/* | Health endpoint reports MAINTENANCE |
| NEXT_PUBLIC_MAINTENANCE_MODE | no (FE) | Frontend maintenance page | Users still see the app while API is down |
| NODE_ENV | yes (BE) | production in Azure | Affects logging, cookie flags, error verbosity |

8. Gotchas & troubleshooting

  • Azure 404 on every route after deploy. Cause: web.config missing from the deployed artifact (e.g. a changed build script dropped it). Fix: confirm backend/web.config is at the root of the package uploaded to Azure; iisnode needs it to route requests to dist/server.js.
  • Cold starts: Azure App Service can idle the worker; the first request after a quiet period takes 5-15s while iisnode boots dist/server.js and Prisma warms its pool. Third-party webhooks that don't retry (rare) can time out and be lost during this window.
  • Migrations do not auto-run. The deploy workflow has no prisma migrate deploy step. If you ship code that references a new column before running the migration, the backend boots but every query touching that table throws. Run npm run db:migrate (or prisma migrate deploy) against the prod DB before merging the PR.
  • In-memory refresh-token store (system-overview): refresh tokens live in a Set in backend memory. Every Azure restart (deploy, crash, config change) forces all logged-in users to re-login. Do not scale the backend horizontally without moving this to Redis/Postgres — different instances will reject each other's refresh tokens.
  • node-cron schedulers double-fire under horizontal scale. The backend currently assumes exactly one instance. Scaling out without a cron lock will send duplicate subscription charges, duplicate emails, etc. See cron-and-jobs.md.
  • Two workflows, one app. Both backend-deploy.yml (from master) and testcode_Eziseller.yml (from testCode) publish to the same Eziseller Azure app, Production slot. A push to testCode will overwrite prod. Treat testCode as effectively disabled.
  • Maintenance mode is split. Setting MAINTENANCE_MODE=true on Azure returns 503 from the API, but the frontend keeps serving pages and users see a broken app. Flip NEXT_PUBLIC_MAINTENANCE_MODE=true on Vercel and redeploy in tandem.
  • NEXT_PUBLIC_* changes require a rebuild. They are inlined at build time; flipping them in the Vercel dashboard without redeploying has no effect.
  • CORS == FRONTEND_URL. If Vercel assigns a new production domain (custom domain change) and FRONTEND_URL on Azure is not updated, all browser calls die with CORS errors while server-side calls keep working — easy to miss in smoke tests.

9. Extension points

  • Preview envs: Vercel already provisions per-PR frontend previews. Point them at a dedicated staging backend (separate Azure slot + separate DB) via a preview-scoped NEXT_PUBLIC_API_URL.
  • Blue-green on Azure: add a Staging deployment slot to the Eziseller app, deploy there, swap on success. The publish profile in GH secrets would need replacing with a slot-specific one.
  • Auto-migrations: add a prisma migrate deploy step to backend-deploy.yml before azure/webapps-deploy, using a CI-only DATABASE_URL secret. Only safe once migrations are reliably additive.
  • Horizontal scale: requires moving refresh tokens to a shared store and adding a cron leader-lock (advisory lock in Postgres is cheapest) before raising the Azure instance count above 1.

10. Related docs

  • system-overview: auth and refresh-token design referenced in Gotchas.
  • cron-and-jobs.md: scheduler details and the leader-lock requirement.