
How to Monitor a NestJS API in Production: The Complete Guide

A step-by-step guide to adding production monitoring to a NestJS application. Covers request logging, latency percentiles, error tracking, and the tools worth using.

14 February 2025 · 7 min read

NestJS is one of the most widely used Node.js frameworks for building production APIs. Its structure encourages separation of concerns, decorators make routing clean, and the ecosystem around it — guards, interceptors, pipes — gives you a lot of power. What NestJS does not give you out of the box is any visibility into how your API performs in production.

Once you deploy, requests go in and responses come out. Without instrumentation, you have no idea whether your P95 latency is 50ms or 2 seconds, which routes are generating 500 errors, or what traffic volumes look like across different endpoints. This guide covers how to add that visibility.

Why NestJS Monitoring Needs Explicit Setup

Unlike some frameworks that bundle basic request logging by default, NestJS is unopinionated about observability. You choose what to measure and how. This is actually a feature — it means you can instrument at exactly the layer that makes sense for your architecture, whether that is at the middleware level, via an interceptor, or through a global filter.

What You Should Measure

  • Request duration per route — measured as a percentile distribution, not just an average, so slow outliers are visible
  • HTTP status codes — broken down by route so you can see which endpoints are generating errors
  • Request volume — per-route traffic counts over time so you can identify unusually busy or quiet endpoints
  • Unhandled exceptions — any error that reaches your global exception filter should be captured with context
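To make the list above concrete, here is a minimal, framework-agnostic sketch of an in-memory store for per-route durations and status codes. All names here are hypothetical, and a production system would use a bounded metrics backend rather than unbounded arrays — this just shows the shape of the data:

```typescript
// Hypothetical in-memory metric store: per-route durations and status counts.
type RouteStats = { durations: number[]; statusCounts: Map<number, number> };

export class MetricsStore {
  private routes = new Map<string, RouteStats>();

  // Record one completed request for a route.
  record(route: string, statusCode: number, durationMs: number): void {
    let stats = this.routes.get(route);
    if (!stats) {
      stats = { durations: [], statusCounts: new Map() };
      this.routes.set(route, stats);
    }
    stats.durations.push(durationMs);
    stats.statusCounts.set(statusCode, (stats.statusCounts.get(statusCode) ?? 0) + 1);
  }

  // Nearest-rank percentile (p in 0..100) over recorded durations for a route.
  percentile(route: string, p: number): number | undefined {
    const stats = this.routes.get(route);
    if (!stats || stats.durations.length === 0) return undefined;
    const sorted = [...stats.durations].sort((a, b) => a - b);
    const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
    return sorted[Math.min(idx, sorted.length - 1)];
  }
}
```

Storing raw durations makes percentiles exact but memory grows with traffic; real metric stores trade exactness for bounded memory using histograms or sketches.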

Option 1: NestJS Interceptor for Request Metrics

NestJS interceptors wrap the request/response cycle and are the natural place to measure duration. An interceptor records the start time before calling next.handle(), then calculates elapsed time in a tap() callback once the handler completes. One caveat: interceptors run after guards in the request lifecycle, so this approach captures pipe and handler execution time but not guard time. If you need total request time, measure at the middleware layer instead.

Option 2: HTTP Middleware

For raw request/response timing at the Express or Fastify adapter layer, HTTP middleware applied via configure() in your AppModule works well. This approach captures total time including serialisation but sits outside NestJS-specific processing. It is simpler to set up and works with any underlying HTTP library.
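A sketch of a functional timing middleware for the Express adapter — the console.log destination is a placeholder; in practice you would feed a metric store:

```typescript
// Hypothetical timing middleware for the Express adapter. Loosely typed so the
// sketch stays self-contained; in a real app use Request/Response from express.
export function requestTimer(
  req: { method: string; originalUrl?: string; url: string },
  res: { statusCode: number; on: (event: string, cb: () => void) => void },
  next: () => void,
): void {
  const start = process.hrtime.bigint();

  // 'finish' fires once the response has been handed to the OS, so this
  // includes serialisation time that an interceptor would miss.
  res.on('finish', () => {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(
      `${req.method} ${req.originalUrl ?? req.url} ${res.statusCode} ${ms.toFixed(1)}ms`,
    );
  });

  next();
}
```

Wire it up in AppModule by implementing NestModule and calling consumer.apply(requestTimer).forRoutes('*') inside configure().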

Option 3: Use a Dedicated SDK

For most teams, the fastest path to production monitoring is a purpose-built SDK. Statvisor's Node.js SDK wraps your NestJS application with a single call and immediately begins tracking per-route latency percentiles, error rates, request volume, and status code distributions. There is no interceptor to write, no metric store to configure, and no dashboard to build — it works out of the box.

Statvisor supports NestJS running on both the Express and Fastify adapters. One line of setup, and your production API is fully instrumented with per-route P50, P95, and P99 latency alongside error tracking and traffic analytics.

Shipping to Production

Whichever approach you use, monitoring should be part of your deployment — not an afterthought. Instrument your API before your first production release, establish baseline latency and error rate values during initial low traffic, and set alerts that fire before users are affected. The cost of setting up monitoring once is trivially small compared to the cost of a degradation that went undetected for hours.

Ready to monitor your API in production?

Statvisor gives you latency percentiles, error rates, and request volume for every route — in minutes, not days.

Get started free →