Every JavaScript developer uses console.log. It's the fastest path from 'something is wrong' to 'I can see what's wrong'. In development, it's fine. In production, it's quietly costing you.
Not in a dramatic way. Your server won't crash because of it. But your throughput drops, your errors vanish without context, and when something breaks at 2am, you'll have nothing useful to look at.
console.log Blocks Your Event Loop
console.log writes to stdout, and in most production setups (a file, or a terminal on POSIX) that write is synchronous: Node.js blocks the event loop until the I/O completes. One log per request adds measurable latency; several per request compound it.
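A rough way to see the cost on your own machine (numbers vary by hardware and by where stdout points: TTY, file, or pipe):

```javascript
// Time a burst of console.log calls. Each call does a (usually
// synchronous) write before returning, so this is event-loop time
// your server isn't spending on requests.
const start = process.hrtime.bigint();
for (let i = 0; i < 1000; i++) {
  console.log(JSON.stringify({ i, msg: "request handled" }));
}
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.error(`1000 console.log calls: ${elapsedMs.toFixed(2)} ms blocked`);
```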
The fix isn't to log less; it's to log differently. Async loggers like Pino buffer writes and flush in the background. In one benchmark, Pino in async mode handled 1,310 req/s, roughly 6.5× the throughput of console.log under identical conditions.
You Can't Search, Filter, or Structure It
console.log dumps plain strings. No log levels. No timestamps. No request IDs. No way to filter to just the errors, or just the events from one specific user. When production breaks, you're grepping through a wall of text hoping to spot the one line that matters.
Structured logging outputs JSON. Every entry has a consistent shape: level, timestamp, message, and whatever context you attach. That means you can filter, aggregate, and alert on your logs — instead of just reading them.
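The shape is simple enough to sketch without any library (field names here are illustrative; real loggers also attach PID, hostname, and more):

```javascript
// One structured log entry: a single JSON object per line.
const entry = {
  level: "error",
  time: new Date().toISOString(),
  msg: "Payment failed",
  userId: "u_123", // whatever context you attach
  requestId: "req_abc",
};

// Newline-delimited JSON: one entry per line, trivially parseable.
console.log(JSON.stringify(entry));

// Filtering becomes a structural query instead of a grep.
const logs = [entry];
const errors = logs.filter((e) => e.level === "error");
```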
Switch to a Structured Logger
// npm install pino
import pino from "pino"
const logger = pino({ level: "info" })
// Instead of: console.log("User signed up", userId)
logger.info({ userId, plan: "pro" }, "User signed up")
// Instead of: console.error("Payment failed", err)
logger.error({ userId, err }, "Payment failed")

Pino is the fastest structured logger in the Node.js ecosystem. It outputs newline-delimited JSON, runs asynchronously, and integrates with every log aggregation platform: Logtail, Datadog, Loggly, or your own ELK stack. For most codebases it's a drop-in replacement.
Logging Alone Won't Show You What's Broken
Structured logging is a big step forward. But logs are reactive — you only see what you explicitly wrote. If your /api/orders route is sitting at P95 latency of 3 seconds, no logger will tell you that unless you manually instrument every handler.
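To make that gap concrete, here's roughly what manual instrumentation looks like: a hand-rolled latency tracker you'd have to wire up and maintain yourself (the percentile math and middleware shape are illustrative):

```javascript
// Per-route latency samples: route -> array of durations in ms.
const samples = new Map();

function record(route, ms) {
  if (!samples.has(route)) samples.set(route, []);
  samples.get(route).push(ms);
}

// Nearest-rank percentile over the recorded samples.
function percentile(route, p) {
  const sorted = [...(samples.get(route) ?? [])].sort((a, b) => a - b);
  if (sorted.length === 0) return NaN;
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

// Express-style timing middleware you'd have to add to every app,
// plus storage, aggregation, and alerting on top of it.
function timing(req, res, next) {
  const start = process.hrtime.bigint();
  res.on("finish", () => {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    record(req.route?.path ?? req.url, ms);
  });
  next();
}
```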
That gap is where application monitoring picks up. Auto-instrumented latency percentiles, error rates per route, and Web Vitals — without a log statement in sight.
Statvisor tracks P50/P95/P99 latency and error rates per route automatically — no per-handler instrumentation needed. Add one middleware line and every route is covered. You can also pipe structured app-level events through statvisor.log() to keep everything in one place.
// npm install @statvisor/sdk
import * as statvisor from "@statvisor/sdk"
// One line covers every route automatically
app.use(statvisor.express({ apiKey: process.env.STATVISOR_KEY }))
// Structured app-level logs that show up in your dashboard
statvisor.log("info", "User signed up", { plan: "pro", userId })
statvisor.log("warn", "Payment retry triggered", { userId, attempt: 3 })
statvisor.log("error", "Webhook verification failed", { raw: body })

You get latency data you didn't have to ask for, error rates you don't have to compute, and structured log events, all in one dashboard. No YAML, no agents, no Datadog bill.
Ready to monitor your API in production?
Statvisor gives you latency percentiles, error rates, and request volume for every route — in minutes, not days.
Get started free →