
Core Web Vitals and API Performance

Your LCP score is capped by your TTFB. Here's how API latency and chained backend requests silently tank your Core Web Vitals — and what to do about it.

29 March 2026 · 7 min read

Most Core Web Vitals guides tell you to compress images, defer scripts, and use a CDN. All good advice. But if your TTFB is 1.5 seconds, none of it matters — your LCP is already over budget before the browser paints a single pixel.

Backend latency and Core Web Vitals are directly connected. The link isn't subtle. It's mathematical: your LCP cannot be faster than your server's response time. If you're chasing a good CWV score without looking at your API, you're optimizing the wrong half of the stack.

What Core Web Vitals Actually Measure

  • LCP ≤2.5s for 'good' — the time until the largest element in the viewport renders (web.dev/articles/lcp)
  • TTFB ≤800ms for 'good' — the server's response time before any content loads (web.dev/articles/ttfb)
  • INP ≤200ms for 'good' — interaction responsiveness after the page loads (web.dev/articles/inp)
[Image: dark server rack with glowing fiber-optic cables — backend infrastructure]
Every millisecond your server spends thinking is a millisecond the browser spends waiting.

LCP and INP are the Core Web Vitals that matter most for the majority of apps. LCP measures when the page's primary content becomes visible — a hero image, a product name, a data table. INP replaced FID in March 2024 and measures how fast the page responds to user interactions after load. Both are directly affected by what your backend is doing.

TTFB Is the Hidden Cap on Your LCP

TTFB — Time to First Byte — is the time between the browser sending a request and receiving the first byte of a response. It's the clearest signal of server-side health. And it's the single biggest predictor of a bad LCP score.

  • 42% of mobile websites achieve a good TTFB (≤800ms) in 2024 — most don't (HTTP Archive Web Almanac 2024)
  • 2,270ms median p75 TTFB for origins with poor LCP — nearly guaranteeing a fail before anything loads (web.dev/blog/common-misconceptions-lcp)
  • +700ms median LCP increase when a page has two chained dependent API requests vs. zero (web.dev/blog/common-misconceptions-lcp)

Google's own data makes this stark: sites with poor LCP have a median TTFB of 2,270ms at the 75th percentile. That number alone exceeds the 2.5s 'good' LCP threshold — and the browser hasn't even started downloading the actual content yet. Improving image compression won't save you here.

API Calls That Block LCP

[Image: web analytics dashboard — page load time above 2.5s correlates with sharply rising bounce rates]
Load time and bounce rate are directly correlated. The dashboard doesn't lie.

Server-rendered pages (Next.js, Nuxt, Remix) are the most obvious case: the entire HTML is gated on your backend's response. But client-rendered apps are just as vulnerable. If your page shell loads fast and then fires a fetch to /api/product before rendering the hero, that API call is on the LCP critical path.

Chained requests make it worse. Fetch user session → fetch user preferences → fetch product data. Each waterfall adds directly to LCP. Google's field data shows a 700ms median increase for pages with two chained requests versus none. That's the difference between a 'good' score and a 'needs improvement' score for most apps.
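To see the waterfall cost concretely, here's a self-contained sketch with simulated endpoints (the handlers and ~100ms latencies are made up for illustration). In a real dependent chain the fix is usually a combined server endpoint; the timing math is the same:

```typescript
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Hypothetical endpoints, each simulating ~100ms of backend latency.
async function fetchSession() { await delay(100); return { userId: 1 }; }
async function fetchPrefs()   { await delay(100); return { theme: "dark" }; }
async function fetchProduct() { await delay(100); return { name: "Widget" }; }

// Chained: each request waits for the previous one — ~300ms total.
async function waterfall(): Promise<number> {
  const t0 = Date.now();
  await fetchSession();
  await fetchPrefs();
  await fetchProduct();
  return Date.now() - t0;
}

// One combined server round trip (or parallel fetches): ~100ms total.
async function combined(): Promise<number> {
  const t0 = Date.now();
  await Promise.all([fetchSession(), fetchPrefs(), fetchProduct()]);
  return Date.now() - t0;
}
```

Every hop you remove comes straight off the LCP critical path.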

INP and What Your API Does After Load

INP measures interaction responsiveness — how long it takes the browser to respond after a user clicks, taps, or types. JavaScript that blocks the main thread is the primary culprit, but slow API responses play a role too. If a button click triggers a slow /api/save or /api/search, the user sees a frozen UI until the response lands and the render completes.

Chrome usage data shows that 90% of a user's time on a page is spent after it loads — exactly the window INP covers. Optimizing your load path while ignoring post-load API performance fixes only the first 10% of the user's experience.

Instrument Your Routes in One Line

Before you can fix slow routes, you need to see them. P95 and P99 latency per endpoint — not averages — is what you need. Averages hide the outliers. Your /api/checkout might average 180ms but spike to 2,400ms at the 99th percentile. That spike is your LCP problem.

```typescript
import * as statvisor from "@statvisor/sdk";

// Express
app.use(statvisor.express({ apiKey: process.env.STATVISOR_API_KEY! }));

// Hono
app.use(statvisor.hono({ apiKey: process.env.STATVISOR_API_KEY! }));

// Next.js API routes — wrap individual handlers
export default statvisor.nextjs(handler, { apiKey: process.env.STATVISOR_API_KEY! });
```

Statvisor tracks P50/P95/P99 latency per route the moment you add the middleware — no config files, no agents, no dashboards to build. If a route is on your LCP critical path and spiking at the tail, you'll see it within seconds of the first real request.
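The averages-hide-outliers point is easy to verify yourself. A self-contained sketch using nearest-rank percentiles and made-up samples (95 requests at 180ms, 5 tail spikes at 2,400ms, standing in for a hypothetical /api/checkout):

```typescript
// Nearest-rank percentile: p-th percentile of a latency sample.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

// 95 fast responses, 5 tail spikes (hypothetical data).
const samples = [...Array(95).fill(180), ...Array(5).fill(2400)];
const avg = samples.reduce((a, b) => a + b, 0) / samples.length;

console.log(avg);                     // 291 — looks healthy
console.log(percentile(samples, 95)); // 180
console.log(percentile(samples, 99)); // 2400 — the real LCP problem
```

The average says the route is fine; the p99 says one user in a hundred waits almost 2.5 seconds.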

The Fix Isn't Always the Frontend

[Image: developer's glasses reflecting code on a dark monitor]
The problem is usually visible if you know where to look.

Once you have per-route latency data, the optimizations become obvious. Here's where to start:

  • Find which routes sit on your LCP critical path — the ones called before or during first render — and prioritize those
  • Add database query logging if your p95 is high but your server CPU is low — slow queries are usually the culprit
  • Eliminate chained requests by combining data fetches on the server side, reducing round trips from 3 to 1
  • Cache expensive computations at the route level — even a 5-second TTL dramatically flattens p99 spikes
  • For INP, debounce search and autosave calls and return optimistic UI immediately while the request is in flight
  • Move non-critical API calls out of the LCP critical path using lazy loading or background fetch after paint
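The debounce point above, as a minimal framework-free sketch — rapid keystrokes collapse into a single /api/search call once the user pauses (the endpoint and the 250ms wait are illustrative assumptions):

```typescript
// Collapse a burst of calls into one, fired `waitMs` after the last call.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Hypothetical usage: fire the search request 250ms after typing stops,
// while the UI renders each keystroke optimistically right away.
const search = debounce((q: string) => {
  fetch(`/api/search?q=${encodeURIComponent(q)}`); // hypothetical endpoint
}, 250);
```

The keystroke still paints immediately — only the network call is deferred — so INP stays low while the API sees one request instead of ten.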

Only 43% of mobile websites pass all Core Web Vitals as of 2024. Most of the remaining 57% aren't failing because of image optimization or font loading — they're failing because their servers are slow and no one has looked. The data is there. You just need to instrument it.

Ready to monitor your API in production?

Statvisor gives you latency percentiles, error rates, and request volume for every route — in minutes, not days.

Get started free →