
INP Explained: The Core Web Vital That Replaced FID

FID had a 93% pass rate. INP replaced it in March 2024 and the bar got harder. Here's what INP measures, why it's better, and how to actually fix a bad score.

8 April 2026 · 8 min read

First Input Delay was easy to pass — 93% of sites had a 'good' FID. Then Google replaced it with Interaction to Next Paint in March 2024. The mobile pass rate dropped to 74% overnight. If you haven't looked at INP since the switch, your Core Web Vitals score may not be what you think it is.

Browser DevTools open on a laptop showing web inspector performance panel
The Performance panel in DevTools will show your INP — but you need RUM to see it per interaction in production.

Why Google Replaced FID in March 2024

FID only measured the very first interaction on a page. Chrome usage data shows that 90% of a user's time on a page is spent after the initial load. FID was measuring the other 10%.

FID also had a narrower definition of 'interaction': it only captured the input delay — the gap between a user tapping and the browser starting to process it. It completely ignored processing time. A page could report a 5ms FID while freezing for 800ms after every button click. Nobody called that a win.

What INP Actually Measures

INP observes every click, tap, and key press across the entire session and reports the worst single interaction, with minor outlier filtering: one interaction is ignored for every 50 recorded. Since most page views involve far fewer than 50 interactions, the slowest interaction usually is the INP — one catastrophic 900ms click among dozens of fast ones won't be hidden; it'll be surfaced.
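The selection logic can be sketched as a small helper (a simplified version of the approach the web-vitals library takes; `estimateINP` is an illustrative name, not a real API):

```typescript
// Sketch of INP candidate selection, assuming the "ignore one per 50"
// rule: sort interaction durations worst-first, then skip one entry
// for every 50 interactions the page has recorded.
function estimateINP(durations: number[], interactionCount: number): number {
  const sorted = [...durations].sort((a, b) => b - a);
  const index = Math.min(sorted.length - 1, Math.floor(interactionCount / 50));
  return sorted[index];
}

// 30 interactions: nothing is filtered yet, so the 900ms outlier is the INP.
const session = [900, ...Array(29).fill(40)];
console.log(estimateINP(session, 30)); // 900
```

Note what this implies for long sessions: once a page passes 50 interactions, the single worst one is dropped, so a one-off glitch on a long-lived page won't dominate the score.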

Unlike FID, INP measures the full pipeline from input to painted frame: input delay plus processing time plus presentation delay. That's the complete interaction cost as the user experiences it.

  • ≤200ms — good INP threshold, measured at the 75th percentile of page loads across mobile and desktop (web.dev/articles/inp)
  • 74% — of mobile sites had good INP in 2024, up from 55% in 2022 but still a harder bar than FID (HTTP Archive Web Almanac 2024)
  • 93% → 75% — FID good rate vs. INP good rate across Cloudflare's RUM dataset: the same sites, a different metric (blog.cloudflare.com/inp-get-ready-for-the-new-core-web-vital)

The Three Components of Every Interaction

Every INP measurement is the sum of three phases. Knowing which phase is long tells you where to fix.

  • Input delay — time from user action until event callbacks start running. Caused by long tasks already occupying the main thread when the user acts. If a 300ms parse task is running when someone clicks, they wait 300ms before your handler even starts.
  • Processing time — time for your event handlers to complete. Heavy onClick logic, React re-renders triggered by state updates, CSS-in-JS recalculation, third-party listeners — all of this is processing time.
  • Presentation delay — time for the browser to render the next frame after handlers finish. Large DOM trees and layout thrashing (reading computed styles right after DOM mutations) bloat this phase significantly.
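The browser exposes the timestamps needed to compute each phase through the Event Timing API. A minimal sketch (the field names match the real `PerformanceEventTiming` interface; the `phases` helper itself is illustrative):

```typescript
// Split an event-timing entry into INP's three phases.
// These fields come straight from the browser's PerformanceEventTiming:
//   startTime       — when the input occurred
//   processingStart — when event handlers began running
//   processingEnd   — when event handlers finished
//   duration        — input to next painted frame
interface EventTimingLike {
  startTime: number;
  processingStart: number;
  processingEnd: number;
  duration: number;
}

function phases(e: EventTimingLike) {
  return {
    inputDelay: e.processingStart - e.startTime,
    processingTime: e.processingEnd - e.processingStart,
    presentationDelay: e.startTime + e.duration - e.processingEnd,
  };
}

// A click at t=1000 that waited 300ms for the main thread, ran handlers
// for 120ms, then took 80ms to paint the next frame:
console.log(phases({ startTime: 1000, processingStart: 1300, processingEnd: 1420, duration: 500 }));
// { inputDelay: 300, processingTime: 120, presentationDelay: 80 }
```

Whichever phase dominates tells you whether to chase long tasks (input delay), handler logic (processing time), or rendering cost (presentation delay).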
Analytics dashboard on a laptop showing performance graphs and metric charts
Aggregate INP scores from CrUX tell you your p75. RUM tells you which interactions are slow.

Who's Failing INP — It's Not Who You'd Expect

The 2024 Web Almanac has a counterintuitive finding: more popular sites perform worse on mobile INP than less popular ones. Only 53% of top-1,000 sites have good INP on mobile. Sites ranked in the top 10 million do better at 73%.

The hypothesis holds up: high-traffic sites ship more JavaScript. More JavaScript means more work on the main thread, longer tasks, slower interactions. When Google retroactively applied INP to CrUX data, Next.js sites saw a 10-percentage-point drop in Core Web Vitals pass rates. The more JS you ship, the more INP punishes you.

The Main Culprits Behind a Poor INP Score

  • Long tasks — any main thread task exceeding 50ms blocks input processing. A 200ms task adds directly to input delay for anyone who clicks during it.
  • Heavy event handlers — state updates that cascade into large component re-renders, complex calculations in onChange, or synchronous writes to localStorage.
  • Large DOM sizes — Chrome's rendering cost scales with the size and depth of the DOM. Beyond ~1,400 nodes, every layout and style recalculation gets measurably slower.
  • Third-party scripts — analytics, chat widgets, and ad scripts run on the same main thread. Their tasks eat directly into your interaction budget.

Measuring INP Where It Actually Matters: Production

Chrome DevTools shows INP in the Performance panel. CrUX gives you the p75 aggregate for your whole origin. Neither tells you which specific interactions are slow, on which pages, or for which users.

The browser's native `PerformanceObserver` API emits INP data you can capture per user, per interaction, per page. Statvisor's frontend SDK does this automatically — one line added to your layout and you get real INP scores in your dashboard, broken down by page.

```tsx
// Next.js App Router — add to your root layout
import { StatvisorAnalytics } from "@statvisor/sdk/react";

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html>
      <body>
        {children}
        <StatvisorAnalytics frontendKey={process.env.NEXT_PUBLIC_STATVISOR_KEY!} />
      </body>
    </html>
  );
}

// Vanilla JS / other frameworks
import { initStatvisor } from "@statvisor/sdk/browser";
initStatvisor({ frontendKey: process.env.STATVISOR_FRONTEND_KEY });
```

Statvisor's frontend SDK auto-tracks INP, LCP, CLS, FCP, and TTFB using native browser APIs — no cookies, no external requests during load. Add one component and your real-user Core Web Vitals data starts flowing immediately.

Optimizations That Actually Move the Needle

Once you know which interactions are slow (and on which pages), you have a short list of fixes that reliably work:

  • Yield to the main thread — use `scheduler.yield()` (Chrome 129+) or `await new Promise(r => setTimeout(r, 0))` to break long tasks into smaller chunks and let the browser process input between them.
  • Don't block on network — return optimistic UI immediately after a click, then sync with the server in the background. The user perceives instant response.
  • Debounce onChange — don't run heavy logic on every keystroke. Debounce to 150–300ms and the main thread stays free while the user types.
  • Reduce DOM depth — audit components that render large trees unconditionally. Virtualize long lists. Keep rendered nodes under 1,400.
  • Audit third-party scripts — load analytics and chat widgets with `async`/`defer`. Where possible, move them to a Web Worker so they can't block your main thread interactions.
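The first fix on that list can be sketched as a chunking helper (a minimal version under stated assumptions: `yieldToMain` and `processInChunks` are illustrative names, not a library API):

```typescript
// Prefer scheduler.yield() (Chrome 129+) and fall back to a macrotask,
// so the browser can process pending input between chunks.
function yieldToMain(): Promise<void> {
  const s = (globalThis as any).scheduler;
  if (s && typeof s.yield === "function") return s.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large array in small chunks instead of one long task.
async function processInChunks<T>(
  items: T[],
  work: (item: T) => void,
  chunkSize = 100
): Promise<void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) work(item);
    await yieldToMain(); // one long task becomes many short ones
  }
}
```

Each chunk still blocks the main thread, but only briefly: a click that lands mid-processing now waits at most one chunk instead of the whole loop.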

The redBus case study — published on web.dev — shows what real improvements look like. They reduced search page INP from ~870ms to ~360ms by cutting API payload size and moving form state out of Redux. The result: a 7% increase in overall sales. INP isn't a checkbox. It's a conversion metric.

Ready to monitor your API in production?

Statvisor gives you latency percentiles, error rates, and request volume for every route — in minutes, not days.

Get started free →