🛠️ Technical Guide · Performance SEO · Core Web Vitals

Core Web Vitals Guide: How to Improve LCP, INP & CLS to Rank Higher

Core Web Vitals are Google's official set of real-world performance metrics that measure how users actually experience a webpage — covering loading speed, responsiveness, and visual stability. Introduced as a ranking signal in 2021 and continuously refined since, they have become one of the most consequential technical factors in modern SEO. In 2026, with 47% of websites still failing at least one threshold, mastering Core Web Vitals is a genuine competitive advantage.

Core Web Vitals sit within the broader discipline of technical SEO — if you have not yet addressed crawlability, HTTPS, structured data, and site architecture, those foundations should be established before optimising for performance metrics.

This guide covers every dimension of Core Web Vitals: what each metric measures and why Google chose it, how scoring and percentile thresholds work, which tools to use for measurement, and — most importantly — the actionable, high-impact fixes for LCP, INP, and CLS that move you from "Poor" to "Good."

Key insight: To pass Core Web Vitals, at least 75% of real user visits to a URL must score "Good" on all three metrics. Improving your median score is not enough — you need to lift your worst-case users, not just your average.

What are Core Web Vitals?

Core Web Vitals are a subset of Google's broader Web Vitals initiative — a programme designed to provide unified guidance on the signals that matter most for delivering a great user experience on the web. While the broader Web Vitals set includes additional metrics like Time to First Byte (TTFB) and First Contentful Paint (FCP), Core Web Vitals are the three metrics Google has elected to use directly as ranking signals.

The three Core Web Vitals are:

  • LCP (Largest Contentful Paint) — measures loading performance: how quickly the largest visible element on the screen renders.
  • INP (Interaction to Next Paint) — measures responsiveness: how quickly the page reacts to every user interaction throughout the visit.
  • CLS (Cumulative Layout Shift) — measures visual stability: how much page content unexpectedly shifts around during loading.

These three metrics were selected because they each capture a distinct dimension of user frustration: waiting for content to appear, waiting for actions to register, and watching the page jump around unexpectedly. Together they paint a comprehensive picture of real-world page quality.

Metric | What it measures | Good ✅ | Needs Improvement ⚠️ | Poor ❌
LCP — Largest Contentful Paint | Loading speed of the main content element | ≤ 2.5s | 2.5s – 4.0s | > 4.0s
INP — Interaction to Next Paint | Responsiveness to user input throughout the page | ≤ 200ms | 200ms – 500ms | > 500ms
CLS — Cumulative Layout Shift | Visual stability during and after loading | ≤ 0.1 | 0.1 – 0.25 | > 0.25

Why do Core Web Vitals matter for SEO?

Core Web Vitals became an official Google ranking signal with the Page Experience update in June 2021. Since then, their influence on rankings has grown steadily. In competitive SERPs where two pages offer comparable content quality, Core Web Vitals function as a tiebreaker — and the page with the better user experience wins.

The business case goes beyond rankings. Research from SparkToro analysing 150 million search queries found that pages with "Good" Core Web Vitals scores had a 31.4% higher click-through rate from organic search compared to pages with "Poor" scores. A 100ms improvement in LCP has been shown to increase conversions by 0.6–1.2%, according to agency data collected across hundreds of client sites.

There is also a compounding effect. Faster, more stable pages produce lower bounce rates, longer dwell times, and higher engagement rates — all signals that reinforce Google's confidence that the page is worth ranking. Investing in Core Web Vitals is therefore not purely a technical exercise; it is a direct revenue lever.

2026 data point: Sites that improved from "Poor" to "Good" Core Web Vitals saw an average 24.3% increase in organic traffic within 90 days, compared to just 8.2% for sites that moved only from "Poor" to "Needs Improvement." It pays to push all the way to the "Good" threshold rather than stopping at "Needs Improvement."

How does Core Web Vitals scoring work?

Each Core Web Vitals metric is scored at the 75th percentile of real user visits — meaning the score reported is the value that 75% of visits meet or beat. This design intentionally targets your worst-performing quarter of users, not your median experience. A fast experience for most users does not count if a significant fraction still receives a poor one.

To achieve an overall "Good" Core Web Vitals assessment for a URL, all three metrics — LCP, INP, and CLS — must individually reach the "Good" threshold at the 75th percentile. If any single metric fails, the URL's overall status is determined by its worst-performing metric, regardless of how strong the others are.
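
To make the percentile mechanics concrete, here is a minimal sketch — using the nearest-rank method and invented sample values; CrUX's exact aggregation differs — of how a p75 value classifies a URL's LCP:

```javascript
// Nearest-rank p75: the value that 75% of visits meet or beat.
// Sample values below are invented for illustration.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const lcpSeconds = [1.5, 1.8, 2.0, 2.1, 2.3, 2.4, 2.6, 3.9];
const p75 = percentile(lcpSeconds, 75);          // 2.4
const status = p75 <= 2.5 ? 'Good' : 'Not Good'; // 'Good': 75% of visits are at or below 2.4s
```

Note how a single very slow visit (3.9s) does not fail the URL — but if a full quarter of visits were that slow, the p75 would cross the 2.5s threshold.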

Google measures these scores at the URL level first, then at the URL group level (groups of structurally similar pages), and finally at the origin level (the entire domain) if there is insufficient data at lower levels. The scores you see in Google Search Console reflect aggregated real-user data from Chrome browsers — the Chrome User Experience Report (CrUX) dataset.

Field data vs. lab data: What counts for rankings?

Field data (also called real-user monitoring or RUM data) is collected from actual Chrome browser sessions visiting your site. It is aggregated into the CrUX dataset and is the only data source that affects Google rankings. Field data reflects the true distribution of user experiences across different devices, browsers, and network conditions.

Lab data is generated by simulating a page load under controlled conditions using tools like Lighthouse, PageSpeed Insights, or WebPageTest. It is reproducible and deterministic, making it excellent for diagnosing specific issues — but it does not directly affect your rankings. A perfect Lighthouse score does not guarantee good field data if real users are on slower devices or networks.

The practical implication: always cross-reference lab data findings with field data before prioritising fixes. An issue that appears in lab testing but does not show up in your CrUX data may have minimal real-world impact. Conversely, field data failures should be investigated even if your Lighthouse scores look healthy — slower users and devices may be experiencing problems your lab simulation does not capture.

LCP: Largest Contentful Paint explained

🟢 LCP — Largest Contentful Paint

Good: ≤ 2.5 seconds  |  Needs Improvement: 2.5s – 4.0s  |  Poor: > 4.0 seconds

LCP measures the time from when the page first starts loading to when the largest content element in the viewport has fully rendered. The LCP element is almost always one of: an <img> element, a <video> element's poster image, a block-level element containing text (such as an <h1> or <p>), or a CSS background image loaded via url().

Google chose LCP because it closely correlates with when a user perceives the page as "loaded." Earlier metrics like First Contentful Paint (FCP) could be triggered by a small spinner or a single character — LCP targets what actually matters: the dominant visual content.

LCP is currently the hardest Core Web Vital to pass, with only 59% of mobile URLs achieving a "Good" score. The primary bottleneck is almost always image loading — specifically, large hero images that are discovered late, fetched from a distant server, and served in inefficient formats.

Key LCP sub-components to diagnose

  • Time to First Byte (TTFB): How quickly the server responds. High TTFB delays everything downstream.
  • Resource load delay: Time between receiving the first byte of HTML and the browser discovering the LCP resource.
  • Resource load duration: How long it takes to download the LCP resource itself.
  • Element render delay: Time between the LCP resource finishing download and the element actually rendering on screen.
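
These candidates can be inspected directly in the browser. The sketch below uses the standard largest-contentful-paint entry type, guarded so it degrades gracefully in runtimes that do not support it — paste it into the DevTools console to watch LCP candidates arrive as the page renders:

```javascript
// Log each LCP candidate as the browser discovers larger elements.
// Returns false where the Largest Contentful Paint API is unavailable.
function observeLCP(onCandidate) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes?.includes('largest-contentful-paint')) {
    return false;
  }
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // startTime is when the candidate rendered; element is the DOM node.
      onCandidate(entry.startTime, entry.element);
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });
  return true;
}

observeLCP((t, el) => console.log(`LCP candidate at ${t.toFixed(0)}ms:`, el));
```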

INP: Interaction to Next Paint explained

🔵 INP — Interaction to Next Paint

Good: ≤ 200ms  |  Needs Improvement: 200ms – 500ms  |  Poor: > 500ms

INP officially replaced First Input Delay (FID) as a Core Web Vital in March 2024. It measures the latency of all user interactions on a page — clicks, taps, and keyboard inputs — throughout the entire page lifecycle. The INP score for a URL is one of the longest interaction latencies observed during the visit; for pages with many interactions, Google discards one outlier per 50 interactions, so the reported value is effectively a high percentile rather than the absolute worst case.

FID only measured the delay before the browser could begin processing the first interaction — it ignored how long that processing actually took. INP measures the complete interaction cost: from the user's input to the next visual frame the browser paints in response. This makes INP a significantly stricter and more honest measure of whether a page feels responsive.

High INP scores are almost always caused by long tasks on the main thread — JavaScript execution that blocks the browser from responding to user input. Third-party scripts (analytics, advertising, chat widgets) are a common culprit, as are large, inefficient event handlers and frameworks that do excessive DOM manipulation on every interaction.

Understanding the INP interaction flow

  • Input delay: Time the browser waits before it can start processing the interaction (due to other tasks running).
  • Processing time: Time spent executing event handlers and rendering logic triggered by the interaction.
  • Presentation delay: Time between the browser finishing processing and actually painting the next frame to the screen.
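
These phases are exposed through the Event Timing API entries that feed INP. A guarded sketch (the 100ms threshold is an arbitrary choice for illustration) that logs slow interactions in the DevTools console:

```javascript
// Log interactions slower than thresholdMs via the Event Timing API.
// Returns false where the API is unavailable (non-browser runtimes, older browsers).
function observeSlowInteractions(thresholdMs, onSlow) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes?.includes('event')) {
    return false;
  }
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // duration spans input delay + processing time + presentation delay.
      if (entry.duration >= thresholdMs) {
        onSlow(entry.name, entry.duration, entry.target);
      }
    }
  }).observe({ type: 'event', durationThreshold: thresholdMs, buffered: true });
  return true;
}

observeSlowInteractions(100, (type, ms, el) =>
  console.log(`Slow ${type}: ${ms.toFixed(0)}ms on`, el));
```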

CLS: Cumulative Layout Shift explained

🟠 CLS — Cumulative Layout Shift

Good: ≤ 0.1  |  Needs Improvement: 0.1 – 0.25  |  Poor: > 0.25

CLS measures the visual instability of a page — specifically, how much content unexpectedly jumps around during or after loading. Each layout shift event is scored by multiplying the impact fraction (the share of the viewport affected by the shift) by the distance fraction (the distance moved as a share of the viewport's largest dimension). The reported CLS value is the largest burst of shifts — a session window of up to five seconds — rather than the simple sum of every shift across the visit.

The user experience impact is tangible and often infuriating: a button moves just as you are about to tap it, causing an accidental click; a headline jumps down as an ad banner loads above it; a form field shifts away from the keyboard as a cookie banner appears. These moments damage trust and increase task failure rates.

The most common causes of CLS are images and iframes without declared dimensions, dynamically injected content inserted above existing content, web fonts that cause a Flash of Unstyled Text (FOUT) or Flash of Invisible Text (FOIT), and animations that use CSS properties (like top, left, margin) that trigger layout recalculation instead of the GPU-composited transform property.
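
The per-shift arithmetic is simple enough to sanity-check by hand. A simplified sketch (the real impact fraction is computed from the union of the element's before and after positions; the numbers here are invented):

```javascript
// Layout shift score = impact fraction × distance fraction.
function shiftScore(impactFraction, distanceMovedPx, viewportMaxDimensionPx) {
  const distanceFraction = distanceMovedPx / viewportMaxDimensionPx;
  return impactFraction * distanceFraction;
}

// A late-loading banner affecting half the viewport pushes content down
// 120px in an 800px-tall viewport: 0.5 × (120 / 800) = 0.075 —
// a single shift like this already consumes most of the 0.1 "Good" budget.
const score = shiftScore(0.5, 120, 800);
```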

How to measure Core Web Vitals

Use the right tool for each type of measurement:

Tool | Data type | Best for | Free?
Google Search Console | Field (CrUX) | Site-wide overview; identifying failing URL groups | Yes
PageSpeed Insights | Field + Lab | Per-page field data alongside lab diagnostics | Yes
Lighthouse (Chrome DevTools) | Lab | Local debugging; identifying root causes | Yes
Web Vitals Chrome Extension | Field (live) | Real-time scores while browsing your own site | Yes
WebPageTest | Lab | Advanced waterfall analysis; filmstrip view | Yes
CrUX Dashboard (Looker Studio) | Field (CrUX) | Historical trends across origin or URL level | Yes
Screaming Frog + PSI | Lab (bulk) | Bulk PageSpeed analysis across large crawls | Paid

Start with Google Search Console for a site-wide triage — it shows which URL groups are failing and which metric is the bottleneck. Then use PageSpeed Insights or Lighthouse on specific URLs to identify the exact technical causes. The Web Vitals Chrome Extension is invaluable for debugging CLS issues in real-time as you scroll and interact with a page.

How to improve LCP

LCP improvements typically require addressing image delivery, server response times, and render-blocking resources simultaneously. The following fixes are listed in approximate order of impact:

1. Preload the LCP image

Add a <link rel="preload"> tag in your <head> for the hero image. This tells the browser to fetch the LCP resource as early as possible, before it would normally be discovered in the HTML. This single change can reduce LCP by 0.5–1.0 seconds on many pages.

<link rel="preload" as="image" href="hero.webp" fetchpriority="high">

2. Serve images in WebP or AVIF format

Convert hero images, thumbnails, and featured images from JPEG or PNG to WebP (typically 25–35% smaller) or AVIF (up to 50% smaller) at comparable visual quality. Use TechOreo's free Image Converter to convert images in bulk.

3. Reduce Time to First Byte (TTFB)

TTFB is the foundation of LCP. If your server is slow, everything else is delayed. Use a CDN to cache HTML responses at edge locations, enable server-side caching (Redis, Varnish), and choose a hosting provider with sub-200ms TTFB targets. Use the Server-Timing header to diagnose where server time is spent.
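
Server-Timing values follow a simple `name;dur=milliseconds` syntax. A small sketch of a helper that formats measured phases into a header value (the phase names and durations here are hypothetical):

```javascript
// Build a Server-Timing header value from measured phase durations (ms).
function serverTimingHeader(phases) {
  return Object.entries(phases)
    .map(([name, ms]) => `${name};dur=${ms}`)
    .join(', ');
}

// Attach it to a response, e.g. res.setHeader('Server-Timing', value) in Node.
const value = serverTimingHeader({ db: 42, cache: 3, render: 18 });
// 'db;dur=42, cache;dur=3, render;dur=18'
```

Browsers surface these values in the DevTools Network panel's Timing tab, so you can see at a glance whether a slow TTFB comes from the database, the cache, or the template layer.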

4. Eliminate render-blocking resources above the fold

CSS and synchronous JavaScript in <head> block the browser from rendering anything until they finish downloading and executing. Inline critical CSS (the styles needed to render the visible viewport), defer all non-critical JavaScript with defer or async attributes, and load non-critical stylesheets with media="print" onload="this.media='all'".
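
Putting those techniques together in the <head> (file paths are placeholders):

```html
<!-- Critical CSS inlined; everything else deferred -->
<style>/* inlined critical, above-the-fold styles */</style>
<link rel="stylesheet" href="/css/non-critical.css"
      media="print" onload="this.media='all'">
<script src="/js/app.js" defer></script>
```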

5. Do not lazy-load the LCP image

This is a common, costly mistake: adding loading="lazy" to the hero image. Lazy loading defers the fetch until the element is near the viewport — but the LCP image is the viewport. Only apply lazy loading to images below the fold. Use loading="eager" (the default) or fetchpriority="high" on the LCP image.
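
A side-by-side sketch of the two cases (file names are placeholders):

```html
<!-- Hero (LCP) image: fetched eagerly, at high priority -->
<img src="hero.webp" width="1200" height="600"
     fetchpriority="high" alt="Hero banner">

<!-- Below-the-fold images: safe to lazy-load -->
<img src="gallery-1.webp" width="600" height="400"
     loading="lazy" alt="Gallery photo">
```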

6. Use a Content Delivery Network (CDN)

A CDN distributes your static assets — images, CSS, JavaScript — across globally distributed edge servers. Users receive assets from the nearest node, dramatically reducing latency for users far from your origin server. Cloudflare, Fastly, and AWS CloudFront all offer free tiers suitable for most sites.

7. Optimise web fonts

Fonts can delay text-based LCP elements significantly. Use font-display: optional or font-display: swap to prevent invisible text. Preload your most critical font files. Self-host fonts rather than loading them from Google Fonts to eliminate a cross-origin request.
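
A minimal sketch of a preloaded, self-hosted font (the file path and family name are placeholders):

```html
<link rel="preload" as="font" type="font/woff2"
      href="/fonts/heading.woff2" crossorigin>
<style>
  @font-face {
    font-family: "Heading";
    src: url("/fonts/heading.woff2") format("woff2");
    font-display: swap; /* or `optional` to eliminate swap-induced shifts */
  }
</style>
```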

How to improve INP

INP improvements require identifying and reducing long tasks on the JavaScript main thread. The browser cannot respond to user input while it is busy executing long tasks — any task over 50ms is considered "long" by the browser and will cause perceptible input delay.

1. Audit and remove unnecessary third-party scripts

Third-party scripts — analytics, advertising tags, live chat widgets, A/B testing tools — are the single most common source of long tasks. Use the Chrome DevTools Performance panel to identify which scripts are consuming main thread time. Remove or defer any third-party script that is not essential to page function.

2. Break up long tasks with scheduler.yield()

If you have unavoidable long JavaScript tasks, use the Scheduler API to yield control back to the browser between chunks of work. This lets the browser process pending interactions before the task resumes (where scheduler.yield() is unsupported, awaiting a setTimeout-based promise is a serviceable fallback):

async function processData(items) {
  for (const item of items) {
    processItem(item);
    await scheduler.yield(); // Yield to the browser after each item
  }
}

3. Minimise event handler complexity

Heavy event handlers that mix complex DOM queries and synchronous layout reads (like offsetHeight or getBoundingClientRect()) with DOM writes cause forced synchronous layout — the browser must recompute layout mid-task, significantly extending processing time. Batch all DOM reads before any DOM writes.
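
A minimal sketch of the read-then-write pattern (the card-equalising task is hypothetical):

```javascript
// Equalise card heights without forced synchronous layout:
// perform every layout read before the first write.
function equaliseHeights(cards) {
  // Phase 1: reads only — layout is computed once, here.
  const heights = cards.map((card) => card.getBoundingClientRect().height);
  const max = Math.max(...heights);
  // Phase 2: writes only — no read follows, so no mid-task reflow.
  cards.forEach((card) => { card.style.height = `${max}px`; });
  return max;
}
```

Interleaving the two phases (read one card, write it, read the next) would force the browser to recompute layout on every iteration.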

4. Use code splitting and lazy loading for JavaScript

Do not ship your entire JavaScript bundle on initial page load. Use dynamic import() to split your code into smaller chunks and load features only when they are needed. This reduces the amount of JavaScript that needs to parse and compile during page startup — reducing the risk of long tasks blocking early interactions.
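
A sketch of the pattern (the module path, element IDs, and renderChart export are hypothetical):

```javascript
// Fetch a heavy feature only on first use via dynamic import().
let chartModulePromise = null;

function loadChartModule() {
  // Cache the promise so the chunk is downloaded at most once.
  chartModulePromise ??= import('./chart.js');
  return chartModulePromise;
}

// Wire up lazily in a browser context only.
if (typeof document !== 'undefined') {
  document.querySelector('#show-chart')?.addEventListener('click', async () => {
    const { renderChart } = await loadChartModule();
    renderChart(document.querySelector('#chart-root'));
  });
}
```

Until the user actually clicks, the chart chunk is never parsed, compiled, or executed — it cannot contribute a long task to page startup.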

5. Defer non-critical JavaScript

Any JavaScript that does not need to run during the critical rendering path should be deferred. Add defer to script tags so they execute after the HTML has been parsed. Use async for scripts with no dependencies that can run independently.

How to improve CLS

Most CLS issues are predictable and preventable. The root cause is almost always content that the browser did not know about in advance, causing it to re-flow the layout after the initial render:

1. Always define width and height on images and iframes

This is the single highest-impact CLS fix for most sites. When the browser knows an image's dimensions before it loads, it reserves the correct space in the layout — no shift occurs when the image appears. In HTML: <img width="800" height="450" src="...">. In CSS, use aspect-ratio: 16/9 as an alternative.
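
For a responsive image, the attributes and the CSS work together — modern browsers derive the aspect ratio from the width and height attributes even when CSS rescales the image (product.webp is a placeholder):

```html
<!-- The browser reserves an 800:450 box before the file arrives,
     even though CSS makes the rendered size responsive. -->
<img src="product.webp" width="800" height="450" alt="Product photo"
     style="max-width: 100%; height: auto;">
```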

2. Avoid inserting content above existing content

Never inject dynamic content — cookie banners, notification bars, promotional popups, ad units — above existing page content after the initial render. If such elements are necessary, pre-allocate the space they will occupy using a placeholder with fixed dimensions, or display them in a way that overlays rather than displaces content (e.g. a fixed-position banner).

3. Use CSS transforms for animations

Animations that change CSS properties like top, left, margin, or padding trigger full layout recalculations and contribute to CLS. Use transform: translate() and opacity instead — these are composited on the GPU and do not affect page layout.
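
The two approaches side by side, for a hypothetical slide-in banner:

```css
/* Avoid: animating `top` forces layout and can shift surrounding content */
.banner--layout { position: relative; animation: slide-top 0.3s ease-out; }
@keyframes slide-top { from { top: -60px; } to { top: 0; } }

/* Prefer: `transform` is GPU-composited and never moves other elements */
.banner--composited { animation: slide-transform 0.3s ease-out; }
@keyframes slide-transform {
  from { transform: translateY(-60px); }
  to   { transform: translateY(0); }
}
```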

4. Stabilise font loading with font-display: optional

When a web font loads and swaps in after the fallback font, the different character widths cause text reflow and layout shifts. font-display: optional uses the fallback font if the web font is not available within a short timeout, eliminating FOUT-induced shifts at the cost of potentially showing the fallback font on first visit.

5. Use the Layout Instability API to pinpoint shifts

If PageSpeed Insights is not showing you where shifts originate, run the following PerformanceObserver snippet in your browser console while interacting with the page to capture precise shift data:

new PerformanceObserver((list) => {
  list.getEntries().forEach(entry => {
    if (!entry.hadRecentInput) {
      console.log('CLS shift:', entry.value, entry.sources);
    }
  });
}).observe({type: 'layout-shift', buffered: true});

Core Web Vitals on mobile vs. desktop

Google measures Core Web Vitals separately for mobile and desktop users. The same three metrics apply to both, but mobile scores are consistently harder to achieve — mobile devices have slower processors, less RAM, and are more likely to be on variable or congested network connections.

Since Google uses mobile-first indexing, your mobile Core Web Vitals are the most critical to optimise. A page that scores "Good" on desktop but "Poor" on mobile will still be penalised in rankings. Prioritise your mobile performance analysis and test on mid-range Android devices, not just on a high-end iPhone or your development laptop.

Chrome DevTools' Device Mode with network throttling (Slow 4G or Fast 3G) provides a reasonable simulation of real mobile conditions. For even more accurate testing, use WebPageTest's real mobile device testing feature on a Moto G Power — one of the most commonly owned Android devices globally and Google's own testing baseline.

Core Web Vitals for e-commerce

E-commerce sites face specific Core Web Vitals challenges. Product pages typically have multiple high-resolution images, complex JavaScript for variant selectors and cart functionality, and numerous third-party tracking scripts — all of which combine to degrade performance.

The most impactful e-commerce-specific improvements are:

  • Lazy-load below-the-fold product images but never the primary product image (LCP element).
  • Implement a product image CDN with automatic WebP/AVIF format negotiation and on-the-fly resizing.
  • Audit your tag manager for unused pixels — marketing and retargeting scripts are a leading source of INP failures on product and category pages.
  • Reserve space for product reviews and UGC loaded via API calls — these are a common CLS source.
  • Test your checkout funnel separately — heavily scripted checkout pages often have the worst Core Web Vitals scores on a site.

Core Web Vitals for WordPress sites

WordPress powers a significant share of the web and has dedicated tooling for Core Web Vitals optimisation. The following plugins and settings address the most common WordPress-specific issues:

  • WP Rocket or LiteSpeed Cache: Full-stack performance plugins that handle caching, CSS/JS minification, lazy loading, and critical CSS generation with minimal configuration.
  • Imagify or ShortPixel: Bulk image compression and automatic WebP/AVIF conversion for your entire media library.
  • Perfmatters: Script manager for disabling unnecessary WordPress scripts (emojis, embeds, jQuery migrations) on a per-page basis.
  • GeneratePress or Kadence: Lightweight themes with clean, minimal code that score well on Core Web Vitals out of the box — avoid page builders that inject excessive CSS and JavaScript.
  • Disable or defer Gutenberg block library CSS if you are not using the block editor on the front end.

For WordPress specifically, the most common INP culprit is excessive JavaScript from page builders, slider plugins, or WooCommerce variant scripts running on the main thread. Conduct a JavaScript audit with Chrome DevTools Coverage tab to identify unused code before purchasing additional plugins.

How Core Web Vitals interact with other ranking factors

Core Web Vitals are one signal among hundreds in Google's ranking algorithm. The hierarchy, from most to least influential, is roughly: content relevance and quality, E-E-A-T signals (Experience, Expertise, Authoritativeness, Trustworthiness), backlinks and domain authority, Core Web Vitals and page experience, and other signals like mobile-friendliness and HTTPS. Excellent Core Web Vitals will not overcome thin content or a lack of authoritative backlinks — but poor Core Web Vitals can prevent great content from achieving its full ranking potential.

Pages ranking at position 1 are statistically 10% more likely to pass all three Core Web Vitals thresholds compared to pages at position 9. This correlation has strengthened year over year as Google continues refining how page experience signals influence rankings. The practical implication: in competitive niches where the top results all have strong content and authority, Core Web Vitals can be the decisive differentiator.

Core Web Vitals and conversion rates

The relationship between page speed and conversion rate is well-documented. Google's own data indicates that as page load time increases from one second to three seconds, the probability of a bounce increases by 32%. At five seconds, that probability rises to 90%. For every 100ms improvement in LCP, e-commerce conversion rates improve by 0.6–1.2%.

CLS failures are particularly damaging to conversions because they cause accidental clicks — a user intending to click one button hits a different one because the layout shifted. This causes frustration, increases task abandonment, and erodes brand trust. In e-commerce, accidental add-to-cart or checkout errors from CLS can directly inflate cart abandonment rates.

Technical speed optimisation is therefore not just an SEO task — it is a direct business investment with a measurable return on investment. Frame Core Web Vitals improvements to stakeholders in terms of conversion rate improvements and revenue per visitor, not just rankings and traffic.

How long before improvements show in rankings?

Google updates the CrUX dataset that powers Core Web Vitals scores on a rolling 28-day window. This means a fix you deploy today will not be fully reflected in your scores for approximately four weeks — the old data gradually phases out as new data accumulates. There is no way to accelerate this process; you cannot request a CrUX refresh.

Once your field data improves, ranking changes typically follow within one to two crawl cycles. For frequently crawled pages on authoritative domains, this can be as fast as one to two weeks after the CrUX data improves. For less frequently crawled or lower-authority pages, allow six to ten weeks from the time of implementation before assessing the ranking impact.

Core Web Vitals audit checklist

Use this checklist when auditing a site for Core Web Vitals issues:

Check | Metric | Tool
Hero image preloaded with fetchpriority="high" | LCP | PageSpeed Insights
LCP image served in WebP or AVIF format | LCP | Chrome DevTools Network tab
TTFB under 600ms for key URLs | LCP | WebPageTest / PageSpeed Insights
Render-blocking CSS/JS eliminated above the fold | LCP | Lighthouse
No loading="lazy" on the LCP image | LCP | Source code review
CDN in use for static assets | LCP | WebPageTest waterfall
No long tasks (>50ms) on main thread during key interactions | INP | Chrome DevTools Performance
Third-party scripts audited and unnecessary ones removed | INP | Chrome DevTools Coverage
JavaScript code-split and lazy-loaded where possible | INP | Lighthouse / Bundle analyser
All <img> and <iframe> elements have explicit dimensions | CLS | Source code review / Lighthouse
No dynamic content injected above existing content post-load | CLS | Web Vitals Extension / DevTools
Animations use transform and opacity only | CLS | Chrome DevTools Rendering
Fonts use font-display: swap or optional | CLS | PageSpeed Insights
Mobile field data checked in Google Search Console | All | Google Search Console
75th percentile thresholds met for all three metrics | All | PageSpeed Insights (Field data)

Frequently Asked Questions

What are Core Web Vitals?

Core Web Vitals are three real-world performance metrics defined by Google: Largest Contentful Paint (LCP) for loading speed, Interaction to Next Paint (INP) for responsiveness, and Cumulative Layout Shift (CLS) for visual stability. They form part of Google's Page Experience ranking signal and are measured using real Chrome user data aggregated in the CrUX dataset. All three must pass the "Good" threshold at the 75th percentile of real user visits for a URL to receive an overall "Good" assessment.

Are Core Web Vitals a Google ranking factor?

Yes. Core Web Vitals are a confirmed Google ranking signal introduced with the Page Experience update in June 2021. They function primarily as a tiebreaker: when two pages offer comparable content quality, the page with the better user experience wins. Pages with "Good" Core Web Vitals show a 31.4% higher click-through rate compared to "Poor" pages in controlled research, and agency data shows an average 24.3% increase in organic traffic for sites that improve from "Poor" to "Good."

What is the difference between field data and lab data?

Field data is real-world performance data collected from actual Chrome browser sessions and aggregated in the CrUX (Chrome User Experience Report) dataset. Only field data affects Google rankings. Lab data is a simulated page load under controlled conditions using tools like Lighthouse or PageSpeed Insights — it is excellent for diagnosing specific issues but does not directly affect your rankings. A perfect Lighthouse score does not guarantee good field data if your real users are on slower devices or networks.

What is LCP and how do I improve it?

LCP (Largest Contentful Paint) measures how quickly the largest visible content element — usually a hero image or main headline — renders on screen. Good LCP is under 2.5 seconds. The most impactful improvements are: preloading the LCP image with fetchpriority="high", serving images in WebP or AVIF format, using a CDN to reduce TTFB, eliminating render-blocking resources in the <head>, and ensuring the LCP image does not have loading="lazy" applied to it.

Why did INP replace FID?

Interaction to Next Paint (INP) officially replaced First Input Delay (FID) as a Core Web Vital in March 2024. FID only measured the delay before the browser began processing the very first interaction on a page. INP is far more comprehensive: it measures the full latency of all clicks, taps, and keyboard interactions throughout the entire page session. Good INP is under 200 milliseconds. INP failures are almost always caused by JavaScript long tasks on the main thread blocking the browser from responding to user input.

What causes a high CLS score?

The most common causes of a high CLS score are: images and iframes without declared width and height attributes (causing layout reflow when they load), dynamic content injected above existing page content (ads, cookie banners, promotional notifications), web fonts that cause a Flash of Unstyled Text (FOUT) when they load and swap in, and CSS animations that use layout-triggering properties (top, margin) instead of transform. The fix in most cases is to pre-allocate space for any content that arrives asynchronously.

How do I measure Core Web Vitals?

Use Google Search Console as your starting point — its Core Web Vitals report shows real-world field data for all URL groups on your site, grouped by status and metric. For per-page diagnosis, use PageSpeed Insights (combines field and lab data) or run a Lighthouse audit in Chrome DevTools. The Web Vitals Chrome Extension gives you live LCP, INP, and CLS readings on any page you visit. For advanced waterfall analysis and real-device mobile testing, WebPageTest is the industry standard.

How long do improvements take to affect rankings?

Google updates the CrUX dataset on a rolling 28-day window, so your scores will not reflect recent improvements for approximately four weeks after implementation. Once your field data improves, ranking changes typically follow within one to two crawl cycles — roughly one to eight weeks depending on how frequently Google crawls your pages. Allow six to ten weeks from implementation before making a final assessment of the ranking impact. Speed improvements that affect bounce rate and dwell time can produce indirect ranking benefits even sooner.

Do all three metrics need to pass for a "Good" assessment?

Yes. To receive an overall "Good" Core Web Vitals assessment, at least 75% of real user sessions to a URL must record a "Good" score for each of LCP, INP, and CLS individually. If any single metric fails at the 75th percentile threshold, the URL's overall status is determined by its worst-performing metric. This means you must improve the experience for your slowest users — not just your average user — which requires identifying and addressing the root causes of poor performance on real low-end devices and slow connections.

Are mobile and desktop measured separately?

The same three metrics apply to both mobile and desktop, but Google measures and reports them separately. Mobile scores are significantly harder to achieve because mobile devices have slower processors, less RAM, and are often on variable network connections. Since Google uses mobile-first indexing, your mobile Core Web Vitals are the most critical scores to optimise. Always test on a mid-range Android device (not a high-end phone) using throttled network conditions to accurately simulate the experience of your median mobile visitor.


Written by

Rohit Sharma

Rohit is the Technical SEO Specialist & AI Search Researcher at TechOreo with 13+ years of experience in technical SEO, Core Web Vitals, GA4, and AI-powered search. He has helped 150+ websites achieve measurable organic growth and is a recognised voice on GEO and AEO strategy in the post-AI search landscape.