Google ranks you partly on how fast your pages load. That’s been true for years, but the bar keeps moving. Most mobile sites still fail Core Web Vitals. Desktop does better, though not by much. DDoS attacks roughly doubled last year – Cloudflare saw massive spikes in volume. And users leave slow pages. These aren’t hypotheticals. They’re what architects deal with every day.
How Fast Is Fast Enough – and Who’s Measuring
Core Web Vitals are still the main way Google measures page speed. Three metrics, three thresholds – LCP, INP, and CLS. The HTTP Archive’s 2025 Web Almanac tracked how sites scored over the past three years. The results are mixed. Here’s where things stand:
| Metric | What It Tracks | “Good” Means | What Breaks It |
| --- | --- | --- | --- |
| Largest Contentful Paint (LCP) | How long until the biggest visible element shows up | Under 2.5 seconds | Slow servers, uncompressed images, bad CDN setup |
| Interaction to Next Paint (INP) | How fast the page reacts when you click or tap | Under 200 milliseconds | Heavy JavaScript, blocked main thread, lazy hydration |
| Cumulative Layout Shift (CLS) | Whether stuff jumps around while loading | Score below 0.1 | Late-loading ads, font swaps, injected banners |
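The thresholds in the table map cleanly to a small classifier. A sketch, not part of any official tooling: Google actually defines three buckets per metric, with a second cutoff (4 seconds for LCP, 500 ms for INP, 0.25 for CLS) separating “needs improvement” from “poor”, and a page passes CWV only when all three metrics rate good at the 75th percentile of real-user data.

```javascript
// Core Web Vitals thresholds as published by Google:
// [upper bound for "good", upper bound for "needs improvement"]
const THRESHOLDS = {
  LCP: [2500, 4000], // milliseconds
  INP: [200, 500],   // milliseconds
  CLS: [0.1, 0.25],  // unitless score
};

// Classify a single metric value into Google's three buckets.
function rate(metric, value) {
  const [good, needsImprovement] = THRESHOLDS[metric];
  if (value <= good) return "good";
  if (value <= needsImprovement) return "needs-improvement";
  return "poor";
}

// A page passes CWV only when all three metrics are good
// (assessed at the 75th percentile of real-user data).
function passesCWV({ lcp, inp, cls }) {
  return (
    rate("LCP", lcp) === "good" &&
    rate("INP", inp) === "good" &&
    rate("CLS", cls) === "good"
  );
}

console.log(rate("LCP", 3100));                             // "needs-improvement"
console.log(passesCWV({ lcp: 2100, inp: 150, cls: 0.05 })); // true
```

The real measurement plumbing is usually handled by Google’s `web-vitals` library in the browser; this sketch only encodes the scoring rules.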
Mobile CWV pass rates climbed from 36% in 2023 to 48% in 2025. Desktop went from 48% to 56%. Progress, sure. But that pace means roughly half of all websites still deliver a mediocre or bad first impression on phones. And phones are where most people browse.
A big deal in 2025 was browser coverage. Firefox shipped INP support in version 144 last October. Safari started working on both LCP and INP in its Technology Preview builds. DebugBear’s year-end review flagged CLS support for Safari and Firefox as a likely Interop 2026 target. Right now, Chrome’s CrUX dataset is the only authoritative source for real-user data – and it misses everyone on Safari or Firefox. That blind spot matters more than most teams admit.
Attacks Got Faster, Bots Got Smarter, and Manual Response Can’t Keep Up
Cloudflare published its 2026 Threat Report in early March. The data comes from a network handling roughly 20% of global web traffic. Some of the numbers are hard to ignore. The findings that hit architecture decisions hardest:
- DDoS volume more than doubled in 2025. Most attacks lasted under 10 minutes – way too fast for a human to react. Automated, always-on mitigation at the edge isn’t optional anymore.
- Bots now generate 94% of all login attempts on Cloudflare’s network. Of the human logins, 63% use credentials already leaked in earlier breaches. Think about that. Nearly every login you see is either a bot or a person reusing a stolen password.
- Phishing-as-a-Service operators are exploiting weak email authentication. 46% of emails analyzed failed DMARC checks. The barrier to running phishing campaigns has basically collapsed.
The takeaway isn’t subtle. WAFs, rate limiters, bot filters, and DDoS shields need to sit at the edge, not behind your app server. Multi-CDN setups are gaining traction for exactly this reason – 34% of enterprise teams are actively testing them. And 88% of companies now run hybrid or multi-cloud configs, which makes enforcing consistent security rules across providers genuinely difficult.
Nobody has solved that last part cleanly yet. It’s messy. But pretending your origin server can handle it alone is worse.
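To make “mitigation at the edge” concrete, here is a token-bucket rate limiter sketch with an injectable clock. The capacity and refill numbers are invented illustration values; in practice you would configure your CDN’s built-in rate-limiting rules rather than hand-roll this, but the logic is the same: reject excess requests with a 429 before the origin ever sees them.

```javascript
// Token-bucket rate limiter sketch. Capacity and refill rate here are
// illustrative; real edge platforms expose this as configuration.
class TokenBucket {
  constructor({ capacity = 10, refillPerSec = 5, now = () => Date.now() } = {}) {
    this.capacity = capacity;
    this.refillPerSec = refillPerSec;
    this.now = now; // injectable clock, which makes the logic testable
    this.tokens = capacity;
    this.last = now();
  }

  // Returns true if the request is allowed, false if it should be
  // rejected (typically with an HTTP 429, before the origin sees it).
  allow() {
    const t = this.now();
    const elapsedSec = (t - this.last) / 1000;
    this.last = t;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSec
    );
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Simulated clock: 12 requests in the same millisecond exhaust a
// 10-token bucket, so the last two are rejected.
let fakeTime = 0;
const bucket = new TokenBucket({ capacity: 10, refillPerSec: 5, now: () => fakeTime });
const results = Array.from({ length: 12 }, () => bucket.allow());
console.log(results.filter(Boolean).length); // 10
```

The injectable clock matters: sub-10-minute attack bursts are exactly the scenario you want to simulate in tests rather than wait for in production.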
Edge Rendering, Serverless Functions, and Why Everyone Ends Up in the Same Place
Over 40 million sites use CDNs. That number is old news. What’s newer is how much logic has moved to the edge – server-side rendering, auth checks, A/B tests, all running at CDN nodes instead of a central origin.
Three patterns keep coming up in architecture discussions this year:
- Edge-first rendering. Next.js, Remix, and Astro all support generating HTML at edge nodes instead of routing requests back to a central server. Your LCP improves because the HTML doesn’t have to travel far. The trade-off: debugging gets harder when your code runs in 200 locations.
- Serverless compute at the edge. Cloudflare Workers, Vercel Edge Functions, AWS Lambda@Edge. You write a function, and it runs wherever your user is. Cold starts are the weak spot. Execution time limits are the other one.
- Streaming and partial hydration. Instead of shipping a full page or an empty shell that JavaScript fills in, modern frameworks stream HTML in chunks and only hydrate the interactive parts. LCP gets better because content shows up sooner. INP gets better because there’s less JavaScript fighting for the main thread.
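The streaming pattern can be sketched with the web-standard Request, Response, and ReadableStream APIs that edge runtimes share (Node 18+ has them too, which is what makes this runnable outside a CDN). The `handleRequest` function and the HTML fragments are invented for illustration; frameworks like Next.js generate this plumbing for you.

```javascript
// Minimal streaming-HTML handler sketch using web-standard APIs.
// The static shell flushes immediately so the browser can start
// painting; the slow part streams in afterwards.
async function handleRequest(request, loadSlowPart) {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      // 1. Flush the static shell right away. This is typically what
      //    the browser needs to render the largest contentful element.
      controller.enqueue(encoder.encode("<html><body><h1>Dashboard</h1>"));
      // 2. Await the slow data, then stream the rest of the document.
      const widget = await loadSlowPart();
      controller.enqueue(
        encoder.encode(`<section>${widget}</section></body></html>`)
      );
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "content-type": "text/html; charset=utf-8" },
  });
}

// Usage sketch: the "slow part" stands in for a database or API call.
handleRequest(new Request("https://example.com/dashboard"), async () => "widget data")
  .then((res) => res.text())
  .then((html) => console.log(html));
```

The key property is that LCP no longer waits on the slowest data dependency: the shell’s bytes are on the wire before `loadSlowPart` resolves.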
The iGaming sector adopted these patterns early – platforms serving real-time transactions across dozens of countries can’t afford slow loads or downtime, and casino sites face the same architectural demands around latency and uptime as any high-traffic e-commerce operation. SaaS dashboards, media players, and shopping carts all land on the same architecture eventually. The math on user drop-off forces everyone’s hand.

What Visitors See When Your Architecture Fails (or Works)
Nobody visits your site and thinks, “nice CDN configuration.” They notice three things: did it load fast, did the button work when they tapped it, and did the page jump around. That’s it. But the gap between good and bad is widening, and the cost of bad keeps climbing:
- Amazon found that every 100ms of added latency costs them 1% in revenue. One hundred milliseconds. You can’t feel that delay yourself, but conversion rates can.
- Hostinger’s load time report found customer satisfaction drops 16% after a three-second wait. And 44% of unhappy visitors tell other people about it.
- Mobile is where most of this plays out. Phones account for more than half of global web traffic, but they run slower processors on worse connections. A site that looks great on your MacBook can fail all three Core Web Vitals on a mid-range Android over 4G.
CLS is the sneaky one. Users don’t know the term, but they know the feeling – you’re about to tap a link and an ad loads above it, pushing everything down. Your finger hits the wrong thing. That moment destroys trust faster than a slow load does. The threshold is 0.1. The best sites aim for zero.
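CLS is also worth understanding mechanically. Since mid-2021 it has been defined as the largest “session window” of layout shifts: shifts less than a second apart join a window, each window is capped at five seconds, shifts right after user input are excluded, and the reported score is the biggest window sum. A sketch of that accumulation (the entry shape mirrors the browser’s `layout-shift` performance entries, but the data here is invented):

```javascript
// Compute CLS from layout-shift entries using session windows:
// shifts < 1s apart join a window, each window caps at 5s, and the
// reported CLS is the largest window sum. Shifts flagged with
// hadRecentInput (e.g. right after a tap) are excluded.
function computeCLS(entries) {
  let cls = 0;             // largest session-window sum seen so far
  let windowSum = 0;       // running sum for the current window
  let windowStart = 0;     // timestamp of the window's first shift
  let prevTime = -Infinity;

  for (const e of entries) {
    if (e.hadRecentInput) continue;
    const startsNewWindow =
      e.startTime - prevTime >= 1000 ||   // a 1s gap closes the window
      e.startTime - windowStart >= 5000;  // the 5s cap closes it too
    if (startsNewWindow) {
      windowSum = 0;
      windowStart = e.startTime;
    }
    windowSum += e.value;
    prevTime = e.startTime;
    cls = Math.max(cls, windowSum);
  }
  return cls;
}

// Two shifts close together form one window (0.05 + 0.08 = 0.13);
// a later isolated shift of 0.06 starts a new, smaller window.
const entries = [
  { startTime: 1000, value: 0.05, hadRecentInput: false },
  { startTime: 1400, value: 0.08, hadRecentInput: false },
  { startTime: 9000, value: 0.06, hadRecentInput: false },
];
console.log(computeCLS(entries).toFixed(2)); // "0.13"
```

Note how the windowing punishes the classic late-ad scenario: several small shifts in quick succession stack into one window, which is exactly the tap-stealing moment users remember.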
Architecture decisions aren’t abstract. They show up in bounce rates, in cart abandonment, in support tickets, and in whether someone comes back tomorrow. The teams that treat speed and stability as core product features – not items on a backlog – ship faster sites. And faster sites make more money. That’s not a theory. Amazon proved it. Google confirmed it. The data is there.