Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

The Performance Inequality Gap, 2026

Have we finally rounded the corner? A look at the device and network landscape.

Let's cut to the chase, shall we? Updated network test parameters for 2026 are 9Mbps down, 3Mbps up, with 100ms of RTT latency.

Regarding devices, my updated recommendations are the Samsung Galaxy A24 4G (or equivalent) and the HP 14. The goal of these recommendations is to emulate a 75th percentile user experience, meaning a full quarter of devices and networks will perform worse than this baseline.

Plugging these parameters into the updated budget calculator, we can derive critical-path resource thresholds for three and five second page load targets. Per usual, we consider pages built in two styles: JS-light, where only 15% of critical-path bytes are JavaScript, and JS-heavy, comprised of 50% JavaScript:

Time     JS-light (MiB)            JS-heavy (MiB)
         Total   JS     Other      Total   JS     Other
3 sec    2.0     0.3    1.7        1.2     0.62   0.62
5 sec    3.7     0.57   3.2        2.3     1.15   1.15

Note: Budgets account for two TLS connections.

Many sites initiate more early connections, reducing time available to download resources. Using four connections cuts the three-second budget by 350 KiB, to 1.5 MiB / 935 KiB. The five-second budget loses nearly half a megabyte, dropping to 3.2 / 1.9 MiB.

It pays to adopt H/2 or H/3 and consolidate connections.
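To make the arithmetic concrete, here is a deliberately simplified sketch of how such budgets can be derived. This is my illustrative model, not the author's calculator: the setup-round-trip count and the per-MiB JavaScript CPU cost are assumptions chosen to roughly reproduce the table above.

```python
# Illustrative budget model (an assumption-laden sketch, not the real
# calculator): time left after connection setup, divided by the per-MiB
# cost of downloading bytes plus an assumed CPU cost for the JavaScript
# fraction on a P75 device.

MBPS = 9.0        # P75 downlink (Mbps)
RTT_S = 0.100     # P75 round-trip time (seconds)
SETUP_RTTS = 3    # assumed DNS + TCP + TLS round trips per connection

def budget_mib(target_s: float, js_share: float, connections: int = 2,
               js_cpu_s_per_mib: float = 2.0) -> float:
    """Critical-path budget (MiB) for a page-load target.

    js_cpu_s_per_mib is an assumed cost of parsing/compiling/executing
    each MiB of JavaScript on a P75 device.
    """
    transfer_s = target_s - connections * SETUP_RTTS * RTT_S
    mib_per_s = MBPS / 8 * 1e6 / (1024 * 1024)  # Mbps -> MiB/s
    # Each MiB costs download time plus CPU time for its JS fraction.
    s_per_mib = 1 / mib_per_s + js_share * js_cpu_s_per_mib
    return transfer_s / s_per_mib
```

With those assumed constants, `budget_mib(3, 0.15)` lands near 1.9 MiB and `budget_mib(3, 0.5, connections=4)` near 0.93 MiB, in the same neighbourhood as the figures above; the real calculator models many more factors.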

These budgets are extremely generous. Even the target of three seconds is lavish; most sites should be able to put up interactive content much sooner for nearly all users.

Meanwhile, sites are ballooning. The median mobile page is now 2.6 MiB, blowing past the size of DOOM (2.48 MiB) in April. The 75th percentile site is now larger than two copies of DOOM. P90+ sites are more than 4.5x larger, and sizes at each point have doubled over the past decade. Put another way, the median mobile page is now 70 times larger than the total storage of the computer that landed men on the moon.

Median page weights are more than 2.5x larger for mobile sites than a decade ago, and sites at the 75th percentile are now 4x their 2015 weight.

An outsized contributor to this bloat comes from growth in JavaScript. Mobile JavaScript payloads have more than doubled since 2015, reaching 680 KiB and 1.3 MiB at P50 and P75 (respectively). This compositional shift exacerbates latent inequality and hurts businesses trying to grow.

When JavaScript grows as a proportion of critical-path resources, its higher CPU cost per byte shrinks budgets. This coffin-corner effect explains why image- and CSS-heavy experiences perform better byte-for-byte than sites built with the failed tools of frontend's lost decade.

Indeed, the latest CrUX data shows not even half of origins have passing Core Web Vitals scores for mobile users. More than 40% of sites still perform poorly for desktop users, and progress in both cohorts is plateauing:

This is a technical and business challenge, but also an ethical crisis. Anyone who cares to look can see the tragic consequences for those who most need the help technology can offer. Meanwhile, the lies, half-truths, and excuses frontend's influencer class offers in defence of these approaches are, if anything, getting worse.

Through no action of their own, frontend developers have been blessed with more compute and bandwidth every year. Instead of converting that bounty into delightful experiences and positive business results, the dominant culture of frontend has leant into self-aggrandising narratives that venerate failure as success. The result is a web that increasingly punishes the poor for their bad luck while paying developers huge salaries to deliver business-undermining results.

Nobody comes to work wanting to do a bad job, but low-quality results are now the norm. This is a classic case of under-priced externalities created by induced demand from developers and PMs living in a privilege bubble.

The interactive budget calculator has been updated and revised for 2026, allowing you to see the impact of networks, devices, connections, and JavaScript on site performance.

Embedded in this year's estimates is hopeful news about the trajectory of devices and networks. Compared with early 2024's estimates, we're seeing budget growth of 600+ KiB for three seconds, and a full megabyte of extra headroom at five seconds.1

While this is not enough to overcome continued growth in payloads, budgets are now an order of magnitude more generous than those first sketched in 2017. It has never been easier to deliver pages quickly, but we are not collectively hitting the mark.

To get back to a healthy, competitive web, developers will need to apply considerably more restraint. If past is prologue, moderation is unlikely to arise organically. It's also unhelpful to conceive of ecosystem-level failures as personal failings. Yes, today's frontend culture is broken, but we should not expect better while incentives remain misaligned.

Browsers, search engines, and developer tools will need to provide stronger nudges, steering users away from bloated sites where possible, and communicating the problem to decision-makers. This will be unpopular, but it is necessary for the web to thrive.

This series has continually stressed that today's P75 device is yesterday's mid-market Android, and that trend continues.

The explosive smartphone market growth of the mid 2010s is squarely in the rearview mirror, and so historical Average Selling Prices (ASPs) and replacement dynamics now dominate any discussion of fleet performance.

Hardware upgrade timelines are elongating. The previous estimate of 18 months for average replacement is now too rosy, with the median smartphone now living longer than two years. P75 devices may be nearly three years old, and TechInsights estimates a 23.7% annual replacement rate.

With all of this in mind, we update our target test device to the Samsung Galaxy A24 4G, a mid-2023 release featuring an SoC fabbed on a 6 nm process, a notable improvement over previous benchmark devices.

Readers of this blog are unlikely to have used a phone as slow as the A24 in at least a decade.

The A24 sold for less than the global Average Selling Price for smartphones at launch ($250 vs. $353). Because that specific model may be hard to acquire for testing, anything based on the MediaTek Helio G99 or Samsung Exynos 1330 will do just fine; e.g.:

Teams serious about performance should track the low-end cohort instead, sticking with previously acquired Samsung Galaxy A51s, or any late-model device from the Moto E range.2

For link-accurate network throttling, I recommend spending $3 for Throttly for Android. It supports custom network profiles, allowing you to straightforwardly emulate a 9/3/100 network. DevTools throttling will always be inaccurate, and this is the best low-effort way to correctly condition links on your primary test device.

Desktops are not the limiting factor, but it's still helpful to have physical test devices. You should not spend more than $250 (new) on a low-end test laptop. It should have a Celeron processor, eMMC storage, and run Windows. The last point is not an effort to sell more licences, but rather to represent the nasty effects of Defender, NTFS, and parasitic background services on system performance. Something like the HP 14 dq3500nr fits the bill.

Behold, the HP 14! A Celeron N4500 laptop, sporting a 4-core chip first released in 2021. This CPU packs less than a quarter as much cache as a late-model iPhone.

Desktop network throttling remains fraught, and the best solutions are still those from Pat Meenan's 2016 article announcing winShaper.

What we see in our recommended test setups is an echo of the greatest influence of the past decade on smartphone performance: the spread of slow, ever-cheaper Androids with ageing chipsets, riding the cost curve downward, year-on-year.

The explosive growth of this segment drove nearly all market growth between 2011 and 2017. Now that smartphones have reached global saturation, flat sales volumes mirror the long-term trends in desktop device ownership:

In no full year of the past dozen has iOS accounted for more than 20% of new-device shipments. Quarter-by-quarter fluctuations have pushed that number as high as 25% when new models are released, but the effect never lasts.

Most phones — indeed, most computers — are 24+ month old Androids. This is the result of a price segmented market: a preponderance of smartphones sold for more than $600 USD (new, unlocked) are iPhones, and the overwhelming majority of devices sold for less than that are slow Androids.

Source: IDC, Statista, and Counterpoint Research.
Not adjusted for inflation. Full year 2025 ASP is extrapolated from the first several quarters and may be revised upward.

The “i” in “iPhone” stands for “inequality.”

Global ASPs show the low-end isn't just alive-and-well; it ships in many multiples of high-end device volume. To maintain a global ASP near $370, an outsized number of cheaper Androids must be sold for every $1K (average) iOS device.

To understand how the payload budget estimate is derived, we need to peer deeper into the device and network situation. Despite huge, unpredictable shocks in the market (positive: Reliance Jio; negative: a pandemic), the market trends this series tracks have allowed us to forecast accurately.

75th+ percentile users are almost always on older devices, meaning we don't need to divine what will happen, just remember the recent past.

The properties of today's mobile devices define how our sites run in practice. From the continued prevalence of 4G radios, to the shocking gaps in CPU performance, the reality of the modern web is best experienced through real devices. The next best way to understand it is through data.

Per usual, I've built single and multicore CPU performance charts for each price point; we track four groups:

The last two cohorts account for more than 2/3 of new device sales:

The performance of JavaScript-based web experiences is heavily correlated with single-core speed, meaning that the P75 device places a hard cap on the amount of JavaScript that is reasonable for any website to rely on.

Depressingly, budget device CPUs have not meaningfully improved since 2022. But the nearly-identical SoCs in each year's device are getting cheaper. Reduced bill-of-materials costs mean declining retail prices for low-spec phones.

Meanwhile, the high end continues to pull away. As previewed in prior instalments of this series, top-end Androids are beginning to close the massive performance lead that Apple's A-series chips have opened up over the past decade. This is largely thanks to Qualcomm and MediaTek finally starting to address the cache-gap I have harped on since 2016:

Source: Wikipedia, vendor documentation, and author's calculations.

Some cache sizings are estimates, particularly in the Android ecosystem, where vendor documentation is lacking. Android SoC vendors have a habit of implementing the smallest values ARM allows for a licensed core design.

The latest iPhone chip (A19 Pro) features truly astonishing amounts of cache. For a sense of scale, the roughly 50 MiB of L1, L2, and L3 cache in an iPhone 17 Pro provides 8.3 MiB of cache per core. This is more than double the per core cache of Intel's latest high-end desktop part, the 285K, which provides a comparatively skimpy 3.3 MiB combined cache per core.

The gobsmacking caches of A-series parts allow Apple to keep the beast fed, leading to fewer stalls and more efficient use of CPU time. This advantage redounds to better battery life because well-fed CPUs can retire work faster, returning to lower-power states sooner.

That it has taken this long for even the top end of the Android ecosystem to begin to respond is a scandal.

Source: GSMArena, Geekbench, and vendor documentation.
Geekbench 6 points per dollar at each price point over time. Prices are MSRP at date of launch.

If there's good news for buyers of low-end devices, it's that performance per dollar continues to improve across the board.

Frustratingly, though, low-end single core performance is still 9x slower than contemporary iPhones, and mid-tier devices remain more than 3.5x slower:

Multicore performance tells a similar tale. As a reminder, this metric is less correlated with web application performance than single-core throughput:

Source: GSMArena, Geekbench.
When Geekbench reports multiple scores for an SoC, recent non-outlier values are selected.

Performance per dollar looks compelling for the lower tier parts, but recall that they are 1/3 to 1/5 the performance:

Source: GSMArena, Geekbench, and vendor documentation.

What makes iPhones so bloody fast, while Androids have languished? Several related factors in Apple's chip design and development strategy provided a commanding lead over the past decade:

Android vendors, meanwhile, have spread their SoC development budgets in penny-wise, pound-foolish fashion. Even Google and Samsung's in-house efforts have failed to replicate the virtuous effects of alignment that Apple's discipline in CPU design has delivered.

Source: GSMArena, Wikipedia, and vendor documentation.

Feature sizes are a fudge below 10 nanometres, but marketing names usually reflect real increases in transistor density and frequencies, along with reductions in power use. High-end Android and iOS parts have generally been produced on comparable nodes, with Apple's lead lasting less than a year. But that's less than half the story. Android SoC vendors have avoided adding competitively sized caches, dedicating the same mm^2 on die to higher core counts and on-die radios. From a performance perspective, this has been catastrophic:

Source: GSMArena, Wikipedia, and vendor documentation.

Core counts are a headline fixture of device marketing, but even the cheapest phones have featured eight (slow, memory-starved) cores since 2019. Speed comes from other properties; namely appropriate cache sizing, memory latency, frequency scaling, and chip architecture. Apple's core-count restraint and focus on other aspects should have been a lesson to the entire industry long before 2024.

Smaller transistors also allow for higher peak frequencies, giving Apple a perennial advantage thanks to early access to TSMC's latest, smallest, power-sipping processes:

Source: GSMArena, Wikipedia, and vendor documentation.
Maximum advertised frequency of the fastest on-package core.

These trade-offs have allowed Apple to charge a premium for devices which no other vendor can justify:

Source: GSMArena, IDC, Statista, and Counterpoint Research.
Prices are new, unlocked MSRP at launch.

Global ASPs are conservative estimates; some research groups estimate values 10-15% higher, but the trends are consistent. The premium end continues to pull away in both price and performance, dragging ASPs slightly upward. Meanwhile, high-end devices continue to be outsold more than 3:1 by low-end phones that aren't getting faster. Those slow devices are, however, getting increasingly inexpensive, dropping below $100 in the last two years. That's 1/10th the price of the cheapest iPhone with Apple's fastest chip.

Thanks to market trends, the recent-spec iPhones many web developers carry don't even represent the experience of most iOS users.

It may seem incongruous that the ASP of iOS devices is bumping up against the $1K mark while most developed countries experience bouts of slow growth post-pandemic, along with well-documented cost-of-living crises among middle-class buyers.

One possible solution to this riddle is that Android sales remain strong, as Apple is cordoned into the segment of the market it can justify to shareholders with a 30% net margin. Another is growth in the resale market, particularly for iOS devices. The extended longevity of these phones thanks to premium components and better-than-Android software support lifetimes has created a vibrant and growing market below the $400 price point for used and refurbished smartphones.

This also helps to explain the flat-to-slightly-declining market for new smartphones, as refurbished devices accelerate past $40BN/yr in sales.

What does this mean? We should expect, and see, higher-than-first-sale volumes of iOS use in various aggregate statistics. Wealth effects have historically explained much of this, but the scale is growing. Refurbishment and resale are now likely to be driving growing discontinuity in the data:

Wikimedia reports more than 40% of mobile site visits come from iOS devices over the past decade, despite global sales ratios never breaching 20% annually.

Not only is an iPhone 17 Pro not real life in the overall market, it isn't even real life in the iOS market any more.

The situation on desktop is one of overwhelming stasis, modulo the Windows 11 upgrade cycle resulting from Windows 10's EOL at the end of 2025. That has driven an unusually strong one-off cycle of upgrades that will reverberate through the data in coming years.

IDC's recent sales analysis shows that only 20% of 'desktop' devices are fully wired, with performance of the vast majority subject to power and thermal limits arising from battery power.

This impetus to upgrade is cross-pressured by pricing headwinds. Economic uncertainty, tariffs, scrambled component pricing from AI demand for silicon of all sorts, and ever-longer device replacement cycles all mean that new PCs may not provide more than incremental performance gains. As a result, an increase in recent worldwide PC Average Selling Prices from ~$650 in our previous estimate to ~$750 in 2025 (per IDC) may not indicate premiumisation. In a globally connected economy, inflation comes for us all.

Overall, the composition and trajectory of the desktop market remains stable. Despite 2025's device replacement boomlet, IDC predicts stasis in “personal computing device” volumes, with growth bumping along at ±1-2% a year for the next 5 years, and landing just about where things are today. The now-stable baseline of ~410MM devices per year is predicted to be entirely flat into 2030.

Top-line things to remember about desktops are:

Put another way: if you spend a majority of your time in front of a computer looking at a screen that's larger than 10", you live in a privilege bubble.

Per previous instalments, we can use Edge's population-level data to understand the evolution of the ecosystem. As of late 2025, the rough breakdown looks like:

Device Tier   Fleet %   Definition
Low-end       30%       <= 4 cores, or <= 4 GB RAM
Medium        60%       HDD (not SSD), or 4-16 GB RAM, or 4-8 cores
High          10%       SSD, and > 8 cores, and > 16 GB RAM
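A classifier mirroring that breakdown might look like the following sketch; the thresholds come from the tiers above, but the evaluation order (low-end criteria win ties, everything else defaults to medium) is my assumption:

```python
def device_tier(cores: int, ram_gb: int, ssd: bool) -> str:
    """Bucket a Windows device into the fleet tiers described above.

    Assumed tie-breaking: low-end criteria are checked first, the high
    tier requires all three conditions, and everything else is medium.
    """
    if cores <= 4 or ram_gb <= 4:
        return "low-end"
    if ssd and cores > 8 and ram_gb > 16:
        return "high"
    return "medium"
```

For example, an 8-core, 16 GB SSD machine still lands in the medium tier, matching the breadth of that 60% bucket.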

Compared with the data from early 2024, we see some important shifts:

Because the user base is also growing, it's worth mentioning that the apparent drop in the low-end is a relative change: in absolute terms, low-spec machines are leaving the fleet more slowly than the percentages suggest. This matches what we should intuitively expect from incremental growth of the Edge user base, which is heavily skewed to Windows 11.

Older and slower devices likely constitute an even larger fraction of the total market, but may be invisible to us. Indeed, computers with spinning rust HDDs, <= 4GB of RAM, and <= 2 cores still represent millions of active devices in our data. Alarmingly, they have dropped by less than 20% in absolute terms over the past six months. And remember, this isn't even the low end of the low end, as our stats don't include Chromebooks.

Building to the limits of “feels fine on my MacBook Pro” has never been realistic, but in 2025 it's active malpractice.

The TL;DR for networks for 2026 is that the P75 connection provides 9Mbps of downlink bandwidth, 3Mbps upload, with 100ms of RTT latency.

This 9Mbps down, 3Mbps up configuration represents a sizeable uplift from the 2024/2025 guidance of 7.2Mbps down, 1.4Mbps up, with 94ms RTT, but also a correction: 2024's latency estimate was probably off by ~15%, and should have been set closer to 110ms.

Global or ambitious firms should target networks even slower than 9/3/100 in their design parameters; the previous 5/1/28 “cable” network configuration is still worth building for, but with an upward adjustment to latency (think 5/1/100). Uneven service has the power to negatively define brands, and the way to remain in users' good graces digitally is to build for the P95+ user. Smaller, faster sites that serve this cohort well will stand out as being exceptional, even on the fastest networks and devices.
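For convenience, the 9/3/100 profile can be expressed in the units expected by the Chrome DevTools Protocol's Network.emulateNetworkConditions command (throughput in bytes per second, latency in milliseconds). As noted earlier, application-level throttling is less accurate than link-level conditioning, so treat this as a quick approximation rather than a substitute for a conditioned link:

```python
# Sketch: convert Mbps/ms targets into the units used by the Chrome
# DevTools Protocol's Network.emulateNetworkConditions command.
def cdp_network_profile(down_mbps: float, up_mbps: float, rtt_ms: int) -> dict:
    return {
        "offline": False,
        "latency": rtt_ms,                          # milliseconds
        "downloadThroughput": down_mbps * 1e6 / 8,  # bytes/second
        "uploadThroughput": up_mbps * 1e6 / 8,      # bytes/second
    }

# The 2026 P75 target from this post:
P75_2026 = cdp_network_profile(9, 3, 100)
```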

Looking forward into 2026 and 2027, the hard reality is that networks remain slower than developers expect and will not improve quickly.

Upload speeds, in particular, remain frustratingly slow. This matters because wider upload channels correlate with faster early-session downloads. Only users on the fastest 10% of networks see downlink:uplink bandwidth ratios rise above 2.5:1, even under ideal conditions.3

Owing to physics, device replacement rates, CAPEX budgets of mobile carriers, inherent variability in mobile networks (vs. fixed-line broadband), and worldwide regulatory divergence, we should expect experiences to be heavily bottlenecked by networks for the foreseeable future.

Previous posts in this series have documented improvements, but as we have been saying since 2021, 4G is a miracle, and 5G is a mirage.

According to the GSMA's 2025 State of Mobile Internet Connectivity report, nominal 5G coverage just crossed the 50% mark in 2025, and the rate of progress is predicted to slow as the attractive economics of dense urban areas give way to rural build-outs. But coverage is not the same as service; many users will need newer, more expensive devices to unlock 5G speeds.

This will remain true for at least another three years, with bandwidth and latency improving only incrementally. Sites that want to reach their full potential, if only to beat the competition, must build to budgets that are inclusive for folks on the margins. When it comes to web performance, doing well is the same as doing good.

Bandwidth numbers are derived from Cloudflare's incredible Radar dataset.4 Looking at (downlink) bandwidth trends over the past year, we see stasis:

November 2024-November 2025 downlink bandwidth. Note that percentiles are inverted in this chart (we take their P25 as our P75 and vice versa). The trend line is remarkably stable; median and slower connections have not improved over the past year.

So where does the improvement in our estimate come from? Looking back to 2024, we see (predicted) gains emerge, but note their small absolute size:

November 2023-November 2024 downlink bandwidth. Connections representing the slowest quartile improved from ~6 to ~9 Mbps over the year, while median downlinks improved from ~12 to ~17 Mbps. They have not moved since.

The gap between P25 and P75 downlinks was 15 Mbps at the start of 2024 and has grown to 21 Mbps at the end of 2025; an increase of 40%. Meanwhile, the bottom quartile is only 28% faster, improving from ~7 to ~9 Mbps. In absolute terms, wealthier users saw 3x as much gain.

The performance inequality gap is growing at the network layer too.
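The arithmetic behind that claim, with the approximate chart readings quoted above plugged in:

```python
# Sketch of the gap arithmetic: relative growth figures can mask who
# captured the absolute gains. Values are the approximate chart
# readings quoted above, in Mbps.
def rel_gain(before_mbps: float, after_mbps: float) -> float:
    """Relative improvement between two readings."""
    return (after_mbps - before_mbps) / before_mbps

gap_growth = rel_gain(15, 21)   # P25-P75 downlink gap, 2024 -> 2025: +40%
slow_growth = rel_gain(7, 9)    # bottom-quartile downlink: ~+29%
```

The gap widened by 6 Mbps while the slowest quartile gained only 2 Mbps; the same market that looks like broad progress in relative terms delivered most of its absolute bandwidth to those who already had the most.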

Latency across networks (RTTs) is improving somewhat, with a nearly 10% decrease at P75 over the past year, from ~110ms to ~100ms. Small improvements on faster links (P50, P25) look to be in the 5% range:

November 2024-November 2025 RTT. Percentiles align with our usual intuition, with P75 representing the value at which 25% of connections are slower.

Given the variability at higher percentiles, we'll stick with a 100ms target for the 2026 calendar year, although we should expect slight gains.

Underlying these improvements are datacentre build outs in traditionally underserved regions, undersea cable completions, and cellular backhaul improvements from the 5G build out. Faster client CPUs and radios will also contribute meaningfully and predictably.5

All of these factors will continue to incrementally improve, and we predict another ~5% improvement (to 95ms) at P75 for 2027.

Gains will be modest for both bandwidth and latency over the next few years. The lion's share of web traffic is now carried across mobile networks, meaning that high percentiles represent users feeling the confounding effects of cellular radios, uneven backhaul, coverage gaps, and interference from the built environment. Those factors are hardest and slowest to change, involving the largest number of potential long tent poles.

As discussed in previous instalments, the world's densest emerging markets reached the smartphone tipping point by 2020, displacing feature phones for most adults. More recently, we have approached saturation; the moment at which smartphone sales are dominated by replacements, rather than first-time sales in those markets. Affordability and literacy remain large challenges to get the residually unconnected online. The 2025 GSMA State of Mobile Internet Connectivity report has a full chapter on these challenges (PDF), complete with rich survey data.

What we see in now-stable smartphone shipment volumes primes the pump for improvements in device performance. First-time smartphone users invest less in their devices (relative to income) as the value may be unclear; users shopping for a second phone have clearer expectations and hopes. Having lived the frustration of slow sites and apps, an incrementally larger set of folks will be willing to pay a bit more to add 5G modems to their next phone.

This effect is working its way down the price curve from the ultra-high-end in 2020 to the mid-tier in 2024, but it has yet to reach the low end. Our latest low-end specimen — 2025's Moto E15 — still features a 4G radio. Because of the additional compute requirements that 5G places on SoCs, we will expect to see process node and CPU performance increases as 5G costs fall far enough to impact the low-price band.

Today, devices with otherwise similar specs and a 5G radio still command a considerable premium. The Galaxy A16 5G was introduced at a 20% premium over the 4G version, mirroring the mid-market dynamic from 2022 and 2023 where devices were offered in “regular” and (pricier) 5G versions. It will take a transition down the fab node scale like we saw for mid-market SoCs in 2022 and 2024 to make 5G a low-end table-stakes feature. I'm not holding my breath.

Given the current market for chips of all types, we may be seeing low-spec SoCs produced at 12 nanometres for several more years, reprising the long-term haunting of Android by huge volumes of A53 cores produced at 28 nm from 2012 to 2019.

The budget estimates generated in this series may seem less relevant now that tools like Core Web Vitals provide nuanced, audience-specific metrics that allow us to characterise important UX attributes. But this assumption is flawed thanks to the effects of deadweight losses and the biases inherent in those metrics.

Case-in-point: last year CWV deprecated FID and replaced it with the (better calibrated) INP metric. This predictably dropped the CWV pass rates of sites built on desktop-era JavaScript frameworks like React, Angular, and Ember:

2024's Frontend Sadness Index shows that CWV scores for sites based on legacy JS frameworks like React and Angular not only started off in trouble, but dropped more as INP replaced FID.

RUM data, in isolation, undercounts the opportunity costs of slow and bloated experiences. Users have choices, and lost users do not show up in usage-based statistics.

A team I worked with this year saw these effects play out directly in their CrUX data:

Form-factor ratios can show what use statistics for high-growth sites obscure.

This high-profile, fast-growing site added nearly 100 KiB of critical-path JavaScript per month from January to June. The result? A growing share of visits from desktop devices, and proportionally fewer mobile users every month. Once we began to fumigate for dead code and overeager preloading, mobile users returned.
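A minimal sketch of the form-factor-ratio check, with invented visit counts standing in for real RUM or CrUX data:

```python
# Hypothetical example: on a growing site, absolute traffic can climb
# while the mobile share quietly falls -- the signature of mobile users
# being squeezed out. All numbers below are invented for illustration.
def mobile_share(mobile_visits: int, desktop_visits: int) -> float:
    """Fraction of visits arriving from mobile devices."""
    return mobile_visits / (mobile_visits + desktop_visits)

months = {
    "Jan": (60_000, 40_000),   # 100K total visits
    "Jun": (66_000, 60_000),   # 126K total: growth, but mobile share fell
}
shares = {m: round(mobile_share(*v), 2) for m, v in months.items()}
```

Here the site "grew" 26% while its mobile share slid from 60% to 52%; top-line growth statistics obscure the churn.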

These effects can easily overcome other factors, particularly in the current era of JavaScript bundle hyper-growth:

Median page weights are more than 2.5x larger for mobile sites than a decade ago, and 4x larger at the 75th percentile.

Growth in JavaScript bytes over the wire mirrors that of overall content despite the continuing Performance Inequality Gap crisis.

Perhaps the most important insight I spotted while re-immersing myself in the data for this post was the implication of these charts from the RUM Archive:

The RUM Archive reports that SPAs are, on average, generating only one (1) soft navigation per hard navigation, undermining the case for SPAs.

The top-line takeaway is chilling: sites that are explicitly designed as SPAs, and which have intentionally opted in to metrics measurement around soft-navigations are seeing one (1) soft-navigation for every full page load on average.

The rinky-dink model we discussed last year for the appropriateness of investing in SPA-based stacks is a harsh master, defining average session performance as the sum of interaction latencies, including initial navigation, divided by the total number of interactions (excluding scrolling):

$$L_{\text{avg}} = \frac{\text{latency}(\text{navigation}) + \sum_{i=1}^{I} \text{latency}(i)}{N}$$

If the RUM Archive's data is directionally correct, at an ecosystem level, N=~2 for both mobile and desktop. Sessions this shallow make a mockery of the idea that we can justify more up-front JavaScript to deliver SPA technology, even on sites with reason to believe it would help.
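The model is easy to state in code. The latency figures in this sketch are invented for illustration, assuming a JS-heavy SPA pays heavily up front in exchange for fast soft navigations while an MPA pays a moderate cost per full load:

```python
def avg_session_latency(nav_ms: float, soft_nav_ms: list[float]) -> float:
    """Average latency per interaction for a session: the hard
    navigation plus each soft navigation, divided by the total number
    of interactions N (= 1 hard navigation + soft navigations)."""
    n = 1 + len(soft_nav_ms)
    return (nav_ms + sum(soft_nav_ms)) / n

# Invented, illustrative figures:
spa = avg_session_latency(5000, [100])    # heavy bundle, one fast soft nav
mpa = avg_session_latency(1500, [1500])   # two moderate full page loads
```

At N = 2, the hypothetical SPA averages 2,550 ms per interaction against the MPA's 1,500 ms; the up-front bundle only amortises as N grows large, which the RUM Archive data says it does not.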

In private correspondence, Michal Mocny shared an early analysis from data collected via the Soft Navigations API Origin Trial. Unlike the Akamai mPulse data that feeds the RUM Archive, Chromium's data tracks interactions from all sites, not only those that have explicitly opted-in to track soft navigations, providing a much wider aperture. On top-10K origins, Chrome is currently observing values for N between 1.6 and 2.2, depending on how the analysis is run, or 0.8-1.1 additional soft navigations per initial page load.

It's difficult to convey the earth-shattering magnitude of these congruent findings. Under these conditions, the amount of JavaScript a developer can justify up-front to support follow-on in-page navigations is de minimis.6

This should shake our industry to the bone, driving rapid reductions in emitted JavaScript. And yet:

Growth in JavaScript bytes is driving growth in overall payloads.

This series has three main goals:

  1. Provide a concrete set of page-weight targets for working web developers.

  2. Arm teams with an understanding of how the client-side computing landscape is evolving.

  3. Show how budgets are constructed, giving teams tools to construct their own estimates from their own RUM and market data.

This is not altruism. I want the web to win. I began to raise the alarm about the problems created by a lack of adaptation to mobile's constraints in 2016, and they have played out on the same trend-line I feared. The web is now decisively losing the battle for relevance.

To reverse this trend, I believe several (currently unmet) conditions must be fulfilled.

But the web is not winning mobile. Apple, Google, and Facebook nearly extinguished the web's potential to disrupt their cosy arrangement. Preventing the web from breaking out — from meaningfully delivering app-like experiences outside an app store — is essential to maintaining dominance. But some are fighting back, and against the odds, it's working.

What's left, then, is the subject of this series. Even if browser competition comes to iOS and competitors deliver the features needed to make the web a plausible contender, the structure of today's sites is an impediment to a future in which users prefer the web.

Most of the world's computing happens on devices that are older and slower than anything on a developer's desk, and connected via networks that contemporary “full-stack” developers don't emulate. Web developers almost never personally experience these constraints, and over frontend's Lost Decade, this has created an out-of-touch privilege bubble that poisons the products of the teams that follow the herd, as well as the broader ecosystem.

That's bleak, but the reason I devote weeks to this research each year isn't to scold. The hope is that actionable targets can help shape choices, and that by continuing to stay engaged with the evolution of the landscape, we can see green shoots sprout.

If we can hold down the rate of increase in critical-path resource growth it will give hardware progress time to overtake our excesses. If we make enough progress in that direction, we might even get back to a place where the web is a contender in the minds of users.

And that's a future worth working for.

FOOTNOTES

  1. I really do try to avoid being an unremitting downer, but the latest device in our low-cost cohort — the Motorola E15 — is not an improvement in CPU benchmarks from last year.

    More worrying, no device in that part of the market has delivered meaningful CPU gains since 2020. That's five years of stasis, and a return to a situation where new low-end devices are half as fast as their mid-tier contemporaries. Even as process node improvements trickle down to the $300-350 price bracket, the low end is left further and further behind.

    As the wider Android ecosystem experienced from 2015-2020, devices with the same specs are getting cheaper, but not better. This allows them to open new markets and sell in massive numbers, helping to prop up overall annual device sales, even as devices last longer and former growth markets (India, Indonesia, etc.) hit smartphone saturation. This is reflected in the low-end models finally sinking below $100USD new, unlocked, at retail. But to hit this price-point, they deliver performance on par with 2019's mid-tier Galaxy A50; a phone whose CPU was fabbed on a smaller process node than today's latest low-end phones.

    Services trying to scale, and anyone trying to build for emerging markets, should be anchoring P90 or P95, not P75. For serious shops, the target has not moved much at all since 2019.

    This reality alone is enough to justify rejection of frameworkist hype-shilling, without even discussing the negative impacts of JS-first stacks on middle-distance team velocity and site maintenance costs.

  2. Because low-end and medium-tier devices were so similar until very recently, this differentiation wasn't necessary. But progress in process nodes does eventually trickle down. The mid-tier began to see improvement away from the (utterly blighted) 28 nm node in 2017, a mere 4 years after the high-end decamped for greener pastures. The low-end, meanwhile, was trapped in that register until 2020, nearly a decade after 28nm was first introduced. Since then, the mid tier has tracked process node scaling with a 2-3 year delay, while the low end has gotten stuck since 2021 at 12 nm.

    The failure to include meaningful amounts of cache in Android SoCs levelled out low-end and medium tier performance until 2023, but transistor shrinkage and architecture improvements at ~$350 are now decoupling the performance of these tiers once again, and we should expect to see them grow further apart in coming years, creating a worrying gap between P90+ and P75 devices.

  3. Cloudflare's worldwide bandwidth data only provides downlink estimates, and so to understand the downlink:uplink bandwidth ratio, I turned instead to their API and queried CF's speed test data, which provide fuller histograms.

    These tests are explicitly run by users, meaning they occur under synthetic conditions and likely with a skew towards best-case network performance. They also report maximums over a session, rather than loaded network behaviour, which explains the divergence between the higher speed values reported there and the more realistic "Internet Quality Index" dataset we use for primary bandwidth and latency analysis.

    The data we can derive from it is therefore much rosier, but it does give us a sense for downlink/uplink ratios (bitrates are in Mbps):

    %-ile Download Upload Ratio
    20th 15 5 3
    25th 25 10 2.5
    30th 30 10 3
    40th 50 20 2.5
    median 85 35 2.4
    60th 125 50 2.5
    70th 220 85 2.6
    75th 280 100 2.8
    80th 345 140 2.5
    90th 525 280 1.9

    Because the data is skewed in an optimistic direction (thanks to usage biases towards wealth, which correlates with high-performance networks), we pick a 3:1 ratio in our global baseline.

    Despite variance in the lower percentiles, it is reasonable to expect tough ratios in the bottom quartile given the build-out properties of various network types. These include:

    • Asymmetries in cable and DSL channel allocations.
    • Explicit frequency/bandwidth allocation in cellular networks.
    • Radio power lopsidedness vs. the base stations they connect to, particularly for battery-powered devices.

    Even new networks like Starlink are spec'd with 10:1 or greater ratios. Indeed, despite being "fast", the author's own home fixed-line connection has a ratio greater than 30:1. We should expect many such discrepancies up and down the wealth spectrum.

    A 4:1 or 5:1 ratio is probably justified, and previous estimates used 5:1 ratios for that reason. Lacking better data, going with 3:1 is a judgement call, and I welcome feedback on the choice.
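For transparency, the ratio column above is just download ÷ upload, rounded to one decimal place. A quick sketch (data transcribed from the table, with download listed first throughout; variable names are mine):

```javascript
// Cloudflare speed-test percentiles (Mbps) as quoted above.
const speedTest = [
  { pct: "20th",   down: 15,  up: 5   },
  { pct: "25th",   down: 25,  up: 10  },
  { pct: "30th",   down: 30,  up: 10  },
  { pct: "40th",   down: 50,  up: 20  },
  { pct: "median", down: 85,  up: 35  },
  { pct: "60th",   down: 125, up: 50  },
  { pct: "70th",   down: 220, up: 85  },
  { pct: "75th",   down: 280, up: 100 },
  { pct: "80th",   down: 345, up: 140 },
  { pct: "90th",   down: 525, up: 280 },
];

// Ratio = download / upload, rounded to one decimal place.
const ratios = speedTest.map(({ pct, down, up }) => ({
  pct,
  ratio: Math.round((down / up) * 10) / 10,
}));
// Bottom-quartile ratios land between 2.5 and 3; given the optimistic skew
// of the source data, the text rounds up to a 3:1 global baseline.
```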

  4. Why am I relying on Cloudflare's data?

    Google, Microsoft, Amazon, Fastly, Akamai, and others obviously have similar data (at least in theory), but do not publish it in such a useful and queryable way. That said, these estimates are on trend with my priors about the network situation developed from many sources over the years (including non-public datasets).

    There is a chance Cloudflare's data is unrepresentative, but given their CDN market penetration, my primary concern is that their data is too rosy, rather than too pessimistic. Why? Geographic and use-based bias effects.

    The wealthy are better connected and heavier internet users, generating more sessions. Better performance of experiences increases engagement, so we know CF's data contains a bias towards the experiences of the affluent. This potentially blinds us to a large fraction of the theoretical TAM and (I think) convincingly argues that we should be taking a P90 value instead of P75. However, we stick with P75 for two reasons:

    • It would be incongruent to cite P90 this year without first introducing it in previous instalments.
    • A lack of explicitly triangulating data from the current network environment makes it challenging to judge the magnitude of use-based biases in the data.

    Thankfully, Cloudflare also produces country-level data. We can use this to cabin the scale of potential issues in global data. Here, for instance, are the P75 network situations for a few populous geos that every growth-oriented international brand must consider, in descending order of downlink speed:

    Geo @ P75 Down Up RTT Pop (MM)
    UK 21 7 34 69
    USA 17 5.5 47 340
    Brazil 12 4 60 213
    Global 9 3 100
    Indonesia 6.4 2.1 75 284
    India 6.2 2.1 85 1,417
    Pakistan 4 1.3 130 241
    Nigeria 3.1 1 190 223
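To connect these figures back to page weights, here is a deliberately naive, bandwidth-only transfer estimate for the 2.6 MiB median mobile page. The function and the comparison are my illustration, not from the source data, and the estimate ignores RTTs, handshakes, and slow-start, so real loads are strictly worse:

```javascript
// Naive lower bound on transfer time: page size over P75 downlink.
// Ignores TLS handshakes, TCP slow-start, and device processing time,
// all of which make real-world loads slower than this.
function transferSeconds(pageMiB, downlinkMbps) {
  const megabits = pageMiB * 8.388608; // 1 MiB = 8,388,608 bits
  return megabits / downlinkMbps;
}

// 2.6 MiB median page at the Global P75 downlink of 9 Mbps:
const globalP75 = transferSeconds(2.6, 9);    // ≈ 2.4s of pure transfer
// The same page on Nigeria's P75 link of 3.1 Mbps:
const nigeriaP75 = transferSeconds(2.6, 3.1); // ≈ 7s before any rendering
```

Even this best-case arithmetic consumes most of a three-second budget on the global P75 link, before a single byte of JavaScript executes.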

    Underweighting the less-affluent is a common bias in tech, and my consulting experience has repeatedly reconfirmed what Tammy Everts writes about when it comes to the opportunities that are available when sites push past performance plateaus.

    There is no such thing as “too fast”, but most teams are so far away from minimally acceptable results that they have never experienced the huge wins on the other side of truly excellent and consistent performance. Entire markets open up when teams expand access through improved performance, and wealthy users convert more too.

    It's this reality that lemon vendors have sold totally inappropriate tools into, and the results remain shockingly poor.

  5. As we mentioned in the last instalment, improvements in mid-tier and low-end mobile SoCs are delivering better network performance independent of protocol and spectrum improvements.

    Modern link-layer cell and Wi-Fi stacks rely heavily on client-side compute for the digital signal processing necessary to implement advanced techniques like MIMO beam forming.

    This makes the device replacement rates doubly impactful, even within radio generations and against fixed channel capacity. As process improvements trickle down (glacially) to mid-tier and low-end SoCs, the radios they contain also get faster processing, improving latency and throughput, ceteris paribus.

  6. The RUM Archive's soft-to-hard navigations ratio and the early data from the Chromium Soft Navigations Origin Trial leave many, many questions unanswered including, but not limited to:

    • What's the distribution?
      • Globally: do some SPA-premised sites have many more or many fewer soft-navigations? Are only a few major sites pushing the ratios up (or down)?
      • Locally: can we characterise users' sessions to understand what fraction trigger many soft-navigations per session?
    • Do other data sources agree?
      • Will the currently-running Origin Trial for Soft Navigations continue to agree as the reach grows?
      • Can other RUM vendors validate or refute these insights?
    • What about in-page changes not triggering URL updates?
      • How should infinite-scrolling be counted?
      • We should expect Chromium histogram data to capture more of this vs. the somewhat explicit instrumentation of mPulse, driving up soft-navigations per hard navigation. Do things stay in sync in these data sets over time?

    Given the scale of the mystery, a veritable stampede of research in the web performance community should follow. I hope to see an explosion of tools to guide teams toward the most appropriate architectures based on comparative data within their verticals, first-party RUM data about session lengths, the mono/bi/tri/multi-modality of session distributions, and other situated factors.

    The mystery I have flicked at in the past is now hitting us smack in the face. Will we pay attention?

The App Store Was Always Authoritarian

High-modernism as handmaiden to autocracy is depressingly predictable.

Eric Prouzet

And now we see it clear, like a Cupertino sunrise bathing Mt. Bielawski in amber: Apple will censor its App Store at the behest of the Trump administration without putting up a fight.

It will twist words into their antipodes to serve the powerful at the expense of the weak.

To better serve autocrats, it will talk out both sides of its mouth in ways it had previously reserved for dissembling arguments against threats to profits, like right-to-repair and browser choice.

They are, of course, linked.

Apple bent the knee for months, leaving many commentators to ask why. But the reasons are not mysterious: Apple wants things that only the government can provide, things that will defend and extend its power to extract rents, rather than innovate. Namely, selective exemption from tariffs and an end to the spectre of pro-competition regulation that might bring about real browser choice.

Over the past few weeks, Tim Apple got a lot of what he paid for,1 with the full weight of the US foreign and industrial policy apparatus threatening the EU over DMA enforcement. This has been part of a full-court press from Cupertino. Apple simultaneously threatened the EU while rolling out fresh astroturf for pliant regulators to recline on. This is loud, coordinated, and calculated. But calculated to achieve what? Why is the DMA such a threat to Apple?

Interoperability.

The DMA holds the power to unlock true, safe, interoperability via the web. Its core terms require that Apple facilitate real browser engine choice, and Apple is all but refusing, playing games to prevent powerful and safe iOS browsers and the powerful web applications they facilitate. Web applications that can challenge the App Store.

Unlike tariffs, which present a threat to short-term profits through higher costs and suppression of upgrades in the near term, interoperability is a larger and more insidious boogeyman for Apple. It could change everything.

Apple's profits are less and less attributable to innovation as “services” revenue swells Cupertino's coffers out of all proportion to iPhone sales volume. “Services” is code for rent extraction from captive users and developers. If users and developers could acquire and distribute safe apps outside the App Store, Apple wouldn't be able to take 30% from an outlandishly large fraction of the digital ecosystem's wealthiest players.

Apple understands browser choice is a threat to its rentier model. The DMA holds the potential for developers to finally access the safe, open, and interoperable web technologies that power most desktop computing today. This is a particular threat to Apple, because its class-leading hardware is perfectly suited to running web applications. All that's missing are browsers that aren't intentionally hobbled. This helps to explain why Apple simultaneously demands control over all browser technology on iOS while delaying important APIs, breaking foundational capabilities, and gaslighting developers about Apple's unwillingness to solve pressing problems.

Keeping capable, stable, high-quality browsers away from iOS is necessary to maintain the App Store's monopoly on the features every app needs. Keeping other software distribution mechanisms from offering those features at a lower price is a hard requirement for Cupertino's extractive business model. The web (in particular, PWAs) presents a worst-case scenario.

Unlike alternative app stores that let developers decouple distribution of proprietary apps from Apple's App Store, PWAs further free developers from building for each OS separately, allowing them to deliver apps through a zero-cost platform that builds on standards. And that platform doesn't feature a single choke point. For small developers, this is transformative, and it's why late-stage Apple cannot abide laws that create commercial fairness and enable safe, secure, pro-user alternatives.

This is what Apple is mortgaging its brand (or, if you prefer, soul) to prevent: a world where users have a real choice in browsers.

Horrors.

Apple is loaning its monopoly on iOS software to yet another authoritarian regime without a fight, painting a stark contrast: when profits are on the line, Cupertino will gaslight democratic regulators and defy pro-competition laws with all the $1600/hr lawyers Harvard can graduate. And when it needs a transactional authoritarian's help to protect those same profits, temporarily2 lending its godlike power over iOS to censor clearly protected speech isn't too high a price to pay. Struggle for thee, but not for me.

The kicker is that the only alternative for affected users and developers is Apple's decrepit implementation of web apps; the same platform Cupertino serially knee-caps to deflect competition with its proprietary APIs.

It is no exaggeration to say the tech press is letting democracy down by failing to connect the dots. Why is Apple capitulating? Because Apple wants things from the government. What are those things? We should be deep into that debate, but our reportage and editorial classes cannot grasp that A precedes B. The obvious answers are also the right ones: selective protection from tariffs, defanged prosecution by the DOJ, and an umbrella from the EU's democratic, pro-competition regulation.

The Verge tiptoed ever so close to getting it, quoting letters that former Apple executives sent the company:3

“I used to believe that Apple were unequivocally ‘the good guys,’” Hodges writes. “I passionately advocated for people to understand Apple as being on the side of its users above all else. I now feel like I must question that.”

— Wiley Hodges,
"The Verge"

This is a clue; a lead that a more thoughtful press and tech commentariat could use to evaluate the frames the parties deploy to their own benefit.

The tech press is failing to grasp the moral stakes of API access. Again and again they flunk at connecting boring questions of who can write and distribute programs for phones to urgent issues of power over publication and control of devices. By declining to join these threads, they allow the unearned and increasingly indefensible power of mobile OS vendors to proliferate. The urgent question is how that power can be attenuated, or as Popper put it:

We must ask whether…we should not prepare for the worst leaders, and hope for the best. But this leads to a new approach to the problem of politics, for it forces us to replace the question: "Who should rule?" by the new question: "How can we so organize political institutions that bad or incompetent rulers can be prevented from doing too much damage?"

— Karl Popper,
"The Open Society and Its Enemies"

But the tech press does not ask these questions.

Instead of questioning why Apple's OS is so fundamentally insecure that an App Store is necessary, they accept the ever-false idea that iOS has been relatively secure because of the App Store.

Instead of confronting Apple with the reality that it used the App Store to hand out privacy-invading APIs in undisciplined ways to unscrupulous actors, the press congratulates Cupertino on the next episode of our nightly kayfabe. The links between Apple's monopoly on sensitive APIs and the growth of monopolies in adjacent sectors are rarely, if ever, questioned. Far too often, the tech press accepts the narrative structure of Apple's marketing, satisfying pangs of journalistic conscience with largely ineffectual discussions about specific features that will not upset the power balance.

Nowhere, e.g., in The Verge's coverage of these letters is there a discussion about alternatives to the App Store. Only a few outlets ever press Apple on its suppression of web apps, including failure to add PWA install banners and essential capabilities. It's not an Apple vs. Google horse-race story, and so discussion of power distribution doesn't get coverage.

Settling for occasionally embarrassing Apple into partially reversing its most visibly egregious actions is ethically and morally stunted. Accepting the frame of "who should rule?" that Cupertino reflexively deploys is toxic to any hope of worthwhile technology because it creates and celebrates the idea of kings, leaving us supine relative to the mega-corps in our phones.

This is, in a word, childish.

Adults understand that things are complicated, that even the best intentioned folks get things wrong, or can go astray in larger ways. We build institutions and technologies to protect ourselves and those we love from the worst impacts of those events, and those institutions always model struggles over power and authority. If we are lucky and skilled enough to build them well, the results are balanced systems that attenuate attempts at imposing overt authoritarianism.

In other words, the exact opposite of Apple's infantilising and totalitarian world view.

Instead of debating which wealthy vassals might be more virtuous than the current rulers, we should instead focus on attenuating the power of these monarchical, centralising actors. The DMA is doing this, creating the conditions for interoperability, and through interoperability, competition. Apple know it, and that's why they're willing to pawn their own dignity, along with the rights of fellow Americans, to snuff out the threat.

These are not minor points. Apple has power, and that power comes from its effective monopoly on the APIs that make applications possible on the most important computing platform of our adult lives.

Protecting this power has become an end unto itself, curdling the pro-social narratives Apple takes pains to identify itself with. Any reporter that bothers to do what a scrappy band of web developers have done — to actually read the self-contradictory tosh Apple flings at regulators and legislators around the world — would have been able to pattern match; to see that twisting words to defend the indefensible isn't somehow alien to Apple. It's not even unusual.

But The Verge, 404, and even Wired are declining to connect the dots. If our luminaries can't or won't dig in, what hope do less thoughtful publications with wider audiences have?

Apple's power and profits have made it an enemy of democracy and civic rights at home and abroad. A mealy-mouthed tech press that cannot see or say the obvious is worse than useless; it is an ally in Apple's attempts to obfuscate.

The most important story about smartphones for at least the past decade has been Cupertino's suppression of the web, because that is a true threat to the App Store, and Apple's power flows from the monopolies it braids together. As Cory Doctorow observed:

Apple's story – the story of all centralized, authoritarian technology – is that you have to trade freedom for security. If you want technology that Just Works(TM), you need to give up on the idea of being able to override the manufacturer's decisions. It's always prix-fixe, never a la carte.

This is a kind of vulgar Thatcherism, a high-tech version of her maxim that "there is no alternative." Decomposing the iPhone into its constituent parts – thoughtful, well-tested technology; total control by a single vendor – is posed as a logical impossibility, like a demand for water that's not wet.

— Cory Doctorow,
"Plenty of room at the bottom (of the tech stack)"

Doctorow's piece on these outrages is a must-read, as it does what so many in the tech press fail to attempt: connecting patterns of behaviour over time and geography to make sense of Apple's capitulation. It also burrows into the rot at the heart of the App Store: the claim that anybody should have as much power as Apple has arrogated to itself.

We can see clearly now that this micro-authoritarian structure is easily swayed by macro-authoritarians, and bends easily to those demands. As James C. Scott wrote:

I believe that many of the most tragic episodes of state development in the late nineteenth and twentieth centuries originate in a particularly pernicious combination of three elements. The first is the aspiration to the administrative ordering of nature and society, an aspiration that we have already seen at work in scientific forestry, but one raised to a far more comprehensive and ambitious level. “High modernism” seems an appropriate term for this aspiration. As a faith, it was shared by many across a wide spectrum of political ideologies. Its main carriers and exponents were the avant-garde among engineers, planners, technocrats, high-level administrators, architects, scientists, and visionaries.

If one were to imagine a pantheon or Hall of Fame of high-modernist figures, it would almost certainly include such names as Henri Comte de Saint-Simon, Le Corbusier, Walther Rathenau, Robert McNamara, Robert Moses, Jean Monnet, the Shah of Iran, David Lilienthal, Vladimir I. Lenin, Leon Trotsky, and Julius Nyerere. They envisioned a sweeping, rational engineering of all aspects of social life in order to improve the human condition.

— James C. Scott,
"Seeing Like A State"

This is also Apple's vision for the iPhone; an unshakeable belief in its own rightness and transformative power for good. Never mind all the folks that get hurt along the way, it is good because Apple does it. There is no claim more central to the mythos of Apple's marketing wing, and no deception more empowering to abusers of power.4

Apple claims to stand for open societies, but POSIWID shows that to be a lie. It is not just corrupted, but itself has become corrupting; a corrosive influence on the day-to-day exercise of rights necessary for democracy and the rule-of-law to thrive.5

Apple's Le Corbusierian addiction to control has not pushed it into an alliance with those resisting oppression, but into open revolt against efforts that would make the iPhone an asset for citizens exercising their legitimate rights to aid the powerless. It scuttles and undermines open technologies that would aid dissidents. It bends the knee to tyranny because unchecked power helps Cupertino stave off competition, preserving (it thinks) a space for its own messianic vision of technology to lift others out of perdition.

If the consequences were not so dire, it would be tragically funny.

Let's hope our tech press find their nerve, and a copy of “The Open Society and Its Enemies," before we lose the ability to laugh.

I spent a dozen and change years at Google, and my greatest disappointment in leadership over those years was the way the founders coddled the Android team's similarly authoritarian vision.

For the price of a prominent search box on every phone,6 the senior leadership (including Sundar) were willing to sow the seeds of the web's obsolescence, handing untold power to Andy Rubin's team of Java zealots. It was no secret that they sought to displace the web as the primary way for users to experience computing, substituting proprietary APIs for open platforms along the way.

With the growth of Android, Play grew in influence, in part as cover for Android's original sins.7 This led to a series of subtler, but no less effective, anti-web tactics that dovetailed with Apple's suppression of web apps on iOS. The back doors and exotic hoops developers must jump through to gain distribution for interoperable apps remain a scandal.

But more than talking about Google and what it has done, we should talk about how we talk about Google. Specifically, how the lofty goals of its Search origins were undercut by those anti-social, anti-user failures in Android and Play.

It's no surprise that Google is playing disingenuous games around providing access to competitors regarding web apps on Android, while simultaneously pushing to expand its control over app distribution. The Play team covet what Apple have, and far from exhibiting any self-awareness of their own culpability, are content to squander whatever brand reputation Google may have left in order to expand its power over software distribution.

And nobody can claim that power is being used for good.

Google is not creating moral distance between itself and Apple, or seeking to help developers build PWAs to steer around the easily-censored channels it markets, and totally coincidentally, taxes.8 Google is Apple's collaborator in capitulation. A moral void, trotting out the same, tired tactic of hiding behind Apple's skirt whenever questions about the centralising and authoritarian tendencies of App Store monopolies crop up. For 15 years, Android has been content to pen 1-pagers for knock-off versions of whatever Apple shipped last year, including authoritarian-friendly acquiescence.

Play is now the primary software acquisition channel for most users around the world, and that should cause our tech press to intensify scrutiny of these actions, but that's not how Silicon Valley's wealth-pilled technorati think, talk, or write. The Bay Area's moral universe extends to the wall of the privilege bubble, and no further. We don't talk about the consequences of enshittified, trickle-down tech, or even bother to look hard at it. That would require using Android and…like…eww.

Far from brave truth-telling, the tech press we have today treats the tech the other half (80%) use as a curio; a destination to gawp at on safari, rather than a geography whose residents are just as worthy of dignity and respect as any other. And that's how Google is getting away with shocking acts of despicable cowardice to defend a parallel proprietary ecosystem of gambling, scams, and shocking privacy invasion, but with a fraction of the negative coverage.

And that's a scandal, too.

FOOTNOTES

  1. Does anyone doubt that Tim Apple's wishlist also included a slap-on-the-wrist conclusion to the US vs. Apple?

    And can anyone safely claim that, under an administration as nakedly corrupt as Donald Trump's, Apple couldn't buy off the DOJ? And what might the going rate for such policy pliability be?

    That we have to ask says everything.

  2. It hopes.

  3. I don't know Wiley Hodges, but the tone of his letter is everything I expect from Apple employees attempting to convince their (now ex-)bosses of anything: over-the-top praise, verging on hagiography, combined with overt appeals to the brand as the thing of value. This is how I understand Apple to discuss Apple to Apple, not just the outside world. I have no doubt that this sort of sickly sweet presentation is necessary for even moderate criticism to be legible when directed up the chain. Autocracies are not given to debate, and Apple is nothing if not internally autocratic.

    His follow-up post is more open and honest, and that's commendable. You quickly grasp that he's struggling with some level of deprogramming now that he's on the outside and can only extend the benefit of the doubt towards FruitCo as far as the available evidence allows. Like the rest of us, he's discovering that Apple is demanding far more trust than its actions can justify. He's rightly disappointed that Apple isn't living up to the implications of its stated ideals, and that the stated justifications seem uncomfortably ad hoc, if not self-serving.

    This discomfort stems from the difference between principle and PR.

    Principles construct tests with which we must wrestle. Marketing creates frames that cast one party as an unambiguous hero. I've often joked that Apple is a marketing firm with an engineering side gig, and this is never more obvious than in the stark differences between communicated choices and revealed outcomes.

    No large western company exerts deeper control over its image, prying deep into the personal lives of its employees in domineering ways to protect its brand from (often legitimate) critique that might undermine the message of the day. Every Apple employee not named "Tim" submits to an authoritarian regime all day, every day. It's no wonder that the demands of power come so easily to the firm. All of this is done to maintain the control that allows Marketing to cast Apple's image in a light that makes it the obvious answer to "who should rule?"

    But as we know, that question is itself the problem.

    Reading these posts, I really feel for the guy, and wish him luck in convincing Apple to change course. If (as seems likely) it does not, I would encourage him to re-read that same Human Rights Policy again and then ask: "is this document a statement of principle or is it marketing collateral?"

  4. The cultish belief that "it is good because we do it" is first and foremost a self-deception. It's so much easier to project confidence in this preposterous proposition when the messenger themselves is a convert.

    The belief that "we should rule" is only possible to sustain among thoughtful people once the question "who should rule?" is deeply engrained. No wonder, then, that the firm works so dang hard to market its singular virtue to the internal, captive audience.

  5. As I keep pointing out, Apple can make different choices. Apple could unblock competing browsers tomorrow. It could fully and adequately fund the Safari team tomorrow. It could implement basic features (like install banners) that would make web apps more viable tomorrow. These are small technical challenges that Apple has used disingenuous rhetoric to blow out of all proportion as it has tried to keep the web at bay. But if Apple wanted to be on the side of the angels, it could easily provide a viable alternative for developers who get edited out of the App Store.

  6. Control over search entry points is the purest commercial analogue in Android to green/blue messages on iOS. Both work to dig moats around commodity services, erecting barriers to switching away from the OS's provider, and both have been fantastically successful in tamping down competition.

  7. It will never cease to be a scandal that Android's singular success metric in the Andy Rubin years was “activations.” The idea that more handsets running Android counted as success is a direct parallel to Zuckian fantasies about “connection” as an unalloyed good.

    These are facially infantile metrics, but Google management allowed them to persist well past their sell-by date, with predictably horrendous consequences for user privacy and security. Play, and specifically the hot potato of "GMS Core" (a.k.a. "Play Services"), was tasked with covering for the perennially out-of-date OSes running on client devices. That situation is scarcely better today. At last check, the ecosystem remains desperately fragmented, with huge numbers of users on outdated and fundamentally insecure releases. Google has gone so far as to remove these statistics from its public documentation site to avoid the press asking uncomfortable questions. Insecurity in service of growth is Android's most lasting legacy.

    Like Apple, Andy Rubin saw the web as a threat to his growth ambitions, working to undermine it as a competitor at every step. Some day the full story of how PWAs came to be will be told, but suffice to say, Android's rabid acolytes within the company did everything they could to prevent them, and when that was no longer possible, to slow their spread.

  8. Don't worry, though, Play doesn't tax developers as much as Apple. So Google are the good guys. Right?

    Right?