Why does a 5,431-character story about Atari’s 2 KB game Pong need a 3.08 MB download to be read? An environmental plea for readability, and for more static web sites

Jo Christian Oterhals
6 min read · Jan 22, 2022

Not too long ago, someone on Twitter shared a story about the creation of Atari’s classic video game Pong: “The Lies that Powered the Invention of Pong” from IEEE Spectrum. I love stories about the dawn of home computing, so I curiously opened the link on my phone.

A screenshot of Atari’s ground-breaking game, Pong.

Keep in mind that Pong was a game without a CPU or memory in the modern sense of those words. Later versions of Pong were ported to traditional computer systems and could run on computers with as little as 2 KB of memory, but the original Pong’s game logic was embedded in the circuitry.

The screen was equivalent to about 200x200 pixels with two colors, black and white. Had the contents of the screen been rendered out and stored digitally (which of course it neither was nor could be, given the resources), it would have taken just under 10 KB uncompressed. But the console didn’t use pixel graphics; that would have been too resource hungry. Instead it used “timing circuits used to encode the video signal and turn it on or off depending on its location” [source]. Its cleverness in conserving resources is beyond impressive.
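A quick back-of-the-envelope check of that figure, in a few lines of TypeScript. The 10 KB estimate assumes 2 bits per pixel, the same depth I use for the screenshot comparison below; strictly speaking, two colors need only 1 bit, which would halve the number:

```typescript
// Uncompressed size of a hypothetical 200x200 Pong frame buffer.
// 2 bits per pixel matches the ~10 KB figure; 1 bit would give ~5 KB.
const width = 200;
const height = 200;
const bitsPerPixel = 2;

const bytes = (width * height * bitsPerPixel) / 8;
console.log(`${bytes} bytes ≈ ${(bytes / 1024).toFixed(1)} KB`); // "10000 bytes ≈ 9.8 KB"
```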

Since the article basically was all about conserving resources, I was even more puzzled than I normally am by one thing: after a cascade of full-screen ads and modal accept-cookie windows, I finally got to the 5,431-character story, only to be interrupted after a mere two paragraphs (812 characters) by a button labeled “Keep reading ⬇”.

If we go back in time to the days before unlimited mobile plans and fast 4G/5G, the reason for these buttons was to save users download time and data, and therefore also money. The smaller the download, the better and cheaper it was for the user. But today, postponing the download of the remaining 4,619 characters (all of them plain 7-bit ASCII) saves the user no measurable download time or data. The full text of the article amounts to a mere 0.16% of the total download.

With the rise of arguably bloated JavaScript libraries such as React, resources seem to concern absolutely no one. The size of the content is dwarfed (by orders of magnitude) by the size of the libraries. The “read more” buttons seem to be there out of old habit; no one questions why they exist anymore.

What we’re left with in 2022 is functionality that is mostly a hindrance: readers visit a web page for one thing only, the article, and that one thing is the only thing we limit and split. Everything else downloads and renders uninterrupted. Isn’t that strange?

But bandwidth is not the only resource wasted here. Take images as an example.

The screenshot I use above is a 24-bit, 403x242 pixel rendering of what is essentially a 2-bit game screen. Uncompressed in memory it needs 286 KB of RAM, almost 29 times the roughly 10 KB the original screen contents would have required. Had the person making the screenshot reduced its bit depth to 2, the image would have taken less than 24 KB of my computer’s RAM.
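The same arithmetic, spelled out:

```typescript
// Uncompressed in-memory size of the 403x242 screenshot at two bit depths.
const pixels = 403 * 242; // 97,526 pixels

const bytesAt24Bit = (pixels * 24) / 8; // 292,578 bytes ≈ 286 KB
const bytesAt2Bit = (pixels * 2) / 8;   //  24,382 bytes ≈ 24 KB

console.log(`${(bytesAt24Bit / 1024).toFixed(0)} KB vs ${(bytesAt2Bit / 1024).toFixed(0)} KB`);
// "286 KB vs 24 KB"
```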

If I open the article on my PC (it sports a 4-core i7 CPU and 16 GB of memory, a veritable supercomputer compared to almost any previous computer system), a single-tab instance of Google Chrome uses a whopping 19% of my CPU while rendering this simple layout. Simultaneously, Chrome’s total memory usage increases by 253 MB, from 323 MB when idle to 576 MB when rendering the article. That’s well over half a gigabyte spent laying out a few lines of text and a photo.

Compared to the minimal gain from splitting articles in two, this resource usage seems crazy to me.

I’ve been a web developer for 24 years. I’m a dinosaur who comes from the era before even database-driven web sites became popular, so I know that things are better now than they were then, and I have little nostalgia for how we did things before. Old-timers like me, who worked in a world of dial-up Internet, were self-taught masters of none, except for three things we were experts at by necessity:

  1. Reduction of graphics sizes
  2. Keeping HTML size as small as possible by keeping it as simple as possible
  3. Using various cache strategies to render database-driven pages to static HTML before they were served (see the sketch right after this list)
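To make the third strategy concrete, here is a minimal sketch of what such caching can look like today, assuming a Node.js server with Express; the route, the TTL and the render function are my own illustrative choices, not a prescription:

```typescript
// cache.ts: serve database-driven pages as cached static HTML.
import express from "express";

const app = express();
const cache = new Map<string, { html: string; expires: number }>();
const TTL_MS = 60_000; // re-render a page at most once a minute

// Hypothetical stand-in for an expensive database-driven render.
async function renderFromDatabase(path: string): Promise<string> {
  return `<html><body><h1>Rendered ${path}</h1></body></html>`;
}

app.get("*", async (req, res) => {
  const hit = cache.get(req.path);
  if (hit && hit.expires > Date.now()) {
    res.type("html").send(hit.html); // static response, no database work
    return;
  }
  const html = await renderFromDatabase(req.path);
  cache.set(req.path, { html, expires: Date.now() + TTL_MS });
  res.type("html").send(html);
});

app.listen(3000);
```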

I worked at an online newspaper, and I remember how, for the first 5–6 years, we tried to keep the extremely information-dense front page below 100 KB at any given time. This is of course not possible today, when increasing screen resolutions demand bigger images and functionality demands things like video and animation. But how much do you really need?

Using some of these tricks of yore, the article referred to in the beginning, even including today’s increased demand for visual elements and interactive features, could have been a document of less than 0.5 MB if some thought had been spent on optimization.

But it’s not. Let’s take a look at the article’s photo as an example. It is a 24-bit, 2580x3441 pixel image. Compared to laptop screens, that size is ridiculous. Most laptop screens are around 1920x1080 pixels. Even newer ones, such as the Macs’ Retina displays, are scaled by the operating system to an effective 1920x1080 or less anyway. When the article renders in a full-screen instance of Chrome on such a display, the client still scales the image down to 953x625. In other words, the client discards 93% of the data that was originally transferred from the server.

Had more of the optimization been done remotely (a one-time reduction of size, adapted to various devices and cached server-side), we’d reduce the download size by a breathtaking amount, and the total CPU and energy usage with it. By giving the client responsibility for scaling and rendering, we not only transfer too much data; we also spend significant CPU time and RAM doing exactly the same scaling on thousands or, for popular web sites, millions of client computers. And just to add insult to injury, this strategy uses 14 times more bandwidth than necessary.
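As a sketch of what that one-time, server-side reduction could look like, here is a build step assuming Node.js and the sharp image library; the file names and target widths are illustrative choices of mine:

```typescript
// build-images.ts: pre-generate a few sizes once, on the server,
// instead of shipping a 2580x3441 original to every client.
import sharp from "sharp";

const TARGET_WIDTHS = [480, 960, 1920]; // one variant per device class

async function buildVariants(source: string): Promise<void> {
  for (const width of TARGET_WIDTHS) {
    await sharp(source)
      .resize({ width })      // aspect ratio is preserved automatically
      .jpeg({ quality: 80 })
      .toFile(`photo-${width}w.jpg`);
  }
}

buildVariants("photo-original.jpg").catch(console.error);
```

On the HTML side, the standard srcset attribute then lets each browser download only the variant closest to its actual display size, so the scaling work is done once on the server instead of millions of times on clients.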

A similar story could be told about CPU usage if we pre-generated web page elements, instead of having client-side React receive an overload of data and render out only some of it.
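A minimal sketch of that idea, assuming React with a Node.js build step; the Article component and its content are hypothetical:

```typescript
// prerender.ts: render a React component to plain, static HTML once,
// at build time, instead of shipping data plus library to every client.
import { writeFileSync } from "node:fs";
import React from "react";
import { renderToStaticMarkup } from "react-dom/server";

// Hypothetical article component; a real site would fetch this from its CMS.
function Article(props: { title: string; body: string }) {
  return React.createElement(
    "article",
    null,
    React.createElement("h1", null, props.title),
    React.createElement("p", null, props.body)
  );
}

// One render on the build server, reused by every reader.
const html = renderToStaticMarkup(
  React.createElement(Article, {
    title: "The Lies that Powered the Invention of Pong",
    body: "5,431 characters of plain text…",
  })
);
writeFileSync("article.html", `<!doctype html>${html}`);
```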

Something can — and should — be done about this.

And here’s why these things matter for the environment.

We talk a lot about the energy usage and corresponding cost of Bitcoin mining. Its environmental impact, on global warming for instance, is real and measurable.

But I see very little talk about the web or networked apps when it comes to environmental issues.

Moving rendering, calculations and more from servers to the client side has saved many a web site owner lots of money on energy and servers. But multiplying the rendering cost by millions of clients has an enormous environmental impact as well.

So, yes, dynamic sites are great. But making more of them a little more static sounds like a good idea to me. The environment will thank us, and maybe even our smartphones’ battery life will improve a little as well.

And that is a win-win for everyone if there ever was one.
