1. 49
  1.  

  2. 11

    I’m very skeptical of the numbers. A fully charged iPhone has a 10-12 Wh battery (not kWh), depending on the model. You can download more than one GB without fully depleting the battery (in fact, way more than that; see the quick sketch at the end of this comment). The 2.9 kWh per GB figure is totally crazy… Sure, there are towers and other elements to deliver the data to the phone. Still.

    The referenced study doesn’t show those numbers, and even its estimate of 0.1 kWh/GB (page 6 of the study) takes a lot of old infrastructure into account. On the same page they cite numbers from 2010, but even then consumption over fixed broadband was estimated at 0.08 kWh/GB, and the 2.9 kWh/GB applied only to 3G access. Again, in 2010.

    Using that figure for consumption in 2020 is totally unrealistic to me… The real number is probably lower by a factor of at least 30… And of course it will keep going down as more efficient transfer technology is rolled out, which already seems to be happening at an exponential rate.

    So don’t think that shaving a few kbytes here and there is going to make a significant change…
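
    To make the battery point concrete, here is a rough back-of-the-envelope sketch in Python (the ~11 Wh battery and the 2.9 kWh/GB figure are the numbers discussed above; everything else is just unit conversion):

        # Back-of-the-envelope: what 2.9 kWh/GB would imply for a phone.
        # Assumes an ~11 Wh battery and that the radio gets the whole charge.
        battery_wh = 11.0                # typical iPhone battery capacity
        claimed_wh_per_gb = 2.9 * 1000   # the disputed 2.9 kWh/GB figure

        gb_per_charge = battery_wh / claimed_wh_per_gb
        print(f"{gb_per_charge:.4f} GB per full charge")   # ~0.0038 GB, i.e. ~4 MB

        # Phones routinely move several GB on a single charge, so the
        # device-side cost is orders of magnitude below 2.9 kWh/GB; almost
        # all of it would have to sit in the towers, routers and servers.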

    1. 7

      I don’t know whether the numbers are right or wrong, but I’m very happy with the alternative direction here, and with another take on the bloat that the web has become today.

      It takes several seconds on my machine to load the website of my bank, a major national bank used by millions of folks in the US (Chase). I looked at the source code, and it’s some sort of obfuscated (base64-style, not merely minified) JavaScript gibberish, which looks like it uses several seconds of my CPU time each time it runs, in addition to making the website and my whole browser unbearably slow, prompting the slow-site warning to come and go, and often failing to work at all, requiring a reload of the whole page. (No, I haven’t restarted my browser in a while, and, yes, I do have a bunch of tabs open — but many other sites still work fine as-is, just not Chase.)

      I’m kind of amazed how all these global warming people think it’s OK to waste so many of my CPU cycles on their useless fonts and megabytes of JavaScript on their websites to present a KB worth of text and an image or two. We need folks to start taking this seriously.

      The biggest cost might not be the actual transmission, but rather the wasted cycles from having to rerender complex designs that don’t add anything to the user experience — far from it: they make things slow for lots of people who don’t have the latest and greatest gadgets and who don’t devote their whole machine to running a single website in a freshly reloaded browser. This also has the side effect of people needing to upgrade their equipment on a regular basis, even though the amount of information you actually need — just a list of a few dozen transactions from your bank — hasn’t changed much over the years.

      Someone should do some math on how much a popular bank contributes to global warming with its megabyte-sized website that requires several seconds of CPU cycles to see a few dozen transactions or make a payment. I’m pretty sure the number would be rather significant. Add to that the amount of wasted man-hours of folks having to wait several seconds for the pages to load. But mah design and front-end skillz!
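
      Here is the rough shape of that math as a sketch (every input below is a made-up, illustrative assumption, not Chase’s real numbers):

          # Toy estimate of the client-side energy cost of a heavy banking site.
          # All inputs are hypothetical, for illustration only.
          sessions_per_month = 50_000_000  # assumed monthly logins
          extra_cpu_seconds = 5            # extra CPU time per load vs. a lean page
          extra_watts = 30                 # rough extra draw while the CPU is busy

          extra_joules = sessions_per_month * extra_cpu_seconds * extra_watts
          extra_kwh = extra_joules / 3.6e6
          print(f"~{extra_kwh:,.0f} kWh/month of client CPU alone")  # ~2,083 kWh

          # At a grid-average ~0.4 kg CO2/kWh that is on the order of a tonne of
          # CO2 per month before counting servers or transfer, and 50M sessions
          # times 5 s of waiting is roughly 8 person-years of wasted time.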

      1. 3

        Chase’s website was one of two reasons I closed my credit card with them after 12 years. I was traveling and needed to dispute a charge, and it took tens of minutes of waiting for various pages to load on my smartphone (a Nexus 5X connected to a fast ISP via WiFi).

        1. 2

          The problem is that Chase, together with AmEx, effectively has a monopoly on premium credit cards and travel rewards. It’s very easy to avoid them as a bank otherwise, because credit unions often provide a much better product and still have outdated-enough websites that simply do the job without whistling at you all the time. But if you’re into getting the best out of your travel, dealing with the subpar, CPU-hungry websites of AmEx and Chase is often a requirement for getting certain things done.

          (However, I did stop using Chase Ink for many of my actual business transactions, because the decline rate was unbearable, and Chase customer service leaves a lot to be desired.)

          What’s upsetting is that with every single redesign, they make things worse, yet the majority of bloggers and reviewers only see the visual “improvements” in graphics, and completely ignore the functional and usability deficiencies and extra CPU requirements of each redesign.

      2. 9

        Sure, there are towers and other elements to deliver the data to the phone. Still.

        Still what? If you’re trying to count the total amount of power required to deliver a GB, then it seems like you should count all the computers involved, not just the endpoint.

        1. 4

          I meant “still, it’s too big of a difference”. Of course you’re right ;-)

          The study estimates the consumption at 0.1 kWh/GB for 2020. The 2.9 kWh/GB was an estimate for 2010.

          1. 2

            I see these arguments all the time about the “accuracy” of which study’s predictions are “correct”, but keep in mind that these studies predict the average consumption for transport alone, and very old equipment is still in service in many, many places in the world; you could easily be hitting some of that equipment on some requests, depending on where your data hops around. An average includes plenty of outliers, and the average case may well be far less common than the others. In any case, wireless is not the answer! We can start trusting the numbers once someone develops the energy-usage equivalent of dig.

          2. 3

            Yes. Let’s count a couple.

            I have a switch (an ordinary cheap switch) here that’ll receive and forward 8Gbps on 5W, so it can forward about 720,000 gigabytes per kWh, or roughly 0.0000014 kWh/GB. That’s the power supply rating, so it’ll be higher than the peak power requirement, which in turn will be higher than the sustained draw, and big switches tend to be more efficient than this small one, so the real number may have another zero. Routers are like switches wrt power (even big fast routers tend to have low-power CPUs and do most routing in a switch-like way, since that’s how you get a long MTBF), so if you assume that the sender needs a third of that 0.1kWh/GB, the receiver a third, and the networking a third, then… dumdelidum… the average number of routers and switches between the sender and receiver must be well over 10000. This doesn’t make sense.

            The numbers don’t make sense for servers either. Netflix recently announced getting ~200Gbps (about 25 GB/s) out of its new hardware. At 0.03kWh/GB, that single box would have to draw roughly 2,700 kW sustained. Have you ever seen such a power supply? A single rack of such servers would need over 100 MW.
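
            Here is the same arithmetic spelled out (only the 8 Gbps/5 W switch and the ~200 Gbps Netflix figures come from above; the rest is unit conversion):

                # Energy intensity of the small switch: 8 Gbps forwarded on 5 W.
                watts = 5.0
                gb_per_sec = 8 / 8                       # 8 Gbps = 1 GB/s
                kwh_per_gb = (watts / 1000) / (gb_per_sec * 3600)
                print(f"switch: {kwh_per_gb:.7f} kWh/GB")          # ~0.0000014

                # Hops needed to burn the "network third" of 0.1 kWh/GB.
                network_share = 0.1 / 3
                print(f"hops: {network_share / kwh_per_gb:,.0f}")  # ~24,000

                # Server side: ~200 Gbps out of one box at 0.03 kWh/GB.
                server_kw = (200 / 8) * 0.03 * 3600
                print(f"server: {server_kw:,.0f} kW sustained")    # ~2,700 kW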

            1. 1

              There was a study that laid out the numbers, but the link seems to have died recently. It stated that about 50% of the energy cost for data transfer was datacenter cost, with the rest spread out thinly over the network on the way to its destination. Note that datacenter cost does not just cover the power supply for the server itself, but also all related power consumption like cooling, etc.

              1. 2

                ACEEE, 2012… I seem to remember reading that study… I think I read it when it was new, and when I multiplied its numbers by Google’s size and by a local ISP’s size, I found that both of them should have electricity bills far above 100% of their total revenue.

                Anyway, if you change the composition that way, then there must be at least 7000 routers/switches on the way, or else some of the switches must use vastly more energy than the ones I’ve dealt with.

                And on the server side, >95% of the power must go towards auxiliary services. AIUI cooling isn’t the major auxiliary service; preparing data to transfer costs more than cooling. Netflix needs to encode films, Google needs to run Googlebot, et cetera. Everyone who transfers a lot of data must prepare that data for transfer.

          3. 4

            I ran a server at Coloclue for a few years, and the pricing is based on power usage.

            I stopped in 2013, but I checked my old invoices, and monthly power usage fluctuated between 18.3 kWh and 23.58 kWh, with one outlier at 14 kWh. That’s quite a difference! This is all on the same machine (a little Supermicro Intel Atom 330) with the same system (FreeBSD).

            This is from 2009-2014, and I can’t go back and correlate it with what the machine was doing, but fluctuating activity seems the most logical explanation? Would be interesting if I had better numbers on this.
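
            For scale, converting those monthly figures to average draw is just unit arithmetic:

                # Convert monthly kWh on the invoice to average power draw.
                hours_per_month = 30 * 24
                for kwh in (14, 18.3, 23.58):
                    watts = kwh * 1000 / hours_per_month
                    print(f"{kwh:>5} kWh/month ~ {watts:.0f} W average")
                # -> roughly 19 W to 33 W, which seems plausible for an
                #    idle-to-busy Atom 330 box.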

            1. 2

              With you on the skeptic train: I would love to see where this estimate comes from:

              Let’s assume the average website receives about 10.000 unique visitors per month

              It seems way too high. We are probably looking at a Pareto distribution, and my intuition may be wrong, but I have the feeling that your average WordPress site sees far fewer visitors than that (rough simulation below).

              Very curious about this now, totally worth some more digging
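
              To illustrate why the “average” can mislead here, a toy simulation with a heavy-tailed (Pareto) traffic distribution (the shape parameter and the 10,000-visitor mean are invented):

                  # With heavy-tailed traffic, a few huge sites dominate the mean
                  # while the typical (median) site sits well below it.
                  import random

                  random.seed(42)
                  shape = 1.2
                  visits = [random.paretovariate(shape) for _ in range(100_000)]
                  # Rescale so the sample *mean* lands at 10,000 visitors/month.
                  scale = 10_000 / (sum(visits) / len(visits))
                  visits = sorted(v * scale for v in visits)

                  mean = sum(visits) / len(visits)
                  median = visits[len(visits) // 2]
                  print(f"mean   ~ {mean:,.0f} visitors/month")    # 10,000 by construction
                  print(f"median ~ {median:,.0f} visitors/month")  # well below the mean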

            2. 11

              Whether the estimate of energy consumption is accurate or not, I applaud the conscious effort to reduce carbon emissions. Even if the actual reductions are 10x less, they still look very much worthwhile. This is the kind of thinking needed across every industry. Reducing energy consumption is an important part of the equation in the transition to renewable energy within the context of tiny & dwindling remaining carbon budgets.

              1. 8

                It’s commendable that someone wants to make every change possible to reduce their carbon footprint, even after already making a lot of useful (on a personal level) life choices.

                But disregarding for a moment the absolutely absurd numbers all over, the climate crisis is just never going to be fixed by a bunch of developers cutting down a few KB and MB here and there. It’s a nice thought, but no, it’s not going to make any meaningful difference.

                If truly huge international companies (the real culprits of this situation) aren’t changing their ways (or, in some cases, their business model), we alone are not going to make even a dent in the issue.

                It’s governments on a global scale that need to be made (read ‘forced’) to act on this situation, and when that happens, you’ll see how quickly things can change. But as long as the US chickens out every single time, you can’t expect a lot of other countries to do “the good thing”.

                And when the US owns up to it and does something about it, you’ll see that a couple of extra KB on a website is really nothing.

                1. 6

                  This has never crossed my mind before, thank you for getting me (and others) thinking about it :)

                  Now I wonder what the power overhead of interpreting PHP is over a language that gets turned into native code AOT.

                  1. 6

                    You may enjoy this article (Lobsters discussion) on the energy usage of various languages. PHP isn’t the best but it isn’t the worst, either.

                    1. 5

                      That’s awesome - I’m glad it was of value! Me neither. I stumbled upon a number that said a GB of data costs about 5 kWh to transfer (about half of it was datacenter, the rest spread out across the network) and it blew my mind. If my home network were that inefficient, it would mean an hour of streaming House of Cards in Ultra HD is just as bad as spending that same time in a moving gasoline car (quick sanity check at the end of this comment)… Luckily, that number seems way too high nowadays, and fixed broadband connections are a lot more efficient.

                      And yeah, Rasmus Lerdorf did a talk a few years ago about the CO2 savings if the entire planet updated to PHP 7. Here’s a link to the relevant section. TL;DR: at 100% PHP 7 adoption, 7.5B kg less CO2 would be emitted.
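
                      The streaming-vs-car comparison is easy to sanity-check (only the 5 kWh/GB comes from that old study; the bitrate and car figures are my own ballpark assumptions):

                          # 1 hour of UHD streaming at 5 kWh/GB vs. 1 hour of driving.
                          uhd_gb_per_hour = 7     # ~15 Mbps UHD stream, ballpark
                          kwh_per_gb = 5          # the figure from the old study
                          stream_kwh = uhd_gb_per_hour * kwh_per_gb
                          print(f"streaming: ~{stream_kwh} kWh/hour")   # ~35 kWh

                          litres_per_hour = 5     # rough petrol burn at cruising speed
                          kwh_per_litre = 9.5     # energy content of petrol
                          drive_kwh = litres_per_hour * kwh_per_litre
                          print(f"driving:   ~{drive_kwh} kWh/hour")    # ~47.5 kWh
                          # Same order of magnitude, hence the comparison above.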

                      1. 5

                        I stumbled upon a number that said a GB of data costs about 5 kWh to transfer

                        Anyone else think this sounds wildly implausible? The blog in question estimates 2.9 kWh based on 3G, and even that seems absurd imo.

                        Per here, 2017 IP traffic amounted to 1.5 ZB, or about 171,100,000 GB per hour. At 5 kWh per GB, that would work out to ~7500 TWh, or about 1/3 of the world’s 2017 electricity consumption being spent on data transfer alone.

                        That would be a sizeable chunk of the energy we spend on all transportation of people and goods worldwide combined (transportation is about 26% of energy consumption worldwide, per the EIA here).

                        It’s hard to directly find data breaking down energy usage by segment to the point where you could directly pin a number on “IP data transfer” by itself (which in and of itself raises questions about where this 5 kWh came from), but just looking at the breakdowns I can find, 5 kWh doesn’t seem to pass the smell test.
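
                        The conversion itself is easy to reproduce (only the 1.5 ZB and 5 kWh/GB inputs come from the sources above):

                            # 2017 IP traffic at the claimed 5 kWh/GB.
                            traffic_gb = 1.5e12                  # 1.5 ZB in GB
                            print(f"{traffic_gb / (365 * 24):,.0f} GB/hour")   # ~171,000,000

                            twh_per_year = traffic_gb * 5 / 1e9  # kWh -> TWh
                            print(f"{twh_per_year:,.0f} TWh/year")             # ~7,500
                            # For scale, world electricity generation in 2017 was
                            # roughly 25,000 TWh.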

                        1. 6

                          That was my reaction too, but I think the main thing is that this study was old (2007 or so). It was this study, although the link seems to have died very recently.

                          I actually just found another study that seems more up to date and seems credible: Electricity Intensity of Internet Data Transmission. Main line (according to me):

                          This article derives criteria to identify accurate estimates over time and provides a new estimate of 0.06 kWh/GB for 2015. By retroactively applying our criteria to existing studies, we were able to determine that the electricity intensity of data transmission (core and fixed-line access networks) has decreased by half approximately every 2 years since 2000 (for developed countries), a rate of change comparable to that found in the efficiency of computing more generally.
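
                          If that halving-every-two-years trend held, a quick extrapolation from the study’s 2015 figure looks like this (the continuation past 2015 is, of course, an assumption):

                              # Extrapolate 0.06 kWh/GB (2015), halving every 2 years.
                              for year in (2015, 2017, 2019, 2021):
                                  value = 0.06 / 2 ** ((year - 2015) / 2)
                                  print(f"{year}: ~{value:.4f} kWh/GB")
                              # -> 0.0600, 0.0300, 0.0150, 0.0075 kWh/GB:
                              #    a long way from 2.9 or 5 kWh/GB.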

                          1. 2

                            Oh nice, thanks for the link! That makes a lot more sense to me just squinting at power consumption of various parts of the chain.

                    2. 5

                      Is this motherfuckingwebsite.com clocking in at 5 kB in total really that bad in comparison? I don’t think so.

                      You’ve got the number wrong:

                          <!-- yes, I know...wanna fight about it? -->
                          <script>
                            (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
                            (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
                            m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
                            })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
                          
                            ga('create', 'UA-45956659-1', 'motherfuckingwebsite.com');
                            ga('send', 'pageview');
                          </script>
                      

                      This is something I fail to understand. He doesn’t even use CSS to prevent the text from spreading over the entire width of the screen, but then happily references a JS blob that amounts to 44 KB to spy on the users. He preaches water and drinks wine, as the Czech saying goes.

                      1. 3

                        Haha, shoot. The script was triple blocked: first by uBlock, then by uMatrix, then by my Pi Hole. So it did not show up on my browser’s Network tab…

                        And yes, I’m with you there. 44 kB… Nearly 10 times the size of the rest of that site. For what I think is vanity.

                        1. 5

                          What I found particularly amusing about this was the comment. It’s as if someone saw him get into his private airplane a few minutes after he gave an emotional talk about why people should take extreme measures to lower their carbon footprint, and he just stood there with a guilty face: “I know, I know…”.

                          It creates the illusion that there’s nothing we can do, practically speaking. We could theoretically build better websites, but not even the strongest advocates actually bother.

                          I don’t actually think using GA is that bad. I don’t like it on a personal/ideological level, but I’m not fanatical about it and can see myself using it too in some scenarios. Here, though, it’s all about the contrast: no styling, a page that many will perceive as ugly and boring, everything as minimalistic as it can be. And then bam, let’s load 44 KB worth of some JS blob.

                          Had the objective been to criticize the worst-of-the-worst bloated websites that take hundreds of milliseconds to load on a modern computer with a decent connection for seemingly no reason, and to demonstrate that things can be simpler (on something people could actually imagine using, such as a news portal or magazine, which commonly contain the most bloatware), then it wouldn’t be such a big deal to add some extra 40 KB. But taking extreme measures only to throw the advantage away a few seconds later doesn’t make much sense.

                          Oh, and an interesting article of yours (I forgot to mention).

                      2. 4

                        Thanks for sharing You Dont Need JavaScript. Really great examples to build off of.

                        1. 3
                          1. 2

                            This and the Low Tech Solar sustainability post have me wondering which decisions most affect the footprint of modern tech.

                            For example, are some datacenters far better than others because of power source or efficiency? (Like, I know parts of Oregon are popular to build DCs in because of cheap hydroelectric power, and obviously hyperscalers push for good PUE.) There’s a case that CDNs might save energy simply by sending static data less distance; does that end up working out? What’s the breakdown of importance of “machine on” time vs. server CPU time vs. bandwidth vs. client resource use? Is off-peak load (e.g. nightly jobs) ever environmentally “cheaper” due to the mechanics of power generation (some power plants won’t turn “down” past a certain point)?

                            I’m still not sure it would end up being among the more significant levers I have on the environment; moving me tends to be more expensive than moving bits, and my little apartment’s stove and heaters can match some fully loaded servers in power consumption. I’d still be really interested to know.

                            1. 2

                              I’ve always been very skeptical of CO2-emission figures for data transfers when they are expressed in grams per byte. The reason is that the drive for bigger networks is capacity, and capacity doesn’t correspond linearly to the amount of data transferred.

                              In other words, the networks are made larger in order to accommodate peak usage. I haven’t found the methodology behind the numbers that are used everywhere.

                              A rough comparison would be driving from one place to another: emissions will depend at least on the type of road, the type of car, and the speed. For data, I’ve seen numbers that depend on the transmission medium to the end user (DSL, fibre, mobile); however, these numbers are orders of magnitude apart and are therefore hard to believe.

                              Don’t get me wrong though: I’m all for reducing consumption at many levels, but I’m also looking for numbers that I can believe, i.e. ones that are at least expressed in units that make sense.
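
                              A toy illustration of that capacity point: if a link draws roughly the same power whether it is idle or busy, the kWh/GB figure is mostly a function of utilisation, not of the bytes you send (the 100 W and 10 Gbps numbers below are made up):

                                  # Fixed-power link: energy per GB depends on utilisation.
                                  link_power_w = 100   # assumed constant draw, idle or busy
                                  capacity_gbps = 10   # assumed link capacity

                                  for util in (0.01, 0.1, 0.5, 1.0):
                                      gb_per_hour = capacity_gbps / 8 * 3600 * util
                                      kwh_per_gb = (link_power_w / 1000) / gb_per_hour
                                      print(f"{util:>4.0%} utilised: {kwh_per_gb:.6f} kWh/GB")
                                  # 1% utilised:   0.002222 kWh/GB
                                  # 100% utilised: 0.000022 kWh/GB
                                  # A 100x spread from utilisation alone, which is why a
                                  # single g/byte or kWh/GB figure is so hard to pin down.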