1. 11

    I’m very skeptical of the numbers. A fully charged iPhone holds 10-12 Wh (not kWh), depending on the model, and you can download way more than one GB without fully depleting the battery. The 2.9 kWh per GB figure is totally crazy… Sure, there are towers and other elements to deliver the data to the phone. Still.
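    The back-of-the-envelope version, in code (battery capacity and the 2.9 kWh/GB figure are the assumptions above, not measurements):

    ```python
    # Sanity check: if mobile transfer really cost 2.9 kWh/GB and the
    # phone alone bore that cost, how much data would one charge buy?
    battery_wh = 12                 # assumed full iPhone battery, ~10-12 Wh
    claimed_wh_per_gb = 2.9 * 1000  # the cited 2.9 kWh/GB, in Wh

    gb_per_charge = battery_wh / claimed_wh_per_gb
    print(f"{gb_per_charge * 1000:.1f} MB per full charge")  # roughly 4 MB
    ```

    Four megabytes per full charge is obviously off by orders of magnitude from what phones actually do.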

    The referenced study doesn’t show those numbers, and even its estimate of 0.1 kWh/GB (page 6 of the study) takes a lot of old infrastructure into account. On the same page they discuss numbers from 2010, and even then consumption over broadband was estimated at 0.08 kWh/GB; the 2.9 kWh/GB was only for 3G access. Again, in 2010.

    Using that as the consumption for 2020 is totally unrealistic to me… It’s probably lower by a factor of at least 30… And of course this number will keep going down as more efficient transfer methods are rolled out, which already seems to be happening at an exponential rate.

    So don’t think that shaving a few kbytes here and there is going to make a significant change…

    1. 7

      I don’t know whether the numbers are right or wrong, but I’m very happy with the alternative direction here, and with another take on the bloat that the web has become today.

      It takes several seconds on my machine to load the website of my bank, a major national bank used by millions of folks in the US (Chase). I looked at the source code, and it’s some sort of obfuscated (base64-style encoding, not mere minification) JavaScript gibberish that burns several seconds of my CPU time each time it runs. It makes the website and my whole browser unbearably slow, keeps triggering the slow-site warning, and often fails to work at all, requiring a reload of the whole page. (No, I haven’t restarted my browser in a while, and, yes, I do have a bunch of tabs open, but many other sites still work fine as-is; Chase doesn’t.)

      I’m kind of amazed that all these global warming people think it’s OK to waste so many of my CPU cycles on useless fonts and megabytes of JavaScript just to present a KB worth of text and an image or two. We need folks to start taking this seriously.

      The biggest cost might not be the actual transmission, but rather the cycles wasted re-rendering complex designs that add nothing to the user experience. Far from it: they make things slow for lots of people who don’t have the latest and greatest gadgets and don’t devote their whole machine to running a single website in a freshly-reloaded browser. A side effect is that people need to upgrade their equipment regularly, even though the amount of information they need to access, just a list of a few dozen transactions from their bank, hasn’t changed much over the years.

      Someone should do the math on how much a popular bank contributes to global warming with its megabyte-sized website that burns several seconds of CPU time just to show a few dozen transactions or make a payment. I’m pretty sure the number would be rather significant. Add to that the man-hours wasted by folks waiting several seconds for pages to load. But mah design and front-end skillz!

      1. 3

        Chase’s website was one of two reasons I closed my credit card with them after 12 years. I was traveling and needed to dispute a charge, and it took tens of minutes of waiting for various pages to load on my smartphone (a Nexus 5X connected to a fast ISP via WiFi).

        1. 2

          The problem is that Chase, together with AmEx, effectively has a monopoly on premium credit cards and travel rewards. It’s very easy to avoid them as a bank otherwise, because credit unions often provide a much better product, and their websites are still outdated enough to simply do the job without whistling at you all the time. But if you’re into getting the best out of your travel, dealing with the subpar, CPU-hungry websites of AmEx and Chase is often a requirement for getting certain things done.

          (However, I did stop using Chase Ink for many of my actual business transactions, because the decline rate was unbearable, and Chase customer service leaves a lot to be desired.)

          What’s upsetting is that with every single redesign, they make things worse, yet the majority of bloggers and reviewers only see the visual “improvements” in graphics, and completely ignore the functional and usability deficiencies and extra CPU requirements of each redesign.

      2. 9

        Sure, there are towers and other elements to deliver the data to the phone. Still.

        Still what? If you’re trying to count the total amount of power required to deliver a GB, then it seems like you should count all the computers involved, not just the endpoint.

        1. 4

          “Still, it’s too big of a difference.” Of course you’re right ;-)

          The study estimates the consumption at 0.1 kWh/GB in 2020. The 2.9 kWh/GB figure is an estimate from 2010.

          1. 2

            I see these arguments all the time about the “accuracy” of which study’s predictions are “correct”, but keep in mind that these studies predict the average consumption for transport alone, and very old equipment is still in service in many, many places in the world; you could easily be hitting some of that equipment on some requests, depending on where your data hops around! An average includes many outliers, and the average case may well be far less common than the others. In any case, wireless is not the answer! We can start trusting numbers once someone develops the energy-usage equivalent of dig.

          2. 3

            Yes. Let’s count a couple.

            I have a switch (an ordinary cheap switch) here that’ll receive and forward 8 Gbps on 5 W, so it can forward about 720,000 gigabytes per kWh, or roughly 0.0000014 kWh/GB. That’s the power supply rating, so it’ll be higher than the peak power requirement, which in turn will be higher than the sustained draw, and big switches tend to be more efficient than this small one, so the real number may have another zero. Routers are like switches wrt power (even big fast routers tend to have low-power 40MHz CPUs and do most routing in a switch-like way, since that’s how you get a long MTBF), so if you assume that the sender needs a third of that 0.1 kWh/GB, the receiver a third, and the networking a third, then… dumdelidum… the average number of routers and switches between the sender and receiver must be north of 20,000. This doesn’t make sense.

            The numbers don’t make sense for servers either. Netflix recently announced getting ~200 Gbps out of its new hardware, i.e. about 25 GB/s. At 0.03 kWh/GB, that single server would need roughly 2.7 MW sustained. Have you ever seen a server with a power supply like that? A single rack of such servers would need a power plant of its own.
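            Here’s the arithmetic redone in code; all inputs are this comment’s assumptions (cheap-switch specs, the 0.1 kWh/GB split, Netflix’s ~200 Gbps claim), not measurements:

            ```python
            # Redo the per-hop and per-server estimates from this comment.
            J_PER_KWH = 3.6e6

            # Cheap switch: 8 Gbps forwarded on a 5 W power supply.
            gb_per_second = 8 / 8                       # 8 Gbps = 1 GB/s
            kwh_per_gb_hop = (5 / gb_per_second) / J_PER_KWH
            print(f"per-hop cost: {kwh_per_gb_hop:.1e} kWh/GB")

            # If networking gets a third of the claimed 0.1 kWh/GB budget,
            # how many such hops must a GB traverse to spend it all?
            hops = (0.1 / 3) / kwh_per_gb_hop
            print(f"hops needed: {hops:.0f}")

            # Server side: ~200 Gbps (25 GB/s) at 0.03 kWh/GB.
            server_kw = 25 * 0.03 * 3600                # kWh per second -> kW
            print(f"sustained power for one such server: {server_kw:.0f} kW")
            ```

            That lands around 24,000 hops per transfer and megawatts per server, both of which are absurd on their face.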

            1. 1

              There was a study that laid out the numbers, but the link seems to have died recently. It stated that about 50% of the energy cost for data transfer was datacenter costs, with the rest spread out thinly over the network on the way to the destination. Note that datacenter costs don’t just cover the power supply for the server itself, but also all related power consumption, like cooling.

              1. 2

                ACEEE, 2012… I seem to remember reading that study… I think I read it when it was new, and when I multiplied its numbers by Google’s size and by a local ISP’s size, I found that both of them should have had electricity bills far above 100% of their total revenue.

                Anyway, if you change the composition that way, then there must be at least 7000 routers/switches on the way, or else some of the switches must use vastly more energy than the ones I’ve dealt with.

                And on the server side, >95% of the power must go towards auxiliary services. AIUI cooling isn’t the major auxiliary service, preparing data to transfer costs more than cooling. Netflix needs to encode films, Google needs to run Googlebot, et cetera. Everyone who transfers a lot must prepare data to transfer.

          3. 4

            I ran a server at Coloclue for a few years, and the pricing is based on power usage.

            I stopped in 2013, but I checked my old invoices, and monthly power usage fluctuated between 18.3 kWh and 23.58 kWh, with one outlier at 14 kWh. That’s quite a difference! This is all on the same machine (a little Supermicro Intel Atom 330) with the same system (FreeBSD).

            This is from 2009-2014, and I can’t go back and correlate it with what the machine was doing, but fluctuating activity seems the most logical explanation. It would be interesting if I had better numbers on this.

            1. 2

              With you on the skeptic train: I’d love to see where this estimate comes from:

              Let’s assume the average website receives about 10.000 unique visitors per month

              It seems way high. We’re probably looking at a Pareto distribution, and maybe my intuition is wrong, but I have the feeling that your average WordPress site sees way, way fewer visitors than that.

              Very curious about this now; totally worth some more digging.
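              A toy simulation of the Pareto intuition (the shape and scale parameters are invented for illustration, not taken from any traffic data):

              ```python
              # With heavy-tailed traffic, the *average* visitors-per-site can
              # sit far above what the typical (median) site actually sees.
              import random

              rng = random.Random(42)
              # paretovariate(1.2): very heavy tail, but finite mean.
              sites = [100 * rng.paretovariate(1.2) for _ in range(100_000)]

              mean = sum(sites) / len(sites)
              median = sorted(sites)[len(sites) // 2]
              print(f"mean:   {mean:,.0f} visitors/month")
              print(f"median: {median:,.0f} visitors/month")
              # A handful of huge outliers drag the mean up several-fold,
              # while the median site sees far less.
              ```

              So a quoted “average of 10,000 visitors/month” is entirely compatible with the typical site seeing a small fraction of that.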

            1. 10

              I strongly disagree with the entire section deriding technical debt.

              One of the best things you can do for yourself as a software developer is running your own business. I don’t mean a consultancy business where you’re essentially an employee that sends invoices every month. I mean something more like a product business.

              When you’re building a business, you don’t know what features are going to bring money in. It is so well-known that we all don’t know what we’re doing that the startup in-joke is to put “pivot” explicitly somewhere on the roadmap. I will by no means defend Agile as I think that’s nonsense too, but the aversion to technical debt is (I think) the most likely cause of this supposed “divide between business and programmers”.

              Who cares if the feature you implement is beautifully designed and backed by a comprehensive suite of automated tests, if that feature ends up not making any money and is scrapped anyway? Professional software development doesn’t happen in a vacuum; the code you write is an investment, and you and/or the business should expect to see a return on it. Technical debt is an excellent way to limit potential losses from investing in a feature/system which turns out to not be profitable.

              Debt is not a bad thing. Debt is what allows you to live in your house before you can actually pay for all of it.

              1. 2

                It doesn’t hurt to go back to the source when the “technical debt” metaphor is discussed: a lot of the “not all debt is bad” discussion is exactly what Ward said - http://wiki.c2.com/?WardExplainsDebtMetaphor. I have two points here:

                1. Managed debt is not a bad thing, but you probably wouldn’t be able to get a mortgage on a house if you were already maxed out on your credit cards, and you can’t take on more debt when all your income is going toward your minimum payments.

                2. Sometimes the metaphor breaks down, because whereas financial debt is fairly predictable (it’s x% of the principal, compounded), the cruft in your codebase can have unknowable effects on your development speed. Say you left an XSS vulnerability in your frontend: it won’t slow you down at all when adding new features if nobody finds it, but as soon as someone does, it might finish you off completely. The analogue here is not debt, but unhedged options, or uninsured risks. You gave yourself an extra £500/year for pizza by not insuring your car against theft, but now you have no car and you can’t get to work.

                1. 1

                  That is true. But it’s also true that not every stage of a company is about searching for a business model. Companies stabilise and need to care about existing systems, and that includes cleaning up technical debt. Of course, only in the parts that bring in money…

                  The problem I see with Agile practices as understood by many businesses is that they tend toward “new features all the time, no need to clean up” way beyond the point where that’s reasonable. Building a solid system that can scale and decrease maintenance costs is less glamorous than one more initiative to tweak an awful mess “to have a quick win”. So it ends up taking way more than it should.

                  Professional cooks clean as they cook for best operation, but even if they’re in a big rush, at least they clean after the fact and before making another dish…

                  1. 1

                    Although I agree, that’s where contracts and generative testing can help. The overhead on the programmer’s end is low enough that throwing code away isn’t a big deal. Same for other automated analyses.
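                    The idea can be sketched without any library (real property-based testing tools like Hypothesis do this far better; `dedupe_keep_order` here is a made-up stand-in for whatever possibly-throwaway code you’re shipping):

                    ```python
                    # Minimal generative testing: generate random inputs and
                    # assert that a few contracts hold, instead of maintaining
                    # dozens of hand-written example cases.
                    import random

                    def dedupe_keep_order(xs):
                        # the (possibly throwaway) code under test
                        seen = set()
                        return [x for x in xs if not (x in seen or seen.add(x))]

                    def check_properties(trials=500):
                        rng = random.Random(0)
                        for _ in range(trials):
                            xs = [rng.randint(-5, 5) for _ in range(rng.randint(0, 20))]
                            out = dedupe_keep_order(xs)
                            assert set(out) == set(xs)        # same elements survive
                            assert len(out) == len(set(out))  # no duplicates remain
                            it = iter(xs)
                            assert all(x in it for x in out)  # original order kept

                    check_properties()
                    print("all contracts held on 500 random inputs")
                    ```

                    If the feature gets scrapped, you throw away three one-line contracts, not a big test suite; if it survives, the contracts already cover it better than cherry-picked examples would.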

                  1. 2

                    Do not try to expose too many different ideas. Simplify your message and repeat it several times with different words. If it relates to something else (a GitHub repo, etc.), don’t spend time going deep into that; just give a brief reference (“and this can be checked at url blablabla”). Summarise. If they’re interested in digging deeper, they’ll go to the linked stuff.

                    1. 3

                      The IDE approach (a single tool integrating all operations) and the Vim approach (Vim just to edit a file, other Unix tools to perform other operations) are different. It’s OK to prefer one or the other. It’s OK to switch from one to the other. It’s fine, really.

                      Learn what you want and your tools and change them if they don’t suit your flow. Evolve and change your opinion. Enjoy.

                      1. 2

                        Whoever attempts floating-point calculation with JavaScript is, of course, in a state of sin.

                        John von Neumann, probably.

                        1. 1

                          What language should one use for floating point calculations?

                          1. 6

                            TCL, of course, because EIAS!

                            1. 3

                              In most languages there are libraries for dealing with precise numbers. For example, you can use Decimal in Python: https://docs.python.org/3/library/decimal.html

                              But it may not be as fast, and it has other inconveniences (for example, having to define the precision explicitly).
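                              A minimal before/after with the standard library’s Decimal:

                              ```python
                              # Binary floats vs. Decimal: the classic 0.1 + 0.2 case.
                              from decimal import Decimal, getcontext

                              print(0.1 + 0.2)                        # 0.30000000000000004
                              print(Decimal("0.1") + Decimal("0.2"))  # 0.3

                              # The trade-off: precision is yours to manage (and it's slower).
                              getcontext().prec = 50
                              print(Decimal(1) / Decimal(7))          # 50 significant digits
                              ```

                              Note the string constructor: `Decimal(0.1)` would faithfully capture the binary float’s error, which is usually not what you want.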

                              1. 2

                                There are some good examples in the list @HugoDaniel posted in this thread.

                                Here’s something interesting for JS: http://mikemcl.github.io/decimal.js/

                                1. 1

                                  If you want it to work correctly, then probably SPARK, C with tooling, or Gappa with a language of your choosing. If performance isn’t an issue, there’s a pile of languages with arbitrary-precision arithmetic, plus libraries for those that lack it. I’d say those are the options.

                                  Meanwhile, there’s work in formal methods on exact real arithmetic that could give us new options later. There was an uptick in it in 2018. I’m keeping a distant eye on it.

                              1. 1

                                  The sh module is a fantastic tool for these kinds of scripts: https://github.com/amoffat/sh