1. 1

    Would an application like TurboTax qualify as an expert system? It’s designed with the input of tax experts. Logic that determines which tax breaks you should take might be an example of forward chaining.

    If not, what are contemporary examples of expert systems? Is it just the knowledge base that separates them from standard applications?


    Wikipedia says this about expert systems:

    “An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts.”

    Based on that it seems like the inference engine is like a query optimizer in a database operating on a set of facts in the knowledge base. Still a bit abstract for me.

    1. 2

      “Would an application like TurboTax qualify as an expert system?”

      That’s a great example! Yeah, I think so. I doubt it uses the tech of expert systems from back in the day, but it does perform their function as you illustrated: it encodes the tax experts’ knowledge and heuristics.

      “If not, what are contemporary examples of expert systems? Is it just the knowledge base that separates them from standard applications?”

      The key attributes were the knowledge base, asking users questions to gather the facts it needs, producing an answer, and (very importantly) being able to explain that answer: the expert system could tell you the facts and rules that went into a decision. A system could still be an expert system without that, though. Computer configurators were a good example where the user might not need to know why specific hardware was chosen, although many of them could explain it. Today, I think it would be just any system with a knowledge base, getting info from people, and giving them expert-grade answers.

      The LISP companies are still active in areas like that. You might find nice examples in the success stories of Franz or LispWorks.

      1. 2

        You can think of an expert system, in its most basic form, as a set of rules plus an inference engine. Take this set of rules, for example:

        1. If cold ⇒ wear jacket
        2. If warm ⇒ wear t-shirt
        3. If outside it is less than 22°C ⇒ it is cold
        4. The average temperature in Siberia in December is -20°C (a fact, not a rule)

        The inference engine will analyze all the rules and pieces of information in its knowledge base and put together all of those that fit. In our case:

        • 4 + 3 + 1 ⇒ wear jacket
        • 4 + 3 + 2 ⇒ do NOT wear a t-shirt

        Then, when I ask the expert system “I am going to Siberia in December, what should I wear there?”, it will go through the results generated by the inference engine and tell me “wear a jacket”.

        In a pure expert system, the knowledge is supposed to be stored only in the knowledge base, for example as a rule or as a fact. Nowhere in the code should there be a hint of the knowledge itself.
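        To make that concrete, here is a toy forward-chaining engine for the jacket example. This is a minimal sketch; the fact/rule encoding is entirely mine, not from any real expert-system shell. Note that the knowledge lives only in the `facts` and `rules` data, not in the engine itself:

        ```javascript
        // Facts and rules live in the knowledge base; the engine just
        // keeps applying rules until no new facts can be derived.
        const facts = new Set(["temperature:-20"]); // Siberia in December
        const rules = [
          { if: f => f.has("cold"), then: "wear-jacket" },
          { if: f => f.has("warm"), then: "wear-tshirt" },
          { if: f => [...f].some(x => x.startsWith("temperature:") &&
                Number(x.split(":")[1]) < 22), then: "cold" },
        ];

        function infer(facts, rules) {
          let changed = true;
          while (changed) {
            changed = false;
            for (const rule of rules) {
              if (rule.if(facts) && !facts.has(rule.then)) {
                facts.add(rule.then); // deduce a new fact
                changed = true;
              }
            }
          }
          return facts;
        }

        infer(facts, rules);
        console.log(facts.has("wear-jacket")); // true
        console.log(facts.has("wear-tshirt")); // false
        ```

        The engine is fully generic: swapping in tax rules instead of weather rules requires no code changes, only different data.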

        Would an application like TurboTax qualify as an expert system? It’s designed with the input of tax experts. Logic that determines which tax breaks you should take might be an example of forward chaining.

        Some small part of TurboTax may qualify as an expert system, but overall one expects the code of TurboTax to contain a lot of domain knowledge: there are going to be classes for Currency and Fee, as well as hard-coded functions that calculate the VAT or find the minimum among many calculated possible outcomes. So, in general, TurboTax would not qualify as an expert system.

        Well, if the current trendy buzzword were “expert system”, then it would absolutely be advertised as one. But in 2019, TurboTax is still a “machine-learning” app with “artificial intelligence” ;)

        1. 1

          In a pure expert system, the knowledge is supposed to be stored only in the knowledge base, for example as a rule or as a fact. Nowhere in the code should there be a hint of the knowledge itself.

          I think this is the key bit of information for me. It seems like many (most?) applications have hard-coded conditional logic informed by subject matter experts. What differentiates an expert system is that the logic living inside it can be changed at runtime, or at least without recompilation or a new deployment. Through the use of a generic (?) inference engine you can apply the raw knowledge base to the problem.

          I can see both how this sort of system is enticing and how it would be difficult to maintain. It seems like these systems would suffer from incomplete knowledge and odd corner cases.

          I wonder if it is possible to merge expert systems with deep learning. Expert systems seem like they can make discrete decisions quickly based on expensive expert information. Deep learning seems to do better with fuzzy, more approximate knowledge from lots of cheaper information sources.


          Interesting example: You want to create an application to identify birds. You know that there aren’t enough pictures of rare birds to build a deep learning model to correctly identify very similar species. (Darwin’s finches, perhaps?) Instead you train a model to annotate a picture with the bird features and pass those as a query to an expert system. The expert system contains descriptions of the birds. (If beak thick then x. If beak curved then y.)

          Darwin’s Finches
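          A rough sketch of how the two halves could meet. All names and rules here are hypothetical, not from any real bird guide: the classifier’s annotations become the query, and the hand-written species descriptions are the knowledge base.

          ```javascript
          // Hypothetical pipeline: a classifier annotates the photo, and the
          // annotations are matched against hand-written species rules.
          const speciesRules = [
            { species: "large ground finch", needs: ["beak:thick", "size:large"] },
            { species: "cactus finch", needs: ["beak:curved"] },
          ];

          function identify(features) {
            return speciesRules
              .filter(r => r.needs.every(f => features.includes(f)))
              .map(r => r.species);
          }

          // Pretend these came from the deep-learning annotator:
          const features = ["beak:thick", "size:large", "color:black"];
          console.log(identify(features)); // ["large ground finch"]
          ```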

          1. 2

            Interesting example: You want to create an application to identify birds. You know that there aren’t enough pictures of rare birds to build a deep learning model to correctly identify very similar species. (Darwin’s finches, perhaps?) Instead you train a model to annotate a picture with the bird features and pass those as a query to an expert system. The expert system contains descriptions of the birds. (If beak thick then x. If beak curved then y.)

            Your example still fits the classical expert systems. Your deep learning classifier would produce a certain piece of information (the “birdness” feature-set) that would be entered as a fact in the knowledge base. The expert system would then proceed as normal. The system does not care where the knowledge comes from.

            I wonder if it is possible to merge expert systems with deep learning. Expert systems seem like they can make discrete decisions quickly based on expensive expert information. Deep learning seems to do better with fuzzy, more approximate knowledge from lots of cheaper information sources.

            Expert systems have been extended to use fuzzy values (“between 30 and 45”), fuzzy logic (“if young then give proportionally less medicine”), and uncertainties (“this person is 85, according to this source that we do not trust much, or 78, according to this much-trusted record”). There is a plethora of literature on this subject and a couple of big systems in active use. However, the (first) AI winter took its toll on the reputation of everything that has to do with expert systems. I expect a resurgence as soon as people become disillusioned with the current ML-based AI claims (the second AI winter) and want to “look inside the black box”.
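            As a tiny illustration of the fuzzy-value idea (the numbers and the dosing rule are made up for this sketch): instead of a boolean “young” fact, membership is a degree between 0 and 1, and the rule scales its conclusion accordingly.

            ```javascript
            // Toy fuzzy fact: "young" as a membership function, not a boolean.
            function youngness(age) {
              if (age <= 20) return 1;
              if (age >= 40) return 0;
              return (40 - age) / 20; // linear ramp between 20 and 40
            }

            // "if young then give proportionally less medicine":
            const fullDose = 100; // mg, hypothetical
            const dose = fullDose * (1 - 0.5 * youngness(25));
            console.log(youngness(25)); // 0.75
            console.log(dose);          // 62.5
            ```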

            1. 1

              Your deep learning classifier would produce a certain piece of information (the “birdness” feature-set) that would be entered as a fact in the knowledge base.

              Instead of using the deep learning to create facts, I was thinking that it could be used for queries. In this example, the user would send in a photo of a bird. The deep learning model would examine the photo to extract bird features from the picture (color, beak shape, size, etc.) and these features would form the basis of the query to the expert system.

              Your example still fits the classical expert systems.

              I didn’t mean to imply that this was not an expert system. Rather I was thinking about how the two systems might interact.

      1. 16

        Let us not forget this: http://www.nohello.com/

        1. 6

          Personally speaking, I say “hi” and “hello” very often on IRC, but I never expect an answer to these greetings. I answer others’ greetings very infrequently. I’m only notified on explicit mentions, so I’m not annoyed by short sympathetic messages. I really expect other people to do the same.

          1. 2

            I agree with the main point of nohello.com, but is this really true?

            Typing is much slower than talking.

            I think I type faster than I talk (when not using a phone).

              1. 1

                Totally agree, but using Blogger makes this weird on mobile. It relies on the desktop index page to show the full article, but mobile shows a confusing generated summary.

                1. 2

                    Personally I love Lit css. It’s a work of art :)

                  1. 2

                    While it’s small, it’s not a “classless” CSS framework so it’s not quite in the same ballpark of all these other frameworks. I think the whole point of the other frameworks and OP is that you just add them and you’re done.

                    1. 1

                      Ah, yes, you are right. While most stuff will work without classes, you need at least a container for good spacing.

                  2. 1

                    Another alternative is Marx https://mblode.github.io/marx/.

                  1. 6

                    The message of my choice, which I’d display at random (1/1000 probability):

                    If you are reading this, you have been in a coma for almost 20 years because of a car accident. We are trying a new technique. We do not know where this message will end up in your dream, but we hope we are getting through. Please wake up.

                    1. 2

                      That. Is. Genius.

                      1. 2

                        That is also the main plot device in ******* (no spoilers), a novel by Philip K. Dick.

                    1. 13

                      The result is a ~13MB image that contains only those files required to run NGINX, and nothing else. No bash, no UNIX toolset, no package manager…

                      So, an executable running in a process… </sarcasm>

                      1. 8

                        Just like Docker images should be built, IMO.

                        A binary, its runtime dependencies, and your config and files mounted through volumes. Nothing else.

                        I roll my eyes every time I see an image bloated with a full-blown Ubuntu or Debian in it.

                      1. 2

                        “self-describing” and “binary” are two orthogonal concepts.

                        Also, “self-describing” is not a boolean thing: you can have protocols that describe their format in every message (e.g. JSON) or less frequently (e.g. at the beginning of a connection).

                        Also, protocols like ASN.1 [1] can provide format descriptions and tools that generate parsing code - which is usually way faster than parsing BSON/bencode-like formats because you don’t need to read the whole blob sequentially.

                        [1] https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One

                        1. 0

                          Also, protocols like ASN.1 [1] can provide format descriptions and tools that generate parsing code - which is usually way faster than parsing BSON/bencode-like formats because you don’t need to read the whole blob sequentially.

                          In theory. In practice, ASN.1 parsers are famously hard to get right, secure, or fast, mostly because of the quite byzantine specifications.

                          Something more focused, but still IDL-based, like FlatBuffers can provide the same benefits as ASN.1 with a smaller and simpler (generated) parser.

                          1. 1

                            ASN.1 parsers are famously known to be impossible to get right

                            I agree, that’s why I wrote “protocols like”.

                        1. 4

                          Debian’s glibc has been fixed, updated and released 77 times in the last 4 years (counting from version 2.21-1). I’m kinda happy that I did not have to redownload the 6 GB that live in my /usr directory 77 times.

                          ASLR is also nice to have.

                          1. 13

                            I was always sad that everyone seemed to do their best to ignore and work around prototype-based programming in JS. Instead a million different libraries were written to add pseudo class-based inheritance.

                            For example, people always said “don’t modify the root objects!” That was one of the great powers of prototype-based programming: you could modify the root objects, like String and Array and give them new methods that suddenly everyone could use. Dynamic inheritance hierarchies, one-off domains of objects with a barely-distinct prototype object to delegate to… That was all frowned upon by “good style”, meaning a very powerful feature was left by the wayside.
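                            For illustration, the kind of thing that was frowned upon: extending a built-in prototype so every string gains a new method (the method name here is arbitrary).

                            ```javascript
                            // Add a method to String.prototype: every string, everywhere,
                            // immediately gains it. Powerful, but a collision risk if two
                            // libraries pick the same name.
                            String.prototype.shout = function () {
                              return this.toUpperCase() + "!";
                            };

                            console.log("hello".shout()); // "HELLO!"
                            ```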

                            The new operator in JS was also really elegant in what it did, but people always worked around it to make it something that it wasn’t.

                            (new executes a creation function but makes the this operator return the newly-created object…the creation function is free to return a different object, the new object, or anything else.)
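                            A small sketch of that behavior: `new` creates an object, runs the function with `this` bound to it, and keeps the function’s return value only if that value is itself an object.

                            ```javascript
                            function Point(x, y) {
                              this.x = x;
                              this.y = y;
                              // no return: `new` yields the freshly created object
                            }

                            function Singleton() {
                              return { shared: true }; // an object return overrides `this`
                            }

                            const p = new Point(1, 2);
                            console.log(p.x, p.y); // 1 2
                            const a = new Singleton();
                            console.log(a.shared); // true
                            ```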

                            Self was the first prototype-based programming language, but the most beautiful one (IMHO) was Io. I remember playing with Io when it first came out (like 15 or 16 years ago) in a hotel room on a business trip and ending up staying up all night after it had “clicked” for me and just exploring the language.

                            1. 5

                              I agree with you that it’s a shame more people don’t take time to understand how prototypes work in JS. I have mixed feelings on altering the native object prototypes (like Array, String, etc), mostly because I’ve seen it cause real issues in the past (particularly with Prototype.js back around 2007). In isolation, it’s not a big deal, but when libraries start doing it, it can lead to collisions and unexpected behaviors. That’s the main reason I recommend people stay away from modifying them.

                              JS is a really fun language when you get to know it, the problem is that people just don’t want to get to know it. It’s not exactly like what they already know and love, so it just sucks.

                              1. 2

                                I have mixed feelings on altering the native object prototypes (like Array, String, etc), mostly because I’ve seen it cause real issues in the past

                                True, but it’s such a powerful feature! Namespacing is a problem in a lot of languages; it could be worked around by prefixing or whatever. I wonder what a “real” namespace mechanism would look like when combined with prototype-based programming.

                                JS is a really fun language when you get to know it,

                                Again, true, but some of the rough edges people complain about really are pretty rough (sort sorting by string representation; objects being treated as dictionaries, but with keys based off of toString; the various weak-typing rules for comparisons, etc.).
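                                Two of those rough edges, for anyone who hasn’t been bitten yet:

                                ```javascript
                                // Default sort compares elements as strings:
                                console.log([10, 2, 1].sort()); // [1, 10, 2]
                                // The fix is an explicit comparator:
                                console.log([10, 2, 1].sort((a, b) => a - b)); // [1, 2, 10]

                                // Object keys are coerced through toString:
                                const obj = {};
                                obj[{ a: 1 }] = "x"; // key becomes "[object Object]"
                                console.log(obj["[object Object]"]); // "x"
                                ```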

                                It’s been a while since I’ve used JavaScript heavily (ES 3 was the last version I would claim to know well), so I don’t know how much has changed. Stuff with futures and promises and async stuff is all unknown to me (I only know what I’ve picked up from reading other people talk about it).

                                1. 3

                                  Use Symbol objects for your method names and you can have fully separated namespaces for methods to your heart’s content. :)
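                                  A quick sketch of the Symbol trick (the method name here is arbitrary). Because every Symbol is unique, even with the same description, two libraries extending the same prototype can never collide:

                                  ```javascript
                                  // Extend Array via a Symbol-keyed method: no name collisions.
                                  const last = Symbol("last");
                                  Array.prototype[last] = function () {
                                    return this[this.length - 1];
                                  };

                                  console.log([1, 2, 3][last]()); // 3

                                  // Another library's Symbol("last") is a different key entirely:
                                  const otherLast = Symbol("last");
                                  console.log([1, 2, 3][otherLast]); // undefined
                                  ```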

                                  1. 3

                                    “Any sufficiently complicated C or Fortran [or JavaScript] program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.”

                                    1. 3

                                      Wait, it said bug-ridden. Let’s see if we can respin it:

                                      “Any sufficiently-complicated, fast Common Lisp program contains ad-hoc, informally-specified, bug-ridden extensions to Jitawa Lisp.”

                                    2. 1

                                      Symbols are primitives, not objects, though. Nitpicking aside, I’ve implemented a library for generic programming based on the approach of extending builtin prototypes with symbol properties: https://github.com/slikts/symbola

                                    3. 3

                                      I have mixed feelings on altering the native object prototypes (like Array, String, etc), mostly because I’ve seen it cause real issues in the past

                                      True, but it’s such a powerful feature! Namespacing is a problem in a lot of languages; it could be worked around by prefixing or whatever. I wonder what a “real” namespace mechanism would look like when combined with prototype-based programming.

                                      More programming languages should implement something like Ruby’s refinements (scope-limited monkey patching): http://www.virtuouscode.com/2015/05/20/so-whats-the-deal-with-ruby-refinements-anyway/

                                  2. 2

                                    Self and Io are two languages I really like to play around with. I wrote a post a while back comparing prototype programming style between them.

                                  1. 2

                                    I have always assumed that something like evilginx was already existing and “table stakes” for any decent phisher. Maybe I overestimated Mallory.

                                    1. 3

                                      Not really ready-made tools, but a good overview of the theory behind audio segmentation:

                                      Maybe a NN/GRU-based approach like what has been integrated in Opus 1.3 may be a good practical starting point: https://people.xiph.org/~jm/opus/opus-1.3/

                                      1. 1

                                        Neat, thanks for the links!

                                      1. 31

                                        If you really cared about the environment you would just drop the static website on an existing shared host. A low volume website like this uses next to 0 power when running on an existing server. Think about the resources that had to be mined to make those solar panels and hardware when the hardware is doing next to nothing.

                                        1. 17

                                          I was thinking the exact same thing, wondering why you would build a server to host some static pages. They even state that it needs a 10W router, completely dwarfing any potential savings from the solar panels.

                                          That’s also what rubs me the wrong way about some eco activism. Yes, let’s do initiatives that sound nice but ultimately achieve next to nothing: banning straws, replacing plastic bags with tote bags (which, lo and behold, might actually be worse), etc. Meanwhile the big savings are more like: drive less, use more public transport, properly insulate your house to reduce heating/AC, fly less, buy new phones and electronics less often.

                                          1. 11

                                            I’d assume that the 10W router is already a sunk cost, since they presumably want to be able to get onto and use internet somehow and already have the connectivity for it. If that’s the case, then I see maybe $70(usd) for the panel, $11 for the battery, $30 for the controller, $64 for the server = $175, and it consumes zero additional power. Assuming the gear lasts 10 years, that’s $17.50 a year to have the pages online.

                                            Web hosting costs what, maybe $2-$10 a month? Get it on the cheap and that’s still $24 a year to have someone host it.

                                            So for less money, they get to have the fun of building a little server, learning how to power and host it, with their power paid for up front. They can say they’re supporting the development of renewable energy technology, and get to brag about how their web page isn’t adding to CO2 levels for the next decade. Plus they get to write about it. That seems like a solid win for a sustainability magazine.

                                            1. 2

                                              I’d assume that the 10W router is already a sunk cost, since they presumably want to be able to get onto and use internet somehow and already have the connectivity for it.

                                              Okay, fair enough. But in this case buying a more power-efficient router is not a good idea, since the current router already exists and a replacement would only break even after a few years.

                                              Web hosting costs what, maybe $2-$10 a month? Get it on the cheap and that’s still $24 a year to have someone host it.

                                              It is not about the cost, it is about the ecological impact. Yes, you might save money compared to shared hosting, but not because shared hosting is ecologically unfriendly; it is because some hosters charge a lot. On the other hand, I can host static content on GitHub Pages for free, so hosting the content on my own server is infinitely more expensive and at the same time less ecologically friendly.

                                              get to brag about how your web page isn’t adding to co2 levels for the next decade.

                                              This is entirely my point. The production of the battery, panels, etc. generated way more CO2 than just putting the site on shared hosting. How much hardware do you need to produce to put one more site on shared hosting? I’d argue way less than producing all the hardware required to serve your content from a dedicated server. Consider the opposite scenario: replacing shared hosting with one server per site would undoubtedly be less eco-friendly, especially since the large fleet of small servers would idle most of the time, whereas one larger server could use the resources way more efficiently.

                                              1. 3

                                                I agree on almost everything you mention, but I’m not 100% convinced that a fleet of small servers would be less energy efficient.

                                                Keep in mind 99.9% of typical servers are power-hungry Intel Xeon CPUs. Low-cost hosters typically use hardware longer to save costs (older machines use a lot more power). Plus, since hosters put everything in datacenters, you should also take cooling into account: a lot of electricity is simply wasted as heat. The current setup requires no active cooling, uses a low-power CPU, has only a single power supply, and it’s an ARM board, so it will clock down when not in use.

                                                I’m not saying you are wrong, but I think the difference will probably be a lot smaller than you would think.

                                                1. 2

                                                  But in this case buying a more power efficient router is not a good idea, since the current router already exists, and will only break even in a few years.

                                                  Maybe. It depends on the cost of the electricity and the cost of the replacement. Say the 10W router is new and could reasonably be expected to have 10 years of life in it. If it is powered from the grid, it hasn’t been paid for completely yet: it still needs power for the next 10 years. At an average electrical rate of $0.14/kWh in the US, that is going to cost (10W * 24h * 365d * 10y / 1000 = 876kWh, at $0.14/kWh) = $122.64. A 1W router would use a tenth of the power, or about $12.26. So, if he could buy that 1W router today for less than ($122 - $12) = $110, it would start to make sense from an economic standpoint. It would make more sense if the cost of power in Barcelona is higher. With $0.25 power, the break point would be closer to $200.
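                                                  The break-even arithmetic above is easy to generalize; a quick sketch, using only the rates and wattages from this thread:

                                                  ```javascript
                                                  // Electricity cost of an always-on device over its lifetime.
                                                  function costOverYears(watts, years, dollarsPerKwh) {
                                                    const kwh = (watts * 24 * 365 * years) / 1000;
                                                    return kwh * dollarsPerKwh;
                                                  }

                                                  const oldRouter = costOverYears(10, 10, 0.14); // ~$122.64
                                                  const newRouter = costOverYears(1, 10, 0.14);  // ~$12.26
                                                  // A 1W replacement pays off if it costs less than the difference:
                                                  console.log((oldRouter - newRouter).toFixed(2)); // "110.38"
                                                  ```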

                                                  The production of the battery, panels etc generated way more CO2 than to just put it on shared hosting. How much hardware do you need to produce to put one more site on shared hosting? I’d argue way less than producing all the hardware required to serve your content from a dedicated server.

                                                  I’d like to read that argument. How many of this guy’s solar servers do you think a typical grid powered, shared hosting server would have to replace to have the same overall carbon footprint for a ten year period?

                                                  1. 2

                                                    I thought we were talking about the ecological impact, not the cost to replace the router? In any case, the electricity used is in my opinion less relevant in the ecological calculation, since you can very well produce energy from renewable sources (whether Barcelona does this or not is a different question, but Denmark and Norway have a big share of wind and hydro power).

                                                    I’d like to read that argument. How many of this guy’s solar servers do you think a typical grid powered, shared hosting server would have to replace to have the same overall carbon footprint for a ten year period?

                                                    So, let’s do some back-of-the-envelope calculations with performance numbers: the older E2680 does 38259221 Dhrystone, the Raspberry Pi about 2800. While integer performance is possibly not the most accurate statistic, single-board computers usually have IO and networking performance just as bad as their CPU performance, so I’m just picking this. This means one older Xeon server at full capacity (because of course you want to run it at full capacity, to leverage the most out of the connection, the cooling, and the cost of hardware) can replace roughly 13664 Raspberry Pis running at full capacity. But as noted, the Raspberry Pi is hosting one site, so it is probably not running at capacity, rather at 10% at best, so a Xeon server can replace even more mostly-idling Raspberry Pis. Even with the higher energy use of the Xeon, I’d be very surprised if the cost of production and use of one Xeon server (on hydro power, for example) would be higher than the manufacture of multiple thousand Raspberry Pis, solar panels, batteries, etc.

                                                    1. 2

                                                      Here’s how the back of my envelope looks…

                                                      To answer the question, “How many ‘little solar servers’ would a ‘big grid server’ have to replace to have the same carbon footprint over a period of years?” Figuring it out requires an energy cost estimate for the general manufacture of electronics, an operating energy cost, the carbon producing fraction of the energy sources used, and a timeframe.

                                                      Here’s some variables-

                                                      m = fraction of cost of electronics manufacture that is directly due to energy consumption.

                                                      r = cost of energy in kWh.

                                                      Eg = fraction of non-carbon producing energy sourced from the grid

                                                      Es = fraction of non-carbon producing energy sourced from the sun

                                                      y = the operating timeframe in years.

                                                      Variables for a server:

                                                      S = cost of the server in dollars.

                                                      Si = power consumption of server at idle in Watts.

                                                      Some calculations for a server:

                                                      Se = carbon kWh used to manufacture the server = (S * m) / r * (1 - Esource)

                                                      So = yearly carbon kWh used to operate the server = (Si * 24 * 365 / 1000) * (1 - Esource)

                                                      Sl = lifetime carbon kWh = Se + (So * y)

                                                      (where Esource is Eg or Es, depending on where the power comes from)

                                                      Now some assumptions:

                                                      m = 0.30. I don’t really know what m is. I’m guessing it’s like 30%, but the lower it is, the less energy it takes to manufacture the hardware.

                                                      r= $0.14/kWh. That’s a reasonable average in the US.

                                                      Eg = 0.18. That’s the percentage of “renewable” energy in the US.

                                                      Es = 1.00. Power from solar is carbon free.

                                                      The little server “s”:

                                                      s = $175. an estimate from the parts the guy used.

                                                      si = 1W. From what the author claims. I’m assuming that the servers are almost always at idle.

                                                      The big server “S”:

                                                      S = $2000. An estimate for a rack mountable server with an E2680 processor.

                                                      Si = 80W. The processor idles at a lower wattage, but this is probably reasonable for a full system.

                                                      I’m assuming that both servers are produced with grid energy, and the big server is going to source its operating power from the grid.

                                                      I run those numbers, and Sl = 10032kWh, sl = 374kWh. The big server has a carbon footprint about 27 times bigger than the little server over a ten year period.

                                                      So..

                                                      If the choice is to buy either a big grid server or a little solar server to run the site, definitely go with the little server to optimize for carbon. The big server won’t be more efficient until it can replace 26 other small solar sites.

                                                      If considering VPS hosting on a big server with 20-30 other hosts on it, it’s probably not more efficient than hosting on little solar servers, but it’s close.

                                                      If the big server is running unbalanced shared hosting, with hundreds of hosts per server, then for sure go with the big server. It’s a win for the environment!

                                                      Sorry for all the math, and feel free to play with the assumptions….
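                                                      For anyone who wants to play with the assumptions, here is the same envelope as a small script. Note one interpretive choice: to reproduce the lifetime figures quoted above (modulo rounding), manufacture energy has to be counted at full carbon, with the renewable fraction credited only against operating power.

                                                      ```javascript
                                                      // Lifetime carbon kWh = manufacture energy + operating energy,
                                                      // with the clean fraction (Eg or Es) applied to operation only.
                                                      const m = 0.30;  // fraction of hardware cost that is energy
                                                      const r = 0.14;  // $ per kWh

                                                      function lifetimeCarbonKwh({ cost, idleWatts, cleanFraction, years }) {
                                                        const manufacture = (cost * m) / r; // kWh
                                                        const operationPerYear =
                                                          ((idleWatts * 24 * 365) / 1000) * (1 - cleanFraction); // kWh
                                                        return manufacture + operationPerYear * years;
                                                      }

                                                      const big = lifetimeCarbonKwh({ cost: 2000, idleWatts: 80, cleanFraction: 0.18, years: 10 });
                                                      const small = lifetimeCarbonKwh({ cost: 175, idleWatts: 1, cleanFraction: 1.0, years: 10 });
                                                      console.log(Math.round(big));         // 10032
                                                      console.log(Math.round(small));       // 375
                                                      console.log(Math.round(big / small)); // 27
                                                      ```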

                                                      1. 1

                                                        Sorry for the late response, but I would like to thank you for running the numbers, I appreciate it, it is one of the reasons why I enjoy being on the site.

                                                        I don’t have better numbers (nor an idea of how to adjust the variables to fit the real world), so I can only say that 27 shared hosting sites sounds like a low number. Considering, e.g., how many GitHub Pages sites there are, I assume they host more than that per server.

                                              2. 15

                                                Eco activism usually skips the “talk to your local/regional/national govt and companies” level. Most of the meaningful positive impacts come from regulations and corporate changes, not personal behavioural initiatives, so people should be focusing their efforts there and only afterwards spending time on personal changes.

                                                1. 10

                                                  It’s difficult to argue for others to live up to ideals that you don’t act on yourself. Acting individually and pushing for regulatory changes are anything but mutually exclusive.

                                                  1. 3

                                                    I agree from a principled point of view, but disagree from a human-psychology perspective.

                                                    There is only a finite amount of effort people will make. If that finite effort goes to marginally effective things, people just feel good about themselves and stop there, without really making an impact. Yes, in theory people could do both, but in practice they don’t. This is why it’s so important to start with what’s most effective and then, if someone has excess energy, move down the list.

                                                    1. 1

                                                      so if i support a carbon tax, i have to voluntarily donate a % of my gas budget to the government?

                                                      1. 1

                                                        Kind of. If you support regulations for limiting CO2 emissions, it would indeed be a good idea to also consider how you could lower your own carbon footprint. Even if you don’t believe that will make a difference, you might still find it difficult to convince others (including politicians) of the urgency of, say, acting against global heating if you are not yourself prepared to bike or use public transport more, or to eat fewer or no animals.

                                                        But, well, there could of course be different ideas behind supporting carbon tax as it can in effect be a pay-to-pollute scheme designed to favour large corporations that can afford to buy quotas over local farmers who can’t.

                                                    2. 3

                                                      You can definitely do both. Also, personal changes serve a valuable awareness purpose that directly plays into larger political goals. You can totally multi-track this and cover a lot of ground.

                                                      SUVs don’t buy themselves, and personal change contributes something. Not every action needs to cut more than 1% of worldwide emissions.

                                                      Though personal stuff can sometimes backfire (as seen here), so it’s important to be smart about things.

                                                  2. 4

                                                    That, and the shared host can run on solar and still get five nines by geographically distributing the servers.

                                                    1. 3

                                                      This. Off and on I’ve costed out providing dedicated servers using low-power processors, but the amount of compute and RAM per watt has never panned out compared to building a bigger server and subdividing it.

                                                      1. 3

                                                        If you really cared about the environment you would just drop the static website on an existing shared host. A low volume website like this uses next to 0 power when running on an existing server.

                                                        I wonder what the per-request and per-website energy consumption values of NeoCities are.

                                                        I could not find any hard figures; probably they are both close to zero watts.

                                                        1. 2

                                                          But that is not as fun.

                                                          1. 2

                                                            No, not at all. This website is very cool, but it’s not a good example of what you would do if you wanted to minimize environmental impact. I have been reading the other content on the website, and the author has done some really interesting, impactful stuff, like powering his home office from a few solar panels on his apartment window and converting his device chargers to DC so there is no DC -> AC -> DC conversion for most devices.

                                                          2. 1

                                                            Indeed. A lowendspirit box for $3 a year has to be more sustainable, especially if you put the funds that would have otherwise gone into self-hosting into an environmental project like planting trees.

                                                            1. 4

                                                              I tested, and my dirt-cheap VPS can handle about 17,000 requests per second for a static blog built with Hugo. Even though that VPS probably still is better for the environment, it’s still super wasteful to have a whole OS and web server dedicated to a blog that probably gets about 300 users per day. The most efficient setup would probably be a shared nginx server where everyone just drops their files in over SSH. The per-user cost of hosting a static blog is pretty much nothing, which is why so many of these services exist for free.

                                                          1. 4

                                                            The global menu bar issue is something I deeply care about. Linux has a very good global menu bar, but people are either moving away from it or do not know that it is available. Just use Ubuntu 16.04 and see it in action.

                                                            Since 2014 Ubuntu has modified GTK and Qt so that all desktop applications make use of a global menu bar. Everything now works perfectly. Ubuntu 16.04 with the default Unity desktop is a very usable desktop; all my non-techy acquaintances like it. These modifications have, however, been refused upstream because they do not fit the GNOME 3 paradigm (most of which I like).

                                                            I really do not understand why people are against global menus. They are better, scientifically proven better. And they save a lot of vertical space, which on modern super-wide monitors is a precious resource.

                                                            Why doesn’t the global menu bar receive the love it deserves?

                                                            1. 2

                                                              “They are better, scientifically proven better”

                                                              Citation needed

                                                              1. 3

                                                                From Fitts’ law [1] and the steering law [2] it follows that global menu bars are much easier to access.

                                                                Fitts’ law tells you that global menu bars are better because they can be reached by moving the cursor to an effectively infinitely big target [3]. In other words, you can throw your mouse pointer somewhere up and it will surely and easily reach the global menu bar.

                                                                The steering law tells you that navigating along/inside a vertical or horizontal tunnel is hard if the tunnel is thin (hello, badly implemented JS menus that disappear when you move to a submenu). In the case of a global menu bar, navigating it is easy because it is effectively infinitely tall: just push your cursor slightly up.

                                                                Global menu bars are easier to access, but are they faster to access? This is a good question because, on average, the global menu bar is farther away than local menus. It turns out that, on average, they are equally fast to access [4]. Windows requires more aiming precision (slower) but less travel distance (faster). macOS requires less aiming precision (faster) but more travel distance (slower).

                                                                All things being equal, simplicity should always be preferred, because it means that more people can fruitfully use a system, for example people with disabilities.

                                                                1. P.M. Fitts: The information capacity of the human motor system in controlling the amplitude of movement. J. Exp. Psychol. 47, 381–391 (1954)
                                                                2. J. Accot, S. Zhai: Beyond Fitts’ law: Models for trajectory-based HCI tasks. In: CHI 1997: Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 295–302. ACM, New York (1997)
                                                                3. A. Cockburn, C. Gutwin, S. Greenberg: A predictive model of menu performance. In: CHI 2007: Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 627–636. ACM, New York (2007)
                                                                4. E. McCary, J. Zhang. GUI Efficiency Comparison Between Windows and Mac. In: HIMI 2013: Human Interface and the Management of Information. Information and Interaction Design pp 97-106, Springer (2013)
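The edge-target argument can be made concrete with the Fitts’ law model from [1], MT = a + b · log2(D/W + 1). In the sketch below, the constants a and b and the pixel distances/widths are purely illustrative values I made up; the large effective width for the global bar stands in for the screen edge stopping the cursor:

```python
import math

# Fitts' law: movement time MT = a + b * log2(D/W + 1).
# a, b, and the pixel values below are illustrative, not measured data.
def movement_time(distance, width, a=0.1, b=0.15):
    return a + b * math.log2(distance / width + 1)

# Local, in-window menu: close by, but the target is only ~20 px tall.
local_menu = movement_time(distance=300, width=20)

# Global menu bar: farther away, but the screen edge stops the cursor,
# so the effective target depth is very large.
global_bar = movement_time(distance=800, width=500)

print(local_menu > global_bar)  # True: the edge target wins despite distance
```

Playing with the numbers shows the trade-off in [4]: shrinking the travel distance helps the local menu, while any growth in effective width quickly favors the edge target.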
                                                                1. 1

                                                                  This makes sense, thank you for the detailed response.

                                                              2. 2

                                                                How does it play with focus modes other than click-to-focus? E.g. in focus-follows-mouse, if you have to move your cursor through another window en route to the global bar, the menu would rebind to the new application.

                                                                1. 2

                                                                  Focus-follows-mouse has a delay before switching applications. Move across fast, no app switching. Or go around (fairly easy with non-overlapping windows).

                                                                  1. 1

                                                                    I haven’t tried these global menus in Linux, as I’m an Enlightenment user, but how long is the delay, and is it configurable?

                                                                    I’d tie it to motion, because I appreciate my desktop being fast and all kinds of stalls annoy me. I’d imagine this to be especially true if I have to touch a pointing device.

                                                                    1. 1

                                                                      It appears to be a hard-coded 25ms delay, at least in GNOME shell. Others may implement it differently.

                                                              1. 4

                                                                For those interested in silent computing, the website Silent PC Review has a lively forum. Sadly, it has been years since the owner contributed any of his extensive reviews of CPUs, PSUs and cases.

                                                                1. 4

                                                                  http://www.fanlesstech.com is another neat blog about silent computing.

                                                                1. 3

                                                                  I use my own pw. It’s Unixy and similar to pass (a wrapper over GPG), but with no information leakage and single-file DBs.