1. 27

    Write less bloated software. Fewer CPU cycles, less memory, less power needed. Write code that sucks less.

    Write less bloated websites. Keep them small, cut the JavaScript bloat and the tracking. With fewer requests, the load on the network drops and visitors wait less.

    In the real world, try to arrange to work from home if possible. It saves resources and the time spent commuting.

    1. 33

      What about less time spent (re-)compiling binaries to change an option in your configuration? ;^)

      1. 16

        While I want to believe this because I love optimizing the living hell out of the databases I hack on, I’m unconvinced that writing more efficient software will cause anything other than induced demand. People still have the same number of highway lanes, just smaller cars using them. Maybe the CPU p-states drop a little more often in the short term, but usage expands to capacity, as with so many resources. We use the resources we have until we cannot use more.

        The only solution to using less power is to limit the power that can be used. Optimizations can happen after this vital step is achieved. Until then, we will just maintain utilization.

        Doing this in any sort of meaningfully large-scale way will require governments to properly capture the externalities associated with various forms of power production over time.

        When I make programs more efficient, I’m not really helping to encourage governments to do that, and I don’t think I’m helping things in this respect.

        1. 3

          Induced demand is definitely a thing, but I could imagine that if you write a fast alternative to some popular website AND it somehow got many users, it would be a net win.

          For example, whenever I go to weather.com I’m shocked at how slow and bloated it is. If someone wrote a really low latency version of it, and somehow ranked on Google (big if I know), then it might save power overall.

          Basically I think it’s better overall to attract demand for the efficient thing rather than the inefficient thing. Although I think you would have to be a very rich person to fund this kind of thing, because the market clearly doesn’t pay for lack of bloat in web pages and native phone apps.

          Or another alternative – if someone here actually works at weather.com, or nytimes.com, cnn.com, I bet you could really make a dent in the power bill without affecting any functionality :) Easier said than done I know.

          1. 4

            Or another alternative – if someone here actually works at weather.com, or nytimes.com, cnn.com, I bet you could really make a dent in the power bill without affecting any functionality :)

            Are you looking for this? https://lite.cnn.io/en ;)

            1. 1

              there’s also https://text.npr.org/

        2. 7

          On a somewhat related note, use languages that are more efficient power-wise, if possible. https://thenewstack.io/which-programming-languages-use-the-least-electricity/

          1. 1

            You can travel to work without too negative an impact by cycling or taking public transport.

          1. 24

            I thought this was satire. Kubernetes is a bloated mess. How does it help when the space shuttle is made out of styrofoam and duct tape?

            1. 1

              I flagged this as spam. I like and respect some of the things that came out of grsecurity/PaX. However, this blog post mostly seems like a way to promote the product.

              1. 8

                Gonna disagree pretty strenuously on that one. While they do sell a product, the post is a good breakdown, with actual code listings. I hope others don’t follow your example.

                1. 3

                  I agree with you here. And I prefer this kind of advertising over yet another bollocks node.js startup that creates blogs to recruit people. I swear to god, something dies inside of me every time I read something along the lines of “Our young and fresh startup is looking for new SOAP heroes. Apply now using our REST API!”

                  1. 1

                    I’d prefer no advertising but that’s unrealistic.

                  2. 2

                    That’s fine. I think it would be a good breakdown without the product plug and the “but we offer this service to our customers” nonsense.

                    1. 2

                      fair enough. it’s a fine line to be sure

                  3. 2

                    This feels like an ad, but with a technical mindset. I dislike their attitude the most. They may be correct, but they come across as assholes. Oh, look how great we are and how bad the kernel team is…

                    1. 2

                      True, there’s certainly an element of that, but honestly I was pleasantly surprised at how much less snipey and insulting this post was than most things I’ve seen from the PaX/grsec team (I feel like they’re usually worse in that regard).

                      1. 2

                        Yeah, if you ever read anything that grsecurity/PaX folks write it’s always the same thing. Everyone else is stupid and not doing what they’re supposed to be doing (or stealing their code and not giving credit to them) and everything they do is the proper and only way to do it. I still like some of the things they do but this attitude will always be a problem.

                      2. 2

                        Also, I’m not completely clear on when they noticed it. I hope it was at the latter end of this story, and that they reported it then. But by interspersing “we did x” between all the “they did y”, the post reads to me like “we noticed and just didn’t tell them”.

                    1. 6

                      Sounds cool, but why not support the open standard Vulkan instead? Metal is just a proprietary Apple interface, and you are playing into their hands by increasing their mindshare.

                      I may sound like Richard Stallman here, but Vulkan is the first chance in years to stop the DirectX/OpenGL/… madness between the big operating systems and unite under one banner.

                      1. 8

                        Gosh. Yeah, you’re so right, and it’s definitely something I’ve been struggling with. The reason I chose Metal was because I do iOS development for money, and I figured it would be helpful. That said, if I want to be serious about graphics programming work, I should learn Vulkan.

                        I appreciate your comment! I completely agree with you, and am constantly frustrated by my repeated failure to leave the apple ecosystem.

                        1. 4

                          You sound a bit like Stallman. :)

                          Increasing Apple’s mindshare isn’t a bad thing. I spend my days working on an Apple laptop, using an Apple phone. Cross platform is nice, but it comes at a cost of your time: not only do you have to be familiar with all platforms’ individual APIs, but you also have to know the overarching cross platform API. It’s usually not worth it, and devs have no obligation to try it.

                          1. 6

                            You do sound like Richard Stallman, IMHO. Why not comment on the actual project, instead of saying “you’re wasting your time, you should be working on what I consider to be a better thing”?

                            If you’re targeting the iOS and/or MacOS platforms, Metal is the better choice. If you’re looking to run your code on other platforms too, then Vulkan may be the better choice - but not all code must be cross platform…

                            1. 4

                              He may sound like Stallman, but that is not a bad thing, as Stallman has been proven right many times over. It does imply that, in the end, it is better to forgo some temporary convenience - in this case, tailoring a project to a proprietary graphics stack - for a solution with better long-term prospects. While this is inconvenient for those who bought into the proprietary product (in this case, Apple products of any sort), it provides an impetus for those who have not yet chosen a path to avoid those products. Once enough people choose that option, the manufacturer will have to react by opening up the product in some way or risk a slide into irrelevance.

                              Unfortunately this usually takes a long time, often longer than the effective lifetime of the product series…

                              1. 2

                                Nobody said the developer is wasting his/her time. What was said is clear:

                                Vulkan is the first chance in years to stop the DirectX/OpenGL/… madness between the big operating systems and unite under one banner.

                                Metal may be a better choice for technical reasons (speed, documentation, platform consistency), but Vulkan may be a better choice for social, moral, and potentially technical (portability, compatibility, ecosystem) reasons.

                              1. 2

                                Coincidentally, the aircraft captain passed away yesterday. May he rest in peace.

                                1. 9

                                  I’ve been looking into https://commento.io/ , but I’m not sure if I want to bother with comments at all or not.

                                  What’s your plan for anti-spam?

                                  1. 3

                                    Shouldn’t get too much spam because it’s custom, so people have to spam it manually. I have a rate limit of one post per 2 minutes, which prevents brute-force spam, a comment minimum of 20 characters, and no website links or HTML allowed, against link spam. I’ve gotten 100 or so comments since I put it up, and none were link spam or anything you’d see on WordPress. Just a few people sending junk strings through.

                                    So at the moment I’m just reviewing them all once a day and seeing if there’s anything I need to delete, but it’s been a lot less spammy than WP ever was, even with anti-spam plugins.
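
                                    For what it’s worth, those checks fit in a few lines. A rough Python sketch (the function name and the in-memory per-IP timestamp store are made up for illustration, not the blog’s actual implementation):

```python
import re
import time

MIN_LENGTH = 20    # comment minimum of 20 characters
COOLDOWN = 120     # one post per 2 minutes

last_post = {}     # hypothetical in-memory per-IP timestamp store

def accept_comment(ip, body, now=None):
    """Apply the simple heuristics described above; True means keep it."""
    now = time.time() if now is None else now
    if now - last_post.get(ip, float("-inf")) < COOLDOWN:
        return False   # rate-limited: one post per 2 minutes per address
    if len(body.strip()) < MIN_LENGTH:
        return False   # too short to be a real comment
    if re.search(r"https?://|<[a-zA-Z]", body):
        return False   # no website links or HTML allowed, against link spam
    last_post[ip] = now
    return True
```

                                    Junk strings that pass all three checks would still get through, which matches the once-a-day manual review described above.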

                                    1. 11

                                      Spam tools are smart. They know to put more than 20 characters into a <textarea> and will figure out that the name field has no limits. So expect your site to be “crawled” some day; that happens a lot. Any <form> found is scrutinized intelligently. The 2-minute cooldown of your blog would turn this spam offensive into a DoS on your own behalf.

                                      I try to avoid captchas as much as possible and instead strive to include at least one field with a strict input policy. I have never had problems with that approach. If that is not possible, consider adding a simple captcha, which can serve as such a “strict input field”.

                                      To give an example, the question “What feels wet on your skin when you go outside and it is present?” has the obvious answer “rain”. No current AI can answer this simple question, and we are far from solving such questions with AI.

                                      Captchas posed as classification problems are just a means for Google to train their NNs. Please mention that if you ever write an article about captchas, as this is a fundamental problem and has skewed the entire captcha landscape. There are much simpler and less annoying captcha methods. Oh, and they don’t track you. ;)
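
                                      Such a question-answer check is tiny to implement. A hypothetical Python sketch (the second question and all names are invented for illustration):

```python
# A "strict input field" captcha: free-form questions that humans answer
# trivially, each checked against one exact expected answer.
QUESTIONS = {
    "What feels wet on your skin when you go outside and it is present?": "rain",
    "What colour is a cloudless daytime sky?": "blue",
}

def check_captcha(question, answer):
    """True if the submitted answer matches the expected one."""
    expected = QUESTIONS.get(question)
    return expected is not None and answer.strip().lower() == expected
```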

                                      1. 12

                                        I would be really curious to see a modern spam tool. Maybe I should infiltrate a spam gang or something.

                                        From my experience with MediaWiki, it seems they are very easily extended with custom form-filling logic, and the cost of setting up a targeted attack is quite low.

                                        At the time of those events, the wiki I ran wasn’t super popular, maybe a hundred visitors a day or so. The old reCAPTCHA had become useless against automated attacks, so I made a simple QuestyCaptcha plugin with a small number of questions, like “what OSI layer does a router operate on?”. To my surprise, it was broken. We were facing a targeted attack, and to make that economically viable, the spam machine had to be easily configurable. But QuestyCaptcha is a popular MW module, and someone probably made a spam plugin that makes it as easy as adding question-answer pairs to the config.

                                        I wrote a custom domain-specific captcha that asked the user to enter the broadcast address of a random network, with a prefix length range that makes the mental calculation trivial. That is easy to break without any AI, of course, but it requires actually writing some code specially for one site. For a while, things finally went quiet. But then it was broken too.
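
                                        For anyone curious, the check itself is a one-liner with Python’s standard ipaddress module (the function name and example networks are mine, not the wiki’s actual code):

```python
import ipaddress

def check_answer(network, answer):
    """True if `answer` is the broadcast address of `network`."""
    net = ipaddress.ip_network(network, strict=True)
    return ipaddress.ip_address(answer) == net.broadcast_address
```

                                        E.g. for 192.168.10.64/26 the expected answer is 192.168.10.127.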

                                        We gave up and added Akismet, which had an absurdly high false positive rate and made the wiki nearly impossible to edit, so we gave up on that too and switched to manual account registration. Unsurprisingly, the wiki died.

                                        The successor uses GitHub plus pull requests plus automatic deployment to readthedocs, and activity there is higher than it ever was on the wiki, but it still feels like a spectacular defeat for Web 2.0.

                                        1. 2

                                          I’ll also add that some use humans in the process. Mechanical Turks, folks that solve them to access “free” sites with illegal content, etc. They have piles of people solving piles of CAPTCHA’s free or dirt cheap.

                                          1. 4

                                            There are even services that provide an API for solving captchas, listing average response time and the number of workers online. If an API for the programmatic exploitation of humans is not the cyberpunk dystopia science fiction writers warned us about, I don’t know what is. ;)

                                        2. 3

                                          The 2-minute cooldown of your blog would turn this spam-offensive into a DoS on your behalf.

                                          What do you mean here?

                                          1. 1

                                            Some sites (HN, maybe?) have a cooldown period before anyone can respond to a given comment. Sounds like your limiting process is per user, so it shouldn’t result in a DoS.

                                        3. 2

                                          Interesting. After my experience with Wordpress and MediaWiki, I started to think of comment spam as an intractable problem unless you have a lot of resources to throw at it. Maybe I should give comments a try again.

                                          1. 1

                                            I personally think it’s only a problem if your blog gets popular. So anyone should weigh their odds. Otherwise, simplistic captcha is relatively easy to implement. For example there’s a python captcha lib I once used to generate a set of 1000 images from strings. If anyone trained an AI on it, my plan was to just regenerate the images on a schedule. No one ever tried.

                                            If I did it again I would require a verified email to post comments; deter spam and build an email list for referral marketing. Why not? Everyone else does.

                                        4. 2

                                          I’d never heard of commento before, but I just checked it out and decided to move my blog’s comments over to it. It was really easy to migrate from Disqus. It took like 10 minutes total. I already like it much, much better than Disqus.

                                        1. 3

                                          Great work! This is how a comment section should look. It reminds me of the (g)olden days when you didn’t have to sell your soul, link your social media, or do three email validations to post a comment.

                                          In a way, these centralized comment tools (Disqus, …) inhibit the free flow of information by forcing you to link your comment to your online identity in some way. There are unpopular opinions in every field, and they get silenced that way.

                                          By running your own comment section, anonymity is truly possible and you still have full control.

                                          1. 2

                                            Yeah. The problems that come along with anonymity are present too, unfortunately.

                                          1. 10

                                            To shed some light on this: this subdomain is used more or less internally to keep track of the patches for our different tools. Many people upload their patches to our site but often are not invested enough to keep them updated, mostly because it is a bit of work to test them by hand for each revision, so one of us had the great idea to automate the process and turn it into this backend. It is also a way to engage people who want to contribute but are not yet well versed in C or our other tools. We can now point them to this domain and they can try to fix the “red” patches.

                                            The subdomain name “gunther” stems from Mats Söderlund’s stage name “Günther”. His music was more of an inside joke (but I think it’s great!) and we had this subdomain even before the patches-tool was there as a joke. It was then later used to “test” this patch tool and then we just stuck with it. There are no personal subdomains on suckless.org and this isn’t one either.

                                            1. 1

                                              What do you make of Bitreich’s contention: “Suckless failed?” It’s at the top of their manifesto. :)

                                              1. 2

                                                Bitreich goes a different way, and I respect them for that. Back when Bitreich was formed, there were some problems in our organization that we addressed, but Bitreich goes to some extremes we wouldn’t want to follow, and some of their points of criticism, frankly, still stand.

                                                I don’t lose sleep over the Bitreich manifesto, though. In a way, I understand why they wouldn’t change it now. Theirs is a more purist approach to the suckless idea, and thus they must reflect suckless in their manifesto to define their path.

                                                From an absolute perspective, even suckless is considered “elitist”. I firmly believe that our ideas can benefit many areas, and it’s legitimate to accept some tradeoffs in the process of carrying an idea to more people, increasing the “mindshare”. Bitreich drops the goal of popularity and prioritizes purism. To give an example, their website is only accessible via Gopher, not via HTTP. One can thus consider them “elitist” with regard to suckless. From an absolute perspective, it remains to be seen how many people even get to hear of their ideas, which is a shame, as they have many good ideas and write excellent software.

                                                1. 4

                                                  Can you elaborate on the nature of those problems (in suckless)? Can you recommend in particular any Bitreich ideas or writings, if you have any good ones in mind, as distinct from what a suckless position/idea/implementation would be?

                                                  1. 1

                                                    Over the years we received quite substantial donations, and in 2015 we decided to form a legal entity (a non-profit, of course) to be able to receive them “legally”. It also helped us remove a single point of failure and let the servers etc. run in the name of the entity rather than a single person.

                                                    Back then we discussed the role of the entity and had the idea that every contributor would become a member of it, so all the processes within suckless became “legally backed”. However, we unwittingly left some key people out of that process, and I made some personal mistakes myself that I have since learnt from. To cut it short, it motivated some people to found another project with a different structure.

                                                    We discussed the issues with them and made changes. For example, we still have the legal entity but don’t try to bloat it into something that it isn’t. Legal representation is good, just in case, but it is inflexible. We now have a group of admins that all have equal power, and they make all the decisions and have the keys to everything. I may be first chair of the legal entity, but I am not the “leader” of suckless, and any idea is first discussed with the admins, as it should be.

                                                    However, the split had already happened and Bitreich had set up their own game in the meantime, and some (rightfully) didn’t forgive my mistakes. They also host conferences and have their own projects. They are doing fine and have cool ideas! As far as I’m concerned, there is no animosity between us and we benefit from each other.

                                                    To answer your question: Best take a look at their projects on their git to get an idea about what their interests and goals are compared to suckless. The coding standards themselves don’t differ as far as I know. One focus is tools for Gopher, but there is more than that to explore. :)

                                                    1. 2


                                                      1. 2

                                                        You are always welcome.

                                            1. 6

                                              I like the idea of Gopher as a suckless alternative to the ever more complex web. However, I wonder why they didn’t implement a new, simple protocol from scratch. Gopher suffers from a lot of legacy and has had virtually no usage apart from sporadic support in some terminal web browsers.

                                              The tradeoff is critical in my opinion, and if Gopher takes off, it is a lost chance to simplify the protocol drastically.

                                              1. 13

                                                There is a new protocol being developed called Gemini that is about as simple as Gopher, but 1) includes status codes (not found, redirect, okay, temporary error, permanent error), 2) uses MIME types when delivering content, and 3) is served exclusively via TLS. It’s not finalized yet, but there are at least three Gemini servers running that I know of.

                                                1. 5

                                                  The complexity of the web does not come from HTTP, so why focus on the protocol? The issue is more about HTML+CSS+JavaScript. Why not build a web browser that only accepts Markdown instead of HTML?

                                                  1. 1

                                                    Yeah, HTTP is pretty simple. HTML 3.2 was pretty straightforward. Even Dillo runs it. I figure a subset of HTML mixed with a non-Turing-complete subset of CSS 1-3 could be fine. If scripting is needed, make it optional, sandboxed, and native, like Oberon’s Juice.

                                                    1. 1

                                                      Yeah. I see nothing wrong with HTTP. In fact, it was actually pretty fun to develop an HTTP server (see quark). Gopher is fighting an uphill battle, of course. For the benefit of the web as a whole, I think it makes more sense to encourage simplicity. Switching over to a completely new technology does not sound realistic, especially when you can’t serve ads with it.

                                                      1. 1

                                                        Not being able to serve ads is the point of Gopher. At least that’s the impression I get from proponents of the protocol here on Lobste.rs and elsewhere.

                                                  1. 5

                                                    I’m glad to see how far Blender has come in recent years. I started working with it at version 2.45 back in 2008 and took a longer break from 2016 on. Back in the day, it was always said that Blender 3.0 would be a full-fledged program that could easily live up to any commercial software, and it was far, far in the future. Just like GIMP, it’s not difficult to recommend it to anybody now. That was different just a few years ago, and Blender 3.0 is only a few years ahead of us.

                                                    1. 4

                                                      I would just release the client code and keep the server code locked for now. Then, in a few months, you can see how much traction it has gotten and release the server code then. The advantage is that in the months after launch you can further refactor and improve the code.

                                                      1. 6

                                                        I heavily disagree. Promises of “it might be open source later” are often worse than nothing; I certainly distrust them and am unlikely to use a project that says such things without backing them up.

                                                      1. 4

                                                        See here for a link to the diff (on freshbsd.org), if one is interested.

                                                        Just as a suggestion to the OpenBSD developers: OpenBSD development is one of the few things whose commits I actually care about reading. Thus, I find it sad that the ‘official’ commit log, like the one mentioned above published on the mailing list on marc.info, does not provide a direct link to the diff. I used to be subscribed to the commits mailing list, but saw no reason to stay subscribed for this reason. But maybe I am just missing something.

                                                        I am sure that providing direct links to the diffs with the commits (or putting them inline) would improve engagement and increase code review drastically.

                                                        1. 2

                                                          Agreed. This mirror on GitHub stays up to date and it’s where I went to read the commit: https://github.com/openbsd/src/commit/707316f931b35ef67f1390b2a00386bdd0863568

                                                        1. 16

                                                          It’s great to see this topic get such popular coverage in the WP. It’s one thing to increase the mindshare within the developer community, another to increase it on the ‘consumer’ end.

                                                          1. 2

                                                            Consumers don’t care, though. They have said as much since the beginning.

                                                            1. 4

                                                              Most don’t. There’s a subset that does. I’ve also found you can get more people onto something if you put it in a way they understand, with an easy solution. I’ve gotten a few people onto Firefox that way. Social apps like Signal they mostly ignore. There’s potential to grow the niche with that kind of awareness.

                                                              You’re right that not caring is the default, though. It’s also the default that brings in the most money.

                                                          1. 11

                                                            A really good writeup I’ll keep bookmarked. There is just one thing I don’t agree with:

                                                            When you’re beginning with any language/library/framework, check their CoC; they will protect you from being harassed for not immediately getting what is going on instead of blocking you from telling them what you think.

                                                            I’m mentioning this ’cause a lot of people complain about CoC, but they forget that they allow them to join in any project without being called “freaking noob” or “just go read the docs before annoying us”.

                                                            Also, remember that most people that are against CoCs are the ones that want to be able to call names on everyone.

                                                            There are many small and big communities that choose not to have a CoC, for good reasons. It’s a bit of a stretch to imply that they are all toxic. To be honest, I have seen much more drama and toxicity in CoC-“guarded” communities. A CoC provides an opportunity for people who can’t contribute code or documentation to the project, but want to be a part of it, to become “CoC lawyers” and go around policing people on mailing lists and in general discussions over minor things.

                                                            But I don’t want to start an off-topic discussion about CoC’s here, as it is a complex topic and has many facets. If you are happily using a CoC for your project, godspeed to you.

                                                            1. 1

                                                              I’m not a heavy user of floating point arithmetic, but I’ve been bitten by it enough times to be interested in learning about alternatives. One basic thing that’s always bothered me about Unums/Posits is that there are three fields (the regime, the exponent, and the fraction) that all vary in size inside a bit vector that also varies in size, and it’s not well explained how those sizes are chosen or how they affect one another.

                                                              Reading between the lines of the paper that @FRIGN linked, I think the answer is something like:

                                                              • Posits are a pattern for representing numbers, and the pattern can be tuned for specific applications
                                                              • Posit(E,N), where N > 0 and E <= N - 1, is a specific instance of the Posit pattern, where N is the total storage size in bits and E is the (maximum?) number of bits used for the exponent
                                                              • In any Posit based calculation, all Posits must share the same values for E and N (or at least E; perhaps N can be filled out with padding, like sign-extension for signed integers?)
                                                              • When presented with N bits representing a Posit:
                                                                • the first bit is the sign bit
                                                                • the next bits are the regime, which ends at the first inverted bit as described in the paper
                                                                • if there are fewer than E bits remaining after the sign bit and regime, they represent the exponent (but is it the most or least significant bits of the exponent?)
                                                                • if there are E or more bits remaining, the next E bits are the exponent
                                                                • any remaining bits represent the fraction (with an implicit 1 bit at the beginning)
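
                                                              If that reading is right, decoding is fairly mechanical. Here’s a rough Python sketch of it, purely my own reconstruction from the paper and not a reference implementation (in particular, I’m assuming a truncated exponent field holds the most significant bits of the exponent):

                                                              ```python
                                                              # Sketch: decode an n-bit posit with up to `es` exponent bits.
                                                              # Based on my reading of the paper above; treat as illustration only.
                                                              def decode_posit(bits, n, es):
                                                                  """Decode integer `bits` holding an n-bit posit; return a float."""
                                                                  mask = (1 << n) - 1
                                                                  bits &= mask
                                                                  if bits == 0:
                                                                      return 0.0
                                                                  if bits == 1 << (n - 1):
                                                                      # The single 10...0 pattern is reserved (±inf / NaR,
                                                                      # depending on the formulation).
                                                                      raise ValueError("reserved pattern (±inf / NaR)")
                                                                  sign = -1.0 if bits >> (n - 1) else 1.0
                                                                  if sign < 0:
                                                                      bits = (-bits) & mask  # negation is two's complement
                                                                  # Regime: a run of identical bits after the sign bit,
                                                                  # terminated by the first inverted bit.
                                                                  body = bits & ((1 << (n - 1)) - 1)
                                                                  first = (body >> (n - 2)) & 1
                                                                  run = 0
                                                                  i = n - 2
                                                                  while i >= 0 and ((body >> i) & 1) == first:
                                                                      run += 1
                                                                      i -= 1
                                                                  k = run - 1 if first else -run
                                                                  i -= 1  # skip the terminating bit
                                                                  # Exponent: up to `es` bits; if fewer remain, they are
                                                                  # (assumed here to be) the high bits of the exponent.
                                                                  rem = i + 1
                                                                  e_bits = min(es, max(rem, 0))
                                                                  e = ((body >> (rem - e_bits)) & ((1 << e_bits) - 1)) << (es - e_bits) if e_bits else 0
                                                                  rem -= e_bits
                                                                  # Fraction: whatever is left, with an implicit leading 1.
                                                                  frac = body & ((1 << rem) - 1) if rem > 0 else 0
                                                                  f = 1.0 + frac / (1 << rem) if rem > 0 else 1.0
                                                                  return sign * f * 2.0 ** (k * (1 << es) + e)
                                                              ```

                                                              For example, with n=8 and es=0 the pattern 01000000 decodes to 1.0 and 11000000 (its two’s complement) to -1.0, which matches the examples in the paper as far as I can tell.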
                                                              1. 3

                                                                The IEEE 754 floating-point numbers have a fixed-size exponent and mantissa. There are some tricks to still squeeze out some precision near zero using subnormal numbers (see Chapter 2 of my thesis for a complete introduction), but in general it’s a relatively “rigid” structure. Another point is the high amount of waste on NaN representations (there are a lot; see Table 2.1 on page 8), ranging from 0.05% up to 3.12%.

                                                                Not to become too technical, but the revolutionary idea behind posits is the following: posits skew the idea of an exponent a bit using the concept of a “regime”, and you end up with no wasted representations. There is no concept of NaN with posits; instead an interrupt is raised. I think this is a cool idea, as NaN represents an “action” rather than a value, which is bad design.

                                                                I agree with your sentiment that Gustafson’s paper is hard to read when it comes to these things. This is why I chose to build a theory for Type-2 Unums in my thesis, as their introduction was equally difficult with the slides presented back then. Everyone has their strengths and weaknesses. I really like Gustafson’s visualizations, but the paper lacks formality. Maybe I’ll come around to writing a paper with a formal introduction at some point, but there actually is a refined, published version of Gustafson’s paper here.

                                                                For a critical look at posits, I recommend this paper.

                                                              1. 5

                                                                The talk is a bit old, and given that I was involved in the research back in 2016/17, I’d like to share some information here.

                                                                There are three versions of Unums. Type-1 Unums were described in Gustafson’s book, as referenced in the talk. Type-2 Unums were a set-theory-based approach I studied in my bachelor’s thesis. Probably influenced by its results, Gustafson went back to an approach based on Type-1 Unums a few months later, called Type-3 Unums (“posits”), which is more hardware-friendly and closer to IEEE 754 floating-point numbers. Refer to Gustafson’s paper for posit arithmetic (Type-3 Unums), as the other types are not as important.

                                                                The paper is very well-written and will most probably convey more information in 40 minutes of reading time than watching this 40-minute talk.

                                                                1. 1

                                                                  Thank you for both links! I have Gustafson’s book on my shelf, and it’s fascinating… but I haven’t yet put any effort into following it up. My intention was to see how much of a hardware implementation I could build. I will definitely study the posit paper before I start down that path.

                                                                1. 12

                                                                  It’s your own fault if you don’t switch to Firefox at this point as a privacy-conscious person. I know it has its flaws, but all other alternatives are WebKit derivatives, and I find it to be an excellent browser.

                                                                  If we let Firefox die, it will be the end of the open web as we know it. The standards would still mostly be open, but the software you use to access them wouldn’t be.

                                                                  1. 2

                                                                    Counterpoint: pick your favorite WebKit-based browser project and contribute whatever you can to keep that codebase out of Google’s exclusive control. We have more degrees of freedom than just consumer choice.

                                                                    (I use Firefox myself, but I don’t think a world with only one privacy-respecting browser would be an improvement on what we have now. Also, I’m not sure that Mozilla is a basket I would want all my eggs in.)

                                                                  1. 1

                                                                    Many people will probably not like what they see in these slides because of their political position. It was very refreshing to see a scientific “formalization” of both ethical approaches.

                                                                    I see a comparable effect in terms of “equality”. There can be two outcomes: “equal rights” or “equal status”. Many people will claim to support both, but one opposes the other: you can either have equal rights or equal status. To give an example: if you demand a 50% female quota in leading positions, you are against equal rights, as women are then treated differently from men. You could weigh both outcomes in some way, but one cannot truly exist while the other prevails.

                                                                    It would be nice to see more reasonable philosophically weighted debates on these things instead of this constant supercharged reproduction of political dogmas.

                                                                    1. 9

                                                                      I wonder what’s “refreshing” in this slide deck to you. I recommend supplementing with the original talk.

                                                                      The author imagines that racism cannot be unintentional, and that racist stereotypes have “an element of truth”. At the same time, they are unwilling to choose security over convenience, to stop calling heuristics “algorithms”, or to question why we have designed certain systems. This combination leads to a worldview where racism is a cost of doing business.

                                                                      As an example, the author discusses an algorithm which detects people blinking while a photograph is being taken. “If you can fix it, do that… I certainly can’t make my algorithm detect every single error,” they say. Implicit in this claim are the notions that algorithms are inherently buggy/lossy, that fixing showstopping product issues must be balanced with shipping the product, and most dangerously, that sometimes it’s acceptable to shrug, give up, and have a product with some racist behaviors.

                                                                      A legal theme in the USA that pervades the author’s critique of procedural fairness is intersectionality, which has become popular for legal arguments in much the same way that set theory has become popular for mathematical work.

                                                                      During the section about “maximizing profits”, the author takes a sickening tone, reducing people to data for the purpose of optimizing business. They fail to point out the typical humanist and socialist arguments against doing so, but claim to have objectivity; this is a glaring blind spot in their conceptualization of people. Indeed, you can hear it in their tone as they talk about how redlining “is considered…a big harm.” They don’t care whether people are harmed, or whether people are considered harmed; they care only about the numbers in their ethical calculus.

                                                                      You mention “equal rights”. This seems a wonderful opportunity to remind people what the term means, and to also reinforce why it is prime. Equality of rights ensures that people are treated without bias by The State; the biases of The State’s actions against its people are then assuredly the biases of its officers, and it is The State’s obligation to enforce its own rules against itself. From a highly social action, we gain a highly social institution, and from equality of rights, we gain an existence that is in stark contrast to this corporatized smorgasbord of data.

                                                                      The section on modelling criminality is typical for crime science, and completely in line with standard research on the subject in terms of its assumptions.

                                                                      The author’s maths are alright, but the conclusions are quite wrong. Their analysis of base rates misses the base rate fallacy, their correlation between race and crime completely omits the well-known hidden variable of socioeconomic status (“wealth”), and their concept of “mathematically guaranteed bias” ignores statistical significance.

                                                                      They fail to link to the “impossibility theorem”. There exist overviews of the main result and concepts, but I want to offer my own conclusions here. First, note that the authors of the paper imply repeatedly, e.g. on p1 and p3, that their results generalize from decision-making machines to decision-making panels of people. We may comfortably conclude that the problem is with our expectation that bias can be removed from systems, not with the fact that our biases are encoded into our machines. Another conclusion which stands out is that compressing datasets will cause the compressor to discover spurious correlations; stereotyping, at least in this framework, is caused by attempting to infer data which isn’t present, much like decompressing something lossily-encoded. I would wonder whether this has implications for the fidelity of anonymized datasets; does anonymizing cause spurious correlations to form?

                                                                      I’m surprised that they waste time doing utilitarian maths and never mention that utilitarian maths leads to utility monsters or The Repugnant Conclusion.

                                                                      My choice quote:

                                                                      So, these are cases where there’s significant predictive power in demographic factors. So, your algorithm will actually be more accurate if you include this information, than if you exclude it.

                                                                      I wonder whether they understand why legislation like the Civil Rights Acts exists. It doesn’t come through in their tone at all; when they talk about the problems of inequality, they don’t discuss systemic racism. They discuss how poor FICO Scores are, but don’t point out that FICO is a corporate entity like the credit bureaus. No consideration is made for systemic improvements. To the speaker, the banks are an immutable and unchanging wall whose owners always steer it towards profitability by careful management of balance sheets; any harm that they do is inadvertent, “second-order”, a result of tradeoffs made by opaque algorithms and opaque people trying their best to be “fair”.

                                                                      Their closing “meta-ethical consideration” is probably as good as it can get, given the constraints they’ve placed on themselves. If we can’t challenge the system, then the best that we can do is carefully instrument and document the system so that it can be examined.

                                                                      1. 4

                                                                        Thank you for this comment. The slides alone were giving me a weird vibe, but the talk cements that feeling.

                                                                        Approaching these issues through utilitarianism and formalism feels like completely the wrong approach and borders on scientism. After all, the author works for a lending company, so it’s not surprising he’s trying to paint bias as inevitable.

                                                                        1. 2

                                                                          I am incredibly grateful for your detailed dissection of this piece. The piece strikes me as pretty similar to the infamous James Damore memo, at least in its conclusions (trying to justify bigotry by an appeal to science), though the arguments it advances to get there are different. As somebody whose self-declared job as an activist involves figuring out messaging strategy for countering bullshit, I was deeply distressed by these slides because it would be better to have a coherent response ready to go, but I didn’t have time to dissect it in detail since there’s other stuff going on and no proximate need. I’m sure I’ll be referring back to your comment as necessary.

                                                                          1. 2

                                                                            Yes, this article is propaganda. Thank you for covering why much more thoroughly than I was willing to put the effort into doing.

                                                                            1. 0

                                                                              In the event, I just posted it to /r/SneerClub.

                                                                          2. 5

                                                                            I think that many people may agree with some pieces of what you’re saying, while finding your example to be inaccurate and harmful. I want to point this out so that people can be very thoughtful about whether and how they engage, and in particular so that people can remember to not treat these various separate ideas as if they’re one piece that must be accepted yes/no.

                                                                            It’s very easy, when responding to positions that are put forward as if they’re hard truths on contentious issues, to inadvertently accept some piece of a premise without understanding its full context and the harm it causes. People who try to calm things down then sometimes wind up exacerbating harms, instead. I think very highly of lobste.rs users and your critical thinking ability, but I still want to urge care in replies to this thread.

                                                                            I’m intentionally not taking a position on the actual topic, at this time, so that I can keep my personal feelings out of this to the extent possible, though that’s never 100%.

                                                                            1. 2

                                                                              Thank you for your thoughtful answer. If you look closely, I also haven’t taken any position in this regard and have just given an example of such a concept.

                                                                              Where I should’ve been clearer is that the slides, in the end, actually give an idea of how to “solve” such a problem, namely by treating this process as a constrained optimization problem. Looking at “equal rights” versus “equal status”, this means you could, for instance, proclaim “equal rights” as your main goal, but under constraints reflecting certain structural conditions that steer processes toward a more equal-status outcome. The thing is, if we discuss the philosophical ideals alone, there is little wiggle-room; a good solution must be found that lies in between both concepts.

                                                                            2. 2

                                                                              Isn’t your argument failing to do exactly what you praise the article for doing? To quote the last slide:

                                                                              formalize your ethical principles as terms in your utility function or as constraints

                                                                              Things like “equal rights” and “equal status” seem very poorly defined in this context vs. the concepts of procedural and group fairness etc. outlined in the article.

                                                                              1. 3

                                                                                What I think is important is that the presenter first gives the two “extremes”, namely the goal of utilitarianism and the goal of equity (each with its pros and cons), and in the end gives a possible solution to both by applying trade-offs to one goal so the other is at least partially respected.

                                                                                That’s what I meant by the following sentence.

                                                                                You could weigh both outcomes in some way, but one cannot truly exist while the other prevails.

                                                                                From what I understand, “equal rights” means that any individual has the same rights, regardless of any traits or abilities; “equal status” consequently means that, regardless of any traits or abilities, there is a “fair” outcome for each individual, such that the structures reflect an equal status for all sub-groups. Granted, it is relatively easy to define for men and women, but much harder for other matters.

                                                                                Let me bring my point across by limiting ourselves to the men/women matter for now (I agree that equal status, at least, is harder to define for other cases): neither equal status nor equal rights is the golden way.

                                                                                Going with equal status would be crazy, e.g., for engineering positions, where only a minority of graduates are female: an equal status would strongly discriminate against men, such that even much more qualified men wouldn’t get a position over women. Turned the other way around, equal status would also discriminate against women in female-dominated jobs.

                                                                                Going with equal rights alone would possibly not bring change to areas that are male- or female-dominated due to structures. People weigh the importance of these structures differently, but sometimes equal rights alone are not enough to bring about the change that is desired, because there is written law and there are societal norms, which are often different. Again, I’m taking no position here, as this is not the point.

                                                                                The presenter thus proposes this weighted approach in the end, which allows a middle ground that includes both factors. For instance, one could discuss an approach of “equal rights” that also reflects structural changes as constraints to this optimization problem. Or you could regularize the optimization problem with a weighted “status” term. In the long run this tends to get complex if you think about it, so these are just thoughts. However, it’s nice to see that it has been formalized like that.
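
                                                                                To make the “regularized” idea concrete, here is a toy sketch (all numbers invented, nothing from the talk) of picking per-group approval thresholds that maximize a utilitarian objective minus a weighted “status” penalty on the gap between the groups’ approval rates:

                                                                                ```python
                                                                                # Toy constrained-optimization sketch with made-up data:
                                                                                # each applicant is (model score, repays?).
                                                                                import itertools

                                                                                group_a = [(0.9, True), (0.8, True), (0.6, True), (0.4, False), (0.2, False)]
                                                                                group_b = [(0.7, True), (0.5, False), (0.5, True), (0.3, False), (0.2, False)]

                                                                                def utility(applicants, threshold):
                                                                                    # Profit proxy: +1 for approving someone who repays, -1 otherwise.
                                                                                    return sum(1 if repays else -1
                                                                                               for score, repays in applicants if score >= threshold)

                                                                                def approval_rate(applicants, threshold):
                                                                                    return sum(score >= threshold for score, _ in applicants) / len(applicants)

                                                                                def best_thresholds(lam):
                                                                                    # Grid-search both thresholds for a given "status"-penalty weight lam.
                                                                                    grid = [i / 10 for i in range(11)]
                                                                                    def objective(ta, tb):
                                                                                        gap = abs(approval_rate(group_a, ta) - approval_rate(group_b, tb))
                                                                                        return utility(group_a, ta) + utility(group_b, tb) - lam * gap
                                                                                    return max(itertools.product(grid, grid), key=lambda t: objective(*t))

                                                                                # lam = 0 is pure utilitarianism; a large lam forces equal approval rates.
                                                                                print(best_thresholds(0.0), best_thresholds(100.0))
                                                                                ```

                                                                                The weight lam is exactly the knob I mean above: at zero you get the purely utilitarian optimum, and as it grows the “status” constraint dominates.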

                                                                                To give one more example, unrelated to the matters of equality discussed above: if you gave a machine the task of solving hunger in Africa, a valid solution in the machine’s sense would be to nuke the entire continent and wipe out all life on it. No humans = no hunger. This is because the machine hasn’t been given proper “constraints” for the solution. The main objective of “minimizing” hunger has to be matched with constraints that are easily forgotten when we, as humans, solve such problems: constraints about ethics, sustainability and so forth. The same problem, albeit a bit less dramatic than my example, was presented in the article. To come full circle, the point of the article is to give AI a sense of ethics it understands. We humans have a natural “limit” when it comes to realizing solutions; big corporations and governments are usually constrained only by the outermost extent of the law and often act unethically. By thinking about laws and the lawmaking process, or just about algorithm design, we simultaneously think about problems that inform both better lawmaking and better AI design.

                                                                                tl;dr: I may have been a bit terse with my earlier wording, so I’ve explained it in more depth here.

                                                                                1. 2

                                                                                  I see what you’re getting at. As sanxiyn said, “equal rights” and “equal status” do seem to map to procedural and allocative fairness. I’m not sure that I agree that “going with equal status would be crazy”, but obviously there are tradeoffs there, and this way of casting the discussion allows you to specifically discuss those tradeoffs, which is nice.

                                                                                  Thank you for taking the time to explain your thoughts in more depth.

                                                                                2. 1

                                                                                  I think “equal rights” and “equal status” are approximately procedural fairness and group fairness. I think both are the wrong things to focus on, as they result in things like FICO being biased against Asians. See my other comment.

                                                                              1. 12

                                                                                This is a very well-written article. I really dislike GitHub and hate working with it. Especially for small changes, it is very cumbersome to open a pull request and deal with all the kitchen-sinking. I much prefer just sending a patch to a mailing list, where it can be discussed and merged.
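
                                                                                For the mechanics, the whole round trip is only a few commands. A rough sketch in a throwaway repository (the send step is shown but not run, since `git send-email` needs a configured mail setup, and the list address is made up):

                                                                                ```shell
                                                                                # Sketch of the patch-by-email round trip in a scratch repository.
                                                                                set -e
                                                                                dir=$(mktemp -d)
                                                                                cd "$dir"
                                                                                git init -q -b main project
                                                                                cd project
                                                                                git -c user.name=A -c user.email=a@example.com commit -q --allow-empty -m "initial"

                                                                                # Contributor: commit a change on a topic branch...
                                                                                git checkout -q -b fix
                                                                                echo "fix" > fix.txt
                                                                                git add fix.txt
                                                                                git -c user.name=A -c user.email=a@example.com commit -q -m "add fix"

                                                                                # ...and turn it into a mailable patch file.
                                                                                git format-patch -1 --stdout > ../fix.patch
                                                                                # Sending it is one command (not run here; address is hypothetical):
                                                                                # git send-email --to=devel@lists.example.org ../fix.patch

                                                                                # Maintainer: apply the patch straight from the mailbox file.
                                                                                git checkout -q main
                                                                                git -c user.name=M -c user.email=m@example.com am -q ../fix.patch
                                                                                git log --oneline -1
                                                                                ```

                                                                                `git am` preserves the original author and commit message, which is part of why the mailing-list flow works so well with Git.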

                                                                                1. 9

                                                                                  Isn’t code browsing of the pull request on the web much more convenient than applying the patch locally? I’ve used both GitHub and GitLab pull-request flows on a lot of commercial products, and it would’ve been a pain to go through the email process.

                                                                                  TBH, I can’t remember the last time I used email aside from automatic notifications and some headhunters (most already prefer LinkedIn and messengers anyway).

                                                                                  1. 20

                                                                                    Here’s a video that demonstrates the email-based workflow using aerc. This isn’t quite ready for mass consumption yet, so I’d appreciate it if you didn’t repost it elsewhere:


                                                                                    1. 5

                                                                                      Wow… I’ve known for years that the git+email workflow underpins a more distributed model of open-source development, but all my experience is on GitHub, so making the switch has felt too difficult. This article and this video together make me feel compelled (inspired?) to give it a go. aerc looks amazing.

                                                                                      1. 6

                                                                                        Consider giving sourcehut a try, too :)

                                                                                        1. 3

                                                                                          Thanks for making sourcehut!

                                                                                          I took a look a few times but I can’t seem to find any obvious documentation on how to set up and configure it. I fully acknowledge that I may be blind.

                                                                                          1. 5

                                                                                            I assume you don’t want to use the hosted version? If you want to install it yourself, instructions are here:


                                                                                            There are only a small handful of installations in the wild, so you might run into a few bumps. Shoot an email to ~sircmpwn/sr.ht-discuss@lists.sr.ht or join #sr.ht on irc.freenode.net if you run into issues.

                                                                                      2. 2

                                                                                        Having tried to record a few casts like this, I know how hard it is to do a take with few typos or stumbles over words. Well done.

                                                                                        Using a terminal emulator in the client is a cool idea :). I usually use either VS Code or Sublime Text, though this gives me an idea for a terminal ‘editor’ that just forwards to a GUI editor but displays either a message or mirrors the file contents.

                                                                                        1. 2

                                                                                          I can’t do this either :) this is edited down from 10 minutes of footage.

                                                                                        2. 1

                                                                                          I have done the same thing, albeit differently, in my client[0]! I don’t have any casts handy, but the whole patch-apply, test, HEAD-reset, branch checkout/creation flow is handled by the client. I’ve also started Patchwork integration.

                                                                                          [0] https://meli.delivery/ shameless plug, because I keep posting only about this lately. Guess I’m too absorbed in it.

                                                                                          1. 1

                                                                                            Ah, it’s this! I filed an issue because there doesn’t appear to be any public source, and I wanted to try it.

                                                                                        3. 1

                                                                                          I think locally you can script your workflow to make it as easy as you want. However, not many people take the time to do this (I haven’t either).

                                                                                        4. 2

                                                                                          This is a very well-written article. I really dislike GitHub and hate working with it. Especially for small changes, it is very cumbersome to open a pull request and deal with all the kitchen-sinking. I much prefer just sending a patch to a mailing list, where it can be discussed and merged.

                                                                                          I think a bridge would be nice, where emails sent can become pull requests with no effort or pointless github ‘forks’.

                                                                                          1. 2

                                                                                            Worse yet, barely anyone remembers that Git is still a “real” distributed version control system and that “request pull” exists (and, yes, that’s “request pull”, not “pull request”). The fact that GitHub called their functionality a “pull request” is somewhat annoying as well.

                                                                                            Edit: I’m glad the article mentions this in the P.S. section - and I should really read the entire article before I comment.
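
                                                                                            For anyone who hasn’t seen it: `git request-pull` just drafts the message a maintainer would receive; no forge is involved at all. A quick sketch in a throwaway repository (using a local path where a public URL would normally go):

                                                                                            ```shell
                                                                                            # Sketch: draft a pull request message with plain git.
                                                                                            set -e
                                                                                            dir=$(mktemp -d)
                                                                                            cd "$dir"
                                                                                            git init -q -b main project
                                                                                            cd project
                                                                                            git -c user.name=A -c user.email=a@example.com commit -q --allow-empty -m "initial"
                                                                                            git tag v1.0

                                                                                            # Do some work on a topic branch.
                                                                                            git checkout -q -b my-feature
                                                                                            echo "feature" > feature.txt
                                                                                            git add feature.txt
                                                                                            git -c user.name=A -c user.email=a@example.com commit -q -m "feature work"

                                                                                            # Summarize the changes since v1.0 that are available at <URL>,
                                                                                            # branch my-feature; the maintainer pastes this into an email.
                                                                                            git request-pull v1.0 "$dir/project" my-feature
                                                                                            ```

                                                                                            The output is the familiar “The following changes since … are available in the Git repository at …” message, complete with a shortlog and diffstat.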