1. 0

    This article is attacking a strawman using lies.

    The chief complaint with Fortran is that it’s unreadable, but also that it’s buggy and slow, which it is. Being less readable than ASM is just the cherry on top of the mound of ****.

    The very benchmarks the author cites show that Fortran is slower than C++ and Rust in all tasks, and slower than C in all but one. Conceptually, Rust, C++, Fortran, and C can aim for about the same speed. C and C++ have the advantage of native APIs and ABIs designed with them in mind, Rust has the advantage of safety when doing weird stuff, C++ has the advantage of library ecosystem and compile-time abstraction (and abstraction in general), and C has simplicity. Fortran is inferior to all three in every way imaginable.

    It should be noted that the speed difference here is very significant, with Fortran being 2-10x slower than the leader on virtually all problems; when the results are taken in aggregate, it barely beats C# (you know, the .NET language made to develop web apps?): https://benchmarksgame-team.pages.debian.net/benchmarksgame/which-programs-are-fastest.html

    And again, these are the benchmarks the author references, so I assume they are better than average.

    Fortran only gets written in fields that are status-driven, where skill and merit are irrelevant; it’s used to keep the “old guard” in academic positions.

    If you want to know what the guys that need efficiency are doing (AWS, GCP, Cloudflare, Dropbox… etc), they are doing C++ and some C, and some are considering using some Rust. If you want to know what the guys that need fast complex math are doing (hedge funds, exchanges, brokers, applied ML services… etc), they are doing C++ and some C.

    Nobody is arguing for turning Fortran code into Python; they are arguing for turning it into code that works as intended, can be read by normal humans, and compiles to something that is 2-10x as fast.

    1. 16

      The chief complaint with Fortran is that it’s unreadable, but also that it’s buggy and slow, which it is. Being less readable than ASM is just the cherry on top of the mound of ****.

      The very benchmarks the author cites show that Fortran is slower than C++ and Rust in all tasks, and slower than C in all but one. Conceptually, Rust, C++, Fortran, and C can aim for about the same speed. C and C++ have the advantage of native APIs and ABIs designed with them in mind, Rust has the advantage of safety when doing weird stuff, C++ has the advantage of library ecosystem and compile-time abstraction (and abstraction in general), and C has simplicity. Fortran is inferior to all three in every way imaginable.

      It should be noted that the speed difference here is very significant, with Fortran being 2-10x slower than the leader on virtually all problems; when the results are taken in aggregate, it barely beats C# (you know, the .NET language made to develop web apps?): https://benchmarksgame-team.pages.debian.net/benchmarksgame/which-programs-are-fastest.html

      You know that LAPACK (itself based on BLAS, which is also written in Fortran) is written in Fortran 90, right? Almost all scientific code running on every platform used by every academy, government, or corporation is using LAPACK through their programming language/runtime of choice. If you want to push scientific computing out of the Fortran orbit, there’s a lot of work to be done.

      1. 1

        LAPACK is not very widely used for compute-intensive problems. CUDA doesn’t support it; it supports CUBLAS (which is written in C/ASM), so that’s 90% of high-speed computing ruled out.

        Then you have CPU-based stuff, for which LAPACK will be used, but not the original implementation; rather e.g. Intel’s MKL, or whatever the guys with supercomputers are using. LAPACK is more of a standard than a library, and people have their own implementations.

        The fortran one is indeed used by… ahm? Who exactly? A few procedures on Android phones that lack GPUs?

        1. 2

          I’m not talking about SOTA. I’m talking about, well, every other application of linear algebra. If you think everything but Android has moved to using CUDA and MKL, I think you need to take a broader look at academia and industry. ARM cores don’t even have MKL available.

          1. 0

            I’m not sure if you’re intentionally dense, but let me try to take a stab at this again:

            • For applications where speed matters and where complex linear algebra is needed (e.g. what the author is talking about), NOBODY is using the default LAPACK implementation.
            • For applications where you need 2-3 LAPACK function calls every hour, LAPACK is a reasonable choice, since it’s lightweight and compatible on a lot of platforms and hard enough to write that nobody bothered to try and rewrite a free and open-source cross-hardware version.
            • However, 99% of usage is not 99% of devices, so the fact that LAPACK is running on most devices doesn’t matter, since those devices run <1% of all LAPACK-interface-based computations.

            This goes back to my point that Fortran is slow and buggy, so people that have skin in the game don’t use it for mathematical applications. It might be reasonable for a simple decision tree used in Bejeweled, but not, e.g., for meteorological models (at least not if the meteorologists were concerned about speed or accuracy; but see the problem of responsibility/skin-in-the-game).

            You seem to be attacking a strawman of my position here. I’m not arguing that libraries written in Fortran don’t work anymore, just that they are very bad; very bad in that they are slower than alternatives written in Rust, C++, and C, and more on par with fast GCed languages (e.g. C#, Java, Swift).

            1. 4

              I’m not sure if you’re intentionally dense, but let me try to take a stab at this again:

              Not everyone on the internet is trying to fight you, please relax. Assume good faith.

              You seem to be attacking a strawman of my position here. I’m not arguing that libraries written in Fortran don’t work anymore, just that they are very bad; very bad in that they are slower than alternatives written in Rust, C++, and C, and more on par with fast GCed languages (e.g. C#, Java, Swift).

              I was responding in particular to your claim that we should just rewrite these routines out of Fortran. When we need to use specific CPU or GPU features, we reach for MKL or CUBLAS. For applications that need the speed or CPU/GPU optimizations, this is fine. For all other applications, Fortran is sufficient. I’m only saying that rewriting Fortran into other languages is a lot of work for dubious gain. Folks that need the extra speed can get it. It sounds like we largely agree on this.

              I am a bit dubious about your claim that Fortran is chased by academic status-seekers, because it makes the assumption that industry is smart and efficient while academia is bureaucratic and inefficient, and uses results from the Benchmarks Game to highlight it. I realize TFA linked to these benchmarks, but I took a cursory glance at them, and some of them don’t even parallelize across CPUs and some don’t even compile! Given that Fortran is a niche language, an online benchmark shootout is probably not the best place to judge it. That said, I don’t have a justified idea of Fortran’s numbers either, so I can’t do anything more than say I’m dubious.

      2. 10

        The chief complaint with Fortran is that it’s unreadable, but also that it’s buggy and slow, which it is.

        Prove it: how is it buggy and slow for its domain? Give 3 examples for each of buggy and slow; should be easy for you with how determined your claim is. Showing benchmark games isn’t showing real-world code/simulations in any language. Matrix multiplication is what you want to benchmark, so why are you using benchmark “games”? Those “games” are hardly readable after people golf them to insanity. If your argument is that I should trust anything from those “games”, well, I’ve got ice cubes in Antarctica to sell you.

        And demonstrate how the restrict keyword in C (which is relatively recent, mind you) can overcome what Fortran can guarantee inherently, how that affects aliasing and ultimately generated code, and how that squares with your statement that Fortran is inferior to all of C/C++/Rust. Have you ever talked to a scientist writing Fortran for simulation? Ever asked them why? Methinks the answer to those last two questions is no. Because I have, and the people writing Fortran for simulations are insanely smart. And bear in mind the goal isn’t 100% “it benchmarks faster than language xyz”, it’s “I run this simulation for $N months/weeks on end, it needs to be fast enough, and I’ll profile it to see what could be improved”. And for each supercomputer they rebuild and retune every library and object for the new processor as needed. I’ve got to be honest, the amount of ignorance in the general programming population about how traditional HPC works is disheartening.
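
        To make the aliasing point concrete, here’s a rough sketch (toy code, not from any real project) of what C needs restrict for, and what a Fortran compiler may assume by default because dummy array arguments are not allowed to alias:

          void scale(double *dst, const double *src, double k, int n) {
              /* dst and src might overlap, so the compiler has to be
                 conservative about reordering/vectorising this loop. */
              for (int i = 0; i < n; i++)
                  dst[i] = k * src[i];
          }

          void scale_restrict(double *restrict dst, const double *restrict src,
                              double k, int n) {
              /* restrict (C99) is the programmer's promise that the arrays
                 never alias; only then can the compiler optimise as freely
                 as a Fortran compiler does for ordinary array arguments. */
              for (int i = 0; i < n; i++)
                  dst[i] = k * src[i];
          }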

        And how are simulations that are being run by scientists using the literal fortran COMPLEX type by definition not “guys that need fast complex math”? Fortran specifically handles complex math better than c. My mind boggles at some of the statements here.

        Less supposition and hearsay and more demonstrable proof please, this comment is atrocious for basically being without any actual depth or substance to its claims.

        Think it’s time for me to call lobsters quits.

        1. 5

          Think it’s time for me to call lobsters quits.

          FWIW, every other commenter, except for the comment you replied to, agreed with your position on Fortran.

          1. 0

            I provided examples of Fortran being bad within the terms the author set for himself. Have you heard of arbitrary demands for rigor? If every time an opinion you hold is challenged you start clamouring for more evidence instead of considering whether or not you have any counter-evidence, well, you see the problem.

          2. 3

            Are academics with significant clout even writing code these days? I assume they should have a small team of grad students and postdocs to do that in their lab for them… Is the fact that everything is written in Fortran the cause of them being forced to maintain old code?

            Fortran only gets written in fields that are status-driven

            Our industry isn’t immune to status-driven endeavors. Think of all the “bad” technology that gets adopted because FAANG uses it or the proliferation of flavor-of-the-month frameworks in the web space.

            1. 1

              if you want to know what the guys that need fast complex math are doing [..] they are doing C++ and some C.

              https://blog.janestreet.com/why-ocaml/

              Together with the two parallel comments I’m going to conclude you don’t know what you’re talking about.

              1. 3

                Even if his assertion were true, assuming that all ‘complex math’ is the same and a language that is good for one is good for another is a pretty good indication that you should disregard the post. For example, a lot of financial regulation describes permitted errors in terms of decimal digits and so to comply you must perform the calculations in BCD. If you’re doing scientific computing in BCD, you have serious problems. Conversely, a lot of scientific computing needs the full range of IEEE floating point rounding modes and so on, yet this is rarely if ever needed for financial analysis.
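
                (As a toy illustration of why the representation matters: binary floating point can’t even represent 0.10 exactly, which is precisely the kind of last-digit drift decimal-digit regulations are written to forbid.)

                  #include <stdio.h>

                  int main(void) {
                      /* Ten "payments" of 0.10 in binary floating point do not
                         sum to exactly 1.00. */
                      double total = 0.0;
                      for (int i = 0; i < 10; i++)
                          total += 0.10;
                      printf("%.17f\n", total);      /* 0.99999999999999989 */
                      printf("%d\n", total == 1.00); /* 0 */
                      return 0;
                  }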

                With C99, C grew the full support for floating point weirdness that you need to be able to port Fortran code to C, but so few people care about this in C that most C compilers are only now starting to grow acceptable support (GCC had it for a while, LLVM is gaining it now; in both cases the majority of the work is driven by the needs of the Fortran front end and then just exposed in C). Fortran has much stricter restrictions on aliasing, which make things like autovectorisation far easier than for C/C++. C/C++ developers now use modern OpenMP vectorisation pragmas to tell the compiler what assumptions it is allowed to make (and is unable to check, so miscompiles may happen if you get it wrong), whereas a half-decent Fortran compiler can get the same information directly from the source.
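
                In C/C++ that usually looks something like the sketch below (OpenMP 4.0’s simd pragma; the function and names are made up): the programmer asserts the iterations are independent and the compiler vectorises on trust, whereas a Fortran compiler can derive the same fact from the language’s argument rules.

                  /* Built with e.g. gcc -O2 -fopenmp-simd. Without the pragma the
                     compiler would have to prove y and x never alias before it
                     could vectorise the loop. */
                  void axpy(double *y, const double *x, double a, int n) {
                      #pragma omp simd
                      for (int i = 0; i < n; i++)
                          y[i] += a * x[i];  /* asserted independent, not checked */
                  }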

            1. 4

              The author of the post works at Cray, so he would have access to the very best Fortran developers in the world (along with the best compilers and OSes for running it) and knows exactly why climate models are still done in Fortran. He also knows how to write a headline that gets a lot of clicks :).

              1. 4

                I used to work there, and got to bs with Bill Long at lunch about Fortran (not the seattle office). Talking to Fortran compiler guys is fun.

                Fortran is cool in that the Cray Fortran compiler had ENV switches for, well, basically everything, and I’m not kidding. So it’s “easy” to tune every single compilation to what you need to do. That, and you can mix old and new Fortran in the same binary. Try that with C++/C. Rust is only beginning to approach the kind of longevity Fortran has already had.

              1. 16

                Along with the effort of rewriting, there’s also distrust of new models of rewrites of existing ones. There are decades of work in the literature publishing and critiquing results from existing models, and there are…not that for anything new. It’s much less risky and thus easier to accept to port an existing model to a new HPC platform or architecture than it is to adopt a new one.

                Additionally, HPC is a weird little niche of design and practice at both the software and hardware levels. There are a few hundred customers who occasionally have a lot of money and usually have no money. Spinning that whole ecosystem around is difficult, and breaking a niche off the edge of it (if you decided to take climate modelling into Julia without also bringing petrochemicals, bioinformatics, automotive, etc) is a serious risk.

                1. 7

                  When NREL finally open-sourced SAM (GitHub), one of the standard tools for calculating PV output based on location and other factors, a friend of mine decided to take a look at the code. On his first compilation it was clear no one had ever built it using -Wall; it had thousands of warnings. When he looked more closely he could tell it had been translated, badly, from (probably) MATLAB, and had many errors in the translation, like (this is C++)

                  arr[x,y]
                  

                  to access a 2D coordinate in arrays - for anyone playing at home, a,b in C++ means evaluate a, then evaluate b and return its result, so this code was accessing only the y coordinate.
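
                  A toy reconstruction (not the actual SAM code) of why that compiles but silently does the wrong thing:

                    #include <stdio.h>

                    int main(void) {
                        double arr[6] = {0, 1, 2, 3, 4, 5};
                        int x = 1, y = 2;
                        /* The comma operator evaluates x, discards it, and indexes
                           with y alone, so this is just arr[2]. */
                        printf("%g\n", arr[x, y]);   /* prints 2; x is ignored */
                        /* A genuine 2D access needs arr2d[x][y], or
                           arr[x * width + y] for a flattened array. */
                        return 0;
                    }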

                  This would be fine if this were undergrad code, but this code has been around for a very long time (decades?), had dozens of papers based on it, and plenty of solar projects relied on it for estimating their ROI. I bring up this anecdote as a counterexample: the age of these libraries and programs does not mean they are high quality, and in fact their age lulls people into a false sense of trust that they actually implement the algorithms they claim to.

                  He’s since submitted many patches to resolve all the warnings and made sure it compiles with a number of compilers, but I wonder how valid the results over the years actually are- maybe they got lucky and it turns out the simplified model they accidentally wrote was sufficient.

                  1. 1

                    All governments take actions because of these models. They affect the lives of every person on the planet and future generations to come. The “if it’s not broken, don’t fix it” approach doesn’t fit here. Rewriting these models could be done for the cost of a few international conferences, flights, and accommodations for participants.

                    critiquing results from existing models

                    The models should be scrutinized, not the results they give?

                    1. 33

                      Rewriting these models could be done for the cost of a few international conferences, flights, and accommodations for participants.

                      Rarely have I read something so optimistic.

                      1. 14

                        As someone that has interacted with people that write these models, “optimistic” is putting it lightly. I think whoever thinks that rewriting a bunch of Fortran will be productive is entirely underselling both Fortran and the effort that has gone into making Fortran super fast for simulations.

                        Rewriting this stuff in JavaScript isn’t realistic, nor will it be fast. And any rewrite is going to have the same problem in 50 years. Are you going to rewrite it again then? How do you know it’s the same simulation and gives the same results?

                        Sometimes I think us computer programmers don’t really think through the delusions we tell ourselves.

                        1. 4

                          But by rewriting we may check whether the implementation follows the specification - see this as a reproducibility issue; do you recall when a bug in Excel compromised thousands of studies? And by not changing anything we may find ourselves in a situation where no one knows how these models work, nor is able to improve them. Something similar to the banking-and-COBOL situation, but much worse.

                          1. 5

                            The “specification” is “does this code give the same results as it always has”. HPC isn’t big on unit testing or on other forms of detailed design.

                            1. 4

                              Isn’t that a problem? How do we know, then, that they follow the peer-reviewed papers they were supposed to follow?

                              1. 7

                                In general, we don’t know, and in the abstract, that’s a problem (though specifically in the case of weather forecasting, “did you fail to predict a storm over this shipping lane” is a bigger deal than “did you predict this storm that actually happened for reasons that don’t stand up to scrutiny”, and many meteorological models are serviceably good). There are recent trends to push for more software engineering involvement in computational research (my own PhD topic, come back to me in three years!), and for greater discoverability and accessibility of source code used in computational research.

                                But turning “recent trends to push” into “international shift in opinion across a whole field of endeavour” is a slow process, much slower than the few international flights and a fancy dinner some software engineers think it should take. And bear in mind none of that requires replatforming anyone off of Fortran, which is not necessary, sufficient, desirable, feasible, or valuable to anyone outside the Rust evangelism strike force.

                      2. 8

                        Most climate models are published in papers and are widely distributed. If you would like to remake these models in other languages and runtimes, you absolutely could (and perhaps find gaps in the papers along the way, but that’s a separate matter.) The problem is, getting the details right is very tough here. Is your library rounding in the same places the previous library was rounding at, in the same ways? How accurate is its exponential arithmetic? What’s the epsilon you need to verify against to be confident that the results are correct?
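
                        Getting bit-identical numbers is harder than it sounds, too; even reordering a sum changes the result, which is why “correct” has to mean “within some epsilon”. A tiny illustration:

                          #include <stdio.h>

                          int main(void) {
                              /* Floating-point addition is not associative, so a rewrite
                                 (or just a different compiler/vectoriser) that reorders a
                                 reduction already changes the answer's low-order bits. */
                              double a = 1e16, b = -1e16, c = 1.0;
                              printf("%g\n", (a + b) + c);  /* 1 */
                              printf("%g\n", a + (b + c));  /* 0 */
                              return 0;
                          }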

                        The article links CliMA as a project for Julia based climate models, but remember, most scientific computing libraries use Fortran one way or another. We’re just pushing the Fortran complexity down to LAPACK rather than up into our model code. Though that’s probably enough to greatly increase explainability, maintainability, and iterative velocity on these models.

                        1. 4

                          “If it’s not broken, don’t fix it” approach doesn’t fit here.

                          What are you even trying to say here? If it’s not broken we should rewrite it because… reasons? It’s not broken, so why would we waste public money rewriting it when we could use that money to further improve its ability to help us?

                          1. 2

                            Rewriting these models could be done for the cost of a few international conferences, flights, and accommodations for participants.

                            The article is low on these specific details and I’m not too familiar with climate modelling but I bet that the models aren’t just fancy Excel sheets on steroids – there’s at least one large iterative solver involved, for example, and these things account for most of the Fortran code. In that case, this estimate is off by at least one order of magnitude.

                            Even if it weren’t, and a few international conferences and flights were all it took, what actual problem would this solve? This isn’t a mobile app. If a 30-year-old piece of code is still in use right now, that’s because quite a stack of taxpayer money has been spent on solving that particular problem (by which I mean several people have spent 3-5 of their most productive years solving it), and the solution is still satisfactory. Why spend money on something that has not been a problem for 30 years instead of spending it on all the problems we still have?

                        1. 3

                          I thought NTP clients wouldn’t blindly shift the date (especially with a wild shift like that), but rather slowly slew the clock towards it.

                          1. 1

                            Depends on how they’re configured. If they’re configured to step (aka jump) they can. If you configure them to slew then they do the +/- ppm adjustment of roughly at most 500ppm.

                            As a note, the old ntpd client in most Unix distros defaulted to step - well, if I remember right. The issue with slewing, though, is that if your computer’s time is wicked out of date, say January 1 1970, it’s practically impossible to ever get your time in sync.

                          1. 3

                            Migrating away from LastPass to either Bitwarden or pass. Will see which is easier to self-host on a VPS I have.

                            1. 1

                              I’ve been using Bitwarden a long time and am VERY happy with it. I run it self hosted and only accessible from within my home network (I’m always on VPN home when I am out). I admit this is a bit more complex setup, but Bitwarden itself is great!

                            1. 1

                              Is MSVC really so bad at instruction selection?

                              1. 2

                                No, it does emit a cmov on -O1 or -O2. But apparently -O is not an abbreviation for -O2: https://docs.microsoft.com/en-us/cpp/build/reference/o-options-optimize-code

                                1. 1

                                  Can’t seem to get it to emit a cmov even with -O2 or -O1. You sure about msvc emitting a cmov?

                                  https://godbolt.org/z/Edhqr9

                                  1. 1

                                    Oh, I was trying a different example: https://godbolt.org/z/W8dbP6 I also get different results with if vs ternary. I guess it is pretty fragile.

                                    1. 1

                                      Yeah it surprised me to be honest, that small example seems like an easy optimization but its interesting. Wonder what is tripping the optimizer up.

                              1. 5

                                One of the comments links to Matt Tait’s walkthrough of the claims on Twitter: individual sources, and how specific they were likely to have been: https://twitter.com/pwnallthethings/status/1360234953011851264

                                1. 1

                                  That makes way more sense, along with this just being some hokey pump-and-dump stock scheme. Can’t say I really trust Bloomberg at all since they’re doubling down on this China hack. Put up one hacked board as evidence or shut up is my POV.

                                1. 2

                                  I think Cocoa Emacs may need a bigger rewrite to become future-proof. It will probably survive macOS 11, but the Cocoa integration code really is a huge, complicated, 20(?)-year-old hairball.

                                  1. 2

                                    Would be interesting to see if the new pgtk (pure gtk) integration works, since gtk3 does support macOS. It’s still WIP of course, and I’m not saying it should replace cocoa, but still…

                                    1. 2

                                      As a macos user, there is nothing about gtk3 or gtk4 that would make me want it to be used in gui emacs on macos.

                                      Why would I trust a work in progress over something that actually uses the platform? Why is macOS special in this regard while Windows is being left out? Why not just update Cocoa Emacs instead of making Emacs an even worse experience on macOS? Will GTK improve the spawned-process speed on macOS?

                                  1. 4

                                    Maybe this could be generalized into something that also includes Guix? Or package management in general? On the other hand, I wouldn’t mind filtering nix…

                                    1. 12

                                      Where this tag is concerned, the perfect is the consistent enemy of the good. This discussion comes up again and again, people bikeshed about whether xyz is a better tag because Blub is kind of like Nix too, and nothing happens. It’s time for a tag and nix is the only obvious choice.

                                      1. 2

                                        I don’t get your point, I’m not asking for perfection? Or to put it this way, will we also be getting a guix tag?

                                        1. 3

                                          Sorry, my point was that when this discussion has come up previously, the discussion died at “hey we should generalize this.”

                                          I am not requesting a generalized tag for reproducible builds or package management. I am requesting a tag for Nix and NixOS.

                                          1. 1

                                            So far, it’s not even clear if we’d get a nix or nixos tag :) Regarding guix, we might get it if someone requests it.

                                            1. 2

                                              I’d vote for nix personally; nixos can be covered by nix, but nix also runs on macOS and other Unixes. So even if a post is nixos-specific, if it’s about nix there is a good chance I want a look-see at it.

                                        1. 3

                                          mosh doesn’t work with port forwarding, or remote display last I checked.

                                          1. 1

                                            Good to know. I just use it for terminal access directly to a mosh/ssh host.

                                        1. 9

                                          Usually python, just because I know it well, and can get results within a predictable timeframe. My personal rule of thumb is when I need something like a hashmap or array – I give up on bash that instant.

                                          However with time I’ve also gotten more comfortable with bash, so often it’s okay to mix and match. E.g. say, you want to find out the amount of free memory

                                          $ cat /proc/meminfo  | grep MemFree: 
                                          MemFree:        23727768 kB
                                          

                                          Right, how do we pick out the number? Normally you’d use cut, or awk:

                                          $ cat /proc/meminfo  | grep MemFree: | awk '{print $1}'
                                           23727768
                                          

                                          , but what if you forgot, or need something more elaborate? Well, why not use python?

                                          $ cat /proc/meminfo  | grep MemFree: | python3 -c 'print(input().split()[1])' 
                                          23727768
                                          

                                          Not as concise as awk, but you can type it quicker than the time you’d spend googling how to use awk.

                                          Note that you also can use multiline input if you press enter after the opening quote, so even if you need imports etc, it doesn’t have to look horrible. Also if you have some sort of vi mode or a hotkey to edit the command in editor (Ctrl-X Ctrl-E in bash), it helps a lot for messing with long shell commands.

                                          I also tried using xonsh a few times, a shell combining python & bash syntax. Cool idea, but I tend to forget how to use it, so never got into the habit.

                                          1. 2

                                            Not as concise as awk, but you can type it quicker than the time you’d spend googling how to use awk.

                                            Ah, someone who shares my shame.

                                            1. 0

                                              Err, am I weird in that awk isn’t that hard to use?

                                              $ awk '/MemFree/ {print $2}' < /proc/meminfo

                                              Two fewer fork()/exec()s, and it does the same thing as all that Python. Why break out the combine when the hedge trimmer will do to cut the grass?

                                              I’m not gonna lie, this falls under learning to use your tools. If you always reach for python/scripting languages for these simple tasks, I’m going to argue your general unix knowledge is too low.

                                              Also, that second cat | grep | awk has a bug with print $1 versus $2, so I’m not sure the GP actually ran that shell.

                                              1. 3

                                                Err, am I weird in that awk isn’t that hard to use?

                                                Probably not.

                                                I’m not gonna lie, this falls under learning to use your tools.

                                                I would disagree on a technicality: if you don’t know it, it isn’t your tool.

                                                If you always reach for python/scripting languages for these simple tasks, I’m going to argue your general unix knowledge is too low.

                                                This I do agree with. I can’t say that it is difficult to use because I never took the time to really learn awk. Instead, I just try to pick up what I need to to do a particular task. To a large extent, my relationship with awk is governed by apathy. It is an exceedingly practical tool and I just don’t really care. I love those little transcendental moments with software where you feel like you know something more about the world. awk doesn’t do that for me so I haven’t really given it the time it deserves.

                                                That said, my comment about shame comes from responses like:

                                                … awk isn’t that hard to use…

                                                … [learn] to use your tools.

                                                … your general unix knowledge is too low.

                                                Missing a little bit of context from your comment, but things can be read this way and it doesn’t feel so good. My comment isn’t about how difficult awk actually is, but how people assume that you should just know these things and if you don’t you are deficient.

                                                To be clear, I don’t think that that there is any malice on your part.

                                          1. 6

                                            This question was prompted by a discussion I had about how I felt like all the momentum in programming languages is pointing toward Rust these days, and I felt like there’s no point in keeping my Go current (it’s been languishing for a couple of years now anyway).

                                            So, I asked this question to see (among other things) if I’m right or wrong.

                                            1. 12

                                              What is driving that feeling? Genuinely curious, because I feel the opposite. I am seeing more and more enterprises adopt Go. In conversations I have with other engineers, Rust still feels a little underground.

                                              I also think that Go and Rust have slightly different use cases & target audiences.

                                              1. 9

                                                Well, lobste.rs, for one. I feel like everywhere I look on here people talk about how they’d re-do everything in Rust if they could. The number of Rust advocates I see here seems to dwarf the number of Go advocates. Maybe that’s perception because Rust is “newer” and its advocates louder, but who knows.

                                                The things that really stuck with me, though, were Linus indicating that he’d be open to allowing Rust in the kernel, Microsoft starting to switch to Rust for infrastructure, and Dropbox migrating their core technologies to Rust.

                                                I just don’t see stories like that for Go. I don’t know if I’m not looking in the right place, or what.

                                                1. 22

                                                  Go and Rust more or less solve the same problem (although the overlap isn’t 100%), just in different ways, not too dissimilar to how Perl and Python more or less solve the same problems in very different ways.

                                                  I have the impression that, on average, Go tends to attract people who are a little bit jaded by the Latest Hot New Thing™ churn for 20 years and just want to write their ifs and fors and not bother too much with everything else. This is probably one reason why the Go community has the (IMHO reasonably deserved) reputation for being a bunch of curmudgeonly malcontents. These are not the sort of people who go out and enthusiastically advocate for Go, or rewrite existing tools in Go for the sake of it: they’re happy with existing tools as long as they work.

                                                  Another reason I don’t really like to get involved in Go discussions is because some people have a massive hate-on for it and don’t shy away from telling everyone that Go is stupid and so is anyone using it every chance they get. It gets very boring very fast and I have better things to do than to engage with that kind of stuff, so I don’t. There’s some people like that on Lobsters as well, although it’s less than on HN or Reddit. It’s a major reason why I just stopped checking /r/programming altogether, because if the top comment of damn near every post is “lol no generics” followed by people ranting about “Retards in Go team don’t think generics are useful” (which simply isn’t true) then … yeah… Let’s not.

                                                  1. 14

                                                    Go and Rust more or less solve the same problem

                                                    Hard disagree; Rust is way more useful for actual problems, like C/C++ are. I can write kernel modules in it; there is almost no way I’d want to do that with Go. Having a garbage collector, or really a runtime at all, in Go means a different use case entirely from being able to run without an OS and target minis. Yes, I know Go can be used for that too, but just due to having a GC you’re limited in how far down the horsepower wagon you can go.

                                                    I hold no views beyond that Rust actually has solutions and community drive (i.e. people using it for that and upstreaming things) for programming things like AVR processors, etc… I don’t hate Go; it just strikes me as redundant and not useful for my use cases. Kinda like how if you learn Python there’s not much use in learning Ruby too. And if I’m already using Rust for low-level stuff, why not high-level too?

                                                    I don’t miss debugging goroutine and channel bugs, though; in Go it’s way easier to shoot yourself in a concurrent foot without realizing it. It might be “simple”, but that doesn’t mean it’s without its own tradeoffs. I can read and write it, but I prefer the Rust compiler telling me I’m an idiot for trying to share data across threads over debugging goroutines where two goroutines read off one channel and one of them is never going to complete because the other already got the message. I’m sure “I’m holding it wrong”, but as I get older these stricter and more formal/functional languages like Rust/Haskell/Idris/etc. strike my fancy more. I’m not really talking about generics, but stuff like monads (Option/Result, essentially) reads way better to me than the incessant checking that the thing I just did isn’t nil. It’s closer to what I’ve done in the past in C with macros, etc…

                                                    Its not that I hate it though, just that the language seems a step back in helping me do things. Idris as an example though is the coolest thing I’ve used in years in that using it was like having a conversation with the compiler and relearning how to move my computation to the type system. It was impressive how concise you can make things in it.

                                                    As a recovering kernel/C hacker, you’d think Go would appeal, but to be honest it just seems like more of the same as C, with fewer ways of stopping me from shooting myself in the foot needlessly.

                                                    But to each their own; typed functional languages just strike me as actually delivering a lot of things that OO languages from the mid-’90s always said could be done with open-form polymorphism but that never seemed to happen.

                                                    In 10 years we’ll see where the ball landed in the outfield so whatever.

                                                    1. 11

                                                      Yes, hence the “more or less”. Most people aren’t writing kernel modules; they’re writing some CLI app, network service, database app, and so forth. You can do that with both languages. TinyGo can be used for microcontrollers, although I don’t know how well it works in practice – it does still have a GC and a (small) runtime (but so has e.g. C).

                                                      I don’t miss debugging goroutine and channel bugs, though; in Go it’s way easier to shoot yourself in a concurrent foot without realizing it.

                                                      Yeah, a lot of decisions are trade-offs. I’ve been intending to write a “why Go is not simple”-post or some such, which argues that while the syntax is very simple, using those simple constructs to build useful programs is a lot less simple. In another thread yesterday people were saying ‘you can learn Go in two days”, but I don’t think that’s really the case (you can only learn the syntax). On the other hand, I’ve tried to debug Rust programs and pretty much failed as I couldn’t make sense of the syntax. I never programmed much in Rust so the failure is entirely my own, but it’s a different set of trade-offs.

                                                      In the end, I think a lot just comes down to style (not everything, obviously, like your kernel modules). I used to program Ruby in a previous life, which is fairly close to Rust in design philosophy, and I like Ruby, but it’s approach is not without its problems either. I wrote something about that on HN a few weeks ago (the context being “why isn’t Ruby used more for scripting?”)

                                                      1. 4

                                                        Most people aren’t writing kernel modules; they’re writing some CLI app, network service, database app, and so forth. You can do that with both languages.

                                                        CLI yes, database probably, but I don’t think Rust’s async or concurrency or whatever story is mature enough to say it’s comparable with Go for network services.

                                                        1. 1

                                                          Cooperative concurrency is just more complicated (as a developer, not as a language designer) than preemptive concurrency. The trade-off is that it’s more performant. Someone could build a Rust-like language, i.e. compiler-enforced data race freedom, with green threads and relocatable stacks. And someday someone might. For now, the choice is between performance and compiler-enforced correctness on the Rust side, or “simpler” concurrency on the Go side.

                                                    2. 3

                                                      There’s just a lot of toxicity about programming languages out there, and subjectively it feels particularly bad here. Rust has a lot to like and enough to dislike (abandoned libraries, inconsistent async story, library soup, many ways to do the same thing), but something about its culture just brings out the hawkers. I still heartily recommend giving Rust a try, though you won’t be super impressed if you’ve used Haskell or Ocaml in the past.

                                                      1. 6

                                                        I came to Rust after having used Haskell and the main thing about it that impressed me was precisely that it brought ML-style types to language with no GC that you could write an OS in.

                                                        1. 5

                                                          with no GC

                                                          I guess I find this often to be a solution with very few problems to solve. It’s understandable if you’re writing an OS or if you’re working on something real-time sensitive, but as long as the system you’re making can tolerate > 1ms p99 response times, and doesn’t require real-time behavior, Go, JVM languages, and .NET languages should be good enough. One could argue that there exists systems in the 1-10ms range where it’s easier to design in non-GC languages rather than fight the GC, and I can really see Rust succeeding in these areas, but this remains a narrow area of work. For most systems, I think working with a GC keeps logic light and easily understandable. When it comes to expressive power and compiler-driven development, I think both Haskell and Ocaml have better development stories.

                                                          1. 1

                                                            Rust also has a much cleaner package management story and (ironically) faster compile times than Haskell. And first-class support for mutability. And you don’t have to deal with monad transformers.

                                                            Haskell is still a much higher level language, though. I experimented last night with translating a programming language core from Haskell to Rust, and I quickly got lost in the weeds of Iterator vs Visitor pattern vs Vec. Haskell is pretty incredible in its ability to abstract away from those details.

                                                        2. 0

                                                          Your descriptions of both advocates and Go haters match my experience exactly.

                                                        3. 14

                                                          I’ve been using Go since around the 1.0 release and Rust for the last year or so. I don’t think either of them is going away any time soon. Go has less visible advocates, but it’s definitely still used all over the place. Datadog has a huge Go repo, GitHub has been looking for Go engineers, etc.

                                                          Rust is more visible because it’s newer and fancier, but it still loses for me in multiple aspects:

                                                          • Development speed - It’s a much pickier language and is much slower to develop in, though it forces you to get things right (or at least handle every case). Rust-Analyzer is great, but still fairly slow when compared to a simpler language.
                                                          • Compilation speed - Go compiles way faster because it’s a much simpler language.
                                                          • Library support/ecosystem - Because Go has been around for quite a while, there are a wealth of libraries to use. Because Rust hasn’t been around as long, many of the libraries are not as mature and sometimes not as well maintained.

                                                          However, Rust makes a number of improvements on Go.

                                                          • Error handling - Rust’s error handling is miles above Go. if err != nil will haunt me to the end of my days.
                                                          • Pattern matching - extremely powerful. This is an advantage Rust has, but I’m not sure how/if it would fit in to Go.
                                                          • Generics - In theory coming to Go soon… though they will be very different feature-set wise

                                                          They’re both great languages, and they have different strengths. For rock-solid systems software, I’d probably look at Rust. For web-apps and services, I’d probably look to Go first.

                                                          1. 4

                                                            Rust also loses massively in concurrency model. Tokio streams is so subpar to channels.

                                                            1. 2

                                                              Tokio also has channels - MPSC is the most common variant I’ve seen.

                                                              When Stream is added back to tokio, these will also impl Stream

                                                              I do agree that having goroutines as a part of the language and special syntax for channels makes it much easier to get into though.

                                                              1. 1

                                                                Rust’s async/await is definitely more complicated than Go’s green threads, and is almost certainly not worth it if Go has everything you need for your project. However, Rust’s standard threads are extremely powerful and quite safe, thanks to the Sync and Send marker traits and associated mechanics, and libraries like rayon make trivial parallelism trivial. Just yesterday I had a trivially parallel task that was going to take 3 days to run. I refreshed myself on how to use the latest rayon, and within about 10 minutes had it running on all 8 hyperthreads.

                                                              2. 2

                                                                Spot on. They both have their annoyances and use cases where they shine.

                                                              3. 6

                                                                A further difference is that Go code is hard to link with other languages due to its idiosyncratic ABI, threading and heaps. Rust fits in better. Not just in OS kernels but in mobile apps and libraries. (I’m still not a fan of Rust, though; Nim fits in well too and just feels a lot easier to use.)

                                                                1. 1

                                                                  I would say that is entirely compiler dependant, and not a property of the language Go. https://github.com/tinygo-org/tinygo

                                                                  1. 2

                                                                    As long as the language has highly scalable goroutines, it won’t be using native stacks. As long as the language has [non ref-counted] garbage-collection, it won’t be using native heaps.

                                                                    1. 1

                                                                      Well, tinygo != go, and e.g. gccgo still reuses the official standard library from the main implementation (which is where lots of the frustrating stuff is located, e.g. the usage of raw syscalls even on platforms like FreeBSD where it’s “technically possible but very explicitly not defined as public API”).

                                                                  2. 5
                                                                2. 9

                                                                  Golang is awesome. It works without fanfare.

                                                                  1. 8

                                                                    As someone who dislikes Go quite a lot, and is really enjoying Rust: I think they are different enough they serve different use cases.

                                                                    Go is much faster to learn, doesn’t require you to think about memory management or ownership (sometimes good, sometimes very very bad, Go code tends to be full of thread-safety bugs), is usually fast enough. For standard web app that is CPU-bound it’ll work just fine, without having to teach your team a vast new language scope.

                                                                    On the flip side, I’m working on an LD_PRELOADed profiler that hooks malloc(), and that’s basically impossible with Go, and I also need to extract every bit of performance I can. So Rust is (mostly) great for that - in practice I need a little C too, because of Reasons. More broadly, any time you want to write an extension for another language, Go is not your friend.
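
                                                                    For the curious, the core of the LD_PRELOAD trick is tiny. A rough glibc/Linux sketch (a real profiler also has to handle re-entrancy, calloc/realloc/free, and threads):

                                                                    /* build: gcc -shared -fPIC -o libhook.so hook.c -ldl
                                                                       run:   LD_PRELOAD=./libhook.so ./your_program      */
                                                                    #define _GNU_SOURCE
                                                                    #include <dlfcn.h>
                                                                    #include <stddef.h>

                                                                    static size_t total_bytes;  /* not thread-safe; sketch only */

                                                                    void *malloc(size_t size) {
                                                                        /* Resolve the real malloc from the next object in the
                                                                           link order the first time we're called. */
                                                                        static void *(*real_malloc)(size_t);
                                                                        if (!real_malloc)
                                                                            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
                                                                        total_bytes += size;
                                                                        return real_malloc(size);
                                                                    }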

                                                                    1. 4

                                                                      Go comes with a built-in race detector. What sources do you have for “tends to be full of thread-safety bugs”?

                                                                      1. 11

                                                                        As someone with a fair bit of go experience: the race detector only works on races you can reproduce locally, as it’s too slow to run in production.

                                                                        Rust stops you from doing things that can race; go gives you a flashlight to look for one after you accidentally introduce it.

                                                                        1. 1

                                                                          That is a good point, but once a developer has encountered the type of code that makes the race detector unhappy, will they not write better code in the future? Is this really a problem for most popular Go projects on GitHub, for instance? Also, the “staticcheck” utility helps a lot.

                                                                          1. 2

                                                                            Unfortunately, there’s still, even among some experienced programmers, a notion that “a race will happen so rarely we don’t have to worry about this one case”. Also I’ve seen an expectation that e.g. “I’m on amd64 so any uint64 access is atomic”, with no understanding of out-of-order execution etc. I assume in Rust this would be a harder sell… though the recent drama with a popular web framework (can’t recall the name now) in Rust seems to show to me that using unsafe is probably kinda similar approach/cop-out (“I just use unsafe in this simple case, it really does nothing wrong”, or “because speed”).

                                                                    2. 3

                                                                      I think (hope) the world settles into Rust and Go. I don’t see the Bell Labs folks not going 3 for 3 with Go. Rust I enjoyed playing with many months ago and it was a bonus I guess (at the time) that it was backed by Mozilla. Misplaced loyalties abound. Maybe (in a decade or so) lots of things once written in C will Rust over and tcp/http stuff will be written in Go.

                                                                    1. 2

                                                                      I’d like to buy an M1 Mac for the better battery life and thermals, but I have to run a lot of Linux VMs for my job, so it’s a nonstarter.

                                                                      If VirtualBox or VMWare or whatever adds support for M1 Macs to run ARM VMs and I could run CentOS in a virtual machine with reasonable performance, it would definitely affect my decision.

                                                                      (Note that I’d still have to think about it since the software we ship only ships for x86_64, so it would be…yeah, it would probably still be a nonstarter, sadly.)

                                                                      1. 4

                                                                        Parallels runs well, at least for Windows. I’ve heard the UI for adding Linux VMs is picky, but they’ll work fine too.

                                                                        Much of the work around HVF/Virtualization.framework is to make Linux stuff drop-dead easy.

                                                                        1. 3

                                                                          And QEMU is a good option for those too, if you pick up the patchset from the mailing list for using HVF.

                                                                          VMWare Fusion support is coming, VirtualBox will not be supported according to Oracle.

                                                                        2. 3

                                                                          I do have Parallels running Debian (10, arm64) on an M1. It was a bit weird getting it setup, but it works pretty well now, and certainly well enough for my needs.

                                                                          1. 2

                                                                            There’s a Parallels preview for M1 that works: https://my.parallels.com/desktop/beta

                                                                            It has VM Tools for ARM64 Windows, but not Linux (yet).

                                                                            In my opinion Linux is a better experience under QEMU w/ the patches for Apple Silicon support (see https://gist.github.com/niw/e4313b9c14e968764a52375da41b4278#file-readme-md). I personally have it set up a bit differently (no video output, just serial) and I use X11 forwarding for graphical stuff. See here: https://twitter.com/larbawb/status/1345600849523957762

                                                                            Apple’s XQuartz.app isn’t a universal binary yet so you’re probably going to want to grab xorg-server from MacPorts if you go the route I did.

                                                                            1. 1

Genuine question: why not just have a separate system to run the VMs? That keeps the battery life nice at the expense of requiring network connectivity, but outside of “on an airplane” use cases it’s not a huge issue, I find.

                                                                            1. 10

                                                                              Nice view on how the email flow works. Though, I don’t agree with some things.

                                                                              The only reason that merge button was used on Github was because Github can’t seem to mark the pull request as merged if it isn’t done by the button.

                                                                              No, it does. I merge locally all the time, and GitHub instantly marks a PR as merged when I push. In fact, I primarily host repos at GitLab and keep a mirror on GitHub. I accept PRs on GitHub (the mirror) as well to keep things easy for contributors. I manually merge these locally, and push the updated branch to GitLab. GitLab in turn syncs the GitHub mirror, and the PR on GitHub is marked as merged in a matter of seconds.

                                                                              …we have to mess with the commits before merging so we’re always force-pushing to the git fork of the author of the pull request (which they have to enable on the merge request, if they don’t we first have to tell them to enable that checkbox)

Yes, of course you have to mess with them. But after doing that, don’t even bother pushing to the contributor’s branch. Just merge it into the target branch yourself and push. Both GitLab and GitHub will instantly mark the PR as merged. It is the contributor’s job to keep their branch up to date, and they don’t even have to for you to be able to do your job.
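
For the simple case, the flow I mean is roughly this (a sketch; the remote name, branch name, and PR number are just placeholders):

```sh
# GitHub exposes every pull request as refs/pull/<id>/head
git fetch upstream pull/123/head:pr-123
git checkout main
git merge --no-ff pr-123
git push upstream main   # a push containing the PR's commits marks it as merged
```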

                                                                              I understand that you like the email workflow, which is great. But I don’t agree with some arguments for it that are made here.

                                                                              Thanks for sharing though!

                                                                              1. 7

                                                                                No, it does. I merge locally all the time, and GitHub instantly marks a PR as merged when I push.

In the article they talk about wanting to rebase first. If you do that locally, GitHub has no way to know that the rebased commits you pushed originally came from the PR, so it can’t mark it as merged automatically. It does work when you merge outside GitHub without rebasing, though.

                                                                                1. 2

IIRC, can’t you rebase, (force) push to the PR branch, then merge and push, and it’ll close? More work in that case but not impossible. But if you rebase locally and then push straight to ma(ster|in), GitHub has no easy way to know the PR is merged without doing fuzzy matching of commit/PR contents, which would be a crazy thing to implement in my opinion.
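
Something like this, I think (a sketch; the remote and branch names are placeholders, and it assumes the contributor allowed maintainer pushes):

```sh
git checkout their-feature                              # local copy of the PR branch
git rebase origin/main
git push --force-with-lease their-fork their-feature    # update the PR itself
git checkout main
git merge --ff-only their-feature
git push origin main            # GitHub sees the same commits and closes the PR
```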

                                                                                  1. 3

                                                                                    Typically the branch is on someone else’s fork, not yours.

                                                                                    1. 2

In GitHub, you can push to another person’s branch if they have made it a PR in your project. Not sure if force push works, never tried. But I still feel it’s a hassle: you need to set up a new remote in git.

In GitLab, apparently, you have to ask the branch owner to set some checkbox in GitLab so that you can push to the branch.

                                                                                      1. 3

In GitLab, apparently, you have to ask the branch owner to set some checkbox in GitLab so that you can push to the branch.

                                                                                        That is the case in GitHub as well. (Allow edit from maintainers). It is enabled by default so I’ve never had to ask someone to enable it. Maybe it is not enabled by default on GitLab?

                                                                                        1. 1

                                                                                          I can confirm that it is disabled by default on GitLab.

                                                                              1. 28

                                                                                I think the analysis is completely off.

Alongside these tools, many things changed in programming. The complexity of systems exploded, the bar for quality dropped (mostly because software is no longer applied only in mission-critical domains where failure was not acceptable), and expectations of the value a programmer should provide in their work time skyrocketed.

This puts a lot of pressure on the individual developer or on the team. Overall, the goal of programming fundamentally changed. The mode of production also changed: bigger teams, in bigger organizations, working on bigger problems, hired from a much bigger and more varied workforce, with drives other than perfectionism, which the stakeholders certainly won’t enforce anyway. The more you grow the number of people working concurrently on a single system, the harder it is to progress.

The way software was developed in, say, the 60s survives only in very small niches and bears no resemblance to how software is developed overall, so the comparison is imho unfair.

On top of that there’s a huge misconception about the contribution of the individual and their skills to a piece of software. Software is written by teams situated in organizations that are in turn situated in networks of organizations. The informational flow is complex, and individual programmers are rarely hubs in this flow. Some are more prominent and influential than others and retain some small degree of autonomy and creativity, but they are ultimately constrained by the incredible amount of complexity they operate in. Some exceptions exist, like Terry Davis, but clearly they are not representative of the category.

This misconception arises, imho, from the extremely individualist and macho culture that has shaped IT since its early years, where the production of software is a kind of performative act, much like a sport. You’re judged by your peers and by yourself on the results of your performance, real or imagined. Only within this ideological framework does the rhetoric of the degradation of individual skills make sense. It’s not by chance that it echoes the reactionary narratives of old people celebrating an imagined “past time” to contrast with the “softness” of the present.

For sure, the two ways of writing code can condition people to think differently and lead to different behaviors and “performance” on some metrics, but the whole problem is ill-conceived.

I’m reading Evidence-based Software Engineering these days and I’ve grown more and more skeptical of all the pseudo-science that is pushed as fact to defend the cultural status quo. Most of these claims are unsubstantiated, and the low bar for accepting such naive and broad statements about software development without strong evidence should go out of fashion as soon as possible.

                                                                                1. 5

                                                                                  If an individual’s role is ultimately so context dependent that we cannot assess skill, then what is the appropriate unit of critique? The end product? And how do we sell the fact that dev performance is so context-dependent to the people who hire and fire?

                                                                                  1. 9

                                                                                    Well, we speak about “software ecosystems”. Assessing their properties seems like a more meaningful endeavor. Most of them are not doing well.

For the second question, I would say it’s political and hard to discuss without going off-topic. Anyway, salaries are always determined by a more complex network of factors than the simple expectation of performance, something that is going out of fashion anyway in favor of soft skills. I personally believe the only way out is to implement workplace democracy and let colleagues vote on your salary. Or even better, give the same salary to everyone regardless of role, skill and seniority.

                                                                                  2. 4

                                                                                    …with different drives than perfectionism, that for sure won’t be enforced by the stakeholders.

                                                                                    Programming is a job. Being good at your job means delivering what you’re asked to deliver. One can argue that you should also insert yourself into the conversation to some extent, and I agree with that, but at the end of the day it’s always going to be a compromise. Programmers who insist on any kind of “purity” probably just aren’t good at their actual jobs.

                                                                                    Software is written by teams situated in organizations that are in turn situated in networks of organizations.

                                                                                    I couldn’t agree more and, with respect to teams, I think John Wooden said it best:

                                                                                    “A player who makes a team great is more valuable than a great player. Losing yourself in the group, for the good of the group, that’s teamwork.”

                                                                                    Does a person help my team deliver software to stakeholders? If they stomp their feet and demand that we spend a week reducing the program’s memory footprint by 10% when the stakeholders don’t care, then the answer is “no”.

                                                                                    1. 3

                                                                                      Does a person help my team deliver software to stakeholders? If they stomp their feet and demand that we spend a week reducing the program’s memory footprint by 10% when the stakeholders don’t care, then the answer is “no”.

                                                                                      What if the person is demanding you spend a week improving critical security when the stakeholders don’t care?

                                                                                      1. 4

                                                                                        Short answer: no, if the stakeholders don’t want security and we can’t convince them to want security then whining about it isn’t going to change anything. Quit or get back to work.

                                                                                        Long answer: it’s really complicated. Is the team split on whether to care about security? If so, then we need some internal discussions to either get the rest of the team on board or to convince the teammate with concerns that the problem isn’t critical.

Also, who’s liable? Is it me personally? Then the answer is pretty clear: I’m not going to take the fall for something like that. But what if the stakeholders hold (hehe) liability? Well, then maybe that teammate needs to decide whether they care enough to quit in protest. Obviously we can keep bringing it up, keep forcing conversations, but at the end of the day if the people paying for the software don’t care about security, we can’t make them care.

                                                                                        Another possibility is that we find time to fix it ourselves. I’m not going to kill myself working overtime to protect some corporate behemoth from an avalanche of lawsuits or embarrassment, but maybe the stakeholder is a non-profit with an extremely limited budget trying to solve some important social problem. Or maybe I feel solidarity with the corporate behemoth’s customers. It all depends.

                                                                                        Ultimately, it’s not cut and dry. The world is messy and we just have to deal with this stuff case-by-case.

                                                                                      2. 2

                                                                                        Programming is a job. Being good at your job means delivering what you’re asked to deliver. One can argue that you should also insert yourself into the conversation to some extent, and I agree with that, but at the end of the day it’s always going to be a compromise. Programmers who insist on any kind of “purity” probably just aren’t good at their actual jobs.

Let’s flip this around a bit to demonstrate how your viewpoint, applied to another job, results in something you don’t want, and ultimately shows how little power programmers truly have in decision making with this mindset.

                                                                                        Civil engineering is a job. Being good at your job means delivering what you’re asked to deliver. One can argue that you should also insert yourself into the conversation to some extent, and I agree with that, but at the end of the day it’s always going to be a compromise. Civil engineers who insist on any kind of “purity” probably just aren’t good at their actual jobs.

Let’s imagine a civil engineer insisting on the “purity” of working out design constraints and getting a better idea of the actual loads and lifetime for a bridge. I’d consider this “purity” to be necessary, and civil engineers not doing it to be the ones that are bad at their jobs. Yet we as a profession in software seem to take every corner and cut it off not with a chisel, but a full-blown guillotine.

Delivering a bridge that will fail because we didn’t think things through isn’t doing any stakeholders any good. Doing that for software is no different; we’ve just deluded and infantilized ourselves into thinking the end result is the only thing that matters at the altar of stakeholders. If there isn’t a conversation about the shortcuts that have been taken, then how can they properly evaluate the deliverable? If that 10% memory footprint is the difference between success and failure, and your team knows it, you’re doing nobody any good by ignoring it.

I agree with Jonathan Blow on a lot of modern programming: we are deluding ourselves if we think we can still make robust software that uses the least resources and fits the domain well. If we keep saying “nobody does it because nobody will pay for it, or it’s not really needed”, do we really think we can do it at all if we never do it? All we end up doing is building shitty bridges that barely get one car across in steady winds, patching them up when they inevitably fall down, and calling it “engineering” to fix them to be closer to a design that could stand a lifetime. Then we throw it away with the next fad and build a new bridge in the new fad, just like the first bridge that fell down, and relearn our mistakes and lack of foresight yet again.

                                                                                        Most programming should be boring, and damn near bulletproof. What we have today is to me the equivalent to making obsidian arrowheads. We have no clue what we’re doing and seem to as a profession ignore things that other fields have done to improve their fields. We also have no real stakes in what we do and seem to lack the wherewithal to push back on stakeholders to say: I can give you something now, but it won’t last and the long term cost will be more than the short term cost of doing it right. We constantly get fat off of the seed corn for tomorrow in this profession and wonder why our time is starved later in our projects and we have so little corn to plant to make tomorrow great.

                                                                                        Sorry for the rant, I’m seeing this everywhere and had to get my thoughts over the past two years into a more concrete form.

                                                                                        1. 4

                                                                                          If there isn’t a conversation about the shortcuts that have been taken, then how can they properly evaluate the deliverable?

                                                                                          This is exactly what I meant when I said that there should be compromise. A conversation seldom ends in one side or the other getting exactly what they originally (said they) wanted. Even in Civil Engineering the engineers might come back and say “look, you’re asking for a two lane bridge, but the traffic patterns on this road will mean traffic jams within 10 years if we don’t bump it up to four lanes”. But if there’s only a budget for a two lane bridge, then that may be what gets built.

                                                                                          If that 10% memory footprint is the difference between success and failure, and your team knows it, you’re doing nobody any good by ignoring it.

                                                                                          When I said “purity” I was thinking of people who want to do things a certain way not because it will meaningfully improve the product, but because they believe that is the “right” way to do it. And I specifically said that the stakeholders don’t care about the 10% memory gain. That means that either the customers don’t care (because they won’t notice it) or the business won’t pay for it and is willing to suffer the consequences. Ideally, the team told the stakeholders what the performance would look like, and hopefully even presented benchmarks and demonstrations. But at the end of the day, it’s a conversation, and just because you have a conversation doesn’t mean you get your way. We shouldn’t infantilize ourselves, but we also shouldn’t infantilize the non-developers we work with by pretending that we’re always “right”.

                                                                                          As an aside, I think it’s interesting that you brought up Civil Engineering because that’s one of the most regulated branches of engineering (at least in the US). I’m actually 100% fine with the government stepping in and laying down rules for Software Engineering and who is allowed to engage in it, particularly for areas where people might be injured or killed. But there’s a libertarian streak a mile wide in this industry and those folks want to keep it the “wild wild west”. If Civil Engineering didn’t have rules and the accompanying liability for breaking those rules there’s no way things would be as “pure” (to use the term the way you used it) as they are.

                                                                                        2. 1

Being good at your job means delivering what you’re asked to deliver.

Debatable. Not all organizations are vertical authoritarian machines where orders are transferred downwards. It’s a very ineffective way of organizing things. Actually, for programmers it is quite the opposite: our power as workers and our desire for autonomy forced a great deal of organizational-science progress to make organizations horizontal, to some degree. But as you say, this autonomy should be paired with responsibility and be driven by intent, not by hedonism.

Just because some (most?) programmers abuse this autonomy, it’s not a good argument to become sheep and surrender to the will of the shepherd.

                                                                                          1. 2

                                                                                            I see your point. I guess what I had in mind was that you can be “asked” to deliver something by the marketplace, customers, your team through a group decision, etc. I agree wholeheartedly that a vertical command-and-control pattern is almost never the right way to run an organization.

                                                                                      1. 3

We’re living in an age where dark mode is becoming a must. Soon an on/off switch will be present pretty much everywhere. It reminds me of the web’s transition to HTTPS.

                                                                                        1. 2

Dark mode is an aesthetic choice. Many people seem to love it; some hate it. The arguments that it’s objectively better or reduces eye strain are pretty suspect, IMHO. My own experience is that it exacerbates my (normally mild) astigmatism by making the pupils open wider, which reduces depth of field. The only time I find it useful is when reading in bed after lights-out to avoid disturbing my partner.

                                                                                          Modern browsers have a way for pages to detect whether dark mode is enabled in the OS, right? Can’t lobste.rs use that?
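
(I believe it’s the prefers-color-scheme media query; a minimal sketch of honoring it, with purely illustrative selectors and colors:)

```css
:root { --bg: #ffffff; --fg: #111111; }

/* Applied automatically when the OS/browser reports a dark preference */
@media (prefers-color-scheme: dark) {
  :root { --bg: #16161d; --fg: #eeeeee; }
}

body { background: var(--bg); color: var(--fg); }
```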

                                                                                          1. 3

                                                                                            If you’re worried about eye strain, just dial back the brightness. It always amazes me how we collectively make things harder by trying to make things easier and prettier:

• Old way: monitors have physical brightness and contrast knobs. New way: buttons are ugly and we need a ton of other controls that no-one uses, so we hide everything behind invisible buttons and menus that are hard to operate. Result: it is a hassle to adjust the most basic things on a screen and no-one knows how to do it anymore.
• Old way: let’s define only the structure of the document and let everyone display it the way they want. New way: we want pixel-perfect control even though everyone has different pixels. Result: we need new layouts (or apps!) for every device and too bad if you don’t like the one we give you.

                                                                                            If you look for it, you see this pattern everywhere.

                                                                                          2. 1

                                                                                            I’m curious, what do folks use a dark mode for? A few apps have started defaulting to inverted brightness and it’s sometimes attractive, but I haven’t found a personal use case.

                                                                                            1. 1

                                                                                              I like how it looks better, and often turn on dark mode for various programs I use if available. It’s not super important to me - I didn’t mind that lobsters didn’t have it, for instance.

                                                                                            2. 1

Dark mode a must? Dark mode is basically the computer-nerd equivalent of “this season’s in color is pink” in fashion; it’s at best stylistic. HTTPS has technical reasons to exist; dark mode is “let’s use different colors to display things, just dark, because that’s the new fashion”. I’m not sure they’re even close to similar. One is simply color themes.

                                                                                              1. 2

                                                                                                You’ve misunderstood my comment. I am not comparing “dark mode” vs “https” technical implementation, just a similar trend happening in web.

                                                                                            1. 15

                                                                                              Bad title, but actually interesting read. Except this part:

                                                                                              A good webcam, in my opinion, is a device that provides a decent sound quality

                                                                                              Maybe I’m just realistically pessimistic but years of experience with people and their shitty audio setups made me swear to never ever use a room mic myself for a video or audio call.

                                                                                              1. 11

                                                                                                When using a MacBook Pro for video calls I can get away with using the inbuilt mic as others tell me I come across very clearly and with no background noise / echo etc. I’ve noticed the same with other callers on Apple devices.

                                                                                                Those on our company’s (expensive) Dells all need to wear headsets to be heard properly and avoid noise / echo.

                                                                                                I don’t know what Apple is doing with their mics and processing of the audio but it works.

                                                                                                1. 11

                                                                                                  The Apple mic array is doing a ton of digital signal processing behind the scenes – identifying voice-like noise and “steering” to it using phased array techniques. That stuff is really cool but expensive to develop.
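
The basic trick (delay-and-sum beamforming) is easy enough to sketch; everything below is illustrative, with made-up mic positions and a fixed steering angle, and the real product layers voice detection, per-band filtering and echo cancellation on top of it:

```rust
// Delay-and-sum sketch: align the three mic signals for one steering angle
// and average them, so sound from that direction adds up coherently while
// off-axis noise partially cancels. Positions and sample rate are made up.
const SPEED_OF_SOUND: f32 = 343.0;   // m/s
const SAMPLE_RATE: f32 = 48_000.0;   // Hz

fn delay_and_sum(mics: &[Vec<f32>; 3], mic_x_m: [f32; 3], steer_rad: f32) -> Vec<f32> {
    let n = mics[0].len();
    let mut out = vec![0.0_f32; n];
    for (m, signal) in mics.iter().enumerate() {
        // Extra path length to this mic for a wavefront from `steer_rad`,
        // converted into a whole-sample delay (a real system interpolates).
        let delay = (mic_x_m[m] * steer_rad.sin() / SPEED_OF_SOUND * SAMPLE_RATE).round() as isize;
        for i in 0..n {
            let j = i as isize - delay;
            if j >= 0 && (j as usize) < n {
                out[i] += signal[j as usize] / mics.len() as f32;
            }
        }
    }
    out
}
```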

                                                                                                  1. 2

It sounds cool, but in reality it is not that hard to develop. And they only have three mics, which gives only very crude beam-steering abilities.

                                                                                                    1. 2

                                                                                                      I wonder why other device / OS manufacturers aren’t providing something similar then. Perhaps it’s encumbered by patents or is much harder to implement if you don’t have your hands on both the hardware and OS. Windows drivers should have enough access though, I’d have thought.

                                                                                                      1. 1

                                                                                                        Crude is good enough to distinguish between the voice directly ahead of the camera and other sources. If two people are directly in front of the camera, the chance is good that both intend to be heard.

                                                                                                  2. 2

                                                                                                    I use one and it works (and I’m conscious enough of these things to have spent almost €1k on conference quality improvements, and some of it was my own money). Location is everything. My microphone is far from my keyboard, somewhat directional, and I’m alone in the room.

If I wanted to build a good-quality product I’d probably spend a lot of effort on using two or three good microphones, plus driver code and training/calibration tools, to be able to boost the voice and suppress noise sources (typing on the keyboard, construction work in the neighbouring offices, neighbours fighting, whatever). And I’d forget about 4k entirely.

                                                                                                    I’m sure 4k resolution is useful for something, but being able to count the hairs on people’s chins during video conferences is not likely to help the conference achieve its purpose.

                                                                                                    1. 2

                                                                                                      That and probably no current videoconferencing system allocates users enough bandwidth to transmit 4k anyway, even if their internet connection suffices for it.

                                                                                                    2. 2

My solution to all this is that I bought a good microphone. I have two now, a Blue Yeti and a Rode Podcaster. The former is a condenser mic, so it’s a bit finicky about room noise pickup. The latter is a dynamic mic, which is way better for meetings. I bought an arm for them as well.

My only problem with all this is that I kinda want the whole thing in a package, so I’m tempted to try to buy a new arm with in-arm cable management (I HATE CABLES DIE DIE DIE), and to then get an RPi 4 with the Camera Module v2, make this whole setup work off the boom arm, and set up OBS there to act as a USB camera passthrough for, say, 720p. I’m also wondering if I should solder up an LED light array, powered off USB, next to the camera. Lighting seems to be the biggest issue with most live meeting setups.

Then I can just use something like xpra to connect to OBS on the Pi, and all the stupid software on any system I plug this crap into will “just work” and think of the entire thing as a mic/camera, but I’ll have sane audio filters.

                                                                                                      I’ve started down this path actually but not entirely sure I want to do the entire race. It all seems like a ton of silly work for little gain. Depends how bored I get this winter.

                                                                                                      1. 1

                                                                                                        Is OBS running on a Pi really good for that? I mean, I’ve thought about building a webcam out of a Pi and a HQ Camera module (and by my suggestion a friend did so), but he used Ethernet/RTSP from the Pi, and I thought about just getting a UVC stream from the Pi and using OBS on the computer it’s plugged into.

                                                                                                        I guess the real question I have is, how well does OBS run on a Raspberry Pi?

                                                                                                        1. 1

                                                                                                          Great question. I’m not entirely sure to be honest, I have a spare 4b 8gig I can test on. But my fallback option is this: https://www.lattepanda.com/products/3.html

For 1080/720p it should be enough, and I’m also aiming to have it do all this over USB as a device, not over ethernet/WiFi, which is a huge pita. The WiFi is the only thing I’ll use, and only for having xpra run OBS so I can disconnect/reconnect to things. I might just abandon the Pi as the backbone and use x86 instead, because it’s a lot less of a pain to maybe do something that could be booted/installed off of PXE.

You could run OBS on the host computer as well instead of on the SoC; my goal here was more to have “OBS in a box hooked up to a camera and mic”. It won’t connect super fast if I power it off the host bus, and it will have to boot, but the tradeoffs seem worth it.

                                                                                                      2. 1

                                                                                                        Bad title, but actually interesting read.

                                                                                                        I’m open for suggestions on improving the title :)

                                                                                                        Maybe I’m just realistically pessimistic but years of experience with people and their shitty audio setups made me swear to never ever use a room mic myself for a video or audio call.

It is actually possible to make a good audio setup with a room mic. Of course, in some cases it is very hard, if someone is sitting in a crammed open space, but this product is supposed to be used basically at home, where it is much easier to do.

                                                                                                        1. 1

                                                                                                          I didn’t say it’s impossible, but I seem to have exclusively worked with people who don’t care about others in the past. I’m regularly the only one using a headset with a microphone, some people at least have earbuds with a non crappy mic, but environmental noise or static is more common than not. And yes, maybe I’m just grumpy because nobody seems to care a bit.

Regarding the title: I think “good” is very subjective here, especially given the many different use cases. Yes, my ThinkPad one is horrible, but for team meetings where I have the people on a 14” laptop screen, the one in e.g. 4-5 year old MacBooks is totally fine. Also I kinda like the Logitech ones (forgot the model) that were actually just 70-100€ and maybe? catered to streamers. No, it’s not 4k, but I honestly don’t see the need for that; many people I know never watch this on big enough screens to even notice.

                                                                                                          1. 2

                                                                                                            the Logitech ones (forgot the model) that were actually just 70-100€ and maybe? catered to streamers

                                                                                                            Maybe the C920 HD? They’re excellent, especially for their price. Not sure about the quality of the built-in mic, I always use a headset, but it’s overall a very solid product.

                                                                                                          2. 1

                                                                                                            I’m open for suggestions on improving the title :)

Buzzfeed it! “Top 10 reasons you can’t buy a good webcam, #10 will shock you!” I think it’s fine as-is though, but I am also annoyed that getting good video and audio even for Zoom stuff is so much more effort than I’d expected. I appreciate the people that put in the effort on calls now, though. So much background noise could be eliminated with a filter through OBS or some other audio processing, so my headphones wouldn’t let me hear every wash cycle of their clothes. (Also, why do people not mute when not talking, or do their laundry when they’re not in a meeting? But I digress.)

                                                                                                            1. 1

                                                                                                              Top 10 reasons you can’t buy a good webcam, #10 will shock you!

                                                                                                              Ughh, thanks, I hate it :)

                                                                                                              So much background noise that could be eliminated with a filter through obs or some other audio processing

When in a meeting from a PC - sure, it’s possible. When someone is in a meeting from a phone, it’s both really hard to do anything custom and there’s generally a lot more noise, because someone is walking down a busy street, or standing next to a grinding coffee machine… or doing their laundry, as you say.

                                                                                                              Seems like the only solution here is to convince people to buy some noise-cancelling headsets for their phone.

                                                                                                          3. 1

                                                                                                            At work, I have a fairly expensive VoIP phone with a speakerphone mode that I use exclusively as a microphone. It works very well in my office. In my home office, I’ve been using the microphone built into my Surface Book 2. That also works very well, though it works far better with Teams than Signal. As far as I can tell (having not looked at the code), Teams is doing some dynamic measurement to detect the latency between the sound and the speaker. This is really apparent when you use a wireless setup (for social things, I sometimes use the WiFi display functionality of my Xbox to send video and audio to the living room screen and speakers - this has about half a second latency, which Teams is fine with but Signal can’t handle at all).

                                                                                                            My webcam actually does have a microphone but I’ve not tried using it.

                                                                                                          1. 4

                                                                                                            I’d bolt that bad boy under the desk, free up some room on top for important things like coffee.

                                                                                                            1. 12

                                                                                                              Working on a Cargo plugin for packaging Rust binaries into AppImage files! Hopefully it’ll even work.

                                                                                                              1. 1

                                                                                                                Hopefully it’ll even work.

                                                                                                                You’re naming it cargo hereholdmybeer right?

                                                                                                                1. 1

                                                                                                                  I was thinking of cargo heregoes but yours is fine also.

                                                                                                                  1. 2

Heh, both bad ideas are fun things I wish existed; ignore my silliness.

                                                                                                              1. 2

Git stash scary? Git stash is a key part of how I work in git.

Make changes, decide “nah, this isn’t quite right, let’s sit on it a bit”. Stash away, give it a good name, sometime later come back to it and make it a real commit/branch and PR. Boom, done.

                                                                                                                Also good for a cheap stack of local only changes you want to reapply to new branches you create or review.
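
The basic loop, roughly (a sketch; the stash message and branch name are whatever makes sense to you):

```sh
git stash push -m "half-baked retry logic"   # name it so future-you recognizes it
git stash list                               # stash@{0}: On main: half-baked retry logic
# ...days later, turn it into a real branch with the stash applied...
git stash branch retry-logic stash@{0}
```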