1.  

    C++ is great if someone else set up all the tooling, compilers, and libraries for me.

    1. 3

      This is a really cool idea for a small widescreen laptop, where tiling left-to-right is desirable, but tiling top-to-bottom results in windows too short to be useful.

      The first tiling window manager I ever used was Ion, which unfortunately ended up in a debate between the author and distro maintainers over distributing modified and/or outdated versions of the software under the same name. Similar debates keep popping up even today.

      After Ion, I tried other alternatives such as awesome and wmii, but it wasn’t the same. Today I just use whatever window manager happens to be installed, with hotkeys to navigate to whichever window is above/below/left of/right of the currently focused window (https://github.com/cout/windowfocus). This works for me on the desktop, but I might have to try PaperWM next time I’m using linux on my laptop (I particularly like the scratch layer – Ion had a separate untiled workspace instead).

      1. 2

        iirc PaperWM started as a fork of ion3 and then they rewrote it so you might find some roots there. I’ve personally tried tons of tiling WMs but I couldn’t find a replacement for the joy of manually splitting + tabs that ion3 gave.

Then I migrated to notion, which you might enjoy (https://github.com/raboof/notion), as it’s a fork of Ion3 with fixes/improvements after Tuomov dropped development. A new version is going to be released soon without Ion’s licensed code, so hopefully distros will be willing to adopt it :)

        Edit: typos

        1. 1

          Been using notion full time for a while, it’s great. I’ve been using ion since ~ion2’s release. Long lived software is wonderful.

        2. 2

          ion3 was a good window manager, but the maintainer was difficult to get along with.

          i3wm is an entirely new window manager based on the ion3 model of subdividing windows (as opposed to the awesome/dwm model of automatically resizing all the windows when a new one appears), and it’s pretty great. I personally use it in combination with GNOME Flashback (so I have all the GNOME goodies like volume-control keys and disk automounting).

          1. 2

            I really like a lot of i3wm’s features, but I literally can’t live without key chorded full screen zoom and, while I know that integrating compiz or the like is possible, I’m not sure how to achieve it, so I went back to running KDE/Gnome.

        1. 12

          I haven’t worked with a quality focused team since ~2009, so it has nothing to do with weakness, and turning this into a moral choice that someone is making seems misplaced to me. I think it’s a capitalist choice, and yet again capitalism optimizing for nothing useful.

The “worse is better” theory winning is not some victory lap for C, but, I believe, just a consequence of the fact that consumers/clients have no other choices, and if they do, the cost and effort of switching is an almost impossible hurdle. The idea of me switching to an iPhone, or my wife switching to Android, presents an almost insurmountable pile of unknown complexity.

          1. 2

            I don’t think the article really states it as a moral choice, but rather as an emergent property of software development as it is practiced.

            1. 1

              I’m sure there’s a philosophical name for this. It’s a practice that results in morally problematic results, despite that practice not being a deliberate moral choice. Sort of like how capitalism as currently practiced fills the ocean with microplastic garbage despite nobody making a choice to do that.

              1. 5

                Hot take: most “morality” is just a matter of aesthetics. Billions of people would presumably rather be alive than not existing because a non-capitalist system is grossly inefficient at developing the supporting tech and markets for mass agriculture. Other people would prefer that those folks not exist if it meant prettier beachfront property, or that their favorite fish was still alive.

                Anyways, that’s well off-topic though I’m happy to continue the conversation in PMs. :)

                1. 8

                  Just as “software development” is a pretty broad term, “capitalism” is a pretty broad term. I wouldn’t advocate eliminating capitalism any more than I would advocate eliminating software development. The “as currently practiced” is where the interesting discussion lies.

                2. 3

                  There’s an economic name for it - externality - though economics is emphatically not philosophy.

                  1. 1

                    Sort of like how capitalism as currently practiced fills the ocean with microplastic garbage despite nobody making a choice to do that.

                    This is a classic False Cause logical fallacy.

                    Capitalism is not the cause of microplastic pollution. The production of microplastics and subsequent failure to safely dispose of microplastics is the cause of microplastic pollution.

                    Microplastics produced in some centrally-planned wealth-redistribution economy would be just as harmful to the environment as microplastics produced in a Capitalist economy (although the slaves in the gulags producing those microplastics would be having less of a fun time).

                    Further example:

                    • Chlorofluorocarbons were produced in Capitalist economies.
                    • Scientists discovered that chlorofluorocarbons are poking a hole in the ozone layer and giving a bunch of Australians skin cancer.
                    • People in Capitalist economies then decided that we should not allow further use of chlorofluorocarbons.
                    1. 3

                      Again, the key phrase here is not “capitalism”, but “as currently practiced”. Capitalism doesn’t cause microplastics, but it doesn’t stop them either. In other words microplastics are “an emergent property of capitalism as it is practiced”. You could practice it differently and not produce microplastics, but apparently the feedback mechanism between the bad result (microplastics/bloated software) and the choices (using huge amounts of disposable plastics/using huge amounts of software abstractions) is not sufficient to produce a better result. (Of course assuming one thinks the result is bad to begin with.)

                      1. 0

                        Of course assuming one thinks the result is bad to begin with.

                        That is really the heart of the matter, as far as I see it. In contemporary discourse, capitalism as a values system (versus capitalism as a set of observations about markets) does not have a peer, does not have a countervailing force.

                        I’m sure there’s a philosophical name for this

@leeg brought this up as well, but “negative externality” is in the ballpark of what you are looking for. An externality is simply some effect on a third party whose value is not accounted for within the system. Environmental pollution is a great example of a negative externality. Many current market structures do not penalize pollution at a level commensurate with the damage caused to other parties. Education is an example of a positive externality: the teachers and administrators in schools rarely achieve a monetary reward commensurate with the long-term societal and economic impact of the education they have provided.

                        Societies attempt to counteract these externalities by some degree of magnitude (regulations and fines for pollution, tax exemptions for education), and much ink is spilled in policy debates as to whether or not the magnitudes are appropriate.

To bring back my first statement: capitalism (née economic impact) is not only a values system, but is the only system that is assumed to be shared in contemporary discourse. This results in a lot of roundabout arguments, in pursuit of other values, being made in economic terms.

                        What people really wish to convey, what really motivates people, may be something else. However, they cannot rely on those values being shared, and resort to squishy, centrist, technocratic studies and statistics that hide their actual values, in hopes other people will at least share in the appeal to this-or-that economic indicator (GDP, CPI, measures of inequality, home ownership rates, savings rates, debt levels, trade imbalances, unemployment, et cetera). This technocratic discussion fails to resolve the actual difference in values, and causes conflict-averse people to tune it out entirely, thus accepting the status quo (“capitalism”). I lament this, despite being very centrist and technocratically-inclined myself.

                        Rambling further would eclipse the scope of what is appropriate for a post on Lobsters, so I will chuck it your way in a DM.

                        1. -1

                          Capitalism doesn’t cause microplastics, but it doesn’t stop them either.

I’m not sure I understand what you’re trying to say here. How is Capitalism related to the production of microplastics? Are you saying that in a better form of Capitalism, the price of the externality of microplastic pollution would be costed into its production, thus making microplastics not financially viable?

                          I’m also not sure microplastic pollution is strongly analogous to bloated software.

                          1. 3

                            I apparently chose an explosive analogy here, and now I’m fascinated by all the stuff that’s coming back.

                            But let me just try again with something less loaded…how about transportation?

                            The bad effects in the essay (wasted resources, bugs, slowness, inelegance) are a result of how we do software development. Assume for argument that most people don’t choose waste, bugs, slowness, and inelegance deliberately. Nevertheless, that’s what we get. It’s an “emergent property” of all the little choices of how we do it.

                            Similarly, most people—I hope certainly the engineers involved—don’t choose to have the NOx pollution, two-hour commutes, suburban sprawl, unwalkable communities, and visual blight that result from how we do transportation. It just happens because of how we do it.

                            So we’re all actively participating in making choices that cause an outcome that a lot of participants don’t like.

                            My point was just that there are lots of things like this, not just software development. So I figure this sort of problem must have a name.

                            (And yes, this means writing an essay about how awful the result is doesn’t do anything to fix it, because the feedback from result to cause is very weak.)

                            1. 2

                              So I figure this sort of problem must have a name.

                              Engineering. Engineering is trading off short commutes for private land. Engineering is a system of cars that get every individual acting alone where they need to go, even though getting all people at the same destinations from the same origin really calls for mass transit. Engineering is families with kids making different living and thus commuting arrangements than single people. These are all tradeoffs.

                              The ideal keyboard takes no space and has a key for everything you want to type from letters to paragraphs. Everything else is engineering. The ideal city has zero school, work, leisure, and shopping commutes for everybody. What we have instead is engineering.

                              The ideal bus line goes to every possible destination and stops there. It also takes no time to complete a full circuit. We compromise, and instead have buses that work for some cities and really don’t for others.

                1. 11

                  Huzzah, more spooky action at a distance, just what programs need. The points of contact between modules become your messages, which are essentially global in scope. And the rules may contradict each other or otherwise clash, and understanding what’s going on requires you to go through each module one by one, understand them fully, and then understand the interactions between them. This isn’t necessarily a deal breaker, but it also isn’t any simpler than any other method.

                  Interesting idea, but I’m deeply unconvinced. It seems like making an actual complex system work with this style would lead to exactly the same as any other paradigm: a collection of modules communicating through well-defined interfaces. Because this is a method of building complex machines that our brains are good at understanding.

                  1. 7

IMO this comes from the fact that easily writing/extending software that you’ve spent N years understanding, and reading that software later, are two entirely different activities, and they push your development style in different directions.

                    The ability to write software that integrates easily pushes folks to APIs that favor extension, inversion of control, etc. This is the “industrial java complex” or something like it - and it appears in all languages I’ve ever worked on. I’ve never seen documentation overcome “spooky action at a distance”.

                    The ability to read software and understand it pushes you to “if this and this, then this” programming, but can create long methods, lots of direct coupling of APIs etc. I’ve never seen folks resist the urge to clean up the “spaghetti code” that actually made for delicious reading.

It’s my opinion that this is where we should build more abstractions and tools for human software development, similar to literate programming, layered programming, or model oriented programming. One set of tools is for writing software quickly and correctly, and another set is for reading and understanding, i.e. macroexpand-1 or gcc -E style views of code for learning & debugging, and a very abstract, easy-to-manipulate view of code that allows for minimal changes for maximal behavioral extension.

                    ¿por qué no los dos?

                    1. 2

                      The points of contact between modules become your messages, which are essentially global in scope.

                      This was exactly my thought, too. It reminds me of a trade-off in OOP where I think you had to decide whether you want to be able to either add new types (classes) easily or add new methods easily. One approach allowed the one, the other approach the other. But you could not have both at the same time. Just can’t wrap my head around what exactly was the situation… (it might have been related to the visitor pattern, not sure anymore)

                      In this case, the author seems to get easy addition/deletion of functions by having a hard time changing the “communication logic” / blocking semantics (which operation blocks which other operation, defined by block and waitFor). While in the standard way the “communication logic” is easy to change, because you just have to replace && by || or whatever you need, but the addition of new functions is harder.

                      1. 3

                        That’s sometimes known as the “expression problem”.

                        https://eli.thegreenplace.net/2016/the-expression-problem-and-its-solutions/
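
To make the trade-off concrete, here is a tiny hedged sketch in Python (the shapes are my own toy example, not anything from the linked article):

    # Style 1: operations live on the classes.
    # Adding a new type (say, Triangle) is easy: one new class.
    # Adding a new operation (perimeter) means editing every class.
    class Circle:
        def __init__(self, r): self.r = r
        def area(self): return 3.14159 * self.r ** 2

    class Square:
        def __init__(self, s): self.s = s
        def area(self): return self.s ** 2

    # Style 2: operations are standalone functions that dispatch on type.
    # Adding a new operation is easy: one new function.
    # Adding a new type means editing every function.
    def area_of(shape):
        if isinstance(shape, Circle): return 3.14159 * shape.r ** 2
        if isinstance(shape, Square): return shape.s ** 2
        raise TypeError(shape)

    def perimeter_of(shape):
        if isinstance(shape, Circle): return 2 * 3.14159 * shape.r
        if isinstance(shape, Square): return 4 * shape.s
        raise TypeError(shape)

Neither style lets you add both new types and new operations without touching existing code, which is exactly the tension the expression problem (and the visitor pattern) is about.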

                    1. 7

                      I use rc(1), the only shell that doesn’t confuse me endlessly with absurd quoting problems. Now I actually enjoy writing shell scripts…

                      1. 2

                        I wrote a dotfile manager in rc and it was such a breath of fresh air. Just reading the documentation honestly made me happy, and not much documentation does that! I don’t think I could ever use it as an interactive shell though, and I still write most scripts in portable sh, but I do wish rc were more ubiquitous.

                        1. 1

                          I loved using RC but eventually gave up and use zsh (home) and bash (work).

                          1. 1

                            I use rc as my fulltime shell as well - specifically Byron’s rc which cleans up some of the silly “if not” logical things.

                          1. 18

                            Torn between “this is a clickbait title, language is irrelevant the important part is using the right algorithm” and “well, having set operations in the standard library does sure help with that”.

                            1. 9

                              I think the biggest difference is that Python allows you to focus on the task at hand, rather than plumbing like implementing linked lists, dealing with memory, etc. This makes it much easier to use the right algorithm.

The downside is that some “simple” operations in Python can be quite complex under the hood, whereas in C you always have a good idea of what exactly is happening.
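
As a hedged illustration of the first point (toy data of my own, not the article’s actual program), here is the kind of difference having set operations in the standard library makes when looking for values common to two collections:

    import random

    a = [random.randrange(10**6) for _ in range(10_000)]
    b = [random.randrange(10**6) for _ in range(10_000)]

    # Naive version: a nested scan, O(len(a) * len(b)) comparisons.
    common_slow = [x for x in a if x in b]

    # Standard-library sets: roughly O(len(a) + len(b)).
    # (Results differ slightly: the list version keeps order and duplicates.)
    common_fast = set(a) & set(b)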

                              1. 13

                                Unless you’re in the habit of looking at the generated assembly, you really /don’t/ though. C as a shorthand assembler was really only a thing on the PDP, and we have been drifting further away from that since.

                                1. 1

Right, fair enough. I meant that there are (usually) no hidden complexities in C code, and that it’s usually reasonably obvious what the computer will do (although perhaps not “exactly”). In Python, it can be quite easy to create really slow code if you don’t have good insight into how Python treats your code.

This is mostly an issue with new(ish) programmers, who have experience with only Python (or similar languages) and lack a certain “insight” into these things.
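
One concrete example of that kind of hidden cost (a generic illustration, not something from the article): both loops below “just build a collection”, but the first is quietly quadratic.

    from collections import deque

    items = range(50_000)

    out = []
    for x in items:
        out.insert(0, x)      # looks cheap, but shifts every element: O(n) per call

    out2 = deque()
    for x in items:
        out2.appendleft(x)    # deque.appendleft really is O(1)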

                                  1. 3

                                    I meant that there are (usually) no hidden complexities in C code

                                    You mean like cache misses? Branch mispredictions? Compiler optimisations you’d never think would happen due to undefined behaviour? Never mind memory safety.

                                    1. 1

                                      You mean like cache misses? Branch mispredictions?

                                      There’s no way to avoid those though, even with assembly. In that regard C’s performance profile is very similar to assembly, and I say that as someone who isn’t impressed with the machine code that modern compilers produce.

                                      The undefined behaviour is a really good point though. GCC 2.95 4 lyfe. :(

                                      1. 1

                                        There’s no way to avoid those though, even with assembly.

                                        True, but I don’t think that changes the fact that I consider “no hidden complexities in C code” to be incorrect. What you’re saying is that there’s no way to avoid it, and my point is that writing C is no silver bullet (or, IMHO, silver. Or a good bullet).

                                      2. 1

                                        I totally get your point, but arp242 talked about hidden complexities, not hidden simplifications.

                                        I’d probably rather have a cache miss than an assignment turned into a deep copy.

                                        1. 1

                                          I don’t consider anything I wrote to be a hidden simplification.

                                          Deep copies only matter if they show up in the profiler, at which point you’ll know exactly where they are.

                                          1. 2

                                            I’m sorry. (I considered compiler optimizations to be hidden simplifications).

                                2. 4

                                  I’m just mad it was written in C in the first place when sort, uniq, and cut would’ve been just as slow and probably used more memory and taken up multiple process table slots…but would’ve been more UNIXy.

                                  1. 1

                                    This is the logic that gets you C++

                                  1. 3

I want to install Arch on my 2018 MacBook Pro really bad, but I don’t think it’s a good idea after past experiences… at least in 2015 there were docs on the Arch wiki. Looks dead now.

                                    I’m going to ask the boss for a non-Mac next year…

                                    1. 0

                                      I don’t understand why they removed the Installation Guide from the Arch Wiki. It used to be such a comprehensive resource to guide you through the installation process. Now all that information is spread across multiple wiki articles and you have to somehow piece it together.

                                        1. 0

                                          It is. I meant to say it’s much less comprehensive now than it used to be.

                                          1. 3

                                            I think all the content is still there it just got split up. It’s a bit of a shame since that install page used to be mostly standalone, now it’s a bit more of a choose your own adventure that branches off at various points.

                                            1. 0

                                              Yep, exactly. It’s up to you now to piece together all the info spread across multiple pages.

                                    1. 1

                                      I use the rc shell, and I use a very simple prompt.

                                      ;

Then when you copy-paste sessions without the output, you can run them again.

                                      In my previous ZSH days, …

                                      1. 4

                                        PHP: wow, I’ll never know what any of this code does, but the html/dynamic mix is actually easier for me to read than most frameworks.

                                        Java: I’ve never had to learn java professionally yet, and that’s because someone else always already knows it. This is actually because Java is somehow teachable.

                                        Haskell: Lets individuals write LOTS of code without having to keep it all in their head - there are tons of haskell projects that are just monstrous, and it’s because of the stronger type system imo.

                                        Go: It’s amazing to me just how quickly someone with little to no understanding of the language’s semantics can whip out a working program quickly. It’s not a duck, but it is.

                                        Thanks for this post. I’ve been working on gratitude and this was a great exercise.

                                        1. 50

                                          One thing I think is useful:

                                          It doesn’t matter what you were paid at your previous gig, and don’t answer if they ask.

                                          “Since this is a different engagement, with different technical and team needs, my previous compensation is not a useful datapoint.”

                                          1. 7

                                            Also, in some places it’s not legal for them to ask (though they can still ask what salary range you want).

                                            1. 14

“My current compensation is part of the reason why I am looking into other opportunities”

                                              1. 13

                                                I would avoid saying this. It provides signal that your current pay is low and will lead to a lowball offer, which is the opposite of its intention.

                                                1. 1

Last week we had to get rid of a bunch of no-demand electronic components, so we sent the spreadsheet (with purchase prices removed) to one of the scavenger companies. The first thing they asked was what we paid for them originally.

Seriously, this is a super common tactic in purchasing, and you are a resource being purchased. They are minimizing the cost. No polite retort here will hurt your negotiating position more than revealing the figures would.

                                                2. 3

“My current compensation is part of the reason why I am looking into other opportunities”

                                                  Respectfully, I see this as an anti-pattern. Here are things potential employers might well read between the lines of this statement:

                                                  “I only value money and don’t care about the work”

                                                  “I’m a self important primadonna”

                                                  “I’m not loyal and will cut and run if things are not precisely to my liking.”

                                                  I recognize that your statement doesn’t ACTUALLY say any of these things.

                                                  1. 1

Fair points I guess… but consider that asking for your current salary is an attempt to gain negotiating leverage over you using a power imbalance. You know it and the interviewer knows it; there is absolutely no other reason to ask for it. And I mean, it’s not something you blurted out of the blue, the comp was the question they brought up in the first place. There are only so many ways to make a polite retort, and none of them is 100% safe if someone insists on reading between the lines deeply enough.

                                                    1. 2

                                                      Oh totally it’s a CRAPPY thing for a potential employer to do and should be a red flag to anyone looking, I’m just suggesting that explicitly saying that crappy salary is why you’re leaving your current gig, in my opinion, weakens your position.

                                                      YMMV.

                                                3. 4

                                                  It doesn’t matter what you were paid at your previous gig

                                                  Strong agree

                                                  don’t answer if they ask.

                                                  Or do answer, with a number that sets an expectation for future negotiations. Depends how you feel about lying.

                                                  1. 5

                                                    It can be dodgy to lie since that can be discovered, but “It would take $X to get me to leave” is probably always better than a lie and give you a lot more flexibility…

                                                    1. 3

                                                      Or do answer, with a number that sets an expectation for future negotiations. Depends how you feel about lying.

                                                      Problem with that one is that a new employer sees your old income on your P60 (in the UK at least,) with lying during an interview being grounds for dismissal.

                                                      That being said past wage shouldn’t matter to a new employer unless they are trying to lowball a potential hire. On principle I never ask during interviews I host and have in the past hired people on nearly double what they came from; usually wages are negotiated by an intermediary such as a recruiter.

                                                      1. 5

                                                        Your wage is not technically secret information in the US, but if you don’t share it yourself, there’s no plausible mechanism for a new employer to find out. Your previous employer almost certainly won’t share it, and if they do, you’ll have cause to be very upset with them. (A functional, professional HR department will confirm dates of employment and possibly job title, and nothing else.)

                                                        That said, I’m definitely more comfortable redirecting or answering “how much are you currently making” with “I’m looking to make $X” than lying outright.

                                                        1. 1

                                                          Problem with that one is that a new employer sees your old income on your P60 (in the UK at least,) with lying during an interview being grounds for dismissal.

So why dodge the question if employers have access to this information? I don’t know whether companies in the United States also have access to it.

                                                    2. 3

                                                      It doesn’t matter what you were paid at your previous gig, and don’t answer if they ask.

                                                      What if it’s required?

                                                      1. 10

                                                        Walk, if you can. There are fewer gestures more powerful than walking away for something that an HR person would believe is so small in order to convey how serious it actually is and how serious you are about your financial privacy.

                                                        When it’s been required for me for a job I was earnestly interested in seeking, I told them to put down “something absurd so we get past this hurdle” and when pressed for a real answer, I would say “one dollar” or “ten million dollars” to make it look like a typo on their part.

                                                        Also, I’d remind them that asking current salary is illegal in several states and there’s a bill in almost every state legislature now that would outlaw it.

                                                        1. 1

                                                          For entry level folks at big big tech co’s, they’ll let them walk away, and offer to pay them much less if they don’t make up counter offer numbers.

                                                          So to me the answer is obvious, and representative salary numbers aren’t hard to find these days.

                                                          1. 2

                                                            Entry level is a whole different game. You have essentially zero leverage at that point. Given that nearly half of job offers are at or near entry-level, I feel like these posts really should distinguish between the kinds of advice given.

                                                        2. 2

                                                          Tell them you have an NDA

                                                      1. 0

In the era of a clusterfsck of dozens of different platforms, creating a language which can’t easily produce a dependency-free binary that can run on a customer’s machine without any sort of additional runtime bring-up effort (even an “installation”) can’t be called “acceptable”.

The same goes for Python, except its interpreter has been available by default on all three major desktop platforms since last month (thanks Microsoft), at least until the next major macOS release (thanks Apple!).

                                                        1. 19

While it’s true that Racket can’t produce a single statically-linked binary for your program (assuming that’s what you mean by a dependency-free binary), it can certainly produce a self-contained distribution of your program by packing your program and the Racket runtime into a single executable along with all of its dependencies. This is how I distribute all of my web applications and it works out nicely.

                                                          https://docs.racket-lang.org/raco/exe.html

                                                          https://docs.racket-lang.org/raco/exe-dist.html

                                                          1. 2

                                                            Do you have any end-to-end examples that you can share of using these tools to build a self-contained web app?

Thanks either way (the links are a good start)!

                                                            1. 12

                                                              It really doesn’t take much beyond having an app with a main submodule as its entrypoint and then running

                                                              raco exe -o app main.rkt
                                                              

                                                              followed by

                                                              raco distribute dist app
                                                              

                                                              Any libraries referenced by any of the modules in the app as well as any files referenced with define-runtime-path will automatically be copied into the distribution and their paths updated for the runtime code (no need for any special configuration files (and, especially, no need for a MANIFEST.in, which, if you’ve ever tried to do this with Python you might know is a horrible experience)).

                                                              For Linux distributions (since I am on macOS), I run the same process inside Docker (in CI) to produce a distribution. Here are my test-app.sh and build-app.sh scripts for one of my applications:

                                                              The raco koyo dist line you see in the second file is just a combination of the two commands (exe and distribute). In fact, if you want to give this a go yourself, then you can use my yet-to-be-fully-documented koyo library to create a basic webapp from a template and then create a distribution for it:

                                                              $ raco pkg install koyo
                                                              $ raco koyo new example  # think "rails new" or "django startproject"
                                                              $ cd example
                                                              $ less README.md  # optional
                                                              $ raco koyo dist
                                                              $ ./dist/bin/example  # to run the app
                                                              

                                                              Hope that helps!

                                                              P.S.: Here is the implementation for koyo dist and here is the blueprint (template) that is used when you run raco koyo new.

                                                              1. 2

                                                                Thanks, this was just the kind of example I was hoping for.

                                                          2. 4

                                                            can’t be called “acceptable”.

                                                            To you, of course. Even the go nerds moved on to docker for deployments, you should consider it - I use docker for python codebases to manage things without needing to remember the exact invocation of venv, pip, etc.

                                                            However, for me raco exe has been more than enough. Have you tried it?

                                                            https://docs.racket-lang.org/raco/exe.html

                                                            1. 7

                                                              [edit, spell roost correctly]

                                                              I’m not sure which go nerds you’re referring to. Perhaps I’m the exception that proves the rule, or perhaps the go community’s more diverse than you’ve seen. I love the fact that terraform, hugo, direnv, and a big handful of my other tools (large and small) are simple single file executables (ok, terraform is bigger than that, but…). It’s one of the things that attracts me to the language.

                                                              I’m burnt out on solving a problem at build time and then having to solve it again each time I install an application (PERL5LIB, PYTHONPATH, LD_LIBRARY_PATH, sigh…). Thank goodness for Spack for my work deployments.

I’ve found Docker to be useful in the small (”I use Docker on my laptop to X, Y, and Z.”) and in the big (”We have an IT team that wrangles our Docker hosting servers.”). For the stuff in the middle (”Dammit Jim, I’m a bioinformatician, not a sysadmin!”) it turns into a problem on its own.

If you know how to use venv, pip, etc. to build the Docker image, you could use them to do the deployment (though not for free…). I’ve seen many situations where people didn’t understand the Python tool chain but could hide the mess from review, at least until it came home to roost.

                                                              1. 5

                                                                I agree with you. I build lots of tiny, glue-style Go tools (mostly for my coworkers on my ops team), and somebody always ends up contributing a Dockerfile.

                                                                I still prefer

                                                                ./ldap_util --validate user1
                                                                

                                                                to

                                                                docker run --rm -it ldap_util:latest -e "USER_TO_VALIDATE=user1"
                                                                
                                                                1. 1

                                                                  I just think of docker images as universal server executables, which makes it easier to accept docker as a whole.

                                                                2. 2

                                                                  I don’t think it’s bad that they have executables, but installing racket is very simple, and most of your complaints are actually places where python & perl are much worse than racket.

This all sounds like you haven’t tried racket and are ragging on a general complaint that, in racket, isn’t nearly the same problem, without having worked with it or researched it.

                                                                  1. 1

                                                                    I think that you’re replying to my grumpy comments above….

                                                                    Most of that grumpiness was meant to be targeted at Docker and the anti-pattern of using it to hide a mess rather than clean the mess up; I’ve spent a lot of time (though it has earned me a fair bit of income) cleaning up problems downstream of Docker solutions. I made a similar comment somewhere else here on Lobste.rs that sums up my feelings; I’ve seen Docker used effectively in the small (e.g. on individual machines) and in the large (sites with savvy teams and resources to invest in keeping everything Docker-related happy) but the middle seems to run into problems.

Other than the grumpiness above, I really don’t mean to rag on Racket, I’ve seen some neat things built with it (e.g. Pollen).

You’re right that I haven’t spent much time with racket; a big part of that is burnout from installing things that require finding and installing dependencies at “install-time”.

I’m excited by @bodgans’ nicely packaged demo of distribute earlier in the thread.

Are there any examples of tools written in racket (or its various sub-languages) that have been packaged up for installation by tools like Homebrew or Spack or …?

                                                              1. 21

                                                                I only recently came across his Twitter account and started following him (he had some interesting ideas about and experiments with TiddlyWiki). He seemed like a passionate and enthusiastic technology lover, which was a good enough reason to follow him.

                                                                And it is only just a few minutes ago that I discovered that Joe is the guy with the moustache from the famous Erlang: The Movie video…. I had not pieced those two things together.

                                                                So I knew nothing about his history, just one little thing he was working on/interested in. That may make me seem ignorant, but I share this because it goes to show that the light that shines in people like Joe was a genuine light of curiosity and sharing. Bright enough to attract people like me.

                                                                1. 4

                                                                  I started using TiddlyWiki because of his enthusiastic posts on it, and I have to say I’m incredibly happy with it. It’s great to be able to own your own knowledge repository, and to be able to tag things appropriately to distill versions of the TiddlyWiki that you may want to share in certain venues.

                                                                  1. 1

                                                                    Do you have any suggestions for using tiddlywiki on mobile?

                                                                    The save workflow is already pretty terrible now on modern browsers, and I never figured out syncing. I really liked the idea of it, and am always curious how people use it.

                                                                1. 2

With Emacs I just do C-x 8 RET, but I wish it would show me the emoji I was choosing as well as its name. If anyone knows how to do that let me know.

                                                                  1. 28

                                                                    This is the content I come here for. Unix knowledge lost to the annals of tech bro culture. Sometimes the solutions are readily available, but you’ve just never heard of the tool before because it’s not shiny and trendy.

                                                                    1. 29

The reason people stopped using rdist and rsync et al. took over was that rdist used to not support SSH or diffs. It appears it does now, but I last looked a long time ago. I think you may be ignoring the history of how we actually got here - it’s called shitty late 90’s sysadmin patterns, and ansible is a great solution to that life.

                                                                      rdist being unused has nothing to do with shine or trends, but humans and the very history you lament no one remembers.

                                                                      1. 0

                                                                        And to think, if we got rid of it we could make more room for news and content marketing!

                                                                      1. 6

Come on guys! Not everyone is using Emacs, stop suggesting they use org-mode!!!

Suggest they switch to Emacs first! Then org-mode!

                                                                        1. 1

                                                                          Haven’t tried it, but there’s an org-mode extension for vscode: https://github.com/vscode-org-mode/vscode-org-mode

                                                                          1. 1

btw, a combo of org-mode + deft is good, except for one thing: deft does not allow me to create the file in a folder of my choice. Does anybody have a solution for this?

                                                                            1. 1

                                                                              When you say “folder of my choice” do you mean a different folder from deft-directory? The problem would then be how does deft know how to search for it - I’m not sure I see how that’s better than just doing find-file.

                                                                              However, I have customized my deft usage a little, it may or may not help you:

                                                                              https://github.com/codemac/config/blob/master/emacs.d/boot.org#search-through-my-_notes-directory

                                                                              1. 1

no, it’s just a subfolder inside the deft root folder. For example, I have deft configured at ~/notes and I have ~/notes/emacs, ~/notes/haskell,… Creating a new file in deft would make a file at ~/notes, and I want to specify the subdir for it.

                                                                                But hey the cm/deft-new-file-named is awesome, I use the same template for my notes!

                                                                                1. 1

                                                                                  You may be able to do something similar where you have a record/capture function that has you choose/create a new directory, then enter the note name (or maybe just parse on a / for the directory/filename).

                                                                                  As far as searching it back with deft, this looks worse :( there are ~30 references to the deft-directory variable. I imagine most could be wrapped up in a function that does “expand-file-name” on a list of directories, but you’d basically be implementing multi-deft.

                                                                                  If you look below in that same section, I have a hook for cm/org-notes-search to use org’s searching. It requires you write more valid regexes, and it’s slower than deft for sure - but it works on an arbitrary org-agenda-files list, which can be multiple directories. You may end up being able to modify a capture + search function for yourself pretty easily.

                                                                          1. 4

                                                                            If you are doing request/response services that rarely mutate the request buffer, consider using other serialization methods so you can get zero-copy performance.

                                                                            1. 2

                                                                              What do you mean zero-copy performance? Zero copies of what?

                                                                              1. 14

                                                                                When protobufs are unpacked, they’re copied from the serialized binary stream into structs in the native language. FlatBuffers are an alternative, also by Google, that don’t need to be unpacked at all. The data transmitted over the network is directly useable.

                                                                                Interestingly, there are also some zero copy, zero allocation JSON decoders, like RapidJSON. Its methods return const references to strings directly in the source buffer rather than copying them out. But of course it still needs to parse and return numbers and booleans, rather than using them directly from the message like FlatBuffers.

                                                                                The biggest problem with copying is copying large chunks of binary data out of the message. Suppose you wanted to implement a file storage API using gRPC. Right now to handle a read call the server would have to copy the data into memory, copy it into the serialization buffer, and send it. It would be much better to avoid that extraneous copy for serialization.

                                                                                Our internal protobuf implementation handles this with something called cords—essentially large copy-on-write shared buffers—but cords aren’t open source yet. You can see references to ctype=CORD in the protobuf open source code and docs, and there’s a little bit of discussion here on the public mailing list.

                                                                                1. 2

+1 to this. In a real-world test case with ~webscale~ traffic, the heap fragmentation caused by unserialising ~10,000 protobufs per minute was enough to inexorably exhaust available memory within minutes, even with jemalloc and tuning to minimise fragmentation, and after doubling the memory available a few times to check that it wouldn’t cap out. I kept bumping into cord references online and wishing they were part of the open-source implementation.

                                                                                  Swapped out protobuf for a zero-copy solution (involving RapidJSON! :D) — which meant swapping out gRPC — and memory use became a flat line. We’ve become somewhat avoidant of gRPC since this and some other poor experiences.

                                                                                  1. 4

                                                                                    That’s weird, 10k protobufs per minute isn’t very many. As you might imagine, we do a lot more at Google and don’t have that problem.

                                                                                    Since you mention cords, were these protobufs with large strings?

                                                                                    What did you tune in jemalloc? Was this in a containerized environment? Did you limit the max number of arenas?

                                                                                    1. 3

                                                                                      Since you mention cords, were these protobufs with large strings?

                                                                                      Yes – the documents were about as simple as it gets, two strings. One huge, one tiny. The response to most requests was a repeated string, but we found that returning an empty array didn’t affect the heap fragmentation – just parsing the requests was enough.

                                                                                      What did you tune in jemalloc?

                                                                                      Honestly, I tried a bit of everything, but first on the list was lg_extent_max_active_fit, as well as adjusting the decay options to force returning memory to the OS sooner (and so stave off the OOM killer). It performed much better than the default system malloc, but increasing the traffic was enough to see the return of the steady increase in apparent memory use.

                                                                                      (At any point in time, turning off traffic to the service would cause the memory use increase to stop, and then after some minutes, depending on decay options, memory would return to baseline. I mention this explicitly just to make sure that we’re 100% sure there was no leak here – repeated tests, valgrind, jemalloc leak reporting, etc. all confirmed this.)

                                                                                      Was this in a containerized environment?

                                                                                      Yes, Kubernetes. This does complicate things, of course.

                                                                                      Did you limit the max number of arenas?

                                                                                      No, I didn’t – the stats didn’t give me any off feelings about how threads were being multiplexed with arenas. (Things looked sensible given what was going on.) I did try running jemalloc’s background thread, but as you might expect, that didn’t do much.

                                                                                      1. 2

                                                                                        Ah. I ask about arenas because of this problem. In that example it happened with glibc, but the same could happen with jemalloc.

                                                                                        I ask about containers because max arena count is often heuristically determined from core count, and containers expose the core count of the host system. You can easily run e.g. a 4 core container on a 40 core server and container-unaware code will incorrectly believe it has 40 cores to work with. I believe jemalloc defaults to 4 arenas per core, so 160 arenas in that example. That could massively multiply your memory footprint, just as it did in the linked post.
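
For anyone who wants to check this from inside a container, a rough Python sketch (Linux-only; note it only reflects cpuset-style restrictions, not CFS quota limits, which is part of why container-unaware code gets this wrong):

    import os

    print(os.cpu_count())                # typically reports the host's core count
    print(len(os.sched_getaffinity(0)))  # cores this process is actually allowed to run on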

                                                                                        If you didn’t notice a surprisingly large amount of arenas in the stats, that probably wasn’t the issue.

                                                                                        At Google all binaries are linked with tcmalloc. I don’t know whether that matters, but it’s another possible difference.

                                                                                        If parsing empty protobufs was enough to cause memory fragmentation, I doubt cords would have made a difference either. But I agree, I wish they were open source. I’m sure they’re coming at some point, they just have to be detangled from Google internal code. That’s the whole point of Abseil, extracting core libraries into an open source release, so Google can open source other things more easily.

                                                                                        1. 1

                                                                                          Aaaah, ouch, yes, that makes sense; that could easily have bitten me, and I just got lucky that our machines had only 4 cores. I do wonder about tcmalloc.

                                                                                          If parsing empty protobufs was enough to cause memory fragmentation, I doubt cords would have made a difference either.

                                                                                          I may have been a little unclear – we were never parsing empty protobufs, always valid (full) requests, but we changed it so we returned empty/zero results to the RPCs, in case constructing the response protobufs was responsible for the memory use. So it’s possible cords would have helped some, but I have my doubts too.

                                                                                          Abseil looks neat! I’m glad such a thing exists.

                                                                                        2. 2

                                                                                          Apache Arrow uses gRPC with a message effectively similar to yours: some metadata and a giant binary blob. It is possible to use zero-copy:

                                                                                          https://github.com/apache/arrow/blob/master/cpp/src/arrow/flight/serialization-internal.h

                                                                                          1. 1

                                                                                            Whew! That is interesting. Thank you for the link, I’ll digest this. Unfortunately the project my experience was with is dead in the water, so this will have to be for the future.

                                                                                  2. 1

                                                                                    The contents (or portions thereof) of the input buffer.

                                                                                    As an example, if what you’re parsing out of the buffer is a collection of strings (typical of an HTTP request, for instance), zero-copy parsing would return pointers into the buffer, rather than copying sections of the buffer out into separately-allocated memory.

                                                                                    It’s not always beneficial (for instance, if you keep parts of the request around for a long time, they will force you to keep the entire buffer allocated; or if you need to transform the contents of the buffer, separate allocations will be required anyway), but in the right circumstances, it can speed parsing significantly.

                                                                                    Neither JSON (due to string escaping and stringified numbers) nor protobuf (varints, mostly) is terribly well-suited to zero-copy parsing, but some other serialization formats (Cap’n Proto being the one I’m most aware of) are specifically designed to enable it.
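
                                                                                    To make that concrete, here’s a toy sketch of my own (parse_headers and the Header struct are made up for illustration) that pulls header-like "name: value" lines out of a buffer as std::string_views pointing back into the original allocation:

                                                                                    ```c++
                                                                                    #include <cstdio>
                                                                                    #include <string_view>
                                                                                    #include <vector>

                                                                                    struct Header {
                                                                                        std::string_view name;   // points into the request buffer, no copy
                                                                                        std::string_view value;  // ditto
                                                                                    };

                                                                                    // Parse "name: value\n" lines out of `buf` without copying any bytes.
                                                                                    // The returned views are only valid while `buf` stays alive, which is
                                                                                    // exactly the trade-off described above.
                                                                                    std::vector<Header> parse_headers(std::string_view buf) {
                                                                                        std::vector<Header> out;
                                                                                        while (!buf.empty()) {
                                                                                            size_t eol = buf.find('\n');
                                                                                            std::string_view line = buf.substr(0, eol);
                                                                                            buf.remove_prefix(eol == std::string_view::npos ? buf.size() : eol + 1);

                                                                                            size_t colon = line.find(':');
                                                                                            if (colon == std::string_view::npos) continue;
                                                                                            std::string_view name = line.substr(0, colon);
                                                                                            std::string_view value = line.substr(colon + 1);
                                                                                            if (!value.empty() && value.front() == ' ') value.remove_prefix(1);
                                                                                            out.push_back({name, value});
                                                                                        }
                                                                                        return out;
                                                                                    }

                                                                                    int main() {
                                                                                        const char request[] = "Host: example.com\nContent-Length: 42\n";
                                                                                        for (const auto& h : parse_headers(request)) {
                                                                                            std::printf("%.*s = %.*s\n",
                                                                                                        static_cast<int>(h.name.size()), h.name.data(),
                                                                                                        static_cast<int>(h.value.size()), h.value.data());
                                                                                        }
                                                                                    }
                                                                                    ```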

                                                                                    1. 1

                                                                                      AFAIK, the binary format of protobuf allows you to do zero-copy parsing of string/bytes fields?

                                                                                      1. 1

                                                                                        Yes, definitely; string and bytes fields’ data is unchanged and whole in the serialised form.
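
                                                                                        To illustrate why, here’s a hand-rolled sketch of the wire format (read_varint and find_bytes_field are made up for illustration; this is not the protobuf library API, and most wire types and error handling are skipped): a length-delimited field is just a varint length followed by the raw bytes, so a parser can hand back a view into the original buffer.

                                                                                        ```c++
                                                                                        #include <cstdint>
                                                                                        #include <cstdio>
                                                                                        #include <optional>
                                                                                        #include <string_view>

                                                                                        // Decode a base-128 varint; advances `in` past the consumed bytes.
                                                                                        std::optional<uint64_t> read_varint(std::string_view& in) {
                                                                                            uint64_t value = 0;
                                                                                            for (int shift = 0; !in.empty() && shift < 64; shift += 7) {
                                                                                                uint8_t byte = static_cast<uint8_t>(in.front());
                                                                                                in.remove_prefix(1);
                                                                                                value |= static_cast<uint64_t>(byte & 0x7f) << shift;
                                                                                                if ((byte & 0x80) == 0) return value;
                                                                                            }
                                                                                            return std::nullopt;  // truncated or overlong
                                                                                        }

                                                                                        // Return a view of the payload of the first length-delimited field
                                                                                        // (wire type 2) whose field number matches `field`. The bytes of a
                                                                                        // string/bytes field appear verbatim in the encoding, so no copy is
                                                                                        // needed: the view aliases the input buffer.
                                                                                        std::optional<std::string_view> find_bytes_field(std::string_view msg,
                                                                                                                                         uint32_t field) {
                                                                                            while (!msg.empty()) {
                                                                                                auto tag = read_varint(msg);
                                                                                                if (!tag) return std::nullopt;
                                                                                                uint32_t field_number = static_cast<uint32_t>(*tag >> 3);
                                                                                                uint32_t wire_type = static_cast<uint32_t>(*tag & 0x7);

                                                                                                if (wire_type == 2) {  // length-delimited: bytes, string, sub-message
                                                                                                    auto len = read_varint(msg);
                                                                                                    if (!len || *len > msg.size()) return std::nullopt;
                                                                                                    std::string_view payload = msg.substr(0, *len);
                                                                                                    if (field_number == field) return payload;
                                                                                                    msg.remove_prefix(*len);
                                                                                                } else if (wire_type == 0) {  // plain varint: just skip it
                                                                                                    if (!read_varint(msg)) return std::nullopt;
                                                                                                } else {
                                                                                                    return std::nullopt;  // other wire types omitted for brevity
                                                                                                }
                                                                                            }
                                                                                            return std::nullopt;
                                                                                        }

                                                                                        int main() {
                                                                                            // Field 1, wire type 2, length 5, "hello" -- i.e. `string f = 1;`
                                                                                            const char encoded[] = "\x0a\x05hello";
                                                                                            std::string_view msg(encoded, sizeof(encoded) - 1);
                                                                                            if (auto v = find_bytes_field(msg, 1))
                                                                                                std::printf("field 1 = %.*s\n", static_cast<int>(v->size()), v->data());
                                                                                        }
                                                                                        ```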

                                                                                1. 5

                                                                                  This post doesn’t address the massive human side of this problem. I have worked at several of the companies listed, and software usage is mostly a social phenomenon.

                                                                                  I don’t know of any of the companies OP listed that have used BSD for their newest platforms, and in some cases (NetApp, Cisco) the BSD usage was not a technical choice the company made; it came in through acquisitions. Curiously, Isilon isn’t mentioned: they were big BSD committers, but again, most of those people have moved on to companies/products that use Linux.

                                                                                  The big companies that contributed significantly to BSD aren’t relevant anymore - this is your biggest hurdle to making BSD relevant. If you had all the features listed implemented magically today, would companies pick BSD? If not, what would your actual strategy be?

                                                                                  1. 1

                                                                                    I recently started using git-worktree more and more. It basically allows you to create a second “view” on a cloned repository which has a different branch checked out: https://git-scm.com/docs/git-worktree

                                                                                    1. 1

                                                                                      I had my own crappy git-worktree implementation for a while based on clones. Cool to see it fully implemented in git! Thanks for the link, I’m not sure I knew this existed.

                                                                                      As a fan of filesystems, I always like branches having their own directories; it’s easier for my brain, my text editor, my shell, etc.

                                                                                    1. 5

                                                                                      Does anyone here use recutils? I think I hear about them every few years, but never use them for anything.

                                                                                      1. 1

                                                                                        The Guix distribution uses it, for package descriptions maybe? I forgot, sorry.

                                                                                        1. 3

                                                                                          No, they use S-Expressions (see package archive). Would be a shame if they used Lisp without making use of such a fundamental feature.

                                                                                          1. 3

                                                                                            They print the package descriptions on the CLI as recutils records.

                                                                                            1. 1

                                                                                              Oh, didn’t notice that. I guess I was confused by the term “package descriptions” in the context of a functional/declarative package manager.

                                                                                              1. 3

                                                                                                That’s on me, I was in a hurry when I wrote that. Sorry!