1. 5

    Where YAML gets most of its bad reputation is actually not from YAML itself but from projects (to name a few: Ansible, Salt, Helm, …) that shoehorn a programming language into YAML by adding a template language on top, and then pretend the result is declarative because it’s YAML. YAML + templating is as declarative as any language that has branches and loops, except that YAML wasn’t designed to be a programming language and it’s rather poor at being one.
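
    To make that concrete, here’s a hedged sketch in Helm-style Go templating (the value names are made up); note the loop and the branch living inside what is nominally a declarative document:

    ```yaml
    # a template that is a program wearing YAML's clothes
    {{- range .Values.backends }}
    - name: {{ .name }}
      {{- if .tls }}
      port: 443
      {{- end }}
    {{- end }}
    ```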

    1. 2

      In the early days, Ant (the Java build tool) made this mistake. And it keeps getting made. For simple configuration, YAML might be fine (though I don’t enjoy using it), but there comes a point where a programming language needs to be there. Support both: YAML (or TOML, or even JSON) and a programming language (statically typed, please; don’t repeat the mistake Gradle made in using Groovy – discovery is awful).

      1. 4

        I’m very intrigued by Dhall though I’ve not actually used it. But it is, from the github repo,

        a programmable configuration language that is not Turing-complete

        You can think of Dhall as: JSON + functions + types + imports

        it sounds neat
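
        For a taste, here’s a tiny sketch of that combination (the record and function names are invented, and I haven’t run this through a Dhall interpreter):

        ```dhall
        -- functions + types layered over JSON-like records
        let mkHost = \(name : Text) -> { host = name, port = 8080 }
        in  { servers = [ mkHost "alpha", mkHost "beta" ] }
        ```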

        1.  

          There is also UCL (Universal Config Language?), which is like nginx config + JSON/YAML emitters + macros + imports. It does some things that bother me, so I stick to TOML, but it seems to be gaining some traction in the FreeBSD world. There is one thing I do like about it: there is a CLI for getting/setting elements in a UCL file.

      2.  

        Yes! This is one of the reasons I’m somewhat scared of people who like Ansible.

        1. 1

          Yep! People haven’t learned from mistakes. Here’s a list of XML based programming languages.

        1. 3

          Not a bug. It cuts down on reader confusion, especially as replies come in.

          1.  

            Does the backend actually store all of the comment revisions? Is it possible to see the diff on comments?

            1.  

              Nope.

            2.  

              Thanks for the clarification!

            1. 3

              Java is a language, while Node is a runtime. Node should be compared against the JVM because each platform can be targeted by different languages. For example, I can target both Node and the JVM with Clojure. In that scenario the problems regarding locking threads don’t exist because Clojure is designed to be thread safe and it provides tools, such as atoms, for working with shared mutable state.

              My experience targeting both the JVM and Node, is that the JVM provides a much simpler mental model for the developer. The JVM allows you to write predominantly synchronous code, and the threads are used to schedule execution intelligently ensuring that no single chunk of code hogs the CPU for too long. With Node you end up doing scheduling by hand, and it’s your responsibility to make sure that your code isn’t blocking the CPU.

              Here’s a concrete example from a script I ended up writing on Node:

              (defn post-status-with-images
                ([status-text urls]
                 (post-status-with-images status-text urls []))
                ([status-text [url & urls] ids]
                 (if url
                   (.get (if (string/starts-with? url "https://") https http) url
                         (fn [image-stream]
                           (post-image image-stream status-text #(post-status-with-images status-text urls (conj ids %)))))
                   (post-status status-text (not-empty ids)))))
              

              here’s what the JVM equivalent would look like:

              (defn post-status-with-images [status-text urls]
                (future (post-status status-text (map post-image urls))))
              

              You could use promises or async to make the Node example a bit cleaner, but at the end of the day you’re still doing a lot more manual work and the code is more complex than it would’ve been with threads.

              1. 1

                Couldn’t this be better described as a limitation of the implementation of Clojure on Node, and not of Node itself?

                1. 2

                  I don’t really see how that’s the case. The problem I’m describing is that Node has a single execution thread, and you can’t block it. This means that the burden of breaking up your code into small chunks and coordinating them is squarely on the developer.

                  As I said, you could make the code a bit more concise, but the underlying problem is still there. For example, I used promises here, but that’s just putting on a bandaid in my opinion.

                  Threads are just a better default from the developer perspective, and it’s also worth noting that you can opt into doing async on the JVM just fine if you really wanted to. It’s not a limitation of the platform in any way.

                  1. 1

                    Threads are just a better default from the developer perspective

                    There is the caveat that threads (at least in the JVM) dramatically increase the complexity of the memory model and are generally agreed to make it harder to write correct code. Single-threaded event-loop style programs don’t remove the chance of race conditions and deadlocks, but they do remove a whole class of issues. Personally, I like something like the Erlang model, which is fairly safe and scales across hardware threads. My second personal preference is for a single-threaded event loop (although I generally use it in OCaml, which makes expressing the context switches much more pleasant than in JavaScript/Node).

                    1. 1

                      The part about it being harder to write correct code only applies to imperative languages though. This is why I’m saying that it’s important to separate the platform from the language. I like the Erlang model as well; however, the shared-nothing approach does make some algorithms trickier.

                      Personally, I found Clojure’s model of providing thread-safe primitives for managing shared mutable state to work quite well in practice. For more complex situations the CSP model, such as core.async or Go channels, is handy as well in my experience.

              1. 26

                I disagree with the negative posts. Writing about something you’ve just learned is absolutely a wonderful way to cement the knowledge, record it as you understand it for posterity (if only for yourself), and help you pull others up right behind you. It’s not your responsibility to keep your ideas to yourself until some magic day when you reach enlightenment and only then convey blessed knowledge on the huddled masses; a lot of this stuff (the specific tech, for the most part) moves too damn fast for that anyway. Maybe we need better mechanisms for surfacing the best information, sure, but discouraging people (yes, even noobs) from sharing what they’ve learned only ensures we’ll have fewer people practiced in how to do it effectively in the future.

                That said, I do 1000% agree that people writing in public should be as up front as possible about where they are coming from and where they are at. I definitely get annoyed with low quality information that also carries an authoritative tone.

                1. 7

                  There’s a world of difference between documenting how you learned a thing and writing a tutorial for that same thing. If you’re learning a thing, probably don’t write a tutorial. I agree with you, though: writing about a freshly learned lesson helps make the learning more permanent.

                  1. 4

                    In the case of projects, I’d rather see people committing documentation changes back to the project; at least there the creator of the project can review them.

                    It’s a free internet and nobody can stop someone from doing this, but, IMO, the problem with technology is not that there are too few poorly written tutorials out there. Maybe it’s worth finding other ways of being constructive.

                    1. 3

                      Writing it down can help the mind remember or think on things. If errors are likely, then maybe they just don’t publish it. They get the benefits of writing it down without the potential downsides.

                    1. 10

                      Good on you. It’s worth mentioning here that Microsoft is going in the other direction. https://www.mercurynews.com/2018/06/19/microsoft-defends-ties-with-ice-amid-separation-outcry/amp/

                      1. 3

                        In response to questions we want to be clear: Microsoft is not working with U.S. Immigration and Customs Enforcement or U.S. Customs and Border Protection on any projects related to separating children from their families at the border, and contrary to some speculation, we are not aware of Azure or Azure services being used for this purpose. As a company, Microsoft is dismayed by the forcible separation of children from their families at the border.

                        Maybe I’m missing something, but it seems they are going in the exact same direction…

                        1. 6

                          It’s a very confusing article; my best guess is that they are working with ICE, but not on “projects related to separating children from their families at the border”.

                          1. 10

                            Even if Microsoft isn’t directly helping, they are still helping. That nuance is discussed in OP’s article - any support to a morally corrupt institution is unacceptable, even if it is indirect support.

                            1. 7

                              But that perspective is very un-nuanced. Is everything ICE does wrong? It’s a large organization. What if the software from the company that @danielcompton denied service to is actually just being used to track down violent offenders that made it across the border? Or drug trafficking?

                              To go even further, by your statement, Americans should stop paying their taxes. Are you advocating that?

                              1. 17

                                ICE is a special case, and deserves to be disbanded. It’s a fairly new agency, and its primary mission is to be a Gestapo. So yes, very explicitly, everything ICE does is wrong.

                                1. 3

                                  On what grounds and with what arguments can you support your statement? I mean, there is probably an issue with how it’s run, but the whole concept of ICE doesn’t sound that wrong to me.

                                  1. 13

                                    From https://splinternews.com/tear-it-all-down-1826939873 :

                                    The thing that is so striking about all three items is not merely the horror they symbolize. It is how easy it was to get all of these people to play their fascistic roles. The Trump administration’s family separation rule has not even been official policy for two months, and yet look at where we are already. The Border Patrol agent is totally unperturbed by the wrenching scenes playing out around him. The officers have sprung to action with a useful lie to ward off desperate parents. Nielsen, whom the New Yorker described in March as “more of an opportunist than an ideologue” and who has been looking to get back into Donald Trump’s good graces, is playing her part—the white supremacist bureaucrat more concerned with office politics than basic morality—with seeming relish. They were all ready.

                                    I’m going to just delegate all arguments to that link, basically, with a comment that if it’s not exceedingly obvious, then I probably can’t say anything that would persuade you. Also, this is all extremely off-topic for this forum, but, whatevs.

                                2. 10

                                  There’s always a nuance, sure. Every police force ever subverted for political purposes was still continuing to fight petty crime, prevent murders and help old ladies cross the street. This always presented the regimes a great way to divert criticism, paint critics as crime sympathisers and provide moral leeway to people working there and with them.

                                  America though, with all its lip service to small government and self-reliance, was the last place I expected to see that happening. Little did I know!

                                  1. 5

                                    Is everything ICE does wrong? It’s a large organization.

                                    Just like people, organizations should be praised for their best behaviors and held responsible for their worst behaviors. Also, some organizations wield an incredible amount of power over people and can easily hide wrongdoing and therefore should be held responsible to the strictest standard.

                                    1. 8

                                      It’s worth pointing out that ICE didn’t exist 20 years ago. Neither, for that matter, did the DHS (I was 22 when that monster was born). “Violent offenders” who “cross the border” will be tracked down by the same people who track down citizen “violent offenders”, i.e. the cops (what does “violent offender” even mean? How do we know who these people are? How will we know if they’re sneaking in?). Drug trafficking isn’t part of ICE’s institutional prerogative in any large, real sense, so it’s not for them to worry about. Plenty of Americans, for decades, have advocated tax resistance precisely as a means to combat things like this. We can debate its utility, but it is absolutely a tactic that has seen use since, as far as I know, at least the Vietnam War. Not sure how much nuance is necessary when discussing things like this. That doesn’t mean it’s open season to start dropping outrageous nonsense, but institutions which support/facilitate this in any way should be grounds for, at the very least, boycotts.

                                      1. 5

                                        Why is it worth pointing out it didn’t exist 20 years ago? Smart phones didn’t either. Everything starts at some time.

                                        To separate out arguments, this particular subthread is in response to MSFT helping ICE, but the comment I responded to was referring to the original post, which only refers to “border security”. My comment was really about the broader aspect but I phrased it poorly. In particular, I think the comment I replied to which states that you should not support anything like this indirectly basically means you can’t do anything.

                                        1. 5

                                          It’s worth pointing out when it was founded, for a lot of reasons: what were the conditions that led to its creation? Were they good? Reasonable? Who created it? What was the mission originally? The date is important because all of these questions become easily accessible to anyone with a web browser and an internet connection, unlike, say, the formation of the FBI or the origins of Jim Crow, which, while definitely researchable on the net, are more the domain of historical research. Smartphones and ethnic cleansing, however, are not in the same category.

                                          1. 4

                                            If you believe the circumstances around the formation of ICE are worth considering, I don’t think pointing out the age of the institution is a great way to make that point. It sounds more like you’re saying “new things are inherently bad” rather than “20 years ago was a time with a lot of politically questionable activity” (or something along those lines).

                                            1. 8

                                              Dude, read it however you want, but pointing out that ICE is less than 20 years old, when securing a border is a foundational issue, seems like a perfect way to intimate that this is an agency uninterested in actual security and formed expressly to fulfill a hyper-partisan, actually racist agenda. Like, did we not have border security or immigration services or customs enforcement prior to 2002/3? Why then? What was it? Also, given that it was formed so recently, it can be unformed; it can be dismantled that much more easily.

                                              1. 1

                                                I don’t understand your strong reaction here. I was pointing out that if your goal was to communicate something, just saying it’s around 20 years old didn’t seem to communicate what you wanted to me. Feel free to use that feedback or not use it.

                                      2. 2

                                        In addition, I bet ICE is using Microsoft Windows, and probably Office too.

                                        1. 1

                                          That’s a great point, and no I don’t advocate for all Americans to stop paying taxes.

                                        2. 0

                                          any support to a morally corrupt institution is unacceptable, even if it is indirect support

                                          A very interesting position. It just requires you to stop using any currency. ;-)

                                          1. 3

                                            No, it requires you to acknowledge that using any currency is unacceptable.

                                            Of course not using any currency is also unacceptable. When faced with two unacceptable options, one has to choose one. Using the excuse “If I follow my ethics I can never do anything” is just a lazy way to never think about ethics. In reality everything has to be carefully considered and weighed on a case by case basis.

                                            1. 1

                                              Of course not using any currency is also unacceptable.

                                              Why? Currency is just a tool.

                                              Using the excuse “If I follow my ethics I can never do anything” is just a lazy way to never think about ethics.

                                              I completely agree.
                                              Indeed I think that we can always be ethical, but we should look beyond the current “public enemy”, be it Cambridge Analytica or ICE. These are just symptoms. We need to cure the disease.

                                  1. 15

                                    I recently discovered how horribly complicated traditional init scripts are whilst using Alpine Linux. OpenRC might be modern, but it’s still complicated.

                                    Runit seems to be the nicest I’ve come across. It asks the question “why do we need to do all of this anyway? What’s the point?”

                                    It rejects the idea of forking and instead requires everything to run in the foreground:

                                    /etc/sv/nginx/run:

                                    #!/bin/sh
                                    exec nginx -g 'daemon off;'
                                    

                                    /etc/sv/smbd/run

                                    #!/bin/sh
                                    mkdir -p /run/samba
                                    exec smbd -F -S
                                    

                                    /etc/sv/murmur/run

                                    #!/bin/sh
                                    exec murmurd -ini /etc/murmur.ini -fg 2>&1
                                    

                                    Waiting for other services to load first does not require special features in the init system itself. Instead you can write the dependency directly into the service file in the form of a “start this service” request:

                                    /etc/sv/cron/run

                                     #!/bin/sh
                                     sv start socklog-unix || exit 1
                                     exec cron -f
                                    

                                     Where the implementation of runit I use (Void Linux) seems to fall flat on its face is logging. I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you have to manually create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.
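
                                     In case it helps anyone, the manual setup looks something like this (the service and log path here are illustrative; svlogd(8) is runit’s standard logger): each service gets a log subdirectory whose run script runsv wires to the service’s stdout:

                                     ```shell
                                     #!/bin/sh
                                     # /etc/sv/nginx/log/run - runsv pipes the supervised process's stdout here
                                     exec svlogd -tt /var/log/nginx
                                     ```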

                                    The only other feature I can think of is “reloading” a service, which Aker does in the article via this line:

                                    ExecReload=kill -HUP $MAINPID

                                    I’d make the argument that in all circumstances where you need this you could probably run the command yourself. Thoughts?
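
                                     For runit specifically, the equivalent of that ExecReload line is a one-liner, since sv(8) can deliver signals to a supervised process by name:

                                     ```shell
                                     # send SIGHUP to the supervised nginx process by hand
                                     sv hup nginx
                                     ```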

                                    1. 6

                                      Where my implementation of runit (Void Linux) seems to fall flat on its face is logging. I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.

                                       The logging mechanism works like this to be stable: logs are only lost in the case that both runsv and the log service die. Another thing about separate logging services is that stdout/stderr are not necessarily tagged; adding all this stuff to runsv would just bloat it.

                                       There is definitely room for improvement, as logger(1) has been broken for some time in the way Void uses it at the moment (you can blame systemd for that). My idea for simplifying logging services by centralizing how logging is done can be found here: https://github.com/voidlinux/void-runit/pull/65. For me, the ability to exec svlogd(8) from vlogger(8) to get a more lossless logging mechanism is more important than the main functionality of replacing logger(1).

                                      1. 1

                                         Ooh, thank you, having a look :)

                                      2. 6

                                        Instead you can write the dependency directly into the service file in the form of a “start this service” request

                                         But that solves neither starting daemons in parallel, nor starting them at all if they are run in the ‘wrong’ order. Depending on the network being set up, for example, brings complexity to each of those shell scripts.

                                         I’m of the opinion that a DSL of whitelisted items (systemd) is much nicer to handle than writing shell scripts, along with standardized commands instead of having to know which services accept ‘reload’ vs ‘restart’ or some other variation in commands - those kinds of niceties are gone when the shell scripts are each individually an interface.

                                        1. 6

                                           The runit/daemontools philosophy is to just keep trying until something finally runs. So if the order is wrong, presumably the service dies if a dependency is not running, in which case it’ll just get restarted. Eventually things progress towards a functioning state. IMO, given that a service needs to handle the services it depends on crashing at any time anyway to ensure correct behaviour, I don’t feel there is significant value in encoding this in an init system. A dependency could also be moved to another machine, where init-system dependency tracking wouldn’t help anyway.
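
                                           The supervision loop this philosophy implies is tiny; here’s a toy sketch (not daemontools’ actual code) of “retry until it finally runs”, using a stand-in service that fails twice before coming up:

                                           ```shell
                                           #!/bin/sh
                                           # toy model of supervision: restart the "service" until it stays up
                                           attempts=0
                                           run() {
                                             attempts=$((attempts + 1))
                                             [ "$attempts" -ge 3 ]   # the stand-in "service" only comes up on the third try
                                           }
                                           until run; do
                                             sleep 1                 # runsv-style pause between restarts
                                           done
                                           echo "service up after $attempts attempts"
                                           ```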

                                          1. 3

                                            It’s the same philosophy as network-level dependencies. A web app that depends on a mail service for some operations is not going to shutdown or wait to boot if the mail service is down. Each dependency should have a tunable retry logic, usually with an exponential backoff.

                                          2. 4

                                            But that neither solves starting daemons in parallel, or even at all, if they are run in the ‘wrong’ order.

                                             That was my initial thought, but it turns out the opposite is true. The services are retried until they work. Things are definitely parallelized – there is no ‘exit’ in these scripts, so there is no way of running them in a linear (non-parallel) fashion.

                                            Ignoring the theory: void’s runit provides the second fastest init boot I’ve ever had. The only thing that beats it is a custom init I wrote, but that was very hardware (ARM Chromebook) and user specific.

                                          3. 5

                                             Dependency resolution at the daemon-manager level is very important so that it can kill/restart dependent services.

                                            runit and s6 also don’t support cgroups, which can be very useful.

                                            1. 5

                                              Dependency resolving on daemon manager level is very important so that it will kill/restart dependent services

                                               Why? The runit/daemontools philosophy is just to try to keep something running forever, so if something dies, just restart it. If one restarts a service, then either those that depend on it will die or they will handle it fine and continue with their life.

                                              1. 4

                                                either those that depend on it will die or they will handle it fine

                                                If they die, and are configured to restart, they will keep bouncing up and down while the dependency is down? I think having dependency resolution is definitely better than that. Restart the dependency, then the dependent.

                                                1. 4

                                                  Yes they will. But what’s wrong with that?

                                                  1. 2

                                                    Wasted cycles, wasted time, not nearly as clean?

                                                    1. 10

                                                      It’s a computer, it’s meant to do dumb things over and over again. And presumably that faulty component will be fixed pretty quickly anyways, right?

                                                      1. 5

                                                        It’s a computer, it’s meant to do dumb things over and over again

                                                        I would rather have my computer do less dumb things over and over personally.

                                                        And presumably that faulty component will be fixed pretty quickly anyways, right?

                                                        Maybe; it depends on what went wrong precisely, how easy it is to fix, etc. We’re not necessarily just talking about standard daemons - plenty of places run their own custom services (web apps, microservices, whatever). The dependency tree can be complicated. Ideally once something is fixed everything that depends on it can restart immediately, rather than waiting for the next automatic attempt which could (with the exponential backoff that proponents typically propose) take quite a while. And personally I’d rather have my logs show only a single failure rather than several for one incident.

                                                        But, there are merits to having a super-simple system too, I can see that. It depends on your needs and preferences. I think both ways of handling things are valid; I prefer dependency management, but I’m not a fan of Systemd.

                                                        1. 4

                                                          I would rather have my computer do less dumb things over and over personally.

                                                          Why, though? What’s the technical argument. daemontools (and I assume runit) do sleep 1 second between retries, which for a computer is basically equivalent to it being entirely idle. It seems to me that a lot of people just get a bad feeling about running something that will immediately crash.

                                                          Maybe; it depends on what went wrong precisely, how easy it is to fix, etc. We’re not necessarily just talking about standard daemons - plenty of places run their own custom services (web apps, microservices, whatever).

                                                          What’s the distinction here? Also, with microservices the dependency graph in the init system almost certainly doesn’t represent the dependency graph of the microservice as it’s likely talking to services on other machines.

                                                          I think both ways of handling things are valid

                                                           Yeah, I cannot provide an objective argument as to why one should prefer one to the other. I do think this is a nice little example of the slow creep of complexity in systems. Adding a pinch of dependency management here because it feels right, and a teaspoon of plugin system there because we want things to be extensible, and a deciliter of proxies everywhere because of microservices. I think it’s worth taking a moment every now and again to step back and consider where we want to spend our complexity budget. I, personally, don’t want to spend it on the init system, so I like the simple approach here (especially since with microservices the init dependency graph doesn’t reflect the reality of the service anymore). But as you point out, positions may vary.

                                                          1. 2

                                                            Why, though? What’s the technical argument

                                                            Unnecessary wakeup, power use (especially for a laptop), noise in the logs from restarts that were always bound to fail, unnecessary delay before restart when restart actually does become possible. None of these arguments are particularly strong, but they’re not completely invalid either.

                                                            We’re not necessarily just talking about standard daemons …

                                                            What’s the distinction here?

                                                            I was trying to point out that we shouldn’t make too many generalisations about how services might behave when they have a dependency missing, nor assume that it is always ok just to let them fail (edit:) or that they will be easy to fix. There could be exceptions.

                                                        2. 2

                                                          Perhaps wandering off topic, but this is a good way to trigger even worse cascade failures.

                                                          eg, an RSS reader that falls back to polling every second if it gets something other than 200. I retire a URL, and now a million clients start pounding my server with a flood of traffic.

                                                          There are a number of local services (time, dns) which probably make some noise upon startup. It may not annoy you to have one computer misbehave, but the recipient of that noise may disagree.

                                                          In short, dumb systems are irresponsible.

                                                          1. 2

                                                            But what is someone supposed to do? I cannot force a million people using my RSS tool not to retry every second on failure. This is just the reality of running services. Not to mention all the other issues that come up with not being in a controlled environment and running something loose on the internet such as being DDoS’d.

                                                            1. 2

                                                              I think you are responsible if you are the one who puts the dumb loop in your code. If end users do something dumb, then that’s on them, but especially, especially, for failure cases where the user may not know or observe what happens until it’s too late, do not ship dangerous defaults. Most users will not change them.

                                                              1. 1

                                                                In this case we’re talking about init systems like daemontools and runit. I’m having trouble connecting what you’re saying to that.

                                                        3. 2

                                                          If those thing bother you, why run Linux at all? :P

                                                      2. 2

                                                        N.B. bouncing up and down ~= polling. Polling always intrinsically seems inferior to event based systems, but in practice much of your computer runs on polling perfectly fine and doesn’t eat your CPU. Example: USB keyboards and mice.

                                                        1. 2

                                                          USB keyboard/mouse polling doesn’t eat CPU because it isn’t done by the CPU. IIUC the USB controller generates an interrupt when data is received. I feel like this analogy isn’t a good one (regardless). Checking a USB device for a few bytes of data is nothing like (for example) starting a Java VM to host a web service which takes some time to read its config and load its caches only to then fall over because some dependency isn’t running.

                                                        2. 1

                                                          Sleep 1 and restart is the default. You can get different behavior by adding a ./finish script alongside the ./run script.

                                                      3. 2

                                                        I really like runit on Void. I do like the simplicity of systemd unit files from a package manager perspective, but I don’t like how systemd tries to do everything (consolekit/logind, mounting, xinetd, etc.)

                                                        I wish it just did services and dependencies. Then it’d be easier to write other systemd implementations, with better tooling (I’m not a fan of systemctl or journalctl’s interfaces).

                                                        1. 1

                                                          You might like my own dinit (https://github.com/davmac314/dinit). It somewhat aims for that - handle services and dependencies, leave everything else to the pre-existing toolchain. It’s not quite finished but it’s becoming quite usable and I’ve been booting my system with it for some time now.

                                                      4. 4

                                                        I’d make the argument that in all circumstances where you need this you could probably run the command yourself. Thoughts?

                                                        It’s nice to be able to reload a well-written service without having to look up what mechanism it offers, if any.

                                                        1. 5

                                                          Runit’s sv(8) has the reload command, which sends SIGHUP by default. The default behavior (for each control command) can be changed in runit by creating a small script under $service_name/control/$control_code.

                                                          https://man.voidlinux.eu/runsv#CUSTOMIZE_CONTROL

                                                          1. 1

                                                            I was thinking of the difference between ‘restart’ and ‘reload’.

                                                            Reload is only useful when:

                                                            • You can’t afford to lose a few seconds of service uptime (OR the service is ridiculously slow to load)
                                                            • AND the daemon supports an on-line reload functionality.

                                                            I have not been in environments where this is necessary, restart has always done me well. I assume that the primary use cases are high-uptime webservers and databases.

                                                            My thoughts were along the lines of: if you’re running a high-uptime service, you probably don’t mind the extra effort of writing ‘killall -HUP nginx’ rather than ‘systemctl reload nginx’. In fact I’d prefer to do that than take the risk of the init system re-interpreting a reload to be something else, like reloading other services too, and bringing down my uptime.

                                                          2. 3

                                                            I hoped it would do something nice like redirect stdout and stderr of these supervised processes by default. Instead you manually have to create a new file and folder for each service that explicitly runs its own copy of the logger. Annoying. I hope I’ve been missing something.

                                                            I used to use something like logexec for that, to “wrap” the program inside the runit script, and send output to syslog. I agree it would be nice if it were builtin.

                                                          1. 0

                                                            I don’t really understand this. Sure, it’s cool to optimize something so well, but I don’t see the point of going to so much effort to reduce memory allocations. The time taken to run this, what it seems like you would actually care about, is all over the place and doesn’t get reduced that much. Why do we care about the number of allocations and GC cycles? If you care that much about not “stressing the GC”, whatever that means, then better to switch to a non-GC language than jump through hoops to get a GC language to not do its thing.

                                                            1. 10

                                                              On the contrary, I found this article a refreshing change from the usual Medium fare. Specifically, this article is actually technical, has few (any?) memes, and shows each step of optimization alongside data. More content like this, please!

                                                              More to your point, I imagine there was some sort of constraint necessitating it. The fact that the allocation size dropped so drastically fell out of using a pooled allocator.

                                                              1. 4

                                                                Right at the beginning of the article, it says:

                                                                This data is then used to power our real-time calculations. Currently this import process has to take place outside of business hours because of the impact it has on memory usage.

                                                                So: They’re doing bulk imports of data, and the extra allocation produces so much overhead that they need to schedule around it (“outside of business hours”). Using 7.5GB may be fine for processing a single input batch on their server, but it’s likely they want to process several data sets in parallel, or do other work.

                                                                Sure, they could blast the data through a DFA in C and probably do it with no runtime allocation at all (their final code is already approaching a hand-written lexer), but completely changing languages/platforms over issues like this has a lot of other implications. It’s worth knowing if it’s manageable on their current platform.

                                                                1. 3

                                                                  They’re doing bulk imports of data, and the extra allocation produces so much overhead that they need to schedule around it

                                                                  That’s what they claim, but it sounds really weird to me. I’ve worked with plenty of large data imports in GCed languages, and have never had to worry about overhead, allocation, GC details, etc. I’m not saying they don’t have these problems, but it would be even more interesting to hear why these things are a problem for them.

                                                                  Also of note - their program never actually used 7.5GB of memory. That’s the total allocations over the course of the program, virtually all of which was surely GC’ed almost immediately. Check out the table at the end of the article - peak working set, the highest amount of memory actually used, never budged from 16kb until the last iteration, where it dropped to 12kb. Extra allocations and GC collections are what dropped. Going by the execution time listing, the volume of allocations and collections doesn’t seem to have much noticeable effect on anything. I’d very much like to know exactly what business goals they accomplished by all of that effort to reduce allocations and collections.

                                                                  1. 1

                                                                    You’re right – it’s total allocations along the way rather than the allocation high water mark. It seems unlikely they’d go out of their way to do processing in off hours without running into some sort of problem first (so I’m inclined to take that assertion at face value), though I’m not seeing a clear reason in the post.

                                                                    Still, I’ve seen several cases where bulk data processing like this has become vastly more efficient (from hours to minutes) by using a trie and interning common repeated substrings, re-using the same stack/statically allocated buffers, or otherwise eliminating a ton of redundant work. If anything, their timings seem suspicious to me (I’d expect the cumulative time to drop significantly), but I’m not familiar enough with the C# ecosystem to try to reproduce their results.

                                                                  2. 1

                                                                    From what I understood, the 7.5GB of memory is total allocations, not the amount of memory held resident, that was around 15 megs. I’m not sure why the memory usage requires running outside business hours.

                                                                    EDIT: Whoops, I see you responded to a similar comment that showed up below when I was reading this.

                                                                  3. 2

                                                                    The article doesn’t explain why they care, but many garbage collectors make it hard to hit a latency target consistently (i.e. while the GC is running its longest critical section). Also, garbage collection is somewhat expensive (though usually better optimized for short-lived allocations than malloc), and re-using memory makes caches happier.

                                                                    Of course, there’s a limit to how much optimization one needs for a CSV-like file in the hundreds of MBs…

                                                                    1. 1

                                                                      Maybe their machines don’t have 8gb of free memory lying around.

                                                                      1. 2

                                                                        As shown in the table, they don’t use anywhere close to 8gb of memory at a time. This seems like a case that .NET is already very good at, even at a baseline level.

                                                                    1. 3

                                                                      This seems more appropriate for barnacles.

                                                                      1. 8

                                                                        This leaves out a pretty important part of work: you work on a team. Increasingly it’s acceptable for people to work hours that suit them, and for many people that means coming in at 10 or 11. That means they are staying later and they are probably most productive around 3 or 4 or 5. That means they’ll be dropping the most PRs on you then or asking the most questions.

                                                                        That isn’t to say that this suggestion won’t work, but you probably can’t just institute it and call it a day. The post doesn’t even mention colleagues or teams.

                                                                        1. 14

                                                                          This leaves out a pretty important part of work: you work on a team.

                                                                          I don’t think it matters whether you work 9-5 or 11-7. If other people on the team are working within a certain time period (such as 11-7), then by all means try to accommodate them by adjusting your hours to overlap with theirs to the extent that doing so doesn’t impact your productivity or get in the way of the rest of your life.

                                                                          The fundamental principle is to do a solid day’s work in eight hours or less because unpaid overtime is for suckers. Not only are you not getting paid for the extra hours when you draw a salary, but working more than 40 hours a week reduces the amount of money you earn per hour.

                                                                          1. 9

                                                                            unpaid overtime is for suckers

                                                                            It’s not only stupid, but unethical too. If somebody works overtime without pay, it creates pressure for other workers to do it as well. If you do it regularly, your output gets worse, which means that your employer benefits nothing either. It’s just loss/loss.

                                                                            1. 1

                                                                              I know that. You know that. Managers refuse to know it. They’d rather make wild promises, letting their egos cut checks that their own asses won’t be called upon to cash.

                                                                            2. 3

                                                                              This was my take too. 9 and 5 are arbitrary fence posts. The key here is working an 8ish hour day and not a 10ish or 12ish hour day.

                                                                              1. 5

                                                                                4-6 hours would be better, IMO, but I find myself turning into some kind of dirty long-haired pinko as I approach middle age.

                                                                                1. 3

                                                                                  I would agree if the workday were actually one solid block of nothing but writing code or thinking about writing code. However in the real world (or at least MY real world) the workday consists of that plus a whole host of scheduled and unscheduled interruptions like meetings, chats with manager and coworkers, etc.

                                                                                  When you add in those things, a 4-6 hour workday starts to look kinda sketchy :)

                                                                                  1. 2

                                                                                    I don’t think it’s sketchy. I think it’s something we should have forced down management’s throat in the 1960s. In the meantime, when you add in the bullshit that comes with a coding job, you end up with an eight hour workday.

                                                                            3. 4

                                                              For teams, I think it’s fundamental to establish a common ground from the get-go. I feel that team members should (ideally) agree on a schedule (as flexible as possible) that accommodates everyone’s needs, instead of each individually deciding which work hours suit them. Personally, I think that, when other team members depend on some measure of your availability, showing up “whenever you feel like it” is a sign of lack of respect for your peers (and I won’t allow it on my team).

                                                                              1. 2

                                                                My team is doing mostly 10-8 (so working more than 8h/day). I usually do 8-4/5 (depending on the work pressure, my commitments, whether I took additional personal time at lunch break, …). If a team member throws a PR at me when I have to leave, I have absolutely no scruples about leaving it for tomorrow. Once or twice someone asked for a review when I was leaving. To that you just have to answer that you’re leaving because you’ve called it a day, and that unless it’s critical to have it reviewed today, it can probably wait until tomorrow.

                                                                                To me the teams are not an issue as long as you communicate.

                                                                                1. 2

                                                                  In my experience it’s better to let important reviews wait for the morning, when my judgement is clear, rather than wave them through when I’m tired.

                                                                              1. 2

                                                                                The hash example seems to go entirely against duck typing without ever mentioning it. Does that mean I shouldn’t do duck typing? Or I should? Based on this I should probably just be using a static language…

                                                                                1. 1

                                                                  I too think that it’s unreasonably strict. I’m not familiar with Ruby, but is_a? is frowned upon in most class-based OOP: the usual alternative is to ask the object whether it can do something (i.e. call one of its methods), rather than asking the language (via is_a?).

                                                                                  A simplistic version might use a method like canHash, returning a boolean. This moves the responsibility away from the language and into the objects, where we can choose how it’s implemented. The problem with doing this is that we’ll probably have to change/wrap the objects that we’re using to give them this new method.

                                                                                  A nicer approach would use what’s already there. For example, in Python the foo[bar] notation is implemented by calling a __getitem__ method. Hence we could have the constructor check whether the given object has a __getitem__ method, rather than what class it is/inherits from. One problem with this is that the object might use a “magic method” to handle those calls, which may cause our check to fail despite giving a valid object. Whether we want to allow that or not depends on the language culture.
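                                                                                  A minimal sketch of that capability check in Python (the Config and Env class names here are hypothetical, purely for illustration):

```python
class Config:
    """Accepts anything usable like a hash map, regardless of its class."""

    def __init__(self, mapping):
        # Check for the capability we need, not the class hierarchy.
        if not hasattr(mapping, "__getitem__"):
            raise TypeError("expected an object supporting [] lookup")
        self._mapping = mapping

    def get(self, key):
        return self._mapping[key]


class Env:
    """Not a dict, not a dict subclass -- but indexable, so it qualifies."""

    def __getitem__(self, key):
        return "env:" + key


print(Config({"a": 1}).get("a"))   # 1
print(Config(Env()).get("PATH"))   # env:PATH
```

                                                                                  Note that this is still a run-time check at construction; the interface approach described below moves the same question to compile time.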

                                                                                  The equivalent with static typing is an interface, e.g. KeyValueMap. The nice thing about interfaces is that we don’t care what the actual type is, how it is structured, if it’s a subtype of something, etc. we only care about “can we use it like a hash map?”. This is like doing a canHash check, but doing it statically at compile time rather than at run time; like canHash, we would have to ensure that the types we care about provide an implementation of the interface. Some languages, like Haskell, allow us to provide such implementations ourselves in an ad-hoc way (although there can be conflicts, see “orphan instances”); in languages like Java we can only implement interfaces from within the type (actually class) definition, we can’t “plug in” an implementation later (we’d have to provide a wrapper).

                                                                                1. 4

                                                                                  These vary pretty heavily in quality. Many seem to be missing proper quoting. Use with caution.

                                                                                  1. 4

                                                                                    Use bash with caution.

                                                                                    1. 1

                                                                                      Yeah, but it’s the same as any script you find online: don’t run it if you don’t understand it. The benefit here is that some of the better ones are explained or corrected by other users.

                                                                                    1. 5

                                                                                      Leaving the React Native part aside just for this thread. Are they actually rewriting Office 365?

                                                                                      I thought that was a thing you should never do.

                                                                                      1. 5

                                                                                        Looks like it’s just the UI layer; other components are still written in different languages, according to a tweet further down.

                                                                                        https://twitter.com/thelarkinn/status/1006746626617008128

                                                                                        1. 1

                                                                                          I see. I mean, that’s not surprising, but it’s quite a departure from “All of Office 365 is… being completely rewritten in… JavaScript”. I guess he didn’t intend for this to be taken at face value, we’ve been taking a tweet too seriously.

                                                                                      1. 3

                                                                                        Looks like the result at the top is pulled from the “min” column because it’s more favorable to nuster :)

                                                                                        1. 3

                                                                                          I’m not sure how to interpret the requests per second. They are all over the place on nginx but the average is pretty close to nuster.

                                                                                          1. 1

                                                                                            Not min; it comes from “finished in 29.51s, 338924.15 req/s, 48.81MB/s” and “finished in 90.56s, 110419.16 req/s, 15.62MB/s”.

                                                                                          1. 0

                                                                                            A list of beliefs about programming that I maintain are misconceptions.

                                                                                            1. 3

                                                                                              Small suggestion: use a darker, bigger font. There are likely guidelines somewhere but I don’t think you can fail with using #000 for text people are supposed to read for longer than a couple of seconds.

                                                                                              1. 3

                                                                                                Current web design seems allergic to any sort of contrast. Even hyper-minimalist web design calls for less contrast for reasons I can’t figure out. Admittedly, I’m a sucker for contrast; I find most programming colorschemes hugely distasteful for the lack of contrast.

                                                                                                1. 6

                                                                                                  I think a lot of people find the maximum contrast ratios their screens can produce physically unpleasant to look at when reading text.

                                                                                                  I believe that people with dyslexia in particular find reading easier with contrast ratios lower than #000-on-#fff. Research on this is a bit of a mixed bag but offhand I think a whole bunch of people report that contrast ratios around 10:1 are more comfortable for them to read.

                                                                                                  As well as personal preference, I think it’s also quite situational? IME, bright screens in dark rooms make black-on-white headache inducing but charcoal-on-silver or grey-on-black really nice to look at.

                                                                                                  WCAG AAA asks for a contrast ratio of 7:1 or higher in body text which does leave a nice amount of leeway for producing something that doesn’t look like looking into a laser pointer in the dark every time you hit the edge of a glyph. :)

                                                                                                  As for the people putting, like, #777-on-#999 on the web, I assume they’re just assholes or something, I dunno.

                                                                                                  Lobsters is #333-on-#fefefe which is a 12.5:1 contrast ratio and IMHO quite nice with these fairly narrow glyphs.

                                                                                                  (FWIW, I configure most of my software for contrast ratios around 8:1.)

                                                                                                  1. 2

                                                                                                    Very informative, thank you!

                                                                                              2. 3

                                                                                                I think the byte-order argument doesn’t hold when you mentioned ntohs and htons which are exactly where byte-order needs to be accounted for…

                                                                                                1. 2

                                                                                                  If you read the byte stream as a byte stream and shift them into position, there’s no need to check endianness of your machine (just need to know endianness of the stream) - the shifts will always do the right thing. That’s the point he was trying to make there.
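                                                                                  A sketch of that idea in Python (the same shifts work identically in C):

```python
def read_be32(buf):
    # Assemble a 32-bit big-endian value arithmetically. The shifts
    # produce the right answer on any host, so there is no need to test
    # the machine's endianness -- only the stream's layout matters.
    return (buf[0] << 24) | (buf[1] << 16) | (buf[2] << 8) | buf[3]


stream = bytes([0x12, 0x34, 0x56, 0x78])
print(hex(read_be32(stream)))  # 0x12345678
```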

                                                                                                  1. 2

                                                                                    ntohs and htons do that exact thing and you don’t need to check the endianness of your machine, so the comment about not understanding why they exist makes me feel like the author is not quite grokking it. Those functions/macros can be implemented to do the exact thing linked to in the blog post.

                                                                                              1. 1

                                                                                                Another way to get around the issue described in the section on -- is to always use something like ./* instead of * for globs.

                                                                                                1. 3

                                                                                                  And one more thing from the man page of dd(1)

                                                                                                  Sending a USR1 signal to a running 'dd' process makes it print I/O statistics to standard error and then resume copying.
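                                                                                    A quick way to try that, here driven from Python (assumes GNU dd on Linux; on the BSDs the equivalent signal is SIGINFO):

```python
import signal
import subprocess
import time

# Start a longish copy, then ask the running dd for progress stats.
p = subprocess.Popen(
    ["dd", "if=/dev/zero", "of=/dev/null", "bs=1M", "count=4000"],
    stderr=subprocess.PIPE,
)
time.sleep(0.2)
p.send_signal(signal.SIGUSR1)  # dd prints I/O statistics and keeps copying
err = p.communicate()[1]
print(err.decode(errors="replace"))
```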
                                                                                                  
                                                                                                  1. 2

                                                                                                    In FreeBSD we have SIGINFO which is triggered by C-t and many applications support it (it’s one thing that I really miss when I am in Linux). For example, when doing a cp just hit C-t to see stats on the progress.

                                                                                                    1. 1

                                                                                                      oooo, that’s a neat tidbit to find out. Thanks!

                                                                                                    1. 18

                                                                                                      I no longer believe that daemons should fork into the background. Most Unix systems now have better service control and it makes the code easier to deal with if it doesn’t call fork(). This makes it easier to test (no longer do you have to provide an option not to fork() or an option to fork()) and less code is always better.

                                                                                                      1. 6

                                                                                                        Not forking also allows logging to be an external concern and the process should just write to stdout and stderr as normal.

                                                                                                        1. 1

                                                                                                          This is not so much about the forking per se, but rather the other behaviour that generally goes with it: closing any file descriptors that might be connected to a controlling terminal.

                                                                                                        2. 4

                                                                                                          OpenBSD’s rc system seems to expect that processes fork. I don’t see an obvious workaround for processes that don’t fork.

                                                                                                          1. 3

                                                                                                            It’s not that hard to write a program to do the daemonization (call umask(), setsid(), chdir(), and set up any redirection of stdin, stdout and stderr), then exec() the non-forking daemon.

                                                                                                            1. 2

                                                                                                              It’s even simpler when you have daemon(3): http://man7.org/linux/man-pages/man3/daemon.3.html

                                                                                                              1. 1

                                                                                                                Which you do on OpenBSD, actually.

                                                                                                                Note that daemon(3) is a non-standard extension so it should be avoided for portable code. The implementation is simple enough, though.

                                                                                                            2. 2

                                                                                                              I’m not sure this is accurate, at least on -current. There are several Go “daemons” that, as far as I understand, don’t support fork(2). These can still be managed by OpenBSD’s rc system:

                                                                                                              # cd /etc/rc.d
                                                                                                              # cat grafana                                                                                                                                                                                                  
                                                                                                              #!/bin/ksh
                                                                                                              #
                                                                                                              # $OpenBSD: grafana.rc,v 1.2 2018/01/11 19:27:10 rpe Exp $
                                                                                                              
                                                                                                              daemon="/usr/local/bin/grafana-server"
                                                                                                              daemon_user="_grafana"
                                                                                                              daemon_flags="-homepath /usr/local/share/grafana -config /etc/grafana/config.ini"
                                                                                                              
                                                                                                              . /etc/rc.d/rc.subr
                                                                                                              
                                                                                                              rc_bg=YES
                                                                                                              rc_reload=NO
                                                                                                              
                                                                                                              rc_cmd $1
                                                                                                              

                                                                                                              I’m not sure if there’s more to it that I don’t understand; I don’t write many daemons!

                                                                                                              1. 1

                                                                                                                Well, it turns out, I can’t read! The key to this is rc_bg, see https://man.openbsd.org/rc.subr#ENVIRONMENT

                                                                                                            3. 1

                                                                                                              For those that don’t know, daemontools is a nice service system that explicitly wants programs to not try to daemonize themselves. For services I build and run I try to use that.

                                                                                                            1. 6

                                                                                                              I suggest each person answer as briefly as possible the following questions:

                                                                                                              1. Where am I?
                                                                                                              2. Where do I want to be?
                                                                                                              3. What’s the next thing to do to get me there?
                                                                                                              1. 36

                                                                                                                Then again, I’ve rarely seen anyone use their editor of choice well. I’ve lost count of how many times I’ve watched someone open a file in vim, realise it’s not the one they want, close vim, open another file, close it again… aaarrrgh.

                                                                                                                I do this a lot, because I prefer browsing files in the shell. I make pretty extensive use of a lot of other vim features though. When did you become the arbiter of how “well” I’m using my computer?

                                                                                                                1. 3

                                                                                                                  Closing vim seems odd to me. Why wouldn’t one instead open the new file without closing vim? Maybe it’s a cultural thing? I don’t think anyone would do that in Emacs.

                                                                                                                  1. 26

                                                                                                                    “Why would I ever leave my editor” definitely feels like a common refrain from the Emacs crowd.

                                                                                                                    1. 1

                                                                                                                      I do the thing you quoted as well, but that is because vim is often my editor of convenience on a machine rather than my editor of choice, which is true for many usages I see of vim.

                                                                                                                    2. 21

                                                                                                                      Because the shell lets me change directories, list files with globs, run find, has better tab-completion (bash, anyway), etc, etc. I might not remember the exact name of the file, etc. Finding files in the shell is something I do all day, so I’m fast at it. Best tool for the job and all that.

                                                                                                                      (Yes I can do all that with ! in vi/vim/whatever, but there’s a cognitive burden since that’s not how I “normally” run those commands. Rather than do it, mess it up because I forgot ! in front or whatever, do it again, etc, I can just do it how I know it’ll work the first time.)

                                                                                                                      1. 6

                                                                                                                        This is exactly why I struggle with editors like Emacs. My workflow is definitely oriented around the shell. The editor is just another tool among many. I want to use it just like I use all my other tools. I can’t get on with the Emacs workflow, where the editor is some special place that stays open. I open and close my editor many, many times every day. To my mind, keeping your editor open is the strange thing!

                                                                                                                        1. 3

                                                                                                                          It’s rather simple actually: the relationship between the editor and the shell is turned on its head – from within the editor you open a shell (e.g. eshell, ansi-term, shell, …) and use it for as long as you need it, just like one would use vi from a shell.

                                                                                                                          You can compare this to someone who claims to log out of their X session every time they close a terminal or a shell in a multiplexer. That would seem weird too.

                                                                                                                          1. 3

                                                                                                                            I know you can launch a shell from within your editor. I just never really understood why you would want to do that.

                                                                                                                            Obviously some people do like to do that. My point is just that different ways of using a computer make intuitive sense to different people. I don’t think you can justify calling one way wrong just because it seems odd to you.

                                                                                                                            1. 6

                                                                                                                              I know you can launch a shell from within your editor. I just never really understood why you would want to do that.

                                                                                                                              I do it because it allows me to use my editor’s features to:

                                                                                                                              a) edit commands
                                                                                                                              b) manipulate the output of commands in another buffer (and/or use shell pipelines to prep the output buffer)
                                                                                                                              c) not have to context-switch to a different window, shut down the editor, suspend the editor, or otherwise change how I interact with the currently focused window.

                                                                                                                              1. 1

                                                                                                                                That makes a lot of sense. I guess I have been misleading in using the word “shell” when I should really have said “terminal emulator”. I often fire off shell commands from within my editor, for just the same reasons as you, but I don’t run an interactive shell. I like M-! but I don’t like eshell, does that make sense?

                                                                                                                                Having pondered this all a bit more, I think it comes down to what you consider to be a place. I’m pretty certain I read about places versus tools here on lobsters but I can’t find it now. These are probably tendencies rather than absolutes, but I think there are at least a couple of different ways of thinking about interaction with a computer. Some people think of applications as places: you start up a number of applications, they persist for the length of your computing session, and you switch between them for different tasks (maybe a text editor, a web browser and an e-mail client, or something). Alternatively, applications are just tools that you pick up and drop as you need them. For me, a terminal, i.e. an interactive shell session, is a place. It is the only long-lived application on my desktop, everything else is ephemeral: I start it to accomplish some task then immediately kill it.

                                                                                                                          2. 3

                                                                                                                            It’s really simple in emacs. Just Ctrl-z and run fg when you are ready to go back.

                                                                                                                      1. 11

                                                                                                                        I don’t really get this list. I’m sure someone believes these things, but I don’t know them. It seems like a mish-mash of things the author has heard people say, with an attempt to make some sort of generalization out of it.

                                                                                                                        1. 11

                                                                                                                          I see these on Hacker News and Reddit all the time:

                                                                                                                          1. Assumption that rewriting in C will always be fast. So, better to do that than make your HLL version faster or do a hybrid.

                                                                                                                          2. Assumption that GC’s always have long delays or something else that forces you to use C. Many still don’t know that real-time GC’s were invented, or that some real-time software in industry uses HLL’s, despite pjmpl and me continuously posting about that. Myths about memory management are so strong that we just about need some coding celebrities to do a blog post with references for these kinds of things to generate a baseline of awareness. Then, maybe people will build more of them into mainstream languages. :)

                                                                                                                          3. You have to use C for C ABI. People were just arguing this on Lobsters and HN a while back on C-related posts. I was arguing the position you don’t need C in majority of cases in C ecosystem. Just include, wrap, and/or generate it whenever you can.

                                                                                                                          4. Conflating of C/C++ where one shouldn’t. I slipped on that myself a lot in the past because many people coded C++ like it was higher-level C. There is a distinct style for C++ that eliminates many C-related problems. I’m more careful now. I still pass on the correction to others.

                                                                                                                          5. You can only write kernels in C. This is a little inaccurate since many know you can use C++. They consider it a C superset, though, where it sort of reinforces the concept you need some kind of C. Many do believe everything depends on C underneath, though. It shows up in damn near every thread on high-level languages in low-level or performance-critical situations. In my main comment, I gave an extensive list of usages that came before, around same time, and much later than C. There’s even more that used C only for convenience of saving time with pre-built components. Linking to those would defeat the purpose of showing one doesn’t need C, though. ;)

                                                                                                                          6. C maps closely to the hardware. This has been debated multiple times on Lobsters just this year. The most widespread metaphor for it is “cross-platform assembly.” So, this is a widespread belief whether it’s a myth or not. There’s a lot of disagreement on this one.

                                                                                                                          The others outside of Emacs or terminal stuff I’ve also seen or countered plenty of times. I don’t know that they’re widespread, though. The ones I cited I’ve seen debated by many people in many places over long periods of time. They’re definitely popular beliefs even if I don’t know specific numbers of people involved.

                                                                                                                          1. 5

                                                                                                                            I’m not sure what you’re trying to say. I said that I’m sure some people believe the things mentioned in this blog, but what about it? Your list is just a rehash of the contents of the blog; what are you specifically adding to the discussion? There are people in every industry who are ignorant of aspects of their own industry; that’s just how the world works. The counter-evidence to the items in this list is accessible to anyone interested.

                                                                                                                            1. 4

                                                                                                                              I don’t really get this list. I’m sure someone believes these things but I don’t know them.

                                                                                                                              You originally said the quote above. In your experience, you must never see these things. In my experience, and probably the author’s, they’re so common that programmers believing them due to widespread repetition miss opportunities in their projects. This includes veterans who just never encountered specific technologies used way outside their part of industry or FOSS. Countering the myths and broadening people’s perspectives on social media might bring attention to the alternatives, leading to more of them being fielded in FOSS or commercial applications.

                                                                                                                              That’s the aim when I bring stuff like this up. Sometimes, I see people actually put it into practice, too.

                                                                                                                              1. 5

                                                                                                                                believing them due to widespread repetition miss opportunities in their projects.

                                                                                                                                But that’s clearly not true, right? How much code is written in JavaScript, Python, Ruby, Java, etc? Even if one believes GC’d languages are slow, they are still solving problems in them.

                                                                                                                                After some thought, I believe my strong reaction to this comes down to a few reasons:

                                                                                                                                1. I think most programmers are just ignorant of layers they don’t actively work with. A web dev might think C is the only language to implement kernels in because they just don’t know any better. Maybe that makes it a myth and I’m splitting hairs, but it feels more like “things some programmers are ignorant of”.
                                                                                                                                2. I think the wording of several of these is misleading. “C is magically fast”?? Even people who believe C is the fastest language to implement anything in don’t call it magic, IME. Things implemented in C are usually very fast, but there is nothing magic about it. Same for GC’d languages being magically slow. Those descriptions just do not match the nuance of the actual discussion, IME.
                                                                                                                                3. The endianness one is also misleading, IMO. The endianness of your machine certainly matters. The blog post linked is not about endianness not mattering; it’s about how to write endianness-agnostic code, and these are very different things.

                                                                                                                                I think my initial response was stronger than it needed to be, however I do not believe this blog post really helps the situation and might even add some of its own myths.

                                                                                                                                1. 4

                                                                                                                                  My experience isn’t with webdevs thinking C is the only language to implement kernels with - it’s about systems programmers who think so. The same goes for C being magically fast - people literally believe that it’s impossible to beat C at execution speed, no matter what language they pick to write code in. I had to prove to a colleague that C++‘s std::sort was faster than C’s qsort with code and benchmarks.

                                                                                                                                  On endianness I think I did myself a disservice by implying that it never matters – it just doesn’t 99.9% of the time. Write endianness-agnostic code and be done with it. It’s like caring about which bits of a float are the mantissa – the CPU will do the right thing.

                                                                                                                                  1. 3

                                                                                                                                    Oh yeah, I’ve had many C programmers say that stuff to me. You were on point with the Fortran counter: I use it, too.

                                                                                                                          2. 1

                                                                                                                            This is true - it’s a list of things that I hear/read a lot and don’t understand why people believe them.

                                                                                                                            1. 6

                                                                                                                              Subtitle: the perils of the comment section.

                                                                                                                              Corollary: disproving these myths will cause ten others to spontaneously emerge to take their place. :)

                                                                                                                              1. 2

                                                                                                                                Because people believe strange things and/or are often ignorant of things they don’t have close proximity to. How many production kernels are not written in C? And how many of those will your average developer have knowledge of if they aren’t actively interested in the OS layer?

                                                                                                                                To put it another way: you probably believe things that someone with alternative experience would classify as myths and not understand why you believe them.

                                                                                                                                1. 3

                                                                                                                                  Because people believe strange things and/or are often ignorant of things they don’t have close proximity to

                                                                                                                                  Sure, but I know people who still think these things despite having been presented with evidence to the contrary.

                                                                                                                                  you probably believe things that someone with alternative experience would classify as myths and not understand why you believe them

                                                                                                                                  With nearly 100% certainty. I love to be proven wrong, because after I am, I’m no longer wrong about that particular matter.

                                                                                                                                  This list is just things that I have observed that I find irrational, with no implications on my part that I’m any more rational on other matters. Or even these!