Distros are mostly the same under the hood: Linux, systemd, and deb/rpm packages.

    The interesting parts are things like “will it destroy itself during distro upgrades,” but those are rarely covered in reviews.


      “free as in free beer”!


        They claim that the gopher is still there, but I didn’t see it anywhere…


        “Rest easy, our beloved Gopher Mascot remains at the center of our brand.”

        and why on earth is this downvoted off topic?


          I can confirm your experience - I sometimes have the issue with waking from sleep, and regularly see the OS freezing for extended periods of time (I do have a lot of applications open, but come on, it’s 2018). The quality of software has been declining over the last 4 years. Unfortunately, I still don’t see any better alternative.


            The #1 recommendation in the article is silence, but for myself, learning that I should RTFM was a revelation, right up there with “code is to be read by humans” and “testing is a good thing to do”.

            I’d be interested in hearing more stories and polite variants on RTFM. Giving and receiving feedback is hard (to the point that cursing when you do it is sometimes considered acceptable?)


              This looks pretty interesting. I’ll check it out tomorrow. Thanks!


                Neat. I wrote a Rust version, which involved a bit more yak shaving.


                  Well that’s a shame.


                    Thanks very much for pointing out mdoc!

                    Compared to vmstat(8), my simple toy has the following differences:
                    (1) It also displays swap space;
                    (2) It only counts active pages as “used” memory; all other pages are counted as “free”. IMHO, for an end user who doesn’t care about the guts of the operating system, maybe this method is more plausible?

                    All in all, I just wrote a small tool for fun, and thanks very much again for the pertinent advice!


                      It’s very clever given the constraint “the SUT might be written in anything”. Many things come with a -v flag.

                      The workflow the post discusses sounds like it would involve iteratively putting a few printfs in, seeing which ones have inconveniently large gaps between them, and adding printfs to whatever happens in between, until the gaps between profile lines leave no particularly difficult mysteries.

                      I suppose it does implicitly assume a short-ish edit-compile-run cycle, but that’s true of any workflow based on adding printf calls.
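
                      The gap-hunting step can be sketched like this; the log format of (label, milliseconds) pairs is my own assumption, not something from the post:

```python
# Given an ordered list of timestamped print statements, find the pairs
# with the largest gaps between them -- i.e. where to add more printfs.
# The (label, milliseconds) event format is an assumption for this sketch.

def largest_gaps(events, n=2):
    """events: list of (label, timestamp_ms) tuples, in program order."""
    gaps = [
        (b[1] - a[1], a[0], b[0])   # (gap size, from-label, to-label)
        for a, b in zip(events, events[1:])
    ]
    return sorted(gaps, reverse=True)[:n]

events = [("start", 0), ("parsed", 50), ("queried", 1900), ("rendered", 2000)]
print(largest_gaps(events))
```

Here the biggest mystery is the 1850 ms between “parsed” and “queried”, so that is where the next round of printfs would go.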


                        Elegant! The simplicity is really impressive, a real 80/20 kind of solution.

                        Maybe you could solve the pipefail thing by having a tts utility that invokes the target program as a subprocess, captures its stdout+stderr, and then, when it stops, wait4()s it and returns with the same exit code the child did.
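
                        A rough Python sketch of that wrapper idea (the TTS hand-off is just a placeholder print here):

```python
# Run the target program as a subprocess, capture its output (which a
# real tool would feed to the TTS engine), and propagate the child's
# exit code unchanged so pipeline failure checks still work.
import subprocess
import sys

def run_and_relay(cmd):
    """Run cmd, relay its output, and return its exit code unchanged."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    for line in (proc.stdout + proc.stderr).splitlines():
        print(line)          # placeholder for handing the line to TTS
    return proc.returncode   # same exit status the child reported

# demo: a child process that prints something and then fails with code 3
code = run_and_relay([sys.executable, "-c", "print('hi'); raise SystemExit(3)"])
print(code)
```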


                          Is there anyone who can review a distro without reviewing some desktop manager?

                          Is there anyone who understands that desktop managers are independent of distros?


                            “Margin Call” is an underrated film that deals oh-so-briefly but, speaking from experience, meaningfully and realistically with issues of ethics and morality, as seen through the lens of the corporate side of a Very Large Financial Institution, and it is well worth a watch.


                              It’s the Apple Tax: “In the end, we found each Apple machine to cost more than a similarly equipped PC counterpart, with the baseline Mac Pro being the exception. Usually the delta is around $50 to $150…”


                                This post is great! This has helped me clarify my thinking about a few things:

                                You just can’t do without being able to break your program into pieces.

                                I’ve found that “pure” code wants to be broken up into smaller and smaller pieces. This often comes at a performance cost for both call overhead and duplicate work, but that performance is often easy to recover with bulk pure operations, at some risk of repeating yourself.
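
                                A toy illustration of recovering that cost with a bulk pure operation (all names here are mine):

```python
# The per-item pure function redoes shared work on every call; the bulk
# variant hoists it out, at the cost of a second, slightly duplicated
# code path. Both are still pure.

def normalize_one(x, xs):
    total = sum(xs)            # shared work redone on every call
    return x / total

def normalize_bulk(xs):
    total = sum(xs)            # shared work done once for the whole batch
    return [x / total for x in xs]

xs = [1, 3, 4]
assert [normalize_one(x, xs) for x in xs] == normalize_bulk(xs)
```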

                                However, imperative code is often dramatically clearer as big linear blocks of code. More and more, I’m just inlining procedural functions at their call sites with few if any ill effects, and usually benefits. Similarly, instead of extracting procedures, I’m now frequently creating intermediate maps or parallel arrays, and then looping over them.

                                The one big reason I still break up imperative pieces is for open dispatch. However, even that can usually be separated along functional and imperative lines: the functional code operates on open union types, but returns values of a closed sum type to communicate desires to a central imperative procedure.

                                Functional code wants to be composed. Imperative code wants to be parameterized.

                                runST operator lets you create computations that use highly aliasable state

                                This is why I’m a monad detractor. Once you get into a sufficiently complex monad, you’re right back where you started with imperative programming. Only now, you have the added complexity that everything is effectively “quoted” by default, so you can accidentally “metaprogram” your way into a mess, re-ordering sub-procedures and the like. The type system won’t tell you that you should have bound an intermediate result to a variable rather than spliced in a chain of first-class statements.

                                I don’t doubt that reasoning about imperative programs in Haskell is easier than in, say, Ruby, but I think that’s related to the amount of state aliased by default more than anything else.

                                Removing any one of the three difficulties makes things easier […snip…] a) State + procedure calls b) State + aliasing c) Aliasing + procedures

                                I’m designing a language that embraces immutability but has statements and imperative constructs. Some core design elements aim directly at spending as much time as possible in these two-out-of-three-difficulties sweet spots.

                                The main thing is that values are deeply immutable and acyclic. They are strictly separated from variables (of various kinds), which are mutable containers of such values. Cycles must be achieved by indirecting through a symbol or ID. Stateful objects may employ internal mutability for performance, but they must then perform copy-in/copy-out to preserve the value illusion.

                                A) A reduction in aliasing of mutable state across procedure calls is achieved in two ways: 1) stateful constructs default to being second-class; you need explicit syntactic constructs to reify them as objects, like address-of (&) in C or Go, quite unlike Java or Ruby, etc. 2) state has a structured lifetime. First-class procedures can close over state, but they will throw if the procedure is executed outside the dynamic extent of that state. If you want to share state more widely, you need to allocate it explicitly from higher up in the process tree.

                                B/C) Procedures interact with stateful aliasing less frequently when values are deeply immutable and mutable aliasing is explicit. While call-by-value is standard now, it is undermined when you must fall back on aliasing to pass large values or values of dynamic size or type. In Go, for example, you get much more mileage out of call-by-value behavior than in Java, but you still frequently create pointers to escape the overhead of excessive copying. That’s not a problem with deeply immutable values.
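
                                A rough Python analogue of that value/variable split (names are mine; the real language would presumably enforce this statically):

```python
# Values are deeply immutable; mutation lives only in an explicit
# container of values, so sharing mutable state is a deliberate act.
from typing import NamedTuple

class Point(NamedTuple):      # a deeply immutable value
    x: int
    y: int

class Cell:                   # the only mutable construct: a container of values
    def __init__(self, value):
        self.value = value

p = Point(1, 2)
c = Cell(p)
c.value = c.value._replace(x=10)   # "mutation" swaps in a whole new value
print(p, c.value)                  # the original value p is untouched
```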


                                  We offload all of our date/time processing to PG. So many languages SUCK at it, or have 100 libraries that all do different things, sometimes in non-sane ways. We just decided it’s not worth it and let PG handle it for us. PG’s date/time handling is very sane compared to most language implementations, in our experience.


                                    The same way I monitor prod. For us that’s Prometheus (https://prometheus.io/), but there are many, many OSS options.


                                      I detest paying for software except when it occupies certain specialized cases or represents something more akin to work of art, such as the video game Portal.

                                      I detest this attitude. He probably also uses an ad blocker and complains about how companies sell his personal information. You can’t be an open source advocate if you detest supporting the engineers that build open source software.

                                      But only when it’s on sale.

                                      I’m literally disgusted.


                                        I’m not sure, but the fact that they have logins and are centralized means they can probably do a better job than a random mailing list.

                                        Dreamhost also probably has enough data to do something good about spam, but apparently they don’t. Fighting e-mail spam costs engineering effort and money.

                                        Here’s a good link about spam: Modern Anti-Spam and E2E Crypto (2014)


                                        In my experience, Google’s spam filters have gotten significantly worse lately. I’m on the busybox mailing list and Gmail routinely marks those messages as spam. And it routinely rejects e-mails I send from a cron job. So they’re having problems with both false positives and false negatives.


                                          The circular shape of the letters hints at the eyes of the Go gopher

                                          They’re really stretching with that line. Two circles could be a lot of things and the first things that come to mind don’t have anything to do with the Go gopher. I don’t like this. It’s generic corporate, and I’ll miss the ode to Plan 9.