1. 39

    And then this is my claim: The ownership model, standard library, emphasis on iterators, and numerous other things do lead to code that is usually most idiomatic when written in a functional style. Thus, the positive properties listed above that are associated with functional programming and immutability are present in Rust programs, and, to a degree, even statically enforced by the compiler. For me, that is a joy!

    I guess I don’t really agree with all of this. I agree that Rust gets the nice properties of functional languages at least when it comes to mutation, but if someone read the body of Rust code I have written and came back and said, “it really resembles the code I typically write in Lisp/ML/Haskell,” then I would be really surprised. I rarely write higher-order functions. I rarely write recursive functions or folds. I rarely use persistent data structures and instead use mutation without reservation.

    I think that if I were forced to choose, I would just say that Rust is a procedural language. But of course, that leaves so much out. Such is the inherent problem with categorization in the first place.

    1. 11

      I agree with your sentiment. When I started Rust, I was using OCaml as my main programming language and I tried to emulate the OCaml style of functions taking immutable values in and returning immutable values and I quickly found myself struggling and talked about it on Reddit. I found that, although Rust does have functional programming genetic material (being bootstrapped in OCaml, GC from way-back-when that lent itself better to FP, current closure-strong collection APIs, etc.), Rust is best used as a procedural language. Trying to go against that is like writing imperative code in a functional language: it can be done, but the resulting software will be uglier, slower, and less maintainable for contributors.

      I rarely use persistent data structures and instead use mutation without reservation.

      Functional languages take away mutability from the “mutability + aliasing = bugs” equation; Rust takes away the aliasing — or at least, the uncontrolled aliasing. To me, it then makes sense to feel comfortable using mutability (and the advantages it offers), safe in the knowledge that the compiler is helping you avoid bugs.
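      To make that concrete, here is a minimal sketch (mine, not from the article) of the borrow checker rejecting an aliased mutation:

      ```rust
      fn main() {
          let mut v = vec![1, 2, 3];

          let first = &v[0]; // a shared (aliasing) borrow
          // v.push(4);      // rejected: cannot borrow `v` as mutable while
                             // `first` is live; a push could reallocate and
                             // leave `first` dangling
          println!("{first}");

          v.push(4);         // fine: the shared borrow has already ended
      }
      ```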

      1. 4

        Functional languages take away mutability from the “mutability + aliasing = bugs” equation; Rust takes away the aliasing — or at least, the uncontrolled aliasing. To me, it then makes sense to feel comfortable using mutability (and the advantages it offers), safe in the knowledge that the compiler is helping you avoid bugs.

        Yes, that’s what I meant with, “I agree that Rust gets the nice properties of functional languages at least when it comes to mutation.” The OP also makes this point. My point here was to contrast typical Rust code with typical code written in a functional programming language. That is different from contrasting the goals and problems solved by the respective paradigms.

        I think we’re in agreement FWIW. Just seemed like a small misunderstanding here worth correcting.

        1. 2

          Well said! You should go with the grain of the language. Rust encourages mutation instead of superfluous copies. The borrow checker stops you from mutating in ways that are likely to cause bugs, so there’s no point in making copies when you don’t need to.

          FP is going to be like OOP in a few years. Right now everyone is trying to “FP all the things”. Monads in JavaScript? Are you kidding me? Stahp.

          I remember trying to OOP-ize everything I came across. I tried to write OO Python: a bunch of classes with “private” (double underscore) fields and side-effects, etc. It worked about as well as you’d expect.

        2. 4

          I understand your point, and I tried to make clear that Rust code is unlikely to match a Haskell solution to the same problem: not in terms of control structures, and maybe not even in the overall design (Haskell perhaps a “world loop” deriving new state; Rust perhaps an event loop modifying application state directly, state which is probably not held in one single place).

          That said, coming from program design in Clojure, I tend to create data structures similarly: favoring acyclic graphs and being picky about where I mutate them, and preferring some unidirectional control flow (thinking about UIs now, e.g. yew.rs, tui.rs or even gtk). In other “imperative” languages (like C++) I would approach it differently.

          As for functions: I love the versatility of iterators. Mapping or folding a sequence of results into a single Result, without resorting to for loops, is nice. But I agree that higher-order functions come up rarely.
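          For concreteness, the Result-collecting trick I mean, as a minimal sketch (the names are made up):

          ```rust
          // collect() can flip an iterator of Result<T, E> into a single
          // Result<Vec<T>, E>, short-circuiting on the first Err
          fn parse_all(inputs: &[&str]) -> Result<Vec<i32>, std::num::ParseIntError> {
              inputs.iter().map(|s| s.parse::<i32>()).collect()
          }

          fn main() {
              assert_eq!(parse_all(&["1", "2", "3"]), Ok(vec![1, 2, 3]));
              assert!(parse_all(&["1", "oops"]).is_err());
          }
          ```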

          As you point out, pressing a rubber stamp on things has issues. This was meant more to provoke some thought and discussion ;)

        1. 19

          A half-hour to learn Rust

          51 minute read

          ;)

          1. 14

            Truncation vs rounding

          1. 21

            The job of the OS is to schedule which software gets what resources at any given time. Kubernetes does the same thing. You have resources on each one of your nodes, and Kubernetes assigns those resources to the software running on your cluster.

            ehh, who’s the “you” here? This is starting from the assumption that I have a lot of nodes, which is only true in the context of me running infrastructure for a corporation; the “you” here is a corporation.

            The first difference is the way you define what software to run. In a traditional OS, you have an init system (like systemd) that starts your software.

            again, again, define traditional. Whose tradition? In what context? In a traditional OS, software starts when you start using it, and then it stops when you stop using it. The idea that everything should be an always-running, fully-managed service is something that’s only traditional in the context of SaaS.

            The thing that makes me feel cold about all this stuff is that we’re getting further and further away from building software that is designed for normal people to run on their own machines. So many people that run Kubernetes argue that it doesn’t even make sense unless you have people whose job it is to run Kubernetes itself. So it’s taken for granted that people are writing software that they can’t even run themselves. I dunno. All this stuff doesn’t make me excited, it makes me feel like a pawn.

            1. 12

              You’re right, you probably wouldn’t use Kubernetes as an individual.

              I’ll take the bait a little bit though and point out that groups of people are not always corporations. For example, we run Kubernetes at the Open Computing Facility at our university. Humans need each other, and depending on other people doesn’t make you a pawn.

              1. 8

                Given the millions and millions spent on marketing, growth hacking, and advertising for the k8s ecosystem, I can say with some certainty we are all pawn-shaped.

                1. 5

                  totally fair criticism. I think “corporation” in my comment could readily be substituted with “enterprise”, “institution”, “organization” or “collective”. “organization” is probably the most neutral term.

                  Humans need each other, and depending on other people doesn’t make you a pawn.

                  so I think this is where my interpretation is less charitable, and we could even look at my original comment as being vague and not explicitly acknowledging its own specific frame of reference:

                  In a traditional OS, software starts when you start using it, and then it stops when you stop using it.

                  again, whose tradition, and in what context? Here I’m speaking of my tradition as a personal computer user, and the context is at home, for personal use. When thinking about Kubernetes (or container orchestration generally) there’s another context of historical importance: time-sharing. Now, I don’t have qualms with time-sharing, because time-sharing was a necessity at the time. The time-sharing computing environments of the sixties and seventies existed because the ownership of a home computer was unreasonably expensive: time-sharing existed to grant wider access to computing. Neat!

                  Circling back to your comment about dependency not inherently making someone a pawn, I ask: who is dependent on whom, for what, and why? We might say of time-sharing at a university: a student is dependent on the university for access to computing because computers are too big and expensive for the student to own. Makes sense! The dependent relationship is, in a sense, axiomatic of the technology, and may even describe your usage of Kubernetes. If anything, the university wishes the student wasn’t dependent on them for this because it’s a burden to run.

                  But generally, Kubernetes is a different beast, and the reason there’s so much discussion of Kubernetes here and elsewhere in the tech industry is that Kubernetes is lucrative. Sure, it’s neat and interesting technology, but so is graphics or embedded programming or cybernetics, etc, etc, etc. There are lots of neat and interesting topics in programming that are very rarely discussed here and elsewhere in programming communities.

                  Although computers are getting faster, cheaper, and smaller, the computers owned by the majority of people are performing less and less local computation. Although advances in hardware should be making individuals more independent, the SaaS landscape that begat Kubernetes has only made people less independent. Instead of running computation locally, corporations want to run the computation for you and charge you some form of rent. This landscape of rentier computation that is dominating our industry has created dependent relationships that are not inherently necessary, but are instead instruments of profit-seeking and control. This industry-wide turn towards rentier computation is the root of my angst, and I would say is actually the point of Kubernetes.

                2. 10

                  we’re getting further and further away from building software that is designed for normal people to run on their own machines

                  This resonates with me a lot. At work, we have some projects that are very easy to run locally and some that are much harder. Nearly all the projects that can be run locally get their features implemented more quickly and more reliably. Being able to run locally shortens the feedback loop dramatically.

                  1. 2

                    I’m really looking forward to the built-in embed stuff in Go 1.16 for this reason. Yeah, there are third-party tools that do it, but having it standardized will be great. I write Go servers and one thing I’ve done is implement every storage layer twice: once with a database, and once in process memory. The utility of this has been incredible, because I can compile a server into a single .exe file that I can literally PM to a colleague on Slack, and they can just run it and have a working dev server with no setup at all. You can also do this with sqlite or other embedded databases if you need local persistence; I’ve done that in the past but I don’t do it in my current gig.

                    1. 2

                      I write Go servers and one thing I’ve done is implement every storage layer twice: once with a database, and once in process memory.

                      In my experience the overhead of implementing the logic twice does not pay off, since it is very easy to spin up a MySQL or Postgres database, e.g. using docker. Of course this comes with the disadvantage of having to provide another dependency, but at least the service then runs in an environment similar to production. Usually spinning up a test database is already documented/automated for testing.

                      1. 1

                        That was my first thought, but upon reflection, the test implementation is really just an array of structs, and adds very little overhead at all.

                        1. 1

                          yeah, very often the implementation is just a mess of map[string]*Book, where there’s one Book for every model type and one map for every index, and then you slap a mutex around the whole thing and call it a day. It falls apart when the data is highly relational. I use the in-mem implementation for unit tests and for making debug binaries. I send debug binaries to non-developer staff. Asking them to install Docker alone would be a non-starter.
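                          To sketch the shape of that in-memory layer (in Rust rather than Go, to match the rest of the thread; Book and the trait are hypothetical stand-ins for the commenter’s types):

                          ```rust
                          use std::collections::HashMap;
                          use std::sync::Mutex;

                          // hypothetical model type, standing in for one of the Go `Book` structs
                          #[derive(Clone)]
                          struct Book {
                              id: String,
                              title: String,
                          }

                          // the rest of the server codes against this trait, so a database-backed
                          // implementation and the in-memory one stay interchangeable
                          trait BookStore {
                              fn put(&self, book: Book);
                              fn get(&self, id: &str) -> Option<Book>;
                          }

                          // "one map per index, a mutex around the whole thing"
                          struct MemStore {
                              by_id: Mutex<HashMap<String, Book>>,
                          }

                          impl BookStore for MemStore {
                              fn put(&self, book: Book) {
                                  self.by_id.lock().unwrap().insert(book.id.clone(), book);
                              }
                              fn get(&self, id: &str) -> Option<Book> {
                                  self.by_id.lock().unwrap().get(id).cloned()
                              }
                          }

                          fn main() {
                              let store = MemStore { by_id: Mutex::new(HashMap::new()) };
                              store.put(Book { id: "1".into(), title: "Dune".into() });
                              assert_eq!(store.get("1").map(|b| b.title), Some("Dune".to_string()));
                          }
                          ```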

                  2. 4

                    Suppose that you, an individual, own two machines. You would like a process to execute on either of those machines, but you don’t want to have to manage the detail of which machine is actually performing the computation. At this point, you will need to build something very roughly Kubernetes-shaped.

                    The difficulty isn’t in having a “lot” of nodes, or in running things “on their own machines”; the difficulty is purely in having more than one machine.

                    1. 16

                      you don’t want to have to manage the detail of which machine is actually performing the computation

                      …is not a problem which people with two machines have. They pick one, the other, or both, and skip all the complexity a system that chooses for you would entail.

                      1. 3

                        I struggle with this. My reflex is to want that dynamic host management, but the fact of the matter is my home boxes have had fewer pages than my ISP in the past five years. Plain old sysadmin is more than enough in all of my use cases. Docker is still kinda useful for not having to deal with the environment and setup and versions, but still: a lot of difficulty is sidestepped by just declining to buy into the complexity.

                        I wonder if this also holds for “smaller” professional projects.

                        1. 1

                          Unfortunately, I think that your approach is reductive. I personally have had situations where I don’t particularly care which of two identical machines performs a workload; one example is when using CD/DVD burners to produce several discs. A common consumer-focused example is having a dual-GPU machine where the two GPUs are configured as one single logical unit; the consumer doesn’t care which GPU handles which frame. Our operating systems must perform similar logic to load-balance processes in SMP configurations.

                          I think that you might want to consider the difficulty of being Buridan’s ass; this paradox constantly complicates my daily life.

                          1. 3

                            When I am faced with a situation in which I don’t particularly care which of two identical machines performs a workload, such as your CD burner example, I pick whichever, or both. Flip a coin, and you get out of Buridan’s ass paradox, if you will. Surely the computer can’t do better than that, if it’s truly Buridan’s ass and both choices are equally good. Dual-GPU systems and multicore CPUs are nice in that they don’t really require changing anything from the user’s perspective. Moving from the good old sysadmin way to Kubernetes is very much not like that.

                            I’m sure there are very valid use cases for Kubernetes, but not having to flip a coin to decide which of my two identical and equally in-reach computers will burn 20 CDs tonight is surely not worth the tradeoff.

                            1. 3

                              To wring one last insight from this line of thought, it’s interesting to note that in the dual-GPU case, a CPU-bound driver chooses which GPU gets which drawing command, based on which GPU is closer to memory which is also driver-managed; while in the typical SMP CPU configuration, one of the CPUs is the zeroth CPU and has the responsibility of booting its siblings. Either way, there’s a delegation of the responsibility of the coin flip. It’s interesting that, despite being set up to manage the consequences of the coin flip, the actual random decision of how to break symmetry and choose a first worker is not part of the system.

                              And yet, at the same time, both GPU drivers and SMP kernels are relatively large. Even when they do not contain inner compilers and runtimes, they are inherently translating work requests from arbitrary and often untrusted processes into managed low-level actions, and in that translation, they often end up implementing the same approach that Kubernetes takes (and which I said upthread): Kubernetes manages objects which represent running processes. An SMP kernel manages UNIX-style process objects, but in order to support transparent migration between cores, it also has objects for physical memory banks, virtual memory pages, and IPC handles. A GPU driver manages renderbuffers, texturebuffers, and vertexbuffers; but in order to support transparent migration between CPU and GPU memory, it also has objects for GPU programs (shaders), for invoking GPU programs, for fencing GPU memory, and that’s not even getting into hotplugging!

                              My takeaway here is that there is a minimum level of complexity involved in writing a scheduler which can transparently migrate some of its actions, and that that complexity may well require millions of lines of code in today’s languages.

                        2. 5

                          I mean, that’s not really an abstract thought-experiment; I do have two machines: my computer and my phone. I’d wager that nearly everyone here could say the same. In reality I have more like seven machines: a desktop, a laptop, a phone, two Raspberry Pis, a Switch, and a PS4. Each one of these is a computer far more powerful than the one that took the Apollo astronauts to the moon. The situation you’re talking about has quite literally never been a thing I’ve worried about. The only coordination problem I actually have between these machines is how I manage my ever-growing collection of pictures of my dog.

                        3. 5

                          My feelings exactly. Kubernetes is for huge groups. Really huge. If you only have one hundred or so staff, I am not convinced you get much benefit.

                          If you’re happy in a large company, go wild. Enjoy the Kubernetes. It isn’t for me; I’m unsure whether I will ever join a group of more than ten or so again, but it won’t be soon.

                        1. 6

                          My five favorite packages:

                            1. Ivy. The list-and-narrow model has radically changed how I use Emacs and I think this model should be emulated more widely. It’s a great way to deal with hundreds or even thousands of options in an easy manner.

                          (I tried selectrum and I like it and am philosophically more aligned with it, but Ivy is what I know and it requires fewer configs to get it to work the way I want it. Ivy also allows matching from external programs, for example ripgrep, which I use constantly.)

                            2. Magit. No surprise there; pretty much everyone who uses Emacs loves Magit, it’s just an awesome git porcelain.

                            3. dumb-jump. For programming modes where I don’t want to bother with configuring LSP, dumb-jump offers a very workable jump-to-def approach. It uses regular expressions to find lines that look like they may define the word under the cursor. Not always completely accurate, but it requires 0 configuration and works well enough for most cases.

                            4. move-dup and shift-text. I’m cheating here by including both, but they do similar jobs and allow me to manipulate lines of text similarly to how I got used to in Eclipse (Ctrl+Alt+Down/Up to copy the current line down/up, Alt+Down/Up to move a line down/up, etc.)

                            5. deadgrep. When counsel-rg gives too many results, deadgrep offers a very clean interface to view and navigate the results of a search. I especially love that by default it finds the root of the project (a Git project, for example), so it DWIMs pretty well.

                          1. 2

                              I keep trying magit but I can never get good enough at it to the point where it makes sense to actually use it. The documentation is sort of laughably impenetrable too (I say this as a long-time emacs user, but not a power user). The git blame docs, for instance, are just hilariously unhelpful.

                            Anywhoo, would be interested in giving it yet another go and would appreciate any resources you’re aware of that might be helpful.

                            1. 1

                              Anywhoo, would be interested in giving it yet another go and would appreciate any resources you’re aware of that might be helpful.

                              I found this introductory presentation from Howard Abrams helpful. Mostly though, once I bound C-x g to magit-status and got into the habit of using it, the combination of my existing (basic) git knowledge and the ? key-bind allowed me to learn by doing. Hope that helps :-)

                              1. 1

                                Thanks, I’ll give it a watch! I pretty much do the same thing but it never seems to work out (for example, I still can’t find blame anywhere).

                                1. 1

                                  I’ve used magit for many years yet I rely on the ? shortcut for using it.

                            2. 2

                              Great choices!

                               I’ve been using ag for ages, but I should try rg one of these days. I’m always using such search packages via Projectile, though, as they seem to be most useful within a project context.

                               Btw, how fast is dumb-jump for you on bigger projects?

                              1. 2

                                I tried dumb-jump on GCC with ripgrep and it was unbearably slow. I reverted to ggtags as it is considerably faster.

                              2. 1

                                deadgrep was really nice, thanks for sharing!

                                1. 1

                                  I tried selectrum and I like it and am philosophically more aligned with it, but Ivy is what I know and it requires fewer configs to get it to work the way I want it.

                                  Selectrum is new to me! I’ve been using Helm, but am considering switching, as Helm’s display doesn’t quite seem to be configurable enough – sometimes when switching buffers, it truncates the buffer name! Ridiculous!

                                  Is this the philosophy you’re talking about?

                                  The design philosophy of Selectrum is to be as simple as possible, because selecting an item from a list really doesn’t have to be that complicated, and you don’t have time to learn all the hottest tricks and keybindings for this. What this means is that Selectrum always prioritizes consistency, simplicity, and understandability over making optimal choices for workflow streamlining. The idea is that when things go wrong, you’ll find it easy to understand what happened and how to fix it.

                                  How does that differ from Ivy, which says it “aims to be more efficient, smaller, simpler, and smoother to use yet highly customizable”? I haven’t tried Ivy either, so high-level info is great.

                                  1. 2

                                     The section on Ivy in the Selectrum README is where they go into most of the detail. Essentially, Ivy was originally designed for Swiper, and got abstracted out of that at some point along the way, but not very cleanly, resulting in a lot of hardcoded special cases in the code for different functions. Selectrum, on the other hand, only wants to plug into completing-read and let a user choose from that list in a more convenient way, which allows the codebase to be something like ten times smaller.

                                1. 36

                                  Not a small change, but: getting a dog.

                                  With a dog, I have to go on walks outside way more often than I did before, and it’s doing me a lot of good physically, but also mentally: I don’t know what it is, but I don’t think about work when I’m on the trail with my boy. It’s also an amazing source of love and comfort after a tense Scrum planning or refinement.

                                  1. 1

                                     We’ve had a puppy for a month and a half, and yes, he is a life-changer. But before getting a dog, I suggest digging into the topic and understanding whether you can take care of someone who doesn’t understand your language(s) (not only words, but gestures and so on).

                                  1. 5

                                    Looking forward to other such posts! I changed M-SPC to use cycle-spacing and I was reminded of beacon.

                                    1. 1

                                      Same, except I switched beacon for the pulse recipe shown. I aim to do more with less, so fewer modules to install is right up my alley.

                                    1. 2

                                      Small suggestion: take a screenshot for https://opendocs.github.io/texme/examples/content-in-textarea.html and put the image in the README just under the Euler example code. People who scroll down quickly will see the result and not miss the link.

                                      1. 2

                                        Thank you for this great suggestion. I have added a screenshot of it now to this section: https://github.com/susam/texme#content-in-textarea. Indeed the README looks much better now and lets the reader get a feel of the output without having to leave the page.

                                      1. 23

                                        I strongly disagree with the author, but I posted it here nonetheless because it’s certainly something I’ve seen and heard from other developers.

                                        1. 7

                                           I went to two universities (one for undergrad, one for grad) and both had a similar intro-to-programming curriculum: teach Java, go really quickly over variables, types, conditionals and loops, and then, as quickly as possible, introduce OO design. The students have not even written a program of more than 50 lines of code before they are told that OO is the right way to structure very large programs.

                                          1. 50

                                            Regardless of whether you currently think your existing tools need replacing, I urge you to try ripgrep if you haven’t already. Its speed is just amazing.

                                            1. 7

                                              I’ll second this sentiment. Your favored editor or IDE probably has a plugin to use ripgrep and you should consider trying that too.

                                              1. 6

                                                 As an experiment I wrote a tiny Go webservice that uses ripgrep to provide a regex-aware global code search for the company I work at. The experiment worked so well over a code base of ~30GB that it will probably replace hound, which we use for this purpose at the moment. I did not even use any form of caching for this web service, so there is still performance to squeeze out.
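                                                 The core of such a service is essentially shelling out to rg; a minimal sketch of that part (in Rust here rather than Go, and the path plus the missing HTTP layer are assumptions):

                                                 ```rust
                                                 use std::process::Command;

                                                 // run ripgrep over a checkout and capture the matching lines; a real
                                                 // service would wrap this in an HTTP handler and add limits/caching
                                                 fn code_search(pattern: &str, root: &str) -> std::io::Result<String> {
                                                     let output = Command::new("rg")
                                                         .arg("--line-number") // prefix matches with line numbers
                                                         .arg("--regexp")
                                                         .arg(pattern)         // treat the query as a regex
                                                         .arg(root)
                                                         .output()?;
                                                     Ok(String::from_utf8_lossy(&output.stdout).into_owned())
                                                 }

                                                 fn main() -> std::io::Result<()> {
                                                     // "/srv/checkouts" is a placeholder for wherever the code base lives
                                                     let hits = code_search(r"fn main", "/srv/checkouts")?;
                                                     println!("{hits}");
                                                     Ok(())
                                                 }
                                                 ```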

                                                1. 5

                                                  https://github.com/phiresky/ripgrep-all comes with caching, it’s a wrapper around rg to search in PDFs, E-Books, Office documents, zip, tar.gz, etc

                                                2. 3

                                                  ripgrep and fd have changed the way I use computers. I’m no longer so careful about putting every file in its right place and having deep, but mostly empty, directory structures. Instead, I just use these tools to find the content I need, and because they’re so fast, I usually have the result in front of me in less than a second.

                                                  1. 5

                                                    You should look into broot as well (aside, it’s also a Rust application). I do the same as you and tend to rotate between using ripgrep/fd and broot. Since they provide different experiences for the same goal sometimes one comes more naturally than the other.

                                                    1. 2

                                                      broot is sweet, thanks for mentioning it. Works like a charm and seems super handy.

                                                  2. 1

                                                     3 or 4 years ago it was announced that the VS Code “find in files” feature would be powered by ripgrep. Anyone know if that’s still the case?

                                                  1. 3

                                                    Another monospace coding font, yay?

                                                    I’m really fond of Go Regular, a proportional font, but I really wish it wasn’t the only one made with coding in mind.

                                                    1. 2

                                                       The download here seems to also contain a variable font, unless I’m reading it wrong.

                                                      1. 3

                                                         Variable fonts are a way to package the regular, italic, bold, etc. versions of a font in a single file, with continuous interpolation along axes such as weight.

                                                    1. 3

                                                       I’m not familiar with client-side web dev; can anyone explain why the author claims that .sql cannot be synchronous? Would it not just hang the page until the value is returned, or is there something more complex going on?

                                                      1. 2

                                                        Synchronous HTTP requests are deprecated … well, pretty much shunned … as they play havoc with the browser’s behaviour. It’s pretty tricky to wrap an asynchronous request up into a synchronous API, so the assumption here is that the synchronous HTTP option must be what’s being used.

                                                        See: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests#Synchronous_request

                                                        1. 1

                                                          Browsers simply do not expose blocking APIs to javascript code. There isn’t a syscall (or equivalent) you can make that will block.

                                                        1. 3

                                                          What’s the rationale for wanting HKTs in Rust? I see a lot of people wanting to drag Rust closer to Haskell, but I don’t understand why.

                                                           My personal opinion is that adding HKTs to Rust would further complicate Rust — which is already nearly busting its strangeness budget with the ownership system — and offer little practical value in return. The only place where we might want such flexibility is with arrays, and this is being addressed with const generics.

                                                          1. 4

                                                             I think HKTs are one of the things that shouldn’t be missing from modern languages – one of the few features where I gladly pay the complexity cost (alongside turbo-charged if-expressions to get rid of separate pattern-matching constructs).

                                                             I don’t think you can blame HKTs for blowing up Rust’s complexity budget – Rust could easily have avoided the earlier mistakes that pushed its complexity budget to the current point. (Hello ::<>!)

                                                            1. 4

                                                               Turbofish is just a small syntax-level quirk. I don’t think it’s even in the top 20 sources of complexity in Rust.

                                                               There are much worse things, like patterns trying to be duals of expressions, which made & mean either reference or dereference depending on context (and in function and closure args, that context is hard to notice). Or the very subtle meaning of implied 'static bounds. Especially when the compiler says something has to live for a 'static lifetime, which isn’t the same thing as the lifetime of a &'static reference.
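                                                               A small illustration of that & duality (my sketch, not from the thread):

                                                               ```rust
                                                               fn main() {
                                                                   let x = 5;

                                                                   let r = &x;   // expression position: & takes a reference
                                                                   let &y = r;   // pattern position: & peels a reference off,
                                                                                 // so y is an i32 again, not a &i32
                                                                   assert_eq!(y, 5);

                                                                   // the same duality hides in closure arguments: iter() yields &i32,
                                                                   // filter's predicate receives &&i32, and the `&&n` pattern
                                                                   // dereferences twice to bind a plain i32
                                                                   let v = vec![1, 2, 3];
                                                                   let evens = v.iter().filter(|&&n| n % 2 == 0).count();
                                                                   assert_eq!(evens, 1);
                                                               }
                                                               ```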

                                                              1. 0

                                                                Turbofish is just a small syntax-level quirk.

                                                                I like the ::<> example because it fits the “easily avoided” descriptor best.

                                                                 (Rust got it right in the beginning, then changed it to the current, worse “design”.)

                                                                I don’t think it’s even in top 20 of complexity in Rust.

                                                                Yep, but if “getting these trivial-to-get-right things wrong” gets a pass, the floodgates are pretty much open.

                                                                A language community either decides “yes, we have a quality standard, and we enforce it” or it doesn’t; there is little in between.

                                                                 There are much worse things, like patterns trying to be duals of expressions, which made & mean either reference or dereference depending on context (and in function and closure args, that context is hard to notice). Or the very subtle meaning of implied ’static bounds. Especially when the compiler says something has to live for a ’static lifetime, which isn’t the same thing as the lifetime of a &’static reference.

                                                                Are there any write-ups that document “and this is how we should have done it properly, if we were allowed to”, because otherwise the mistake will be repeated over and over.

                                                                1. 1

                                                                   You’ve got Turbofish entirely backwards. It is an intentional design that fixed a parsing ambiguity. C++ doesn’t have such a disambiguator, and has well-documented problems with parsing < in templates vs comparisons and shifts. Rust avoided that mistake, and it was a conscious design decision. It fixed a fundamental problem in the grammar at the cost of a quirk that is just an aesthetic thing.
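                                                                   For the record, a minimal sketch of what the disambiguator buys (the example is hypothetical):

                                                                   ```rust
                                                                   fn main() {
                                                                       let xs = vec![1, 2, 3];

                                                                       // generic arguments in expression position need the ::<> prefix:
                                                                       let doubled = xs.iter().map(|x| x * 2).collect::<Vec<i32>>();
                                                                       assert_eq!(doubled, vec![2, 4, 6]);

                                                                       // without the leading ::, `collect<Vec<i32>>()` would be ambiguous
                                                                       // with chained `<` / `>` comparisons, which is exactly the C++
                                                                       // template-parsing problem mentioned above
                                                                   }
                                                                   ```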

                                                                   Putting “design” in scare quotes makes you sound like a troll.

                                                                  1. 3

                                                                    And the turbo-fish operator does not significantly affect how people interact with the language — HKTs would.

                                                                     I used to be into Haskell and I always wanted to abstract my code more and more (for what reason, I do not know). As I’ve gotten older, I’ve gotten grumpier and more conservative in what I want out of a programming language: I now put more value on simplicity and on solutions that solve an actual problem rather than a generalization of a problem.

                                                                     My worry is that Rust will become too complex if we say yes to every proposal for a new way to create abstractions. My 27-year-old self would probably be ecstatic, but my 37-year-old self is dubious that this is desirable.

                                                                     Rust is already taking quite a gamble that programmers will accept changing the way they work with memory in exchange for safety; I would hate for that gamble to become riskier by introducing every GHC extension into Rust.

                                                                    1. 1

                                                                      I’m not sure what you are trying to argue against.

                                                                       Picking a design that does not require working around parsing ambiguities, as Rust did in the beginning, is vastly superior to having 4 different syntax variations for generics and pretending that’s a good thing.

                                                              2. 1

                                                                I like that ‘strangeness budget’ idea. I think we agree on why it would be dubious to add higher-kinded types to Rust.

                                                                I see a lot of people wanting to drag Rust closer to Haskell, but I don’t understand why.

                                                                 I’m not advocating making Rust more like Haskell, but the reason I want a language “like Rust, with higher-kinded types” is that I think such a language is strictly better for building robust, maintainable software. To me, Rust is strictly better than C in the same way. I find it hard to believe that we have this all figured out after only ~70 years of innovation.

                                                              1. 8

                                                                 I don’t really have this problem; almost everything that ends up in my inbox generally matters, and the number of unread emails is almost always below a few dozen (usually fewer, 10 at the most). I’m not sure what I’m doing differently from you or what kind of unwanted emails you’re getting?

                                                                1. 3

                                                                   One problem I have is that I receive a lot of email meant for other people who think my email address is theirs. I get reminders for choir practice in Atlanta, deals at a Ford dealership in West Virginia, dentist appointment reminders, and tons of advertising for stores I don’t know about. It’s a never-ending fight to mark as spam or unsubscribe. In a single day, I get one or two emails meant for me and 20-25 meant for other people.

                                                                  1. 2

                                                                     I never get these. I think how many of these you get depends on the address.

                                                                1. 7

                                                                   fd and ripgrep have affected how I work with my computer. Because of their speed, I am no longer so careful about putting every file into its proper directory. I now have a much looser and flatter directory structure, but it works because I can very easily and quickly find the information I need.

                                                                  1. 8

                                                                    Most of the bitmap fonts are just too small to be comfortable for me to read. Look at Proggy, a good-quality bitmap font, but it’s just too small to be readable by anyone old enough to drink.

                                                                    1. 8

                                                                       I have a bit of a quarrel with 1-pixel-wide bitmap fonts, but the Atarist, Spleen, and Terminus (at large sizes) fonts all solve the issue of being too small.

                                                                      1. 1

                                                                        That just means your monitor’s resolution is too high.

                                                                        1. 6

                                                                          High DPI screens are here to stay. There’s no reason for bitmap fonts not to come in appropriate sizes for it.

                                                                          There are huge readability advantages to well-designed bitmap fonts and I really wish I could get them on a nice screen.

                                                                          1. 3

                                                                             Terminus at 16x32, its largest available size, will give you 2mm-wide glyph cells on a 200 PPI display (16 px / 200 PPI ≈ 0.08 in ≈ 2 mm). That’s about the physical size I use for code, YMMV of course. I don’t expect to switch to a bitmap font for everyday use, because I just don’t see any real advantage on a high-PPI display, and plenty of disadvantages. Seems like an aesthetic preference to me, and those kinds of debates are super boring; de gustibus non est disputandum. But there’s no technical reason not to have larger-sized bitmap fonts, if someone’s willing to do the work.

                                                                            1. 3

                                                                              This post has inspired me to try a few bitmap fonts and I’ve gotta say, if you can find one that fits your normal working size then it’s fantastic!

                                                                              Why? All else being equal, I can read smaller glyphs using a bitmap font, which means I can put more content on-screen at once.

                                                                            2. 2

                                                                              High DPI screens are here to stay.

                                                                              Are they? As of 2020, 1366x768 is still the most common display resolution. If I browse a store selling new laptops, 1080p is by far the most common resolution, and smaller resolutions than that are readily available. After ten years. High-DPI displays have taken over phones, but on laptops/desktops, they remain a niche choice, and niche choices can be taken away once manufacturers lose interest.

                                                                              High-DPI CRTs (although they were anything but ‘crisp’) used to be very common, until everyone switched to LCDs and happily stared at chunky 1280x1024 pixels for a good decade.

                                                                              1. 2

                                                                                 Huh. I guess it’s just my sector then - my users are overwhelmingly on high-DPI screens.

                                                                        1. 7

                                                                          Uhm… I understand it’s a contrived example, but why on earth would any such function that only needs to read something from HeavyThing’s state not take it by reference? That would avoid the whole problem.

                                                                          1. 2

                                                                            Then the caller would be the one that’s slow. The problem is still there, just in a different place.

                                                                            1. 2

                                                                              But then it wouldn’t happen at that critical time, which is basically what a thread would achieve, too.

                                                                              I mean, I believe there’s a problem, I’m just having a hard time imagining a practical example.
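                                                                              For what it’s worth, a minimal sketch of the two options in this subthread (HeavyThing’s body is invented; only the name comes from the example under discussion):

                                                                              ```rust
                                                                              use std::thread;

                                                                              // stand-in for the example's HeavyThing; assume dropping it is expensive
                                                                              struct HeavyThing {
                                                                                  data: Vec<u64>,
                                                                              }

                                                                              // only reads state, so a shared reference suffices and
                                                                              // no drop happens inside the function
                                                                              fn use_heavy(heavy: &HeavyThing) -> u64 {
                                                                                  heavy.data.iter().sum()
                                                                              }

                                                                              fn main() {
                                                                                  let heavy = HeavyThing { data: vec![0; 10_000_000] };
                                                                                  let total = use_heavy(&heavy);
                                                                                  println!("{total}");

                                                                                  // if even the caller can't afford the deallocation on its hot
                                                                                  // path, the drop can be shunted onto a throwaway thread
                                                                                  thread::spawn(move || drop(heavy));
                                                                              }
                                                                              ```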

                                                                          1. 5

                                                                            I really hope the “Git commit message length 50/72” rule also gets revised, as some editors (vim, for example) keep breaking my lines when committing from the command line.

                                                                            1. 11

                                                                              I’ve often not respected the 50-character summary limit and I think it’s made for better commit messages. Without being excessive, 60-70 characters can give you some wiggle room to be more precise.

                                                                              1. 0

                                                                                I have total disregard for this “rule” (except for projects where there is a hard policy, of course; for public PRs, e.g. on GitHub, the project owner will create the final commit message anyway).

                                                                                A short, single-sentence overview of the commit is a must, and automatically added stuff must fit too (e.g. Merge PR12345). If it is 200 characters, who cares? Usually it will be displayed on an ultrawide monitor in a web browser, and my terminals are able to line-wrap. At least history can be grepped/searched properly.

                                                                                Note: I prefer to work in squash-merge-by-default workflows, which I think creates the need for somewhat longer commit messages, as commits are usually bigger, complete, feature-level closed entities.

                                                                                1. 9

                                                                                  I think there are some good reasons for keeping summaries short though: the summary is like the Subject: field of your email, a concise description of what the commit is about. It makes things like git log much easier to read IMO; having terminals line-wrap all of that makes things much harder to read.

                                                                                  Another (more pragmatic) reason is that some tools don’t handle long commit summaries well. Specifically, GitHub doesn’t show them very well.

                                                                                  I think 50 is a good rule-of-thumb, which can be broken occasionally when it makes sense.

                                                                                  1. 2

                                                                                    I don’t mean to write the complete changelog in one line, but if it gets longer, then it is longer. It must be legible text; momentary presentation trends and limitations don’t matter so much, as the GitHub theme, terminal size, etc. will change, but the commit stays there and must be quickly understandable, maybe years later.

                                                                                    1. 5

                                                                                      200 characters doesn’t strike me as “quickly understandable” though; that’s a long sentence instead of a concise summary. I can’t really think of any situation where that can’t be shorter?

                                                                                      1. 1

                                                                                        It was just a number off the top of my head, to illustrate the magnitude of my not caring for the convention, not the result of exact research into my project histories.

                                                                                        Also, when doing releases/deploys, I preferred to have extensive “top-level descriptions” of changes from my “underlings” when reviewing hundreds of commits since the last release, to be able to recap the extent and risks of the changes.

                                                                                        Having PR 54321: ComponentX: Framework update to x.y.z. Fixes BUG-12345. WARNING: Config format changed! WARNING: SSL 1.0 support dropped! WARNING: Requires OS update level ZZZZ

                                                                                        Instead of PR 54321: ComponentX: Framework update to x.y.z. This totally made sense because several OS versions were in use in operations, several teams were working on the code (monorepo), and ops people could instantly get info from one-liner changelogs about the risks of updates.

                                                                                        Monorepo, squash merge, and this potentially-long commit summary convention improved the workflow in our situation, and overall the quality of our service. Adhering to arbitrary line-length rules because “some guy uses tiny terminal windows and does stuff in the console that we do another way” would have borne no fruit for our customers.

                                                                                        The commit messages had other lines detailing these changes, to let the ops team know where the docs are, what the other implications are, etc.

                                                                            1. 2

                                                                              I finished Dune last weekend. After giving up on The Lord of the Rings 15+ years ago, I will attempt reading it again. This weekend, I will start reading Fellowship and hopefully this time I will get into the literary style of Tolkien.

                                                                              1. 1

                                                                                From my memory of The Fellowship of the Ring, the first 2 or 3 chapters can be a slog. It picks up after that.

                                                                                1. 2

                                                                                            I love the Lord of the Rings… but it is a glacial book. Maybe that’s just because the last time I read it I was trying to get back into the habit of reading and my attention span was shot… do read it, though :-)

                                                                              1. 8

                                                                                Hopefully, I can get enough reading time to finish my first read-through of Dune. It’s a much better book than I expected, I’m really glad I picked it up.

                                                                                1. 1

                                                                                  I really need to sit down and read these books. I’ve seen every movie / TV adaptation that exists, and I bounced off of them when I was about 15, but I had the attention span of a gnat at that point.

                                                                                  Now I have the attention span of a dung beetle. At least :)

                                                                                  1. 2

                                                                                    The first book was good, the second two were interesting, and I never got through the two after that. And then Frank’s son took over and I hear it’s time to stop before that.