1. 2

    I started using Colemak before actually starting to use Vim, and when I switched to (neo)vim and started learning that, I rebound the keys in an…interesting way: nest. On QWERTY, that’d be jkdf—so it is still on the home row but split across both hands.

    st are up/down, and are easily usable left-handed to browse. When I used vimium, I liked that because it meant I could left-hand scroll while still using the mouse with my right hand. Now it’s just habit.

    ne are right/left (in that order—i.e. they’re inverted: the leftward key moves right). I don’t know why I inverted them. Maybe because M-n for “next window” in xmonad was an easy mnemonic. Maybe just because it felt right at the time. I don’t really use these keys when editing text, but my xmonad keybindings are the same for 2d window navigation and they do get used there.

    1. 8

      I think the author of this post is correct in surmising that the proliferation of feature-rich, graphical editors such as Visual Studio Code, Atom, and Sublime Text has a direct correlation with the downturn of Emacs usage in recent years. This might seem a little simplistic, but I think the primary reason most people don’t even consider Emacs as their editor is that the aforementioned programs are customizable using a language they are already familiar with, either JS or Python. Choosing between the top two interpreted languages for your editor’s scripting system is going to attract more people than choosing a dialect of Lisp. The fact that Emacs Lisp is one of the most widely used Lisp dialects tells you something about how popular Lisp is for normal day-to-day programming. It’s not something most are familiar with, so the learning curve to configuring Emacs is high. Meanwhile, VS Code and Atom let you configure the program with JSON and JavaScript, which I believe most developers in the world are familiar with at least on a surface level. If you can get the same features from an editor that is written in a familiar language, why would you choose an editor that requires you to learn something entirely different?

      I use Emacs, but only for Org-Mode, and I can tell you from experience that editing the configs takes a bit of getting used to. I mostly use Vim and haven’t really compared it to Emacs here because I don’t feel the two are easily comparable. Although Vim’s esoteric “VimL” scripting language suffers from the same problems as Emacs Lisp, the fact that Vim can be started up and used with relatively minimal configuration means that a lot of users won’t ever have to write a line of VimL in their lives.

      1. 14

        I might be mistaken, but I don’t think that most users of “feature-rich, graphical editors” customize their editor using “JS or Python”, or at least not in the same way one would customize Emacs. Emacs is changed by being programmed: your init.el or .emacs is an elisp program that initializes the system (setting the customize system aside). From what I’ve seen of Atom, VS Code and the like, you have JSON and perhaps a prettier interface. An Emacs user should be encouraged to write their own commands; that’s why the *scratch* buffer is created. It might just be the audience, but I don’t hear of VS Code users writing their own JavaScript commands to program their environment.

        It’s unusual from outside, I guess. And it’s a confusion that’s reflected in the choice of words. People say “Emacs has a lot of plugins”, as that’s what they are used to from other editors. Eclipse, Atom, etc. offer an interface to extend the “core”. The difference is reflected in the sharp divide between users and (plugin) developers. Compare that to Emacs, where you “customize” by extending the environment. For that reason the difference between “users” and “developers” is more of a gradient, or at least that’s how I see it. And ultimately, Lisp plays a big part in this.

        It was through Emacs that I learned to value Free Software, not as in “someone can inspect the code” or “developers can fork it”, but as in “I can control my user environment”, even with its warts. Maybe it’s not too popular, or maybe there are just more easy alternatives nowadays, but I know that I won’t compromise on this. That’s also probably why we’re dying :/

        1. 13

          Good defaults help. People like to tweak, but they don’t want to tweak just to get started. There’s also how daunting it can appear. I know that with Vim I can get started on any system, and my preferred set of tweaks is less than five lines of simple config statements (well, Vim is terse and baroque, but it’s basically just setting variables, not anything fancy). With Emacs, there’s a lot to deal with, and a lot has to be done by basically monkey-patching - not very friendly to start with when all you want is, say, to keep dired from opening multiple buffers.

          Also, elisp isn’t even a very good Lisp, so even the people who’d be more in tune with it could be turned off.

          1. 3

            Also, elisp isn’t even a very good Lisp, so even the people who’d be more in tune with it could be turned off.

            I agree on the defaults (not that I find vanilla Emacs unusable, either), but I don’t really agree with this. It seems to be a common meme that Elisp is a “bad lisp”, which I guess is not wrong when compared to some Scheme and CL implementations (insofar as one understands “bad” as “not as good as”). But it’s still a very enjoyable language, and perhaps it’s just me, but I have a lot more fun working with Elisp than with Python, Haskell or whatever. For all its deficiencies it has the strong point of being extremely well integrated into Emacs – because the entire thing is built on top of it.

            1. 1

              I also have a lot more fun working with Elisp than with most other languages, but I think in a lot of regards it really does fail. Startup being significantly slower than I feel it could or should be is my personal biggest gripe. These days, people like to talk about Lisp as a functional language, and I know that rms doesn’t subscribe to that, but the fact that by default I’m effectively blocked from writing deeply recursive functions is quite frustrating.

          2. 3

            It’s true, Emacs offers a lot more power, but it requires a time investment in order to really make use of it. Compare that with an editor or IDE where you can get a comfortable environment with just a few clicks. Judging by the popularity of macOS vs Linux for desktop/workstation use, I would imagine the same can be said for editors. Most people want something that “just works” because they’re busy with other problems during the course of their day. These same people probably aren’t all that interested in learning the Emacs philosophy and getting to work within a Lisp Machine, but there are definitely a good number of people who are. I don’t think Emacs is going anywhere, but it’s certainly not the best choice for most people anymore.

            1. 8

              Most people want something that “just works” because they’re busy with other problems during the course of their day.

              This has been my experience. I learned to use Vim when I was in school and had lots of free time to goof around with stuff. I could just as easily have ended up using Emacs, I chose Vim more or less at random.

              But these days I don’t even use Vim for programming (I still use Vimwiki for notes) because I simply don’t have time to mess around with my editor or remember what keyboard shortcuts the Python plugin uses versus the Rust plugin, or whatever. I use JetBrains IDEs with the Vim key bindings plugin, and that’s pretty much all the customization I do. Plus JB syncs my plugins and settings across different IDEs and even different machines, with no effort on my part.

              So, in some sense, I “sold out” and I certainly sacrificed some freedom. But it was a calculated and conscious trade-off because I have work to do and (very) finite time in which to do it.

              1. 7

                I can’t find it now, but someone in the thread notes something along those lines, saying that Emacs doesn’t offer “instant gratification” but requires effort to get into. And at some point it’s just a philosophical discussion on what is better. Having invested the time and effort, I certainly think it is worth it, and believe that’s the case for many others too.

                1. 3

                  IDEs are actually quite complicated and come with their own sets of quirks that people have to learn. I was very comfortable with VS Code because I’ve been using various Microsoft IDE’s through the years, and the UI concepts have been quite consistent among them. But a new user still needs to internalize the project view, the editing view, the properties view, and the runtime view, just as I as a new user of Emacs had to internalize its mechanisms almost 30 years ago.

                  It’s “easier” now because of the proliferation of guides and tutorials, and also because GUI interfaces are probably inherently more explorable than console ones. That said, don’t underestimate the power of M-x apropos when trying to find some functionality in Emacs…

                2. 3

                  Yeah, I use plugins in every editor, text or GUI. I’ve never written a plugin in my life, nor will I. I’m trying to achieve a goal, not yak-shave a plugin along the way.

                  1. 3

                    I’m trying to achieve a goal, not yak-shave a plugin along the way.

                    That’s my point. Emacs offers the possibility that extending the environment isn’t a detour but a method to achieve your goals.

                    1. 5

                      Writing a new major mode (or, hell, even a new minor mode) is absolutely a detour. I used emacs for the better part of a decade and did each several times.

                      I eventually got tired of it, and just went to what had the better syntax support for my primary language (rust) at the time (vim). I already used evil so the switch was easy enough.

                      I use VSCode with the neovim backend these days because the language server support is better (mostly: viewing docstrings from RLS is nicer than from a panel in neovim), and getting set up for a new language is easier than vim/emacs.

                      1. 1

                        It’s not too surprising to me that the line marking a detour gets drawn somewhere between automating a task by writing a command and starting an entire new project. But even then, I think it’s not that clear-cut. One might start by writing a few commands, and then bundle them together in a minor mode. That’s little more than creating a keymap and writing a bare-minimal define-minor-mode.

                        In general, it’s just like any automation, imo. It can help you in the long term, but it can get out of hand.

                  2. 2

                    Although I tend to use Vim, I actually have configured Atom with custom JS and CSS when I’ve used it (it’s not just JSON; you can easily write your own JS that runs in the same process space as the rest of the editor, similar to Elisp and Emacs). I don’t think the divide is as sharp as you might think; I think that Emacs users are more likely to want to configure their editors heavily than Atom or VSCode users (because, after all, Elisp configuration is really the main draw of Emacs — without Elisp, Emacs would just be an arcane, needlessly difficult to use text editor); since Atom and VSCode are also just plain old easy-to-use text editors out of the box, with easy built-in package management, many Atom/VSCode users don’t find the need to write much code, especially at first.

                    It’s quite easy to extend Atom and VSCode with JS/CSS, really. That was one of the selling points of Atom when it first launched: a modern hackable text editor. VSCode is similar, but appears to have become more popular by being more performant.

                  3. 7

                    but I think the primary reason for most people not even considering Emacs as their editor comes from the fact that the aforementioned programs are customizable using a language that they are already familiar with, either JS or Python

                    I disagree.

                    I think most people care that there’s a healthy extension ecosystem that just works and is easy to tap into - they basically never want to have to create a plugin themselves. To achieve that, you need to attract people to create plugins, which is where your point comes in.

                    As a thought experiment, if I’m a developer who’s been using VS Code or some such for the longest time, where it’s trivial to add support for new languages through an almost one-click extension system, what’s the push that has me looking for new editors and new pastures?

                    I can see a few angles myself - emacs or vim are definitely snappier, for instance.

                    EDIT: I just spotted Hashicorp are officially contributing to the Terraform VS Code extension. At this point I wonder if VS Code’s extension library essentially has critical mass.

                    1. 3

                      Right: VS Code and Sublime Text aren’t nearly as featureful as Emacs, and they change UIs without warning, making muscle memory a liability instead of an asset. They owe their popularity to marketing and visual flash, which Emacs currently doesn’t have, but Emacs is what you make of it, and rewards experience.

                    1. 2

                      I started using Debian Stable for my desktop after I unexpectedly had Arch fail to boot to X* (again) right as I was struggling to hit a major paper deadline.

                      Previously, I’d switched from Ubuntu to Arch because it let me keep up-to-date packages without the headache of Ubuntu’s dist-upgrade (and incredibly premature use of things like pulseaudio and Unity). It worked 99% of the time, but that 1% nearly fucked me over in a big way.

                      I’ve been running Debian Stable for two years now and have yet to ever have it fail to boot to X. During paper deadlines, this is wonderful because if I happen to need to update a library in order to make someone’s code compile, I can just do it and be confident that it won’t cost hours of time getting my system to boot up again.

                      (* When Arch broke, it was because I had to update a library (libigraph if memory serves), which in turn necessitated updating libc, which cascaded into updates everywhere and then lo-and-behold the system couldn’t fully boot until I tracked down a change in how systemd user units worked post-update.)

                      1. 2

                        We do have graph editors, they’re just all proprietary and/or use some arcane format that isn’t nearly as straightforward as text encoding.

                        1. 2

                          Yup, I faced this problem 6 months ago.

                          Even though they are far from being perfect, there are some FOSS graph editors.

                          Gephi: https://github.com/gephi/gephi is really nice. The problem I have with it is its stability. Other than that, you can edit and analyse gigantic graphs from a single tool, which is really nice.

                          I agree with you about the format problem. Even though GraphML is quite widely supported, getting interoperability using that format is quite hard, mostly because of poor implementations. For instance, pygraphml (https://github.com/hadim/pygraphml), which is the de facto standard GraphML library for Python, and Gephi’s GraphML importer are not compatible (a problem with the node labels, if I remember correctly).

                          1. 3

                            Depending on what area you’re in, GraphML may not even be feasible. I work with a good deal of social network data, and encoding any remotely large dataset in GraphML would be insane both in terms of parsing time and disk usage.

                            A large/medium (depending on who you ask) dataset I use for testing is nearly 50GB in ye olde edge list format, which takes about 20 minutes to read and parse. GraphML would take even longer. I use a binary format which reduces it to about 12GB and which takes 15-30 seconds to read depending on disk speed.

                            This is fundamentally why there are so many different formats for graph/tree data: such a general structure sees usage in a huge variety of fields, and therefore there are a huge variety of requirements for what it can represent, how efficiently it needs to do so, etc. No one format can possibly meet all of these requirements.

                          2. 1

                            Which programs do you have in mind?

                          1. 47

                            It’s a new language, you have to go slow. I don’t get why people think they should automatically be productive in a new language. Yes, it sucks to be slapped upside the head by the ownership model. But if you want the safety advantage (which, presumably, you do, given that you’re using Rust), that means a compiler that you have to please and think like in order to make it to executable code. If you don’t, use C.

                            I guess I find it immensely confusing that people clamor for tools to save them from themselves and then summarily reject those tools when they aren’t lenient enough. It’s clear that, collectively, we believe less in good practices than in good tools. After all, what is React but a way to enable lots of [junior] coders to work on parts of a page without stepping on each other?

                            Edit: misremembered quote, removed that, thanks angersock

                            1. 12

                              I don’t get why people think they should automatically be productive in a new language.

                              I agree. I am by no means a Rust expert, but I have done some small projects in Rust. The ownership problem in the last example seems trivial to solve. Either you do an early return if you have cached the entry, something like:

                              if let Some(cached) = self.cache.get(host) {
                                  return Ok(cached);
                              }
                              // Now the immutable borrow is gone.

                              Or you use the entry function of HashMap. If you find that the entry is occupied, you use get on OccupiedEntry. Otherwise, you have your mutable handle via VacantEntry and you can use it to insert the results into the cache.

                              I understand the author’s frustration. I have also been through the two week ‘how do I please the borrow checker’-hell. But once you understand the rules and the usual ways to address ownership issues, it’s relatively smooth sailing. The guarantees that the Rust ownership provides (like no simultaneous mutable/immutable borrows) allow you to do really cool things, like the peek_mut method of BinaryHeap:


                              Basically it restores the heap property through the ‘Drop’ (destructor) trait. It is safe to do this, because it’s a mutable borrow, blocking immutable borrows and thus inconsistent views of the data when the top of the heap is changed and the sift-down operation has not been applied yet. The other day, I used the same trick in some code where I have a sparse vector data structure, which is a wrapper around BTreeMap that automatically removes vector components when their values are changed to 0.

                              1. 11

                                I get your point, but you are blatantly misquoting the author here. They spent multiple evenings learning rust for their true toy, not an hour. The hour was spent getting 30 lines of code for a different toy example into something that looked like reasonable rust, which didn’t then compile.

                                Please don’t quote people out of context, especially when it is so easy to catch.

                                1. 7

                                  You are right, I double-checked the context and fixed my mistake. Point still stands, however.

                                2. 4

                                  I’m thinking about Haskell and wondering why this is, because the difference is pretty stark. I didn’t expect to ever actually learn Haskell, so when it finally seemed to be happening after months of messing with it, I was pretty overjoyed. Nobody had told me it would be easy; in fact, I had assumed it would be really difficult and I might not be able to get there. I’m not paying close enough attention, are they marketing Rust as something you can learn pretty easily? That’s not the sense one gets from reading blogs.

                                  A lot of people move between Java and JavaScript or C#, and they’re just not prepared for something totally different. This guy is pretty competent though, I don’t think that’s what’s happening here.

                                  Maybe systems people are just really impatient for progress?

                                  Ultimately, all this negative press will work to its benefit. Programmers like to be elitist about knowing hard things (like Haskell, in years past) so if Rust develops a reputation of impenetrability, that just means in two or three years there will be a lot of young Rust programmers.

                                  1. 23

                                    Ultimately, all this negative press will work to its benefit.

                                    Unfortunately, this was exactly the sort of negative press (right down to an ESR hatchet job quoted endlessly) that killed Ada in industry.

                                    For whatever reason “I spent a whole hour writing code that in the end didn’t compile” is considered a damning indictment of a language, whereas “I spent a whole hour writing code that in the end compiled with many subtle correctness issues unaddressed that later bit me in the ass” is considered “wow so productive!”.

                                    Our industry is still very immature.

                                    1. 15

                                      whereas “I spent a whole hour writing code that in the end compiled with many subtle correctness issues unaddressed that later bit me in the ass” is considered “wow so productive!”.

                                      Incidentally, the C++ code that the OP wrote actually contains undefined behavior for violating the strict aliasing rule. (There’s a cast and dereference from char* to trie_header_info*, where trie_header_info has a stricter alignment than char.)

                                      Actually, the Rust code from the OP (linked in comments) contains the same error, but it is annotated with unsafe. :-)

                                      1. 2

                                        ESR definitely ran into the issue described by @brinker above, but apart from that I thought he was reasonably fair about what issues he had. It did not read as particularly spiteful to me, just that he found Go more productive and Rust frustrating and immature. That makes sense; Rust is younger and less mature, and Go prioritizes productivity above many other things.

                                        I’m not sure what “killed” Ada in industry but lots of good languages failed during that era for reasons that had nothing to do with technical superiority. Personally I find Ada kind of dull to read and the situation with GNAT has always confused me. I thought Eiffel looked more like “Ada done right” when I was in college; now I think Ada probably had a better emphasis on types than Eiffel did that probably lent itself to reliability more directly than Eiffel’s design-by-contract stuff. But this is way out on the fringe of anything I know about really; I would greatly enjoy hearing more about Ada now.

                                        1. 17

                                          ESR’s post generated substantial frustration in the Rust community because a number of the factual claims or statements he makes about Rust are wrong. Members of the Rust community offered corrections for these claims, but ESR reiterated them without edit in his follow up post. Reasonable criticism based on preference or disagreement with design decisions is one thing, claiming untrue things about a language as part of an explanation of why not to use it is another thing entirely.

                                          1. 4

                                            Well, if it makes you feel better, the big thing I got from it was “NTPsec relies on a certain system call that isn’t a core part of Rust” which isn’t going to weigh on my mind particularly hard. I agree with you that criticism should be factual.

                                          2. 14

                                            ESR definitely ran into the issue described by @brinker above, but apart from that I thought he was reasonably fair about what issues he had.

                                            That wasn’t really my impression: he was factually incorrect about a number of issues in both his initial rant and his follow up (string manipulation in particular), and he mostly seemed to be criticizing Rust for not being what he wanted rather than on its own terms (ok, Rust doesn’t have a first-class syntax for select/kqueue/epoll/IO completion ports – but that’s obviously by design, because Rust is intended to be a runtimeless C replacement, not something with a heavy-weight runtime that papers over the large semantic differences between those calls. If you went into Rust just wanting Go, then just use Go).

                                            I’m not sure what “killed” Ada in industry but lots of good languages failed during that era for reasons that had nothing to do with technical superiority.

                                            If I had a nickel for every time someone quoted ESR’s “hacker dictionary” entry on Ada being lol so huge and designed by comittee, I’d have…well, a couple of dollars, anyways. You’ll still hear them today, and Ada is still on the small side of languages these days, and still isn’t designed by committee, while a lot of popular languages are.

                                            It had some attention on it in the late ‘90s, but every time a discussion got going around it, you’d hear the same things: people parroting ESR with no direct experience of their own, people (generally students, which is understandable, but also working professionals who should have known better) complaining that the compiler rejected their code (and that’s obviously bad because a permissive compiler is more important than catching bugs), etc.

                                            Just a general negative tone that kept people from trying it out, which kept candidates for jobs at a minimum, which kept employers from ever giving it serious thought.

                                            1. 5

                                              I encountered Ada in the early 90s (in college). At that time, Ada was considered a large, bondage-and-discipline language where one fought the compiler all the way (hmm … much like Rust today). C had just been standardized and C++ was still years away from its first real standard, so Ada felt large and cumbersome. I liked the idea of Ada, just not the implementation (I found it a bit verbose for my liking). ESR was writing about Ada around this time, and he was parroting the zeitgeist of the day.

                                              Compared to C++ today? It’s lightweight.

                                              1. 4

                                                Yeah, it’s funny the way initial perceptions sink something long after they cease to be true.

                                                I was having these conversations in 1998-1999, after the Ada ’95 and C++ ’98 standardizations, which made it really easy to point and say “no, actually, Ada’s a lot smaller than C++ at the moment”. But it didn’t really matter, because an off-the-cuff riff on Ada as it had stood in maybe the late ’80s was the dominant impression of the language, and no amount of facts, or of pointing out that paying a bit of cost up front to satisfy the compiler easily beats paying 10x that cost chasing bugs and CVEs, was capable of changing that.

                                                This is more or less what I’m concerned is the unclimbable hill Rust now faces.

                                                1. 7

                                                  I have one Ada anecdote. When I was an undergrad, one of the CS contests in my state had a rule that you had to use C, but you could use Ada instead if you had written a compiler for it and you used that compiler. Apparently students from the military institute would occasionally show up to the contest with their own Ada compiler, and they were allowed to use it.

                                        2. 20

                                          In my experience, the problem is thus:

                                          C and C++ programmers, hearing “systems language,” expect Rust to be a lot easier for them than it is. Yes, Rust is a systems language that offers the same degree of performance that they do, but it is in many respects wildly different. This difference between expectation and reality leads to a lot of confusion and frustration.

                                          On the other hand, people coming from languages like Python and Ruby expect Rust to be hard, and often find that it is easier than they anticipated. Not easy, but easier.

                                          1. 7

                                            I might amend that to hearing “systems language that will make you unbelievably productive”. The common thread to many rust complaints I’ve seen is that “after considerable investment studying the borrow checker” is relegated to a tiny footnote.

                                            There’s quite a gap between “correct” and “provably correct” code. It’s easy to write the former, but convincing the compiler of the latter is difficult. It doesn’t really feel like progress.

                                            1. 10

                                              To your first point, I do think that Rust could be more up front about the complexity. For example, I think that there could be more done to encourage Rust programmers to read The Rust Programming Language book before attempting a new project. There are a number of common stumbling blocks addressed in the book, and things like changes to Rust’s website could encourage more people to read it.

                                              On the second point, I disagree. While there are C and C++ programmers who can write safe code without the ceremony of Rust to back them up, I don’t think this is true for most programmers. That is, while the gap between “correct” and “provably correct” may be large, there is also a large gap between “looks correct but isn’t” and “actually correct” code, and Rust helps the layperson write “actually correct” code with confidence.

                                              1. 2

                                                This book could also be a source of the mismatch between users that didn’t have much trouble getting over the borrow checker (e.g. me) and those that did.

                                                When I was learning Rust, I had multiple tabs open with the book all the time and it helped tremendously. However, I’m also familiar with a lot of FP languages (not necessarily fluent; e.g. Haskell, Scala, Clojure, OCaml), so the type system wasn’t an additional point of confusion like it may be for those who’ve spent the majority of their time in C/C++/Java.

                                              2. 2

                                                By “correct” and “provably correct”, you mean “provably correct by hand” and “mechanically provably correct”, respectively, right? I have no idea how you could write code you know is correct without at least an informal proof sketch in your head that it’s indeed correct. (Or are you saying it’s easy to write correct programs by accident? I’m pretty sure that’s not true.) An informal proof might not satisfy a mechanical proof checker, but it’s a proof nevertheless.

                                            2. 3

                                              Maybe systems people are just really impatient for progress?

                                              That statement sounds so weird after a lifetime of C/C++ being virtually the only game in town. I know about Ada and the like, but for non-specialized systems programming, yeesh.

                                              1. 4

                                                I mean, impatient to make progress on their problem, not impatient for new languages and paradigms.

                                                1. 1

                                                  Ah, that makes more sense.

                                                  I dunno, webbers I run into seem to have stronger time preference than systems people.

                                                  Systems is harder initially I think.

                                            3. 3

                                              The Hacker News comments made a really good point: while on the whole Rust may not be more complex than, say, C++ or Haskell, it makes you pay most of that complexity up front as the price of admission, whereas in most languages you can start off by using a small subset of the features relatively easily and have working code while you ramp yourself up to the full language.

                                              1. 2

                                                It’s a new language, you have to go slow.

                                                So true. Going from Java -> Ruby took me months to get something near idiomatic code. Ruby -> Golang was much faster (about a month) but still took time. Like you, I’ve realized it takes weeks or months for these ideas to percolate through your brain and become second nature.

                                              1. 14

                                                As a serious vim user: This is genuinely cool! It might be too late for my fingers to ever abandon Vim, but I applaud any effort to make modal editing more learnable and ubiquitous. The object->verb ordering is probably the single biggest contribution of the modern OOP language world and it just makes sense for text editing commands too.

                                                Some constructive criticism: My visceral reaction to seeing Clippy is so bad that it almost makes me not want to read anything else on your pages or watch your videos. You may consider avoiding Clippy and his negative associations.

                                                1. 2

                                                  The new grammar may bring some benefit, but text objects are IMO not a real problem. Let’s see if I get around to testing this, but I’m wary of it, because Vim takes a long time to master, so this probably will, too.

                                                  My gut tells me it’s trading something off for something else and the individual user’s mileage will certainly vary.

                                                  1. 2

                                                    Clippy’s not that bad, but it would be nice if you could turn him off.

                                                    The code would suggest there’s a cat option if it bothers you that much:


                                                    1. 6

                                                      It is actually possible to turn it off entirely. I’ve been messing with kakoune and this is the first line in my kakrc:

                                                      set global ui_options ncurses_assistant=none
                                                    2. 2

                                                      They’re just following in the footsteps of Clippy for nvi.

                                                    1. 1

                                                      A few short years ago: Emacs. It introduced me to the love of my life: Lisp.

                                                      Also, maybe a bit surprising to some: JavaScript. Getting a job writing JS set me up to get a couple other jobs, and I’ve made a lot of great friends through those experiences. Not to mention that JS also introduced me to FP and subsequently Clojure(Script) and a wider world of programming languages.

                                                      1. 3

                                                        Yeah, I can definitely agree with your choice of Emacs. I recently switched from Vim to Spacemacs and it was a significant jump in my productivity. I almost put it in my above list, but decided not to because although it was a big improvement, it wasn’t quite as paradigm-shattering and career-changing as the other three.

                                                      1. 6

                                                        In light of the (somewhat recent) discussion on pop science writing, I’d like to mention that this is a great example of popsci done right.

                                                        The writing is clear, to the point, and doesn’t speculate beyond the author’s abilities. Really, it’s like a well-written abstract: after reading it I have a clear idea of what the paper’s conclusion is, how they reached it, and what the significance is. At the same time, I could send this to my non-tech friends and they’d get all the important bits.

                                                        Good on you, Richard Chirgwin. Good on you.

                                                        1. 1

                                                          This actually might solve one of the big problems I’ve had using Luigi to run the same program on a huge number of parameter sets. I can generate the configuration files, store them for reproducibility, and run based on them.

                                                          1. 10

                                                            I agree with the general sentiment of this post, but the example they use seems pretty weak. I don’t even know Ruby, but I have a solid idea of what that one-liner does from reading it (upcase the first word of the string; granted, it took more effort to do so than many lines I’ve read – but again, I don’t know Ruby).

                                                            A better example of a darling one-liner (imo) would be the one-statement swaps in C

                                                            1. 8

                                                              I actually think this would put the sentence in title case, not upcase the first word of the string.

                                                              1. 9

                                                                Yes, it does put the sentence in title case.

                                                                I agreed with emallson—it did seem pretty easy to read—up until his interpretation of what it did, ironically proving it wasn’t easy to read. ;)

                                                                1. 5

                                                                  In fairness, I did mention that I don’t know Ruby. That I got that close without knowing the language seems to indicate that that snippet of code isn’t actually that difficult to read (and, besides, both you and @magikid were able to read it).

                                                              2. 7

                                                                one-statement swaps in C

                                                                These annoy me because people do them in the name of performance, as if the compiler wouldn’t know the best way to swap two integers. Seriously? It’s 2016.

                                                                Clang even optimizes the triple XOR trick into MOV instructions because it’s inane. Most of the time it can optimize the swap out completely into its normal register juggling anyway.

                                                              1. -2

                                                                I’m sorry to be disrespectful, but this is the most disappointing “Emacs clone” I’ve seen to date. I understand it’s a personal project and only a week old, but at least dial back the expectations and call it “Emacs-like”. If it can’t load (and use) my .emacs file then it’s not a clone.

                                                                The whole phenomenon of “cloning” Emacs in different languages makes very little sense to me.

                                                                1. 9

                                                                  Uh, the standard of “it should be able to read my .emacs” is an absurdly high bar for calling something a clone. Even during the heyday, a .emacs would often work on GNU Emacs but not XEmacs, or vice versa, and they were both fully-fledged implementations.

                                                                  To be honest, I’d be surprised if any non-fork of GNU Emacs could ever (correctly) interpret a .emacs initially written for GNU Emacs without significant modification.

                                                                  1. 2

                                                                    I’m fully aware of that, but that’s what it means to “clone” something.

                                                                    I don’t see the big deal in admitting there’s no intention to support ELisp and calling it Emacs-like. Calling it a “clone” implies a much more impressive level of effort and support for existing Emacs code which most of these projects simply aren’t going to do.

                                                                    I’m also aware that 30 years ago there were numerous incompatible “Emacs” implementations, but that was a long time ago and nowadays Emacs is synonymous with GNU Emacs (and maybe XEmacs), and everybody who sees “Emacs clone” will be thinking of GNU Emacs.

                                                                  2. 2

                                                                    The word ‘possibly’ tells me that @dwc has a refined definition of ‘clone of emacs’. EDIT: actually the phrase came from larsbrinkhoff, the author of this fmacs… And that guy is also the author of ‘emacs-cl’!

                                                                    I could be wrong!

                                                                    For me, ‘an emacs’ is something that binds key-presses to functions, and some of those functions can edit text… Really, that’s it. That basic functionality is what Emacs is built on top of. (See ‘temacs’[0].)

                                                                    Forth is known to be good at building a lot from a little… :)

                                                                    0: https://www.gnu.org/software/emacs/manual/html_node/elisp/Building-Emacs.html

                                                                    1. 2

                                                                      The word ‘possibly’

                                                                      I took this to mean “maybe in the future”

                                                                    2. 2

                                                                      OK so people are mis-using the word “clone” a bit. Probably should be saying “an emacs-LIKE editor written in FORTH.”

                                                                    1. 2

                                                                      I’m (J.) David Smith, currently a Ph.D. student at the University of Florida. Graduated with a B.S. in Comp Sci and Math from University of Kentucky last year, and did a couple of internships at IBM during undergrad.

                                                                      My area of research is currently online social networks, particularly optimization problems on them. I’m helping with some work on viral marketing (eww) right now, but will be moving more towards a combination of theory and more socially-valuable applications in the near future.

                                                                      The main hobby I indulge in is video gaming, which right now basically means XCom 2, LoL and WoW. I also have been doing Brazilian Jiu-Jitsu for close to a year now (highly recommended! a great art, and a great community around it) and spent not a little time working on programming projects on the side.

                                                                      My current (latent) project is a Scheme-scriptable Wayland window manager (Gram). I’m super-dissatisfied with how the low-level experience in WoW is right now as well, so I’m working on some math models to see if I can come up with constructive recommendations to improve it.

                                                                      1. 2

                                                                        Say, did you know a guy named Connor Greenwell at the University of Kentucky? I was in a research program with him at UNC Charlotte in 2014. (I have no idea how large the UK CS department is, so it may be that this question is silly.)

                                                                        1. 2

                                                                          It’s a small world! I worked with him under Dr. Jacobs at UK. (The UK CS dept. is large enough that there are a lot of people I didn’t know, but Connor is one I do) I’ve semi-kept-in-touch with him. Were you working with the same professor at UNCC or just fellow REU members?

                                                                          1. 2

                                                                            Oh cool! We were working in the same lab, different advisors. He was working with Dr. Souvenir on using facial information to assist with determining the location a picture was taken. I was working with Dr. Zhang on automated correction of cell boundary results for breast cancer biopsy images. I haven’t talked with him in a while. We have a Facebook group for the REU people, and I periodically post there to see how everyone is doing.

                                                                      1. 11

                                                                        There’s a collective of quixotic Mexican software developers and users that is quite active. I wonder why it is that the FSF’s philosophy, with its exhortation to viciously defend freedom, resonates so well in some parts of Mexico. It was those groups, which congregate at the Hackmitin[1], Hacklab Autónomo, and Rancho Electrónico, that helped Jacobo Nájera with his legal proceedings against Secure Boot.

                                                                        I went a couple of times to the Hacklab. It’s an interesting place. At the time, it looked like they were squatting in an abandoned building, and they looked like Hollywood hacker stereotypes. If it weren’t for the proliferation of hardware with Debian and Trisquel logos, their appearance would make you think they were just ordinary anarchist punks. In a way, that’s what they are, except they are technoanarchist punks, and obviously not completely anarchist, as they know how to work with the legal system. They were very left-leaning, distrustful of all corporations, completely aligned with FSF philosophy; radical, feminist, and fiercely protective of their rights.

                                                                        I rather miss that scene. I haven’t found anything quite like it here in Canada.

                                                                        I hope Nájera manages to get somewhere, but it seems like a hopeless fight against MSFT, the one that is really ensuring that installing the OS of your choice is impossible. The whole “security” thing is a sideshow; the real goal here with “Secure” Boot is to make it harder to install unlicensed copies of Windows.

                                                                        [1] (“mitin” in Spanish is from English “meeting” but has left-leaning political connotations such as protests and marches.)

                                                                        1. 4

                                                                          I wonder why it is that the FSF’s philosophy, with its exhortation to viciously defend freedom, resonates so well in some parts of Mexico.

                                                                          I suspect it’s because much of Latin America has deeper roots in leftist politics, and Free Software hews closely (if not explicitly) to much of the same underlying philosophy. I’m more disappointed it doesn’t resonate with the majority of the tech crowd here in the States. Instead, the reactionary “Open Source” movement is the cultural juggernaut, often with an explicit rejection of Free Software. I’d love to get involved with what they’re doing. Granted, my Spanish is mediocre at best. I do like this bit from your second link, though (at the bottom):

                                                                          NO NOS ENCONTRARÁS EN FACEBOOK

                                                                          1. 1

                                                                            I hope Nájera manages to get somewhere, but it seems like a hopeless fight against MSFT, the one that is really ensuring that installing the OS of your choice is impossible. The whole “security” thing is a sideshow; the real goal here with “Secure” Boot is to make it harder to install unlicensed copies of Windows.

                                                                            Or you can enroll your own keys, sign your own kernels, and remove the OEM-provided ones. Or enroll the keys of some other entity you trust.

                                                                            Secure Boot is a tool. That’s all it is. Properly used, it can be a major boon in preventing rootkits at the bootloader and kernel module level.

                                                                            1. 4

                                                                              That assumes you can add your own keys. I don’t believe that’s possible on my current laptop, though I haven’t investigated extensively. I can do it on my desktop, but I feel that that’s likely a rarity.

                                                                              1. 5

                                                                                Microsoft mandates that for a computer to be sold with OEM Windows 10, you must be able to configure the chain of trust. If it doesn’t do that, it’s buggy firmware. (I’m completely unsurprised by OEMs, though.)

                                                                          1. 3

                                                                            Work: trying to make heads/tails of my direction on this next paper. It is a continuation of the topic of an accepted paper, but I feel like I’m in a bit over my head. I’ve been asked by my advisor to try to parallelize our greedy approximation algorithm, which would be relatively straightforward given recent literature, except for a catch: in the interesting case, our greedy function is not submodular. Submodularity (i.e., adding elements to the solution suffers diminishing returns) is key to most general work on greedy approximations. We worked around this in previous work by proving bounds in a submodular special case and then showing that in practice the supermodular case had similar results. However, I want to omit the (uninteresting) submodular case from further work, so we’d lose that comparison. Besides, proving bounds for the supermodular case would be valuable.
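                                                                            Spelled out, with f the objective over candidate sets, the distinction in play is:

```latex
% Submodular (diminishing returns): for all A \subseteq B and e \notin B,
f(A \cup \{e\}) - f(A) \;\ge\; f(B \cup \{e\}) - f(B)
% Supermodular reverses the inequality, so marginal gains can grow
% with the solution, which is what breaks the standard greedy bounds:
f(A \cup \{e\}) - f(A) \;\le\; f(B \cup \{e\}) - f(B)
```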

                                                                            Unfortunately, doing so means we’d be unable to directly re-use work on stochastic optimization and the parallelization linked above (along with every other greedy parallelization I’ve seen). I may be able to re-phrase the objective as submodular, but the result will not be as obviously correct as the supermodular function.

                                                                            Personal: I fought a bunch with gram/Guile over the past week trying to implement (jump-to-workspace) in terms of dmenu selecting a workspace by name from a list. Guile has a really nice IO-port system that opaquely manages subprocesses, but it unfortunately lacks the ability to close the input and output separately, so I worked around it with some /tmp/file shenanigans. The result was inexplicably causing Gram to hang when called by a keybinding, but not when used at the REPL. I figured out the reason this morning: calling waitpid (indirectly, which is why it took so long to figure out) on the render thread (i.e., via a keybinding) prevents dmenu from ever being rendered, and therefore from exiting, and thereby prevents waitpid from returning. Oops. The temporary workaround is to count on Guile to automatically reap the process when the handle for it goes out of scope, but I’m planning to move hooks into their own threads so that, in general, a runaway hook won’t cause the WM to hang.

                                                                            1. 2

                                                                              By volume, I mostly play video games (these days that means lots and lots of XCOM 2; fast approaching 200 hours).

                                                                              By energy expended, definitely Brazilian Jiu-Jitsu. I try to maintain a schedule of 3 classes a week so that I make noticeable improvement (good for my morale), but often I only make two.

                                                                              In the time between XCOM campaigns and BJJ I work on miscellaneous programming projects (most recently a tiling WM) and read. I’ve been finishing up Tufte’s Envisioning Information and will probably switch over to the SFF short story anthologies I’ve got loaded onto my Kindle after that.

                                                                              1. 5

                                                                                This week I’m wrapping up work on the paper we were trying to submit to ICDM (but didn’t quite make). One of the experiments (that I wrote, oops) didn’t come out as expected and we believe it’s an implementation error.

                                                                                From there, I’m going to be reading on a problem I want to work on and starting to work on a continuation of previous projects with an eye towards INFOCOM submission. Also still toying with alternate implementation languages. I’ve made some progress on a Rust implementation that is pretty exciting. Should be enough done today to actually do some performance tests!

                                                                                On the personal side, I’ve set gram up as the window manager on my laptop and have already run into usability issues. I wrapped the C-level hooks in Guile catch code to prevent them from causing a crash when exceptions are thrown, but it needs further improvement (it doesn’t log stack traces at the moment – though that should be trivial – and doesn’t continue running other hooks in the list after the first one throws). But it’s actually surprisingly usable at this point. I need to implement some userland workspace and output handling code and then write a wrapper for either xmobar or i3bar to give a status bar, but it’s usable (though not yet recommended for anyone interested).

                                                                                1. 3

                                                                                  I made a lot of progress in all of my projects except for work last week. I have added most of the mouse and floating layer support to gram. Last night I hit the milestone that the following worked:

                                                                                  (define-key! default-keymap (kbd "Mouse1") view-focus)

                                                                                  I’ll bet that even without much context, most of you can guess what that does. The only remaining thing for floating layer support is to actually implement drag-to-move/resize, which at this point will be done in Guile as I’ve exposed all the necessary primitives there. EDIT: Done! ^.^

                                                                                  Once I finish that, I’m going to figure out how I intend to document this and write the remainder of the user-motion functions.

                                                                                  I’ve also been experimenting with writing greedy algorithms for work (grad student) in other languages. My first stop was Scheme (specifically Chez, because of its reputation for speed, which is crucial for this). Writing the reverse-influence-sampling (sampling probabilistic graphs backwards to estimate node influence) code and the BFS implementation that supports it was surprisingly easy, and it performs reasonably well. Sampling 1M times from a 5000-node graph takes about 3 seconds, which is more than the C++ implementation I have, but it is also far more readable. Unfortunately, when implementing the greedy algorithm itself, things fell apart and performance is abysmal. I may take a second pass at it because my code is frankly horrifying (lots of do, little recursion), but I’m not confident that it will reach an acceptable level of performance.

                                                                                  Probably the biggest disappointment in this is seeing how the other issues I’ve had with it make this work far more difficult than it needs to be. The Scheme community is extremely fragmented and aside from the relatively few actual R6RS libraries I’ve seen (random R6RS code on github does not count) it seems like that would be the biggest problem with actually using Scheme – even in a situation like this where most of the code needs to be written from scratch to satisfy research requirements (e.g. many graph libraries don’t (easily) support graphs with arbitrary numbers of attributes on nodes and edges). That and the lack of either Clojure-style protocols, Rust-esque traits, or any form of interfaces makes trying out alternate data structure implementations far more time consuming than it should be. I was switching back and forth between SRFI-1 list-sets and ijp’s purely functional sets for performance comparisons and it required almost a full rewrite of the related code.

                                                                                  Once I have some more languages to compare, I’m going to write a (likely lengthy) blog post about the options for this kind of work (pros/cons, etc).

                                                                                  Do any fellow crustaceans have language suggestions? My next stop is probably either going to be OCaml or Rust before I take a look at Common Lisp and maybe (very maybe) Haskell.

                                                                                  1. 4

                                                                                    I’m considering learning Guile by rewriting my Vala program. I want to learn a Lisp dialect, and I guess I don’t have any attachment to Guile, but it seemed the most convenient given my intended project.

                                                                                    1. 4

                                                                                      If you want to embed Guile in a C/C++ application and have any questions, I’ll be happy to help. I’ve managed to make it work (maybe not in the best of ways…) in a C++ multi-threaded application https://github.com/rjmacready/Stockfish/tree/master; just check these 2 commits here and here. There’s still a lot of work to be done scripting-wise, but the boilerplate to call Scheme from C++ and vice versa is working.

                                                                                      1. 2

                                                                                        That wasn’t part of my original plan, but now that you mention it, it might be! Thanks for the offer!

                                                                                      2. 3

                                                                                        What are you planning on doing with it? I’ve been using Guile for a project and it’s been nice to work with.

                                                                                          1. 1

                                                                                            So what would guile be used for there? User scripting of calculations? It’s hard to tell from the Readme / site. You mention rewriting stuff that’s in Vala, would you be building the UI in guile?

                                                                                            1. 2

                                                                                              I hadn’t made it that far into the process. My initial thought was to attempt to move “LibBalistica” to Guile then just integrate that in instead of the existing Vala code.

                                                                                              Edit: The code for libbalistica is already compiled separately and statically linked within CMake, so if I were only going to do part of the project up front, it would be this or the GUI.

                                                                                      1. 48
                                                                                        • The process turns a request for binary DNS data into XML and feeds it into the systemd/dbus ecosystem, which turns it back into binary DNS to send to the forwarder. The binary DNS answer then gets turned into XML, goes through systemd/dbus, and is then turned back into binary DNS to feed back into glibc.

                                                                                        That’s certainly one way to do things.

                                                                                        1. 27

                                                                                          It’s things like that which make me question if people understand that software is entirely man made and doesn’t need to be complicated. The Standard Model isn’t forcing XML on us.

                                                                                          1. 17

                                                                                            “It was like this when I got here.”

                                                                                            1. 1

                                                                                              “It just works.”

                                                                                            2. 5

Apropos, one [of many] great Henry Baker quotes:

                                                                                              Physicists, on the other hand, routinely decide deep questions about physical systems–e.g., they can talk intelligently about events that happened 15 billion years ago. Computer scientists retort that computer programs are more complex than physical systems. If this is true, then computer scientists should be embarrassed, considering the fact that computers and computer software are “cultural” objects–they are purely a product of man’s imagination, and may be changed as quickly as a man can change his mind. Could God be a better hacker than man?

                                                                                            3. 19

                                                                                              Where does XML supposedly come in? D-Bus does not use XML for serialization.

                                                                                              Also the original announcement at https://lists.ubuntu.com/archives/ubuntu-devel/2016-May/039350.html says resolved does not require D-Bus.

                                                                                              1. 5

It’s on the internet, it must be true. :)

                                                                                                1. 19

I’ve thought about this some more. (As a small matter, the choice of serialization format wasn’t really the big wtf for me.) But it does illustrate that systemd has an image problem. I’m willing to believe just about anything. Its detractors have certainly been hard at work, and they haven’t been entirely fair. But then Lennart “haha, fuck BSD and tmux too for good measure” has been a rather poor defender of his choices. Everything I’ve read by him leads me to conclude he doesn’t believe software can be too complicated, only not complicated enough. So presented with a claim that systemd does something extraneously silly, my default response is not to reject it.

                                                                                                  Asking for evidence is exactly what one should do.

                                                                                                  1. 8

                                                                                                    But then Lennart “haha, fuck BSD and tmux too for good measure” has been a rather poor defender of his choices.

                                                                                                    He also has very poor attackers. Most of the criticism I read basically boils down to “everyone hates on systemd and believes it’s not POSIX”. (from our recent discussions, I’d happily exclude you there)

                                                                                                    No one wants to engage with that crowd in a nuanced argument, lowering the quality of support and the quality of criticism at the same time.

This is also why I regularly call out non-complex arguments, because that is the road they lead down.

                                                                                                    We happily use systemd in a lot of deployments and like it in practice. It works and is approachable to newcomers. Software and new software have bugs (also critical ones), so it doesn’t help to call out “systemd implemented a base service” - that’s the way the project works, deal with it. All of the components systemd now replaces will be replaced at some point.

Criticism must be phrased in terms of whether the pace is healthy, whether different approaches would work better, or which platform-wide solutions were lost along the way.

You have to break an egg to make an omelette, but there’s always the question of what kind of omelette it should be.

                                                                                                    1. 4

                                                                                                      Yeah, it’s been more heat than light all around.

                                                                                                2. 3

                                                                                                  According to this post on lwn:

                                                                                                  is really as easy as it gets

But looking at the source it is using lots of sd_bus_message* calls, so for something that doesn’t require D-Bus it seems to have a dependency problem…

                                                                                                  1. 2

                                                                                                    I was wondering this myself.

                                                                                                  2. 14

                                                                                                    To be fair, turning things into an internal representation for processing before serializing back into the original format is not at all uncommon.
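As a toy illustration of that pattern (hypothetical formats and names, nothing to do with the actual systemd internals): parse the wire format into an internal structure, process it there, then serialize back out.

```python
# Toy sketch of the parse -> internal representation -> re-serialize pattern.
# The wire format here is invented: "key=value" pairs, one per line.

def parse_kv(text):
    # wire format -> internal dict
    return dict(line.split("=", 1) for line in text.splitlines() if line)

def serialize_kv(record):
    # internal dict -> wire format (sorted for a stable output)
    return "\n".join(f"{k}={v}" for k, v in sorted(record.items()))

def forward(text):
    record = parse_kv(text)                                  # deserialize
    record["hops"] = str(int(record.get("hops", "0")) + 1)   # process in the IR
    return serialize_kv(record)                              # serialize again

print(forward("hops=1\nname=a"))  # hops=2\nname=a
```

The round trip itself isn’t the smell; the question is only whether the intermediate representation is proportionate to the processing being done.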

                                                                                                    1. 1

                                                                                                      This is true, and I expect this to be done especially when the original format is a binary blob. But there are better formats than XML! Especially if this is only used internally for processing, why not make it some kind of object? XML is rigid and prone to breakage, and is meant to be something barely amenable to both humans and machines. Seems extraneous here.

                                                                                                      1. 1

                                                                                                        “some kind of object” still has to be serialized which was the point of contention.

                                                                                                    2. -4

                                                                                                      *drops mic*

                                                                                                    1. 3

                                                                                                      This is actually the main area I’ve applied property-based testing to. While I wasn’t working in python and so couldn’t use hypothesis, the idea is the same and it worked extremely well.

                                                                                                      1. 3

                                                                                                        That’s good to know, because I thought I invented the technique! Or, rather, I knew it was very unlikely that I was the first to invent the technique, but I’d not seen anything on the subject so had to independently reinvent it. Stopping other people from having to go through that is one of the things I’m trying to fix by having lots of documented examples of how to use property based testing.

                                                                                                        Do you have a link to anything public you can share?

                                                                                                        1. 1

                                                                                                          Unfortunately, I don’t think I can share what I wrote. I’ll double check, because it actually turned out quite well.

                                                                                                          In one case I was working with a JS method to transform an image so that a particular plane faced the camera. I used jsverify to test this by generating a solution, and then building a problem that would have that solution. Then, I’d run my method on the problem and check if it was close enough to the solution to be feasible.

                                                                                                          This of course requires that the problem has a unique solution or that you can enumerate all the possible solutions and have some rule for deciding whether the given one is good enough w.r.t. the one originally generated, but a lot of problems have unique solutions.

                                                                                                          On thinking about it, this isn’t quite the same optimization that you’re talking about in that blog post (numerical vs greedy, though both end up being approximations) but is still in the same vein I think.

                                                                                                          I’ve been wanting to write this sort of test for my current work (mostly greedy approximations), but I’m stymied by the fact that 1) the solution-first approach is hard here, and 2) all the code I’ve had in the past is in C++, which makes building models harder than hypothesis or jsverify.
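The solution-first recipe described above (generate a solution, pose a problem that has it, run the method, check it lands close enough) could be sketched like this in Python. All names and the toy problem are mine; the original used jsverify on an image-transform method, and a framework like hypothesis would generate and shrink these inputs rather than plain `random`:

```python
import random

def solve_2x2(a, b, c, d, e, f):
    """Method under test: solve  a*x + b*y = e,  c*x + d*y = f  by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

def test_solution_first(trials=200):
    rng = random.Random(0)
    for _ in range(trials):
        # 1. Generate the *solution* first.
        x, y = rng.uniform(-10, 10), rng.uniform(-10, 10)
        # 2. Build a problem that has that solution (skip near-singular systems,
        #    which keeps the solution unique and well-conditioned).
        a, b, c, d = (rng.uniform(1, 5) for _ in range(4))
        if abs(a * d - b * c) < 1e-3:
            continue
        e, f = a * x + b * y, c * x + d * y
        # 3. Run the method and check it is close enough to the known solution.
        xs, ys = solve_2x2(a, b, c, d, e, f)
        assert abs(xs - x) < 1e-6 and abs(ys - y) < 1e-6

test_solution_first()
```

Because a solution is planted by construction, the test never has to ask whether the generated problem is solvable at all, which is usually the hard part.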

                                                                                                          1. 2

                                                                                                            In one case I was working with a JS method to transform an image so that a particular plane faced the camera. I used jsverify to test this by generating a solution, and then building a problem that would have that solution. Then, I’d run my method on the problem and check if it was close enough to the solution to be feasible.

Oh, that’s a nice trick. I don’t think it’s the same one at all, but I like it!

                                                                                                            the solution-first approach is hard here

I do think you might find it useful to try the modify-based-on-a-solution approach described here: You don’t need to build a solution, you just need to be able to build inputs and then tweak them in a way with a known effect on the solution.
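One way that tweak-with-known-effect idea could look in code (a sketch of my own with a toy 0/1 knapsack standing in for the optimizer; plain `random` instead of a property-testing framework for brevity):

```python
import random

def knapsack_value(weights, values, capacity):
    """Stand-in optimizer under test: 0/1 knapsack optimum via DP."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

def test_known_effect_tweaks(trials=200):
    rng = random.Random(1)
    for _ in range(trials):
        n = rng.randint(1, 8)
        weights = [rng.randint(1, 10) for _ in range(n)]
        values = [rng.randint(1, 10) for _ in range(n)]
        cap = rng.randint(0, 30)
        base = knapsack_value(weights, values, cap)
        # Tweak 1: extra capacity can never make the optimum worse.
        assert knapsack_value(weights, values, cap + 5) >= base
        # Tweak 2: a free item (weight 0, value v) raises the optimum by exactly v.
        v = rng.randint(1, 10)
        assert knapsack_value(weights + [0], values + [v], cap) == base + v

test_known_effect_tweaks()
```

You never compute the true optimum independently; you only assert how it must move under a change you control, which is often feasible even when building a solution up front isn’t.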

all the code I’ve had in the past is in C++, which makes building models harder than hypothesis or jsverify.

                                                                                                            Ah, well, I don’t have the Hypothesis C++ port ready yet I’m afraid so I can’t help you there :-)

                                                                                                            (This is seriously on the agenda, but I don’t have a time frame)

                                                                                                            1. 1

                                                                                                              Ah, well, I don’t have the Hypothesis C++ port ready yet I’m afraid so I can’t help you there :-) (This is seriously on the agenda, but I don’t have a time frame)

                                                                                                              iirc there is a decent one that uses template metaprogramming to build models, but I don’t want to mess with that :P

                                                                                                              In the future I’m hoping to switch languages, and existence of a good property-testing framework is something I’ll be factoring into my decision.

                                                                                                              1. 2

                                                                                                                iirc there is a decent one that uses template metaprogramming to build models, but I don’t want to mess with that :P

I’m sort of of the opinion that if a property based testing system fails to see widespread adoption, that’s probably a sign that it’s not decent (it may of course be a sign of really bad marketing, but given the amount of work required to write a good property based testing system, it’s rare to have a good system with marketing that bad).

                                                                                                                Usually the culprit is that it’s a bit scary, or that it fails to feel language native. Most of the C++ attempts I’ve seen definitely fail to feel language native.

                                                                                                                Even if there were a decent one I’d still probably do a Hypothesis port. I need to port the engine to C at some point, at which point it’s easy to do a C++ wrapper, and Hypothesis is sufficiently better for testing imperative programs than classic QuickCheck that I think it would be a clear win.

                                                                                                                In the future I’m hoping to switch languages, and existence of a good property-testing framework is something I’ll be factoring into my decision.

                                                                                                                Long-term my goal is to make that no longer a factor which distinguishes languages :-)