1. 2

    What are the actual gains in performance? A benchmark would be really interesting.

    1. 4

      We ran benchmarks at my place of employment. The read time increases linearly with the offset, which is problematic for some of our customers with large numbers of entities. A small minority of calls take over 500ms due to large offsets, which is terrible; it ruins our p999 times. We ran the benchmarks both sequentially and randomly, which didn’t seem to affect performance much (Postgres).

      On the other hand, using a cursor is constant in relation to the offset.

      Unfortunately we’re going to have to go through a deprecation process now to sort this out :(
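      The gap between the two approaches is easy to reproduce. Here is a rough sketch using SQLite (table and column names are made up; Postgres behaves the same way for OFFSET):

```python
import sqlite3

# Hypothetical table, purely to illustrate OFFSET vs. keyset ("cursor")
# pagination. In-memory SQLite keeps the sketch self-contained.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entities (id INTEGER PRIMARY KEY, payload TEXT)")
db.executemany("INSERT INTO entities (payload) VALUES (?)", [("x",)] * 100_000)

# OFFSET pagination: the engine walks and discards all 90,000 skipped rows,
# so read time grows linearly with the offset.
page = db.execute(
    "SELECT id FROM entities ORDER BY id LIMIT 50 OFFSET 90000"
).fetchall()

# Keyset pagination: seek straight past the last id of the previous page via
# the primary-key index; the cost stays flat no matter how deep you are.
last_id = page[-1][0]
next_page = db.execute(
    "SELECT id FROM entities WHERE id > ? ORDER BY id LIMIT 50", (last_id,)
).fetchall()
print(next_page[0][0], len(next_page))  # 90051 50
```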

      1. 2

        Here is more information on the topic including a reference to slides with benchmarks (see page 42 for the comparison).

        1. 2

          I added some really simple comparison and a link to Markus’ article to a post.

          1. 1

            Great! Thank you :)

        1. 1

          Heh, I thought this would be about storing an unpacked docx (all the office XML stuff) in git :D

          1. 2

            FTR, OpenDocument has flat XML variants (like .fodt) of their usually ZIPped XML formats. They should be easier to keep under version control.

            1. 1

                Same! That might allow some measure of branching and merging, but this way is basically git as a read-only diff generator, I think.

            1. 1

              What advantage does this have to Emacs’ built in client?

              1. 1

                Which emacs built-in client? :D There are three different ones that come bundled with emacs by default.

                1. 1

                  Last I checked, all the built-in MUAs in Emacs are written synchronously; any operation which takes a nontrivial amount of time blocks the entire UI. They’re also not built with search in mind; at best, search is tacked on afterwards as an advanced optional feature rather than being the heart of the entire interface.

                  1. 1

                    I haven’t looked into the workings of any of the three (MH, Rmail, and Gnus, I’m assuming) in particular detail, but I think this really depends on how you use them. MU shells out to an external process for most of its features; so does MH, and when I used it I never experienced any slowdown in usage. The same with both Rmail and Gnus, working on a local spool, and I’ve heard that connecting to a local mail server in Gnus also has fine speed.

                    Search is the big one. I never found advanced search particularly advantageous, but by default all three are pretty bad. All of them can have something like Mairix or Notmuch tacked on, but that’s it - it feels tacked on, and is less cohesive than something designed around search.

                  2. 1

                    mu is search based. Maybe you’ve heard of notmuch - it’s a similar approach, but different^^

                    1. 1

                      As a long time notmuch user: How do they differ? From when I quickly tried mu4e a while back it mainly seemed to offer a slightly different UI. The workflows mostly seemed to be the same.

                      1. 1

                        I haven’t really used notmuch, so I cannot really speak to that. I mentioned it, because with regards to ‘search based’ mail workflows (that are not Gmail or similar) notmuch seems to be more well known.

                        Having said that, the search interfaces are similar to what I know of it.

                  1. 1

                    find "$DIR" | sort | uniq | gzip > "$out"

                    Go and Python:

                    Exercise for the reader ;)

                    I find this example rather unconvincing. A large part of most shell scripts is file and string processing. And while shell scripts need to invoke external programs to accomplish these tasks, most regular programming languages don’t. The above pipeline can easily be implemented directly in Python. The resulting program will certainly be more verbose, but it will also be easier to maintain and less prone to errors from various shell quirks and corner cases.
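                    For example, a rough Python equivalent of the pipeline above (the function name is made up):

```python
import gzip
import os

def archive_file_list(dir_path, out_path):
    """Rough equivalent of: find "$DIR" | sort | uniq | gzip > "$out"."""
    paths = [dir_path]                    # find prints the directory itself too
    for root, dirs, files in os.walk(dir_path):
        for name in dirs + files:
            paths.append(os.path.join(root, name))
    with gzip.open(out_path, "wt") as f:  # gzip > "$out"
        for p in sorted(set(paths)):      # sort | uniq
            f.write(p + "\n")
```

                    No word splitting and no quoting pitfalls; filenames containing newlines remain a corner case, but the shell version has that problem too.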

                    1. 3

                      I don’t think it can be implemented in python (easily) in a way that runs with multiple processors or handles hundreds of gigabytes of data like coreutils can.

                      I would love to see it though.

                    1. 6

                      The paper for Rash, a shell-scripting DSL for Racket, lists a few libraries of interest:

                      1. 1

                        I’ve always been ambivalent about fish, zsh and other shells but I could see myself using rash as my system shell and turtle also looks incredibly useful for nix-shell scripts.

                      1. 10

                        I’ve never used the mouse with the terminal, but I’m not sure why

                        I don’t use it either, but I do know why: I like having the default copy/paste text behaviour work anywhere in terminals. For example, the other day I had to copy a PID from htop on a remote machine, and the mouse support prevented that. It’s like web pages where you can’t select and copy text.

                        You can reimplement this, but a lot of apps don’t, or have different behaviour. In general, I find it more annoying than helpful.

                        1. 11

                          At least in urxvt and xterm, Shift+Mouse allows you to select text even if the application tries to react to the mouse events.

                          1. 4

                            Ah, this works in st as well; I didn’t know this, thanks!

                            1. 3

                              In Konsole, all GTK terminals, and mintty this works as well. I would be surprised to find a terminal emulator that did not support this.

                              1. 1

                                At least in urxvt and xterm, Shift+Mouse allows you to select text even if the application tries to react to the mouse events.

                                Is it possible to invert this behavior? I would prefer if the mouse worked normally, except when shifted, then it can be captured by the application.

                              2. 2

                                I wonder if you can disable it on a per-operation basis? In iTerm for example, if you alt-drag, it doesn’t send anything to the program, it just highlights the displayed text.

                                1. 1

                                  Yeah, I use option-drag on iTerm to use the terminal native selection when in e.g. vim with mouse mode on.

                                  Also I’d highly recommend knowing about using option-cmd-drag — it will let you do block selection of text.

                                  This can be very useful when you’re in e.g. tmux or vim with split windows and you only want to select a block of text not whole sets of “lines” in the terminal if that makes sense.

                                2. 1

                                  Yes that makes sense, and is something I didn’t quite get. The author of ble.sh [1] brought that up on the Oil zulip. ble.sh contains a terminal parser in pure bash and so it could also handle the mouse :)

                                  [1] http://www.oilshell.org/blog/2020/03/release-0.8.pre3.html#serendipity-with-blesh

                                1. 3

                                  Virtualenv is nice if you develop a server, but what would you use for some small command line tools?

                                  For example, I have this 200 lines script which parses some CI artefacts and generates a CSV output. This is fine with just the Python standard lib. Now I would like to generate an Excel sheet but that requires an external package. The easiest way would be copy & paste? Better options?

                                  1. 1

                                    Depends how much upfront work the users are willing to do. If you can convince/guide them to install pipsi, then you’re good: just package your utilities as python packages, and have them install it with pipsi. I think you can even send them the package as a zip, so you don’t even need PyPI.

                                    If that’s too much (and sometimes it is), then you’re just screwed. Either take the time to use something like PyInstaller for Windows, and snap/flatpak/deb/rpm for Linux, or give up.

                                    1. 3

                                      pipsi is basically unmaintained. I would suggest giving pipx a try instead.

                                    2. 1

                                      This is the use case for packaging: you have some code, it has at least one dependency that isn’t part of Python itself, and you want to distribute it for use on other computers (possibly, though not necessarily, by other people).

                                      The answer there is to build a package, and then install that package on each computer that needs to run the code. The standard approach to this is to use setuptools, write a setup.py file which specifies what to package and what the dependencies are, and use it to generate the package (preferably a wheel), then use pip to install that package on other machines (you do not need to upload the package to a public index; you can distribute the package any way you like, and people who have a copy of it can pip install from it). There are alternative packaging tools out there, and fans of particular ones will recommend them, but this is the standard workflow for Python.
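                                    As a sketch, the setup.py for a single-file tool might look like this (every name below is a placeholder, and openpyxl stands in for whichever Excel package you pick):

```python
# setup.py -- minimal packaging sketch; all names are hypothetical.
from setuptools import setup

setup(
    name="ci-report-tool",
    version="0.1.0",
    py_modules=["ci_report"],          # the 200-line script, as a module
    install_requires=["openpyxl"],     # the external Excel dependency
    entry_points={
        # installs a `ci-report` command that calls ci_report.main()
        "console_scripts": ["ci-report=ci_report:main"],
    },
)
```

                                    Running `python setup.py bdist_wheel` then produces a wheel under dist/ that can be passed around and installed with pip.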

                                    1. 12

                                      This looks like something that was written just so Microsoft can have something to reference when they for the umpteenth time have to explain why CreateProcessEx is slow as balls on their systems.

                                      1. 4

                                        I doubt MS commissioned a research paper to defend against one argument made about their OS.

                                        Besides, the very view that process spawning should be very cheap is just a trade-off. Windows prioritizes apps designed around threads rather than processes. You can argue one approach is superior to the other, but it really reflects different priorities and ideas about how apps are structured.

                                        1. 7

                                          Besides, the very view that process spawning should be very cheap is just a trade-off. Windows prioritizes apps designed around threads rather than processes.

                                          Windows was a VMS clone and iteration designed by the same guy, Dave Cutler. The VMS spawn command was a heavyweight one with a lot of control over what the resulting process would do. Parallelism was done by threading, maybe clustering. Contrary to sebcat’s conspiracy theory, it’s most likely that the VMS lead who became the Windows lead just reused his VMS approach for the same reasons. That the remaining UNIXes are doing non-forking, monolithic designs in their highest-performance apps corroborates that he made the right choice in the long term.

                                          That is not to say we can’t improve on his choice by carefully rewriting CreateProcessEx or using some new method in new apps. I’ve been off Windows for a while, though. I don’t know what they’re currently doing for the highest-performance stuff. Some lean programmers are still using the Win32 API, based on blog articles and such.

                                          1. 3

                                            That the remaining UNIXes are doing non-forking, monolithic designs in their highest-performance apps corroborates that he made the right choice in the long term.

                                            Assuming that you believe the highest performing apps should be the benchmark for how everyone writes everything – which I don’t agree with. Simplicity and clarity are more important for all but a small number of programs.

                                            1. 3

                                              I think that’s important, too. Forking fails that criterion, as it is not the simplest or clearest method to do parallelism or concurrency. It would lose to things like Active Oberon, Eiffel’s SCOOP, Cilk for data-parallel work, and Erlang. Especially if you add safety or error handling, which makes code more complex. For heterogeneous environments (multicore to clusters), Chapel is way more readable than C, forks, or MPI.

                                            2. 4

                                              Despite all the bloat in Linux, process creation in Linux is a lot faster than process creation in Windows. Rob Pike was right: using threads is generally an indication that process creation and IPC are too slow and inconvenient, and the answer is to fix the OS. [ edited to maintain “no more than one ‘and yet’ per day” rule]

                                              1. 3

                                                Which led to faster designs than UNIX before and right after UNIX by using approaches that started with a better, concurrency-enabled language doing threads and function calls instead of processes and IPC. Funny you quote Rob Pike given his approach was more like Hansen (1960’s-1970’s) and Wirth’s (1980’s). On a language level, he also said a major inspiration for Go was his experience with Oberon-2 workstation doing rapid, safe coding. You including Rob Pike just corroborates my opinion of UNIX/C being inferior and/or outdated design from different direction.

                                                Back to process creation, its speed doesn’t matter outside high availability setups. The processes usually get created once. All that matters is speed of handling the computations or I/O from that point on. Processes don’t seem to have much to do with that on modern systems. They prefer close to the metal with lots of hardware offloading. The designs try to bypass kernels whether Windows or UNIX-like. Looking back, both mainframe architectures and Amiga’s used hardware/software approach. So, their models proved better than Windows or UNIX in long term with mainframes surviving with continued updates. Last press release on System Z I saw claimed over 10 billion encrypted transactions a day or something. Cost a fortune, too.

                                                1. 1

                                                  The threads/function-call design is good for some things and bad for others. The memory protection of fork/exec is very important in preventing and detecting a wide range of bugs and forcing design simplification. Oberon was, like much of Wirth’s work, cool but too limited. I think Parnas’ “Software Jewels” was inspired by Oberon.

                                                  As for performance, I think you are 100% wrong and have a funny idea of “better”. System Z is clearly optimized for certain tasks, but as you say, it costs a fortune. You should have to write JCL code for a month as penance.

                                                  1. 2

                                                    “The memory protection of fork/exec is very important in preventing and detecting a wide range of bugs and forcing design simplification.”

                                                    Those programs had all kinds of security bugs. It also didn’t let you control privileges or resource usage. If security is the goal, you’d do a design like VMS’s spawn, which let you do those things, or maybe a capability-oriented design like the AS/400’s. Unless you were on a PDP-11 aiming to maximize performance at the cost of everything else. Then you might get C, fork, and the rest of UNIX.

                                                    “As for performance, I think you are 100% wrong and have a funny idea of “better”. System Z is clearly optimized for certain tasks”

                                                    The utilization numbers say I’m 100% right. It comes from the I/O architecture, not just workloads. Mainframe designers did I/O differently than PC designers, knowing that mixing computation and I/O led to bad utilization. Even at the CPU level, the two have different requirements. So, they used designs like channel I/O that let computation run on compute-optimized CPUs, with I/O run by I/O programs on dedicated, lower-energy, cheaper CPUs. Non-mainframes usually ditched that since cost was the main driver of the market. Even SMP took forever to reach commodity PCs. The shared architecture had Windows, Mac, and UNIX systems getting piles of low-level interrupts, having one app’s I/O drag down other apps, and so on. The mainframe apps responded to higher-level events with high utilization and reliability while I/O coprocessors handled the low-level details.

                                                    Fast forward to today. Since that model was best, we’ve seen it ported to x86 servers, where more stuff bypasses kernels and/or is offloaded to dedicated hardware. Before that, it was used in HPC, with the APIs splitting things between CPUs and hardware/firmware (esp. high-performance networking). We’ve seen the software side show up in stuff like Octeon processors offering a mix of RISC cores and hardware accelerators for dedicated networking apps. Inline-Media Encryptors, RAID, and Netezza did it for storage. Ganssle also told me this design shows up in some embedded products where the control logic runs on one core but another cheaper, lower-energy core handles I/O.

                                                    Knockoffs of the mainframe I/O architecture have become the dominant model for high-performance, consistent I/O. That confirms my hypothesis. What we don’t see is more use of kernel calls per operation on simple hardware, like UNIX’s original design. Folks are ditching that en masse in modern deployments since it’s a bottleneck. Whereas mainframes just keep using and improving on their winning design by adding more accelerators. They’re expensive, but their architecture isn’t when it shows up in servers or embedded systems. Adding a J2 core for I/O on an ancient node (180nm) costs about 3 cents a CPU. Intel added a backdoor, err, management CPU to all their CPUs without any change in price. lowRISC has minion cores. I/O-focused coprocessors can be as cheap as the market is willing to sell them to you. That’s not a technical design problem. :)

                                                    1. 2

                                                      Since cores are so cheap now (we’re using 44-core machines, and I expect the count to keep going up in the future), why are we still using system calls to do IO? Put the kernel on its own core(s) and do IO with fast disruptor-style queues. That’s the design we seem to be converging toward, albeit in userspace.

                                                      1. 1

                                                        Absolutely true that the hugely expensive hardware I/O architecture in IBM mainframes works well for some loads if cost is not an issue. A Komatsu D575A is not better or worse than a D3K2 - just different, designed for different jobs.

                                                        1. 2

                                                          I just quoted you something costing 3 cents and something that’s $200 on eBay for Gbps accelerators. You only focused on the hugely expensive mainframes. You must either agree the cheaper ones would work in desktops/servers or just have nothing else to counter with. Hell, Intel’s nodes could probably squeeze in a little core for each major subsystem: networking, disk, USB, and so on. It would probably cost them nothing in silicon.

                                                          1. 1

                                                            call intel.

                                                            1. 2

                                                              Nah. Enjoying watching them get what they deserve recently after their years of scheming bullshit. ;) Also, ISA vendors are likely to patent whatever I tell them about. Might talk to SiFive about it instead so we get open version.

                                                2. 1

                                                  Windows NT was VMS-inspired, but I didn’t think Dave Cutler had any influence over Windows prior to that. Wasn’t CreateProcess available in the Win16 API?

                                                  I suspect the lack of fork has more to do with Windows’s DOS roots, but NT probably would have gained fork if Microsoft had hired Unix developers instead of Cutler’s team.

                                                  1. 1

                                                    Windows NT was VMS-inspired, but I didn’t think Dave Cutler had any influence over Windows prior to that. Wasn’t CreateProcess available in the Win16 API?

                                                    You could have me on its prior availability. I don’t know. The Windows API article on Wikipedia says it was introduced in Windows NT. That’s Wikipedia, though. I do know, via Russinovich’s article, that Windows NT specifically cloned a lot from VMS.

                                                    1. 2

                                                      I think I was mistaken; looking at the Windows 95 SDK (https://winworldpc.com/product/windows-sdk-ddk/windows-95-ddk), CreateProcess was at the time in Win32 but not Win16. I guess that makes sense – what would CreateProcess do in a cooperatively multitasked environment?

                                                      Most of what I know about NT development comes from the book Showstoppers.

                                              2. 2

                                                Any source on performance issues with CreateProcessEx? A quick search didn’t yield anything interesting. Isn’t CreateProcessEx very similar to the posix_spawn API which the authors describe as the fast alternative to fork/exec in the paper?

                                                1. 2

                                                  Alternatively they could just implement fork(). It’s nowhere near as hard as they’re making it out to be.

                                                  1. 3

                                                    fork is a bad API that significantly constrains OS implementation, so it is very understandable why Microsoft is reluctant to implement it.

                                                    1. 3

                                              Meh, says you. If you already have a process abstraction, it’s not really that much harder to clone a memory map and set the IP to the point after the syscall than it is to set up a new memory map and set the IP to the start address. I don’t buy that it “significantly” constrains the implementation.

                                                      1. 8

                                                        Effort isn’t the argument here, but the semantics of it. fork is a really blunt hammer. For example, Rust has no bindings to fork in stdlib for various reasons, one being that many types (e.g. file handles) have state attached to their memory representation not known to fork and that state becomes problematic in the face of fork. This is a problem also present in programs written in other programming languages, also C, but generally glossed over. It’s not a problem for “plain old data” memory, but once we’re talking about copying resource handles, stuff gets messy.

                                                        1. 1

                                                          Can you elaborate? FILE* responds to being forked just fine. What Rust file metadata needs to be cleaned up after fork()?

                                                          1. 2

                                                    Files are the simplest case. They are just racy in their raw form and you need to make sure everyone closes them properly. You can work against that by using RAII and unique pointers (or Ownership in Rust), but all those guarantees break on fork(). Even plain files combined with fork get a whole section in POSIX, and improper handling may be undefined.

                                                            It gets funnier if your resource is a lock and your memory allocator has locks.

                                                    Sure, all this can be mitigated again, but that adds a ton of complexity. My point is: fork may seem clean and simple, but in practice it is extremely messy in that it does not set up good boundaries.
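                                                    A tiny Python sketch (POSIX only) of that kind of mess: data sitting in a buffered file object at fork time gets flushed by both processes, so it lands in the file twice.

```python
import os
import tempfile

path = tempfile.mktemp()
f = open(path, "w")        # buffered file object
f.write("hello ")          # stays in the userspace buffer for now
pid = os.fork()            # the buffer is copied into the child
if pid == 0:
    f.close()              # child flushes its copy of the buffer
    os._exit(0)
os.waitpid(pid, 0)
f.close()                  # parent flushes the same bytes again
with open(path) as g:
    print(repr(g.read()))  # 'hello hello '
```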

                                                            1. 1

                                                              Files are the simplest case. They are just racy in their raw form and you need to make sure everyone closes them properly. You can work against that by using RAII and unique pointers (or Ownership in Rust), but all those guarantees break on fork().

                                                              There are no close() races on a cloned file descriptor across forked processes, nor are there file descriptor leaks if one forked process does not call close() on a cloned file descriptor.

                                                              It gets funnier if your resource is a lock and your memory allocator has locks.

                                                              How so? malloc() and fork() work fine together.

                                                    2. 1

                                              But one of the main issues with fork that the authors describe is that it gets really slow for processes that use a large address space, because it has to copy all the page tables. So I don’t see how implementing fork would help with performance?

                                                      1. 0

                                                        Nobody calls fork() in a loop. An efficiency argument isn’t relevant.

                                                        1. 2

                                                          In shell scripts usually most of the work is done by external programs. So shells use fork/exec a lot.

                                                          1. 0

                                                            But not “a lot” to the point where progress is stalled on fork/exec.

                                                          2. 1

                                                            I mean, the parent comment by @sebcat complains about process creation performance, and you suggest that implementing fork would help with that, so you do argue that it is efficient. Or am I reading your comment wrong?

                                                            1. 1

                                                              Ah okay, I see. I was under the impression that he was referring to the case where people complain about fork() performance on Windows because it is emulated using CreateProcessEx() (which he may not have been doing). My point was that If they implemented fork() in the kernel, they wouldn’t have to deal with those complaints (which are also misled since CreateProcessEx / fork() performance should never be relevant).

                                                            2. 1

                                                      A loop isn’t necessary for efficiency to become relevant. Consider: most people abandoned CGI because (among other reasons) fork+exec for every HTTP request doesn’t scale well (and this was after most implementations of fork were already COW).

                                                              1. 1

                                                                I can’t blame you, but that’s an excessively literal interpretation of my statement. By “in a loop,” I mean that the program is fork+exec bound, which happens on loaded web servers, and by “nobody” I mean “nobody competent.” It isn’t competent to run a high trafficked web server using the CGI model and expect it to perform well since process creation per request obviously won’t scale. CGI was originally intended for small scale sites.

                                                          3. 1

                                                    The bureaucracy at large, slow, old corporations partly explains this. This paper maybe took 6 months to a year. Adding fork() (with all the formalities + technical details + teams involved) would take 5-10 years IMHO.

                                                            Much easier to just include it as “yet another app”, e.g. WSL:

                                                            When a fork syscall is made on WSL, lxss.sys does some of the initial work to prepare for copying the process. It then calls internal NT APIs to create the process with the correct semantics and create a thread in the process with an identical register context. Finally, it does some additional work to complete copying the process and resumes the new process so it can begin executing.

                                                            https://blogs.msdn.microsoft.com/wsl/2016/06/08/wsl-system-calls/

                                                        1. 9

                                                          Moving to Docker unfortunately ruled out FreeBSD as the host system.

                                                          Was there a reason you couldn’t use FreeBSD’s Jails for this? It seems you could have accomplished pretty much the same setup.

                                                          1. 8

                                                            My experience with jails is that they are much harder to use as the tooling around them is not as extensive or fully featured. It may well be possible but I imagine it would require much more manual effort.

                                                            1. 3

                                                          Can you provide an example? Because I pretty much always find jails less opaque/hand-wavy/etc. than docker.

                                                              1. 4

                                                                Having used both docker and jails, the big difference for me is the Dockerfile. A single place with a single command to build my “server” is a killer feature. A jail in my mind is more like a VM than a container.

                                                          1. 1

                                                              First, I wanted to create a cross-language mailing list to bring everyone together. I’d even written a kick-off email. But then I realized that Nathaniel had beaten me by a few days by creating this forum.

                                                            This forum is clearly intended for “Trio, a friendly async concurrency library for Python”. Why would this be a good place for more general discussions?

                                                            1. 3

                                                              From https://trio.discourse.group/t/structured-concurrency-kickoff/55/2:

                                                              I want to say – I think the forum has some nice features, and it was easy to set up this way since we maintain the forum for Trio anyway, but if people feel uncomfortable with using a “project branded” space like this then please speak up. And to be 100% clear, the intention is that the “Structured concurrency” category here is totally open to any project, not specific to any one in particular.

                                                            1. 14

                                                              My problem with make is not that it has a bad design. It is not THAT bad when you look at things like CMake (oops, I did not put a troll disclaimer, sorry :P).

                                                              But the only implementations of it are very large ones, with a lot of extensions that are not POSIX. So if you want a simple tool to build a simple project, you have to use a complex tool, with even more complexity than the project itself in many cases…

                                                              So a simple tool (redo), available as 2 implementations in shell script and 1 implementation in Python, does a lot of good!

                                                              There is also plan 9 mk(1), which supports evaluating the output of a script as mk input (with the <| command syntax); that removes the need for a configure script (build ./linux.c on Linux, ./bsd.c on BSD…).
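The effect of mk’s `<|` (evaluating a script’s output as build input) can be imitated in plain POSIX sh; this is only a toy sketch, and the file names `linux.c`/`bsd.c`/`generic.c` are the hypothetical ones from the comment above:

```shell
# toy stand-in for mk's <| : run a small script at build time and use
# its output to decide what to build, instead of a configure step
case "$(uname)" in
    Linux)        src=./linux.c ;;
    *BSD|Darwin)  src=./bsd.c ;;
    *)            src=./generic.c ;;
esac
echo "would compile $src"
```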

                                                              But then again, while we are at re-designing things, let’s simply not limit ourselves to the shortcomings of existing software.

                                                              The interesting part is that you can build redo entirely as a tiny shell script (less than 4 kB) that you can then ship along with the project!

                                                              There could then be a Makefile with only

                                                              all:
                                                                  ./redo
                                                              

                                                              So you would (1) have the simple build system you want, (2) have it portable, as it would be a simple portable shell script, and (3) still have make build the whole project.

                                                              You may make me switch to this… ;)

                                                                1. 1

                                                                  Nice! So 2 shell, 1 Python and 1 C implementation.

                                                                  1. 5

                                                                    There is also an implementation in C++. That site also has a nice Introduction to redo.

                                                                    I haven’t used any redo implementation myself, but I’ve been wondering how they would perform on large code bases. They all seem to spawn several processes for each file just to check whether it should be remade. The performance cost of that (not a particularly fast operation) might be prohibitive on larger projects. Does anyone happen to have experience with that?

                                                                    1. 1

                                                                      The performance cost of that (not a particularly fast operation) might be prohibitive on larger projects. Does anyone happen to have experience with that?

                                                                      No experience, but from the article:

                                                                      Dependencies are tracked in a persistent .redo database so that redo can check them later. If a file needs to be rebuilt, it re-executes the whatever.do script and regenerates the dependencies. If a file doesn’t need to be rebuilt, redo can calculate that just using its persistent .redo database, without re-running the script. And it can do that check just once right at the start of your project build.

                                                                      Since building the dependencies is usually done as part of building a target, I think this probably isn’t even a significant problem on initial build (where the time is going to be dominated by actual building). OTOH I seem to recall that traditional make variants do some optimisation where they run commands directly, rather than passing them via a shell, if they can determine that they do not actually use shell built-ins (not 100% sure this is correct, memory is fallible etc) - the cost of just launching the shell might be significant if you have to do it a lot, I guess.
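The persistent-database idea from the quoted passage can be illustrated with a toy model (this is not any real redo implementation’s on-disk format, just a sketch of why the later check is cheap):

```shell
# toy model of redo's dependency database: record a dependency's mtime
# while building, then on a later run compare instead of re-running scripts
dep=dep-demo.txt
echo hello > "$dep"
stat -c %Y "$dep" > .redo-mtime     # "record" step (GNU stat; BSD: stat -f %m)

saved=$(cat .redo-mtime)            # "check" step on the next run
current=$(stat -c %Y "$dep")
if [ "$current" = "$saved" ]; then
    result="up to date"
else
    result="rebuild"
fi
echo "$dep: $result"
```

The check is one stat(2) plus a string compare per dependency, which is why it can be batched once at the start of a build.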

                                                                  2. 3

                                                                    The biggest problem with Make (imo) is that it is almost impossible to write a large correct Makefile. It is too easy for a dependency to exist, but not be tracked by the Make rules, thus making stale artefacts a problem.

                                                                    1. 1

                                                                      I had given serious thought to using LD_PRELOAD hooks to detect all dependencies dynamically (and identify e.g. dependencies which hit the network), but never got around to trying it.

                                                                      Anyone heard of anything trying that approach?

                                                                    2. 2

                                                                      Why this obsession with “simple tools for simple projects” though? Why not have one scalable tool that works great for any project?

                                                                      (Yeah, CMake is not that tool. But Meson definitely is!)

                                                                      1. 3

                                                                        Because I wish all my projects to be kept simple. Then there is no need for a very powerful tool to build them.

                                                                        On the other hand, if you already need a complex tool to do some job, adding another simple tool sums up the complexity of both, as you will now have to understand and maintain both!

                                                                        If we aim for the simplest tool that can cover all the situations we face, we will end up with different tools according to what we expect.

                                                                        1. 3

                                                                          Meson isn’t a simple tool, it requires the whole Python runtime in order to even run --help.

                                                                          CMake is a lot more lightweight.

                                                                          1. 4

                                                                            Have you appreciated how huge CMake actually is? I know I had problems compiling it on an old machine, since it required something like a gigabyte of memory to build. A two-stage build that took its sweet time.

                                                                            CMake is not lightweight, and that’s not its strong suit. On the contrary, it’s good at having everything but the kitchen sink and being considerably flexible (unlike Meson, which has simplicity/rigidity as a goal).

                                                                            1. 2

                                                                              CMake is incredibly heavyweight.

                                                                            2. 1

                                                                              I would like to see how it would work out with different implementations and how “stable” meson as a language is.

                                                                              1. 1

                                                                                Meson is nice, but sadly not suitable for every project. It has limitations that prevent some from using it, limitations neither redo nor autotools have. Such as putting generated files in a subdirectory (sounds simple, right?).

                                                                            1. 1

                                                                              Is there a way to unveil everything in ~ except dotfolders like .ssh and .config/<other programs>?

                                                                              1. 2

                                                                                You could iterate across directory entries, calling unveil on each one matching a pattern. Or use glob(3).
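unveil(2) itself is a C call on OpenBSD, so the snippet below only demonstrates the selection half of the suggestion: a `*` glob never matches dotfiles, so iterating over it naturally skips .ssh and friends. The directory and file names are made up for the example:

```shell
# throwaway "home" directory to show which entries a * glob selects
demo_home=$(mktemp -d)
mkdir "$demo_home/.ssh" "$demo_home/projects"
touch "$demo_home/notes.txt"

# dotfolders such as .ssh never match *, so they are skipped automatically;
# each surviving entry is what you would pass to unveil(path, "r") in C
for entry in "$demo_home"/*; do
    echo "unveil(\"$entry\", \"r\")"
done > unveil-calls.txt
cat unveil-calls.txt
```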

                                                                                1. 2

                                                                                  That was an interesting read. What’s a good critique of pain points for Rust development? For instance, I’ve heard that it lacks incremental compilation, and this article failed to mention that.

                                                                                  1. 4

                                                                                    Apparently incremental compilation is available and enabled by default since Rust 1.24.

                                                                                    1. 1

                                                                                      Thanks!

                                                                                  1. 2

                                                                                    There’s functionality for XMonad which automatically pauses the processes of all windows except the active one. It uses SIGSTOP and SIGCONT.
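The mechanism is plain POSIX signals, so it can be demonstrated without XMonad at all; this sketch stops and resumes a background `sleep` and checks its state with `ps`:

```shell
# pause a process with SIGSTOP, verify it is stopped, then resume it
sleep 30 &
pid=$!
kill -STOP "$pid"
sleep 0.2                                      # let the signal take effect
stopped=$(ps -o state= -p "$pid" | tr -d ' ')  # 'T' = stopped
kill -CONT "$pid"
sleep 0.2
resumed=$(ps -o state= -p "$pid" | tr -d ' ')  # typically back to 'S' (sleeping)
kill "$pid" 2>/dev/null
echo "stopped=$stopped resumed=$resumed"
```

The stopped process consumes no CPU but keeps its memory, which is why this works as a cheap “pause inactive windows” trick.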

                                                                                    1. 12

                                                                                      On one hand, I don’t find the Linux code particularly pleasant to work on, so I probably wouldn’t be contributing in my spare time regardless.

                                                                                      On the other hand, I think that this reduces the chance that I’ll send any patches in the future; I find these “welcoming” cultures make me feel less at ease, for whatever reason, which is a second strike against my involvement.

                                                                                      For me, the code reviews I got from Theo were a highlight of sending in patches to OpenBSD.

                                                                                      In the end, it doesn’t matter much – not everything needs to be for everybody, and the Linux community isn’t run for me. This will bring some people in, push others out, and the world will go on.

                                                                                      1. 8

                                                                                        True. I’m also concerned that code quality (and therefore users) will suffer.

                                                                                        1. 24

                                                                                          I am honestly at a loss to see how abiding by a bland CoC could lead to code quality suffering.

                                                                                          Nothing in the CoC that I have read is in any way remarkable. It’s simply normal professional behavior codified, with some additions to address the peculiarities of mostly online communications.

                                                                                          1. 5

                                                                                            It’s simply normal professional behavior codified.

                                                                                            That ship has sailed, but I am not convinced Open Source should be held to the standards of “professional behavior”. For instance, should we stop accepting underage contributors? What about anonymous or pseudonymous contributions?

                                                                                            Moreover what constitutes “professional behavior” differs wildly between countries and even companies within countries. For instance, “don’t ask don’t tell”-style policies are still the norm at some workplaces; do we want that in our communities? Or should we just accept that the average (non-Trump voter) U.S. sentiment should be the norm in Open Source?

                                                                                            Regarding Linus, he does (did?) have a very strong way of reacting when people disregarded things that he considered important principles of the kernel such as “do not break userspace”. He isn’t shy to use strong language to criticize companies either :)

                                                                                            Whether this has a positive or a negative effect is hard to say. It certainly antagonizes some people, and especially some potential new contributors, but at the scale of Linux should that still be the main concern of the project?

                                                                                            In any case Linus knows he reacts too strongly too fast already. This is not the first time he says something like that. We should wait and judge the long-term effects in a few months or years.

                                                                                            1. 14

                                                                                              Treating people professionally does not imply employment. A proprietor of a store treats a customer professionally by not insulting them, or refusing service. A teacher treats a student professionally by not verbally denigrating them, for example. A maintainer of an open source project treats bug reports professionally by attempting to reproduce them and applying a fix, even though the submitter of the issue may as well be anonymous.

                                                                                              It’s basically the 21st century formulation of the Categorical Imperative, as far as I am concerned.

                                                                                          2. 23

                                                                                            Why? Do you truly believe that it is impossible to reject bad patches without telling someone that they should be “retroactively aborted”?

                                                                                            1. -3

                                                                                              Language as harsh as that is used daily in normal speech between developers. I’ve seen much worse slack channels in terms of use of language, and you wouldn’t believe the language I’ve seen used on IRC to describe bad code.

                                                                                              I do indeed think that if you start censoring peoples’ language they’re going to change the way they contribute for the worse. If all you did was ban the absolute worst things like that, nobody would complain. But the reality is that’s not what will happen. Anything ‘offensive’ will be banned. Offensiveness is completely subjective.

                                                                                              1. 20

                                                                                                Language as harsh as that is used daily in normal speech between developers

                                                                                                That’s a rash generalisation. At none of the places I’ve worked as a developer would that sort of language be acceptable.

                                                                                                Offensiveness is completely subjective

                                                                                                That’s also untrue. While there will be grey areas, there are some things that are objectively offensive if interpreted literally - and if they’re not meant literally, why not use another expression?

                                                                                                1. 3

                                                                                                  I’m going to guess you’re an American, correct me if I’m wrong. EDIT: stand corrected

                                                                                                  The American cultural norm of ‘compliment sandwiches’ and being obsequiously polite is cancer to the ears of most people that aren’t Americans. I find it quite funny that Americans have this idea of Japanese as being very polite culturally, while Americans are insanely polite culturally compared to most other English-speaking countries.

                                                                                                  The typical British, Australian or Kiwi software developer swears like a trooper. It’s not uncommon, it’s not offensive. You wouldn’t do it in an email, but this is the key point: my emails are not Linus’s emails. The context is different. All his communication is by email, so naturally email carries a much lower average level of formality.

                                                                                                  That’s also untrue. While there will be grey areas, there are some things that are objectively offensive if interpreted literally - and if they’re not meant literally, why not use another expression?

                                                                                                  I don’t even know how to respond to this. Why would one only ever say things you mean literally? Speaking entirely literally is something I would expect of someone with extreme levels of Asperger’s syndrome, I believe it’s a common symptom.

                                                                                                  1. 11

                                                                                                    I’m going to guess you’re an American, correct me if I’m wrong

                                                                                                    The typical British, Australian or Kiwi software developer swears like a trooper

                                                                                                    You are wrong; I’m Australian, currently working in England, and I disagree. Regardless, swearing by itself is not something that I find offensive.

                                                                                                    Why would one only ever say things you mean literally?

                                                                                                    That’s not what I suggested. If you have a choice between a highly offensive figurative or metaphorical expression and some other expression - whether literal or also figurative - which is not highly offensive, why go for the former?

                                                                                                    1. 2

                                                                                                      You are wrong; I’m Australian, currently working in England, and I disagree. Regardless, swearing by itself is not something that I find offensive.

                                                                                                      I see

                                                                                                      That’s not what I suggested.

                                                                                                      I must have misinterpreted you. Sorry.

                                                                                                      If you have a choice between a highly offensive figurative or metaphorical expression and some other expression - whether literal or also figurative - which is not highly offensive, why go for the former?

                                                                                                      People say things that others find offensive, sometimes on purpose and sometimes not. Offensiveness is subjective. I genuinely don’t think I’ve ever been offended. Why go for one expression over another knowing that someone will get their knickers in a twist over it? Because you don’t care if someone finds it offensive? Because you enjoy it?

                                                                                                      I have to admit that I actually quite enjoy knowing that someone got self-righteously offended over something I’ve said. It hasn’t happened too often, but when it does it’s just great.

                                                                                                      EDIT: to be clear, there is ‘offensiveness’ that I don’t like. If someone is racist, I’m not offended, I just think that being racist is wrong and stupid and that they are wrong and stupid. I guess you could call this ‘offense’ but it’s really not the same thing.

                                                                                                      1. 5

                                                                                                        Why go for one expression over another knowing that someone will get their knickers in a twist over it? Because you don’t care if someone finds it offensive? Because you enjoy it?

                                                                                                        I was not intending for you to provide an answer for the “why” - it was a rhetorical question. The point was that I do not think you should say something that may well offend someone, when there is a way to communicate without doing so.

                                                                                                        Offensiveness is subjective. I genuinely don’t think I’ve ever been offended

                                                                                                        I suspect this is why you’re having difficulty seeing the problem, and while I envy you never having experienced the feeling of being offended I can see that this could lead to lack of empathy for those who were.

                                                                                                        Maybe you wouldn’t get offended by something, but that doesn’t mean it’s “not offensive” per se. I don’t agree that offensiveness is entirely subjective. Implying (or stating directly) that someone is stupid in communication to them, for example, is generally considered offensive. Statements can be intended to cause offense. There may be disagreement on specific cases, but I think in general that there would be good agreement in a survey of a random portion of the population that certain statements were offensive.

                                                                                                        1. 1

                                                                                                          I think the reality is that I would be closest to feeling hurt or offended by someone calling me stupid if I really had done something stupid. I’ve been called stupid when I haven’t been stupid many times, doesn’t bother me. I’ve been called stupid when I really have been stupid, and it does indeed make you feel bad.

                                                                                                          I’ll acknowledge that the best way to deal with some bad code getting into the Linux kernel isn’t to make the person that wrote it feel bad.

                                                                                                    2. 5

                                                                                                      The typical British, Australian or Kiwi software developer swears like a trooper.

                                                                                                      As a kiwi, I have not had this experience at all, quite the opposite. Everyone I work with is polite and respectful. This is just my experience, but I’m very surprised by your comment.

                                                                                                      it’s not offensive

                                                                                                      Sure, if it’s just swearing in general (though I’d still prefer to keep it to a minimum). The problem is when it becomes personal. Your argument is that people use ‘language just as harsh is used daily’, but there’s a line between bad language and abusive language. I don’t think the latter should be acceptable in a professional environment (at least one I’d want to work in). You can’t use one to justify the other.

                                                                                                      1. 4

                                                                                                        The typical British, Australian or Kiwi software developer swears like a trooper. It’s not uncommon, it’s not offensive.

                                                                                                        I work in software development in the UK and many of Linus’ comments would be seen as completely unprofessional in either emails or conversation - certainly far past the bar where HR would get involved. There’s a massive gap between swearing and direct personal insults.

                                                                                                2. 16

                                                                                                  No one said you have to be an asshole when being firm about rejecting patches.

                                                                                                  1. -1

                                                                                                    A lot of people will interpret anything firm as being an arsehole. If you don’t put smiley faces at the end of every sentence, some people will interpret it as you being an arsehole. If you don’t couch every negative thing you say between two positive things, people will react very aggressively.

                                                                                                    1. 17

                                                                                                      But saying someone should be “retroactively aborted” for some bad code?

                                                                                                      1. 11

                                                                                                        If you don’t put smiley faces at the end of every sentence, some people will interpret it as you being an arsehole. If you don’t couch every negative thing you say between two positive things, people will react very aggressively.

                                                                                                        This sounds like a very broad generalization to me.

                                                                                                    2. 22

                                                                                                      I think there’s no causal link between “being nicer when responding to patches” and code quality going down. If anything I’d suspect the opposite; you get people who learn and improve rather than giving up after feeling insulted, and then continue to submit quality improvements.

                                                                                                      1. 3

                                                                                                        Linus Torvalds is nearly always nice when responding to patches. In 0.001% of emails he’s rude. Unfortunately he sends a lot of emails, and people cherry-pick the worst of the worst.

                                                                                                        1. 21

                                                                                                          His own apology and admission of a problem would indicate that the issue is significant. That “0.001%” is a made-up number, isn’t it? While I’m sure that only a small number of his emails are insulting, that small number still has - and has had - a detrimental effect on the mind-state of other developers. This is what’s come out of a discussion between Linus and a number of developers.

                                                                                                          Don’t get me wrong, I like Linus generally (not that I know him personally) and I think he does a great job in general, but it’s clear that this personality problem has been a growing problem. A number of people - even quite prominent developers - have left the kernel development arena because of this kind of behaviour from Linus and others and/or issues around it.

                                                                                                          I think this is a great step on Linus’ behalf, it must have been hard to make the admissions that he has and it’s a sign that things really could be better going forward.

                                                                                                          1. 5

                                                                                                            His own apology and admission of a problem would indicate that the issue is significant.

                                                                                                            I disagree. I think the issue is massively overblown and that he’s been worn down by the endless bullshit about something that really isn’t an issue.

                                                                                                            That “0.001%” is a made-up number, isn’t it?

                                                                                                            If you’d like to go do sentiment analysis on every LKML email he’s sent, be my guest. I’d love to see the real numbers. But I chose the number to make a point: it’s a vanishingly small number of emails. It’s about half a dozen well known rude emails over two decades or more. They’re really not that bad taken in the context of the number of emails he sends and the context in which he sends them. He doesn’t say ‘this code is shit’ out loud to his coworker and then send a nice polite email. The LKML is the entire communication layer for all of Linux kernel development (plus the other lists of course). The context of those emails includes a lot more than what you’d normally include in emails in a normal development environment.

                                                                                                            While I’m sure that only a small number of his emails are insulting, that small number still has - and has had - a detrimental effect on the mind-state of other developers. This is what’s come out of a discussion between Linus and a number of developers.

                                                                                                            I mean frankly I think that if someone is going to be detrimentally affected by a few emails they are no great loss. I’ve seen a few people that say things like ‘I’d never contribute to Linux even if that were in my skill set, because they’re always rude to new people’ and then cite Linus’s emails as evidence of this. I’ve seen that sort of comment a lot. Dozens of times on /r/linux, dozens of times on /r/programming, many times on HN. It’s rubbish! The LKML isn’t obsequious: the email culture there is the traditional techy one of saying what you need to say straightforwardly rather than the traditional corporate one of layering everything in sugar to avoid sounding rude to people that expect every criticism to be wrapped in three layers of compliments.

                                                                                                            The LKML is especially not rude to newcomers. Linus has been rude, in the past, sure, but only to people that are expected to know better. Long term, hardcore maintainers that have been around for years. Is it okay? No, but it’s not anything to get worked up about. It’s a really really minor issue.

                                                                                                            There are way bigger issues in Linux kernel development, like the really scary amount of control and input some companies have in its development.

                                                                                                            Don’t get me wrong, I like Linus generally (not that I know him personally) and I think he does a great job in general, but it’s clear that this personality problem has been a growing problem. A number of people - even quite prominent developers - have left the kernel development arena because of this kind of behaviour from Linus and others and/or issues around it.

                                                                                                            They probably would have left anyway. People don’t change careers because someone said ‘retroactively aborted’ in an email once.

                                                                                                    3. 10

                                                                                                      funny, I almost avoided a potential security report to OpenBSD because I saw the contact is theo. I didn’t want to get flamed.

                                                                                                      1. -8

                                                                                                        Yeah I’m sure you did.

                                                                                                    1. 1

                                                                                                      FreeBSD had a Google Summer of Code project for improving its boot environment management. I’m not sure about its status though.

                                                                                                      1. 3

                                                                                                        OK, OK, great, nice, really helpful.

                                                                                                        But all of those tutorials about desktopping on *BSD lack a single convincing point, one which I don’t need (I use OpenBSD on the desktop more or less actively) but which others would appreciate:

                                                                                                        How would such a BSD desktop solution appeal to a casual Ubuntu user who just clicks the “OK” button and gets on with things? I don’t want to deprecate it or make it sound worse in any way; I’m just looking for points or features that could be attractive to people using mainstream Linuxes (Ubuntu, RHEL, CentOS, Fedora) on their work/private machines just to click things.

                                                                                                        The only thing like that I’ve seen was the “OpenBSD is not for you if…” paragraph in an OpenBSD desktop practices howto. But that’s actually the opposite of what I’m looking for :)

                                                                                                        1. 5

                                                                                                          How would such a BSD desktop solution appeal to a casual Ubuntu user who just clicks the “OK” button and gets on with things?

                                                                                                          I think we need to distinguish between ‘desktop’ as a term for regular (non-IT) people and ‘desktop’ as a term for technical IT people.

                                                                                                          My guide is definitely for the second group; such a FreeBSD desktop is not suited to a regular user. NomadBSD may be suited that way, and TrueOS Desktop may be suited that way, but definitely not such a ‘custom’ setup.

                                                                                                          I am sharing this knowledge because I have used FreeBSD on the ‘desktop’ for 15 years, and when I first wanted a FreeBSD desktop it was not as easy a task as it is now. It still requires some configuration, and that is what I wanted to share.

                                                                                                          Is CentOS/RHEL better suited to the ‘desktop’ than FreeBSD? It depends. Linux has the advantage that a lot of software supports these distributions out of the box, yet if you compare the freshness and number of packages between these system families - https://repology.org/statistics/newest - FreeBSD comes out ahead. You have to configure many additional repositories with CentOS/RHEL, like EPEL, while on FreeBSD you just type pkg install, so it is friendlier here.

                                                                                                          CentOS/RHEL has a graphical installer in which you can select an X11 desktop to install, which is easier for less advanced users; that is the CentOS/RHEL advantage over FreeBSD. But if we compare things that way, the OpenIndiana Illumos-based distribution is even easier to install and use than CentOS/RHEL, as its installer is simpler ;)

                                                                                                          So it’s a long discussion without end, really :>

                                                                                                          1. 4

                                                                                                            How would such a BSD desktop solution appeal to a casual Ubuntu user who just clicks the “OK” button and gets on with things?

                                                                                                            The real selling point is “fearless upgrades”. Pushing the upgrade button in Ubuntu feels like Russian roulette; you never know what’s going to break this time.

                                                                                                            ZFS is nice - RAID-like resilience, LVM-like convenience, and filesystem snapshotting for history/“undo” for the same amount of admin effort it would take to set up one of those things on Linux - but the biggest feature of BSD for me is more of an anti-feature: they just don’t keep randomly breaking everything.

                                                                                                            1. 3

                                                                                                              The real selling point is “fearless upgrades”. Pushing the upgrade button in Ubuntu feels like Russian roulette; you never know what’s going to break this time.

                                                                                                              A somewhat relevant data point: the Fedora folks have been working for a while on Atomic Workstation, now Team Silverblue. It uses OSTree for atomic updates/downgrades: you pretty much boot into an OS version, similar to FreeBSD boot environments (though of course the implementation is very different). The idea is to use Flatpak for installing applications, though you can still layer RPMs with rpm-ostree.

                                                                                                              It is probably not a solution for a tech user’s desktop, but it seems interesting for the ‘average’ user in that updates don’t fail even if you yank out the plug in the middle of one, and rollbacks are available. The OS itself is immutable (which protects against certain kinds of malware) and applications are sandboxed by Flatpak.

                                                                                                              ZFS is nice - RAID-like resilience, LVM-like convenience, and filesystem snapshotting for history/“undo” for the same amount of admin effort it would take to set up one of those things on Linux

                                                                                                              Ubuntu also supports ZFS out of the box. With some work, you can also do ZFS on root.

                                                                                                              but the biggest feature of BSD for me is more of an anti-feature: they just don’t keep randomly breaking everything.

                                                                                                              I think this is the biggest selling point for BSD. I gave up on Ubuntu for my personal machines a long time ago. Stuff breaks all the time, and Ubuntu/Debian/etc. are so opaque that it takes a long time to get to the bottom of a problem. Arch Linux is a reasonable compromise: stuff breaks sometimes due to it being a rolling release, but at least it’s fairly clear where to look. Moreover, the turnaround time for submitting reports/patches upstream and having them trickle down to Arch is pretty short.

                                                                                                              But I would switch back to BSD in a heartbeat if there was good out-of-the-box support for amdgpu, Intel MKL, CUDA, etc. But apparently (haven’t verified) the Linux amdgpu tree has more lines of code than the OpenBSD kernel.

                                                                                                              1. 2

                                                                                                                In order to be able to easily undelete files, I’ve set up zrepl to snapshot my system every 15 minutes, with the snapshots expiring after a while. In combination with boot environments, this means I can mess with my system without having to worry about breaking it: I can simply reset it quickly and easily. This is very convenient.
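                                                                                                                For anyone curious, a setup along those lines can be sketched roughly as below. This is a hypothetical zrepl config: the job name, dataset path, and retention grid are all made up for illustration, so check the zrepl documentation for the exact schema.

                                                                                                                ```yaml
                                                                                                                jobs:
                                                                                                                  - name: snap_home            # hypothetical job name
                                                                                                                    type: snap                 # snapshot-only job, no replication
                                                                                                                    filesystems:
                                                                                                                      "zroot/usr/home<": true  # this dataset and everything below it
                                                                                                                    snapshotting:
                                                                                                                      type: periodic
                                                                                                                      interval: 15m            # snapshot every 15 minutes
                                                                                                                      prefix: zrepl_
                                                                                                                    pruning:
                                                                                                                      keep:
                                                                                                                        # keep everything from the last hour, then thin out over two weeks
                                                                                                                        - type: grid
                                                                                                                          grid: 1x1h(keep=all) | 24x1h | 14x1d
                                                                                                                          regex: "^zrepl_"
                                                                                                                ```

                                                                                                                Undeleting a file then just means copying it back out of the dataset’s .zfs/snapshot directory.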

                                                                                                              2. 1

                                                                                                                It’s been so long since I used it that it’s changed names, but TrueOS is the “I just want to have FreeBSD with a desktop and don’t want to learn how to edit kernel modules with vi” answer.

                                                                                                              1. 3

                                                                                                                The AutoAddDevices option is set to restore the old behavior of handling input devices (keyboard/mouse/…). Without this, there is a big chance that you will have to mess with hald(8), which is a PITA.

                                                                                                                This is no longer needed. Xorg now has a devd(8) backend it can use to get informed about hotplugged devices instead of hald(8): https://lists.freebsd.org/pipermail/freebsd-x11/2017-March/018978.html That’s working fine for me, even without moused(8).
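                                                                                                                For context, the workaround being discussed is a stanza like the following in xorg.conf(5); a ServerFlags section is the usual spot, though exact placement may vary. With the devd(8) backend available, it can simply be deleted:

                                                                                                                ```
                                                                                                                Section "ServerFlags"
                                                                                                                    # Old workaround: disable input hotplugging so Xorg does not depend on hald(8).
                                                                                                                    # No longer needed now that Xorg on FreeBSD can learn about hotplugged
                                                                                                                    # devices from devd(8) directly.
                                                                                                                    Option "AutoAddDevices" "off"
                                                                                                                EndSection
                                                                                                                ```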

                                                                                                                1. 1

                                                                                                                  Thank you for that information, I added UPDATE 1 to the post regarding that case.

                                                                                                                  I also modified the original post to not confuse future readers.

                                                                                                                1. 4

                                                                                                                  Thanks for posting this.

                                                                                                                  A general problem I have with Mercurial (I started using it for pet projects I work on at home, never at work) is that a lot of the material you can google is fairly old, and much of it is outdated. Whenever someone refers to an hg extension, you need to investigate further whether that extension is still alive and still the preferred way of doing things.

                                                                                                                  1. 1

                                                                                                                    The feature that this article describes is in core.

                                                                                                                    1. 9

                                                                                                                      Just to elaborate, because this is the third or fourth Mercurial discussion coming up in as many days, and I’m getting tired of the same discussion happening ad nauseam:

                                                                                                                      1. Out-of-the-box, with no configuration done, Mercurial doesn’t allow editing history - but it ships with all the functionality required to do so. You just have to turn it on. Turning it on takes up to three lines in a config file and requires no third-party tools whatsoever.
                                                                                                                      2. Out-of-the-box, Mercurial does come with phases (discussed here) and the associated phase command that allows explicitly altering them. You don’t actually use the phase command that much; phases are actually more for consumption by history editing commands.
                                                                                                                      3. If you enable any of the history editing extensions–again, which ship with Mercurial–including rebase, which is probably all you need, and histedit, if you do really need the equivalent of git rebase -i, you will find they are phase-aware. In particular, they will allow you to alter changesets that are secret or draft, but not public. Because changesets will become public on push by default, this is by itself awesome, as it can trivially help you avoid accidentally rebasing something someone else might’ve pulled. Having this would’ve eliminated quite a few Git horror stories.

                                                                                                                      All of the above ships in core. You need to add at most three lines to your .hgrc or equivalent to get all of it. Which is fine, because you also need at least two lines just to set your email and name, much like you’d have to at least do git config --global user.email and git config --global user.name. A couple extra lines isn’t a big deal.
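                                                                                                                      Concretely, those lines boil down to something like the following in ~/.hgrc (the username is of course a placeholder; rebase and histedit ship with Mercurial, so the values are left empty to enable the bundled copies):

                                                                                                                      ```ini
                                                                                                                      [ui]
                                                                                                                      username = Jane Doe <jane@example.com>

                                                                                                                      [extensions]
                                                                                                                      # Both ship in core; an empty value enables the bundled extension.
                                                                                                                      rebase =
                                                                                                                      histedit =
                                                                                                                      ```

                                                                                                                      After that, hg phase -r . shows whether the working changeset is secret, draft, or public, and hg rebase will refuse to touch public changesets.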

                                                                                                                      The only thing interesting in this space that doesn’t yet ship in Mercurial, and which I’m really excited about, is something called changeset evolution, which will allow cleanly and easily collaborating on in-progress, frequently-rebased/collapsed/edited branches. But that’s not baked yet, and Git doesn’t have anything equivalent to it yet anyway.

                                                                                                                      1. 5

                                                                                                                        The problem is making it clear to new users or users coming from git how to enable those extensions. There’s also the problem that the new tweakdefaults option is trying to solve: that hg’s backward compatibility guarantees mean that new features (e.g. new since hg 1.0) don’t get surfaced in the hg UI unless you’ve customized your setup (or had it customized for you as in a corporate setup).

                                                                                                                        git’s out-of-the box experience enables a lot of power user features. This certainly isn’t great for safety but it is great for discovery - thus these perennial discussions on forums like lobsters and HN.

                                                                                                                        I’m hoping with evolve getting upstreamed we might see more projects using mercurial. On the other hand, for open source projects the only real place to host them is either to use hgweb and roll a custom hosting and development workflow (basically what mercurial itself does) or use bitbucket, which is run by people who don’t prioritize or think much about open source fork-based workflows. It would be amazing if there were more competition in this space. Kallithea doesn’t support pull requests. Rhodecode has a questionable past participating in the free software community. I’m not aware of much else in this space.

                                                                                                                        What would really change things is if one of the big players like github or gitlab decided to add support for other VCS tools although I’m not exactly holding my breath for that to happen.

                                                                                                                        1. 4

                                                                                                                          Unfortunately, I agree. I have noodled with basically rewriting Kiln (only not called that because I’m not at Fog Creek) based on changeset evolution and with an explicit fork-based workflow focus, but I’ve been waiting to see if evolution really does get fully into core, and then what the story is with hg absorb, since properly supporting Mercurial and evolution looks really different in an hg absorb-based world than one without it.

                                                                                                                          In particular, my idea is that anyone can push to a repository, but it’ll automatically go into a new pull request in draft phase. At that point, if hg absorb stays a thing, and a normal user can be expected to just run hg absorb repeatedly as they address issues, then I can rely on obsmarkers. Otherwise, the story empirically gets a lot more interesting; I’ve had trouble on prototypes not requiring the user to know they’ve got a custom branch, basically doing the same kluge as Gerrit, albeit with a different mechanism.

                                                                                                                          Edit: just to be clear, I’ve prototyped bits of this a few times, but nothing I want to release—doubly so since it’s all in Smalltalk anyway. But it’s been helpful to try to think through what a proper approach to this would work like.

                                                                                                                          1. 2

                                                                                                                            AFAIK the only blocker on absorb getting upstreamed is the need to rewrite the linelog interface in C.

                                                                                                                          2. 2

                                                                                                                            I’d like to add that RhodeCode actively supports using evolve/phases with changeset evolution. Based on feedback from our users, we started shipping the evolve extension enabled in both the CE and EE editions.

                                                                                                                            This works with Pull requests, can be enabled globally or per repository.

                                                                                                                            You might question the past, but for almost two years now we have provided a free, open-source, no-limitations CE edition of RhodeCode (similar to GitLab CE/EE). You can use evolve there and it works :) I have said it many times, and I’ll say it again: we made mistakes with certain past releases, but currently our business model is based on a free GPL open-source version. It is here to stay, and we always try to promote Mercurial and its great workflows using the mentioned extensions.

                                                                                                                            I doubt GitLab/GitHub will ever support Mercurial; they have openly said they won’t, for many reasons.

                                                                                                                            We are currently working on a simple app for DigitalOcean; we hope it will make hosting Mercurial much easier for people who don’t want to host it themselves.

                                                                                                                            1. 1

                                                                                                                              Kallithea doesn’t support pull requests.

                                                                                                                              Looks like they now do.

                                                                                                                              I’m not aware of much else in this space.

                                                                                                                              Phabricator, Redmine, and Trac support Mercurial as well.

                                                                                                                              However none of them are as convenient as the hosted and “free for open source” offerings of Bitbucket, GitHub and GitLab.

                                                                                                                            2. 2

                                                                                                                              I feel the need to fork hg2k5, which will be mercurial with only the original features. :)

                                                                                                                              1. 2

                                                                                                                                You’d have to start with forking Python 2.4 so that you could run it.

                                                                                                                        1. 1

                                                                                                                          Can I ask a potentially ignorant question? Why would someone who’s not already using Subversion choose to run it at this point? What are some of its advantages over Git or Fossil or Mercurial?

                                                                                                                          1. 6

                                                                                                                            For software version control? Probably very little (especially as you included Mercurial among the alternatives).

                                                                                                                            I think, however, that SVN could be the basis of quite a good self-hostable blob/file storage system. WebDAV is a defined standard, it is accessible over HTTP, and you get (auto-)versioning of assets for ‘free’.
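                                                                                                                            As a sketch of that idea, mod_dav_svn has an SVNAutoversioning directive, which turns each plain WebDAV write into an automatic commit. The location and repository path below are hypothetical:

                                                                                                                            ```apache
                                                                                                                            # httpd.conf excerpt: expose an SVN repository as an autoversioning WebDAV share
                                                                                                                            <Location /assets>
                                                                                                                                DAV svn
                                                                                                                                SVNPath /var/svn/assets
                                                                                                                                # Each PUT/MKCOL from an ordinary WebDAV client becomes a new revision
                                                                                                                                SVNAutoversioning on
                                                                                                                            </Location>
                                                                                                                            ```

                                                                                                                            Ordinary file-manager WebDAV clients can then mount the share, while the full history stays queryable with the regular svn tools.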

                                                                                                                            1. 1

                                                                                                                              Why would Mercurial in particular stand out on this list? Are you extrapolating from your own experience? I don’t think there are complete and reliable global usage statistics for any of these systems, are there?

                                                                                                                              1. 2

                                                                                                                                On top of what stephenr says, Mercurial has an increasingly solid story for large assets from things like remotefilelog and other similar work from Facebook. That means I’d feel comfy using it for e.g. game asset tracking, at least to a point. Git is getting there too (specifically the work from Microsoft), but it’s a bit less mature at the moment.

                                                                                                                                1. 0

                                                                                                                                  Git is not the easiest thing in the world to learn/use.

                                                                                                                                  If you just ask “why use svn when git exists”, it’s easy: because svn is easier to learn and understand.

                                                                                                                                  Mercurial muddies that, because you get the benefits of a DVCS with usability that’s pretty close to svn’s.

                                                                                                                                  In the last few years I’ve worked with entire teams that used no VCS at all.

                                                                                                                                  1. 1

                                                                                                                                    Yeah, very much agreed that hg hits a rather nice middle ground. Their UI design is great.

                                                                                                                                    Still, I don’t think we could infer anything from this about the actual number of users across the various vcs. Not sure though if I simply misunderstood what you meant.

                                                                                                                                    1. 1

                                                                                                                                      Oh I’m not at all claiming to have stats on actual usage.

                                                                                                                                      It was a hypothetical: if hg weren’t an option, some developers would be more productive with svn than with git.

                                                                                                                                    2. 1

                                                                                                                                      why use svn when git exists

                                                                                                                                      I think this sums it up well: https://sqlite.org/whynotgit.html

                                                                                                                                      Not about subversion in particular though, just a bash at git.

                                                                                                                                  2. 1

                                                                                                                                    Are you referring to mod_dav_svn? The last time I tried it, it was pretty unreliable: it often truncated files silently. That’s probably not Subversion’s fault; Apache HTTPd’s WebDAV support doesn’t seem to be in a great state.

                                                                                                                                    1. 1

                                                                                                                                      That’s the only subversion http server that I’m aware of.

                                                                                                                                      I suspect that post is about mod_dav - WebDAV onto a regular filesystem directory.

                                                                                                                                      mod_dav_svn provides WebDAV + SVN on top of a repository backend.

                                                                                                                                  3. 4

                                                                                                                                    I know some game studios still run Subversion, because of large art assets alongside code, and the ability to check out specific subdirectories.

                                                                                                                                    1. 3

                                                                                                                                      SVN is still heavily used by companies that are not pure software dev shops yet still produce their own software, e.g. in the financial sector and manufacturing industry.

                                                                                                                                      I don’t think many people on Lobsters still encounter SVN at their place of work, but that is due to Lobsters’ demographic rather than everyone on the planet having migrated away from SVN. That is not the case. (Some people still use tools like ClearCase at work!)

                                                                                                                                      1. 2

                                                                                                                                        For something closer to my heart, LLVM’s official repository is still SVN.