1. 2

    I wish there were some examples. How often does this forking happen? If it’s something rare that happens only when the stars align (the person knows how to program, has expertise in the source code but chose to outsource the service in the first place, and doesn’t have a viable competitor to exit to for less effort than forking), does it matter?

    1. 3

      I think private forking happens all the time. I keep patches for several programs on my system. What happens rarely is organizational forking, i.e., an attempt to set up a competing development team. A historical example would be the EGCS fork of GCC, which was so successful that it just became the official GCC project.

      A maintainer has to mess up pretty bad for that to happen, and most projects that attempt this just starve for lack of people caring about their version.

      That said, Wikipedia has a list (of course): https://en.wikipedia.org/wiki/List_of_software_forks

      1. 2

        I’m very curious to hear about such private forks! It seems to me that the current state of open source doesn’t make them easy or convenient. Creating a fork isn’t just a one-time expenditure of effort, it’s an ongoing effort thereafter of keeping up with upstream. I’ve personally tried to do this multiple times and invariably given up. So I’m curious to hear how long you’re able to continue maintaining a fork for, and any tricks you have to minimize the overheads involved.

        (I actually don’t care about organizational forking. Organizations are certainly more likely to be able to afford the ongoing opex of maintaining a fork. But the original Exit vs Voice concerns people in a civic organization, and that tends to be my focus as well.)

        1. 4

          It seems to me that the current state of open source doesn’t make them easy or convenient

          It depends on what you’re doing with a private fork. If your changes are relatively minor, it’s just merging from mainline periodically, which modern VCSs are pretty good at.

          As an example, I participate in an OSS project that includes a third-party library to provide a rich text editor in-browser. IME (input method editor) support is very important to us, but not as high-priority for the library. We maintain a “private” fork (it’s publicly readable, but no one else really cares that it exists) that differs mostly in disabling a couple of features that interfere with IME. Occasionally we’ll merge code from a branch bound for release that hasn’t made it to mainline yet because we need it sooner. Maintenance involves pulling from upstream and re-evaluating our patches every couple of months. The most inconvenient part of it is having to self-host the built packages, which all things considered really isn’t bad.

          I mean it’s a kludge, we’d obviously rather not have to spend the small amount of effort required to maintain a “private fork”, but I’d much rather have the option than not.

          1. 1

            Oh certainly, it’s nice to have the option. Taking it back to OP, I just wonder if your example is worth considering on par with “exit and voice.” It seems rather the equivalent of putting dinner on a plate after purchasing it.

          2. 3

            In my experience, Gentoo makes this sort of thing pretty easy, at least for basic changes. You don’t have to maintain the repo; you can just stick your patches in /etc and have Gentoo auto-apply them on rebuild. So you only need to do maintenance if they stop working, and in that case you will already have the repo checked out in the state Gentoo is trying to build from. So you just copy the build folder, reapply your patch, take a diff, and stick it in /etc again.
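            For what it’s worth, the convention (on any recent EAPI) is a per-package directory under /etc/portage/patches; a sketch of the layout, with the package name purely as an example:

            ```
            # Patches placed under /etc/portage/patches/<category>/<package>/
            # are auto-applied by Portage (via eapply_user) on rebuild, e.g.:
            /etc/portage/patches/app-editors/vim/my-local-fix.patch

            # then rebuild the package and the patch is applied automatically:
            emerge --oneshot app-editors/vim
            ```

            If upstream changes and the patch no longer applies, the build fails loudly, which is exactly the “only do maintenance when they stop working” workflow described above.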

      1. 30

        One cannot help but notice the author omitting the dog’s breakfast that more “intelligent” languages encourage young, excited programmers to come out with. Capping the amount of bad you can do in a language is just as important as, if not more important than, increasing the amount of good.

        1. 18

          I agree and to that end I propose banning all programming languages. Let’s be done with all this nonsense.

          1. 3

            Hear, hear! Turing completeness is the biggest footgun of them all.

          2. 6

            Capping the amount of bad you can do in a language is just as important as, if not more important than, increasing the amount of good.

            Is it really? Sure, limiting the amount of damage a non-expert can do is a desirable property of any language, but Go has to trade things for its particular brand of “capping the amount of bad”.

            Depending on context this may be more or less important, e.g. in a large team with a lot of inexperienced developers I might want to trade some “sharp tools”-ness for guard rails. But that’s not at all the tradeoff I want to make with a small experienced team on weekend projects.

            I’m also not sure how successful Go actually is at this goal. Show me a Turing-complete language and I’ll show you some really garbage code written in it. I don’t really think the problem of young, excited programmers producing poor code is solvable at the language level.

            1. 11

              I don’t really think the problem of young, excited programmers producing poor code is solvable at the language level.

              It’s not solvable, but you can limit the type of poor code. Like, it’s relatively straightforward (not easy, mind you, but straightforward) to DRY up exceptionally redundant code, trace boring procedural routines, and so forth. But a novice just learning the power of macros, low-level OS and threading primitives, monkeypatching, AST manipulation, and other forms of metaprogramming can mess up a codebase to the point where burning it down and starting over is the only way to be sure.

              From the people side, it’s also more likely you can get other engineers and seniors to help each other out if they are dealing with a language that scales horizontally instead of vertically; you may have a bright young dev who creates super clever things in Lisp or Haskell that nobody else can really help them with.

              1. 10

                Of course Go won’t prevent writing “garbage code”; this article actually contains an example of such code, but as friendlysock said, it does limit the scope of it. And of course this is a trade-off, because pretty much every feature added to every language ever is useful and solves real problems for people. The problem with stuff like this is the assumption that people aren’t aware of this trade-off and are just blubbering idiots who don’t know any better. That’s not really the case.

                Some actual examples of this:

                Years ago, when I worked as a Ruby on Rails developer, I tasked one of our new developers with modifying something on a login screen on his first or second day. We use Devise, which is an “authentication system in a box” kind of library; it’s pretty nice. Something or other on the login screen didn’t fit our needs, so you can override the controller methods being used in the config and point to your own.

                I told him to use that, but when he sent me the patch for review it monkey patched the Devise code instead. Why? He wasn’t sure either, but it sure wasn’t a better way of doing things. I told him to try again.

                Monkey patching is a great feature, and something I sometimes miss in Go. A big reason I run my site on Jekyll and am not planning to switch to Hugo or whatnot is that it’s written in Ruby, so I can fairly easily modify parts without much effort; it can be really helpful and allows you to quickly solve problems that are hard to solve otherwise, and for something like Jekyll it’s a fine solution (although the Stack Overflow crowd doesn’t seem to like it, well, whatever).

                But to replace methods in some library in a real-world production app, especially when there’s already an option for it? Not such a great fit.


                Another example is when we had this Gulpfile for building the frontend; the last step/plugin being run replaced the links in HTML with versions (i.e. src="script.js" to src="script.js?v=deadbeef"), and for some reason that sometimes took a full minute. I tried looking at it to solve that, and the code was inscrutable: it’s a straightforward thing to do, but everything was using promises all over the place and even with various debug printfs I couldn’t even figure out the basic control flow, much less where this minute was being spent. After half an hour I just gave up and accepted the fact that the s/.../.../ step in JS took twice as long as compiling our entire Go application 🤷

                1. 1

                  After half an hour I just gave up and accepted the fact that the s/…/…/ step in JS took twice as long as compiling our entire Go application 🤷

                  Did you try profiling?

                  1. 4

                    This was five years ago. And spending more than 30 minutes on this wasn’t really worth the time.

                  2. 1

                    The problem with stuff like this is the assumption that people aren’t aware of this trade-off and are just blubbering idiots who don’t know any better.

                    Sure, totally. But I think the animus in the article comes not from the assumption that Go’s designers (or Go users) are unaware of this tradeoff; probably the opposite. If you go and work in a Go shop that’s consciously made this tradeoff, it’s kind of like being told “we don’t trust you with more powerful/expressive/dangerous languages”, which, again, I don’t think is always the wrong thing, but I understand how it’s kind of hard not to take that as an insult, especially if you’re a more experienced developer and believe (correctly or not) that you can be trusted with sharper tools.

                    1. 3

                      From the article:

                      In my opinion, Go was developed by people who have used C all their lives and by those who did not want to try something new.

                      This kind of stuff does not demonstrate a deep understanding. It’s just a slightly nicer way of saying “blubbering idiots who don’t know any better”.

                      it’s kind of like being told “we don’t trust you with more powerful/expressive/dangerous languages”

                      I find that’s a fairly inaccurate way to describe it, and that someone might take this as an “insult” is frankly just weird. There are all sorts of reasons why Go is designed the way it is, and it really doesn’t all boil down to one 15-year-old quote from Rob Pike. People get so hung up on this that they stop looking any further.

                2. 5

                  Capping the amount of bad you can do in a language is just as important as, if not more important than, increasing the amount of good.

                  Maybe, but since Go encourages the use of pointers, which might be nil, and which can be shared between concurrent goroutines and race and crash on trivial examples, I don’t think it has the guard rails one might be looking for :)
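                  A tiny sketch of that point (the type and field names here are made up): the compiler happily accepts a nil pointer, and it only blows up at runtime.

                  ```go
                  package main

                  import "fmt"

                  type config struct{ name string }

                  func describe(c *config) string {
                  	// Nothing in the type system forces a nil check here.
                  	return c.name
                  }

                  func main() {
                  	defer func() {
                  		// The dereference below panics at runtime; recover just to print it.
                  		if r := recover(); r != nil {
                  			fmt.Println("recovered:", r)
                  		}
                  	}()
                  	var c *config // nil pointer, accepted by the compiler without complaint
                  	fmt.Println(describe(c))
                  }
                  ```

                  This prints a recovered “invalid memory address or nil pointer dereference” runtime error rather than being rejected at compile time.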

                1. 6

                  I have to be honest, I don’t find the ethical appeal of the first two points very compelling. I worked at a company where full-time employees were, by policy, always called “team members”, and I promise you this did nothing to encourage teamwork or deter anyone from screwing anyone else over. I’m not convinced that changing the name of a table, or even forcing everyone to call users some particular thing, is really going to change attitudes or policies.

                  The third point I’m a bit more sympathetic to. I’ve seen a lot of different data models grapple with exotic relationships between human beings, credentials, and roles. But even then, I really don’t think the word “user” is to blame here. You don’t want to, and shouldn’t, rewrite auth logic a bunch of times, so “user” usually resolves to the table that holds auth data, and that’s fine. Where it gets hairy is when you have a human who can be both buyer and seller in a marketplace kind of setup, or when you have multi-user subscriptions and some of what you can and can’t do falls out of what subscription you’re on. Or maybe you can have multiple. It can get messy. But if you step back and dream up the ideal design, I can almost promise you there’s still going to be a table that holds everyone’s login credentials somewhere, and whether you call that “user” or anything else doesn’t really matter; “User” (plus other related things) is still the model that makes sense. The hard part here is the “plus other related things”.

                  I’ve worked in a codebase where a single set of login credentials could have multiple “profiles”, and it was implemented very badly. But the reason for it wasn’t that anyone said “hey, let’s make users exist in a hierarchy with themselves where only ‘roots’ have credentials and ‘leaves’ have preferences, because the ‘User’ idea is just so good we want to double down on it” (it was bad). If you look at what functionality the product had to support at the time profiles became a feature, no one would have said that.

                  The issue was that initially there were no profiles. Then there was something akin to a “parent mode” and “kid mode”. Maybe when that happened someone should have seen the writing on the wall and made a correct auth/profile many-to-many relationship. But either someone’s crystal ball was faulty or it was decided there wasn’t time for that refactor; it didn’t happen, and we got a session variable that said what mode a user was in. Then “we need an arbitrary number of profiles per user” happened, it was decided that it was urgent, and a mess ensued. It’s regrettable, but that’s how iterative product development works, and I really don’t see (a) how calling what was the “user” table anything else would have helped, or (b) how you’re supposed to anticipate which of the many, many configurations of credentials/role/profile/group/subscription/etc. is the correct one at the start of an iterative process. “A user is a set of credentials” sure seems to me like the natural starting point. Sometimes it’s all you end up needing, sometimes it’s not, but I don’t really see anything in this post that offers a better initial design that anyone could deploy without knowing what features a product team is going to produce in the future.

                    1. 9

                      Just as a heads-up, Papers We Love does not love the areas of Computer Science equally. There is a lot more coverage of Programming Languages and Systems papers compared to, say, Machine Learning and Theory.

                      1. 7

                        Very true, the SF chapter was kind of notorious for having a lot of FAANG folks involved and the papers tended to skew heavily towards distributed systems. I can’t say I really have an issue with that, seeing as that’s what people wanted to read, but PWL does definitely have a bit of a bias.

                        1. 7

                          PWL organizer here. Every chapter is left to its own devices to run itself, so yeah, the local organizers have a lot of sway in picking speakers. If you dig through the repository you can find a lot of interesting papers spanning many topics.

                          @zxtx - PRs welcome for more Machine Learning and Theory papers.

                    1. 73

                      Honestly, for a general-purpose laptop recommendation, it’s hard to recommend anything but the new ARM MacBooks. […] I just hate the cult of personality built around a ThinkPad that only exists as a shadow in a cave.

                      Do you want to tell him or shall I?

                      1. 17

                        Tell me about what?

                        My recommendations are tempered by things like macOS (every OS sucks in its own unique ways), but they’re the fastest laptops you can get, get actual all-day battery life without ceremony, are lightweight, and have good build quality. This is based on actually using one as my everyday laptop; Apple really has made significant improvements. Unless someone has other requirements (e.g. pen input, x86 virtualization), they’re good all-around.

                        1. 50

                          The quote is just kind of funny to read since Apple products have been almost synonymous with fanboyism and cultish followings for decades, while the thinkpad crowd has levied that exact same criticism.

                          I mean personally I don’t actually disagree with you, I think Apple makes good hardware and “thinkpad people” have gotten just as bad as “apple people” in terms of misguided brand loyalty. It’s just funny because what was quoted feels like very much a role reversal in a very long standing trend.

                          1. 27

                            Maybe it’s just my circles but I don’t see Apple fanboyism as much as I see “anti-Apple” fanboyism.

                            1. 44

                              That’s because you hang out on sites like Lobsters.

                              1. 3

                                Honestly, the “Apple fanboys” are nowadays mostly one of those things that “everybody knows” despite not really being true. Sure, you can find the occasional example, but you’re more likely to find a handful of mildly positive comments about Apple and then a hundred-comment subthread shitting on both Apple and “all these fanboys posting in here”. And basically any thread about laptops will have multiple subthreads of people loudly proclaiming, to upvotes and lots of supportive replies, that Apple is evil, Apple’s hardware and software are shit, and everybody should run out and switch to ThinkPads.

                                Which is just kind of amusing, really.

                            2. 16

                              The quote is just kind of funny to read since Apple products have been almost synonymous with fanboyism and cultish followings for decades

                              Yes, and I think the M1 is a prime example of the hype, further boosted by Apple’s following. The M1 is a very impressive chip. But if you were only reading the orange site and some threads here, you’d think it is many generations ahead of the competition, while in reality the gap between recent AMD APUs and the M1 is not very large. And a substantial amount of the efficiency and performance gap would be closed if AMD could actually use 5nm production capacity.

                              From the article:

                              Honestly, for a general-purpose laptop recommendation, it’s hard to recommend anything but the new ARM MacBooks.

                              Let’s take a more balanced view. The M1 Macs are great if you want to run macOS. ThinkPads (and some other models) are great if you want to run Windows or Linux.

                              1. 12

                                Do the competitors run fanless?

                                I’m happy with my desktop so I don’t have a stake in this game, but what would appeal to me about the M1 non-Pro Macbook is the fanless heat dissipation with comparable performance.

                                1. 6

                                  I mean, are there actually laptops that run as long as the M1? Even back with the Air, Macs having reliably long battery life was a huge selling point for me compared to every other laptop (I know they throttle like crazy to do this, but at least the battery works better than on other laptops I have owned). I think Apple deserves loads of praise for shipping laptops that don’t require you to carry your charger around (for decent time frames relative to the competition, until maybe very recently).

                                  Full disclaimer: despite really wanting an M1’s hardware I’m an Ubuntu user so…

                                2. 5

                                  I don’t have any brand loyalty towards ThinkPads per se, but rather towards the active community of modifications and upgrades. There are things like the NitroPad (from Nitrokey), which comes preinstalled with Heads and some minor modifications or refurbishing, and many other companies sell second-hand ThinkPads in this way, but I think nothing beats xyte.ch (where I got my most recent laptop).

                                  The guy is an actual expert and will help you choose the modifications you want. For me, that meant removing the Bluetooth and microphone, putting in an Atheros Wi-Fi card so I can use linux-libre, upgrading the CPU, and changing the monitor to 4K; there were other options too, like putting an FPGA such as the Fomu in the internal Bluetooth USB port, or choices around the hard drives and ports you want. After choosing my mods and sending him $700, he spent a month doing all my requested changes, flashed libreboot/Heads, and then FedExed it to me with priority.

                                  This was my best online shopping experience in my life and I think this kind of stuff will never exist for apple laptops.

                                  1. 3

                                    Hmm fanboyism. Must fight… urge to explain… why PCs are better than laptops. :-p

                                    1. 1

                                      Oh, I know all about the dumb fanboy shit. I’ve at least outlined my reasoning as pragmatic instead of dogmatic, I hope.

                                  2. 12

                                    I just really like running Linux. Natively, not in a VM. I have a recent P14s running Void Linux with sway/wayland and all the hardware works. I know there’s been some effort to get Linux working on the new M1 chips/hardware, but I know it’s going to be mostly out-of-the-box for modern Dell/Thinkpad/HP laptops.

                                    With Microsoft likely jumping ship over to ARM, I’m really hoping Linux doesn’t get completely left behind as a desktop (laptop) OS.

                                    1. 7

                                      It seems like some people mistake the appreciation of quality Apple hardware for a cult.

                                      1. 18

                                        It may seem like that, but isn’t. Of the two Macs I currently own, one is in for repair (T2 crashed, now won’t boot at all) and one has been (CPU would detect under voltage and switch off with only one USB device plugged in). Of the ~80 Macs I’ve ever deployed (all since 2004), five have failed within two years and a further three have had user-replaceable parts DOA. This doesn’t seem like a great strike rate.

                                        BTW I’ve been lucky and never had any of the recall-issue problems nor a butterfly keyboard failure.

                                        1. 2

                                          While I strongly prefer my Dell Precision (5520), I haven’t really had the same experiences as you.

                                          I have a work laptop, which is a MacBook that gets a bit toasty, but I use it every day and have not had any issues so far.

                                          My own laptop was a 2011 MacBook Pro, and it took spilling a glass of wine on it to kill it; prior to that there were no problems. Once I did break the keyboard by trying to clean it and had to get it repaired. Maybe it was getting slow, and there was some pitting on the aluminium where my hands lay (since I used it every day for 6 years). It died in 2017.

                                          Those are the two MacBooks I owned.

                                        2. 8

                                          There might be some selection bias at work, but I have been following Louis Rossmann’s youtube channel and I absolutely do not associate Apple with good quality.

                                          1. 5

                                            Louis Rossman has a vested interest in repairable laptops as he runs a repair shop and Apple is actively hostile to third-party repairs.

                                            Not saying what Apple does is good for the consumer (though it’s often why the resale value of their laptops is high), but I would assume that Louis is the epitome of a biased source.

                                          2. 5

                                            I have used MacBooks from 2007-2020. I had two MacBooks with failing memory, one immediately after purchase, one 1-2 months after purchase. I also had a MacBook Air (pre-butterfly) with a failing key. I had a butterfly MacBook Pro with keys that would often get stuck.

                                            The quality is very average. I think the number of problems I had with MacBooks is about average for laptops. However, Apple offers really great service (at least here in Western European countries), which made these hardware issues relatively painless to deal with.

                                            1. 5

                                              Apple doesn’t merely make good hardware, it makes Apple hardware, in that its hardware is often different from the mainstream. Butterfly keyboards, for example, or some of their odder mouse designs. It’s possible to appreciate good hardware without thinking Apple’s specific choices are worth buying, even if you concede they’re good implementations of those choices you dislike.

                                          1. 30

                                            The example with Postgres is pretty interesting because it made me realize that there’s an entire generation of programmers who got exposed to async I/O before the threaded/synchronous style, via JavaScript!

                                            It makes sense but I never thought about that. It’s funny because I would indeed think that threads were a convenient revelation if my initial thinking revolved around async :) Although there are plenty of downsides of threads in most environments; it does seem like you need a purpose-built runtime like Erlang / Go to make sure everything gets done right (timeouts, cancellation, easy-to-use queues / semaphores, etc.)

                                            It’s similar to the recent post about Rust being someone’s first experience with systems programming. That will severely affect your outlook for better and worse. There are also a lot of people who learned C++ before C too and I can only imagine how bewildering an experience that is.

                                            1. 4

                                              Yeah, threads really are a “convenient revelation”! Aren’t OS-level threads implemented on top of CPU-level callbacks? https://wiki.osdev.org/Interrupt_Descriptor_Table

                                              1. 10

                                                I wouldn’t call CPU-level interrupt handlers “callbacks”. They’re too low-level of a concept for that. It’d be like calling an assembly-language JMP instruction a CPU-level case statement, just because case statements are ultimately implemented in terms of JMP or a similar CPU instruction.

                                                1. 4

                                                  I was turned on to code when threading was current but recent. This reminds me of the day I finally understood that multiprocessing was previously done by, you guessed it, multiple processes.

                                                  1. 2

                                                    I should have said “synchronous” or “straight line code” and not “threads”. There is a lot of overloading of terms in the world of concurrency which makes conversations confusing. See my other reply:

                                                    https://lobste.rs/s/eaaxsb/i_finally_escaped_node_you_can_too#c_fg9k7y

                                                    I agree with the other reply that the term “callbacks” is confusing here. Callbacks in C vs. JavaScript are very different things because of closures (and GC).

                                                    I’d say if you want to understand how OS level threads are implemented, look up how context switches are implemented (which is very CPU specific). But I’m not a kernel programmer and someone else may give you a better pointer.

                                                  2. 3

                                                    Although there are plenty of downsides of threads in most environments

                                                    How come? After all, threads are the basic multithreading building block exposed directly by the OS.

                                                    1. 10

                                                      I should have said synchronous / “straight line” code – saying “threads” sort of confuses the issue. You can have straight line code with process-level concurrency (but no shared state, which is limiting for certain apps, but maybe not as much as you think)

                                                      It’s very easy to make an argument that threads exposed by the OS (as opposed to goroutines or Erlang processes) are a big trash fire of design. Historically that’s true; it’s more a product of evolution than design.

                                                      One reason is that global variables are idiomatic in C, and idiomatic in the C standard library (e.g. errno, which is now a thread local). Localization also uses global variables, which is another big trash fire I have been deep in: https://twitter.com/oilshellblog/status/1374525405848240130

                                                      Another big reason is that when threads were added to Unix, syscalls and signals had to grow semantics with respect to threads. For example select() and epoll(). In some cases there is no way to reconcile it, e.g. fork() is incompatible with threading in fundamental ways.

                                                      The other reason I already mentioned is that once you add threads, timeouts and cancellation should be handled with every syscall in order for you to write robust apps. (I think Go and node.js do a good job here. In C and C++ you really need layers on top; I think ZeroMQ gives you some of this.)
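                                                      For contrast, Go’s context package is what that per-call timeout/cancellation plumbing usually looks like in practice; a minimal sketch, with the slow “syscall” simulated (names are illustrative):

                                                      ```go
                                                      package main

                                                      import (
                                                      	"context"
                                                      	"errors"
                                                      	"fmt"
                                                      	"time"
                                                      )

                                                      // slowOp stands in for a blocking call (network, DB, etc.). It honors
                                                      // cancellation by selecting on ctx.Done() alongside its own work.
                                                      func slowOp(ctx context.Context) error {
                                                      	select {
                                                      	case <-time.After(500 * time.Millisecond): // pretend work
                                                      		return nil
                                                      	case <-ctx.Done():
                                                      		return ctx.Err() // context.DeadlineExceeded or context.Canceled
                                                      	}
                                                      }

                                                      func main() {
                                                      	// The deadline travels with the context through every call.
                                                      	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
                                                      	defer cancel()

                                                      	err := slowOp(ctx)
                                                      	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true: timed out
                                                      }
                                                      ```

                                                      The point is that every blocking operation takes the same ctx, so timeouts and cancellation compose instead of being bolted onto each syscall separately.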

                                                      So basically when you add threads, EVERYTHING about the language has to change: data structures and I/O. And of course C didn’t have threads originally. Neither did C++ for a long time; I think they have a portable threading API now, but few people use it.


                                                      The original concurrency primitive exposed by Unix is processes, not threads. You can say that a thread is a process that allows you to write race conditions :)

                                                      From the kernel point of view they’re both basically context switches, except that in the threading case you don’t change the address space. Thus you can race on the entire address space of the process, which is bad. It’s a mechanism that’s convenient for kernel implementers, but impoverished for apps.

                                                      OS threads are pretty far from what you need for application programming. You need data structures and I/O too and that’s what Go, Erlang, Clojure, etc. provide. On the other hand, if your app can fit within the limitations of processes, then you can write correct / fast / low level code with just the OS. I hope to make some of that easier with Oil; I think process-level concurrency is under-rated and hard to use, and hence underused. Naive threading results in poor utilization on modern machines, etc.

                                                      tl;dr Straight-line code is good; we should be programming concurrent applications with high-level languages and (mostly) straight-line code. OS threads in C or C++ are useful for systems programming, but not for most apps.
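                                                      The timeouts-and-cancellation point is easiest to see in a high-level runtime. Here is a minimal Python asyncio sketch (slow_fetch is a made-up stand-in for a blocking call, not anything from the discussion):

```python
import asyncio

async def slow_fetch():
    # Stands in for a slow syscall; cancellable at every await.
    await asyncio.sleep(10)
    return "data"

async def main():
    try:
        # Timeout and cancellation compose with any awaitable,
        # rather than being bolted onto each individual syscall.
        return await asyncio.wait_for(slow_fetch(), timeout=0.01)
    except asyncio.TimeoutError:
        return "timed out"

print(asyncio.run(main()))  # timed out
```

                                                      In C you would have to thread a timeout argument (or a signal, or a selectable fd) through every blocking call by hand; here it is one wrapper around any coroutine.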

                                                      1. 1

                                                        Race conditions, data races, and deadlocks are a bit of an overstated problem. In 99% of cases, people are just waiting for IO; protecting shared data structures with locks is trivial and will often take 3+ orders of magnitude less time than the IO. It is a non-issue, honestly.

                                                        Personally, I find the original P() and V() semantics introduced by Dijkstra to be the easiest concurrency idiom to reason about. All these newer/alternative semantics, be they promises, futures, deferreds, callbacks, run-to-completion, async keywords and what have you, feel like a hack compared to that. If you can spawn a new execution flow (for lack of a better name) without blocking your current one, and query it for completion, then you can do it with almost whatever construct you have. Including threads.

                                                        The case for threads is that you can share data, thus saving large amounts of memory.

                                                        In all seriousness, which percentage of people uses concurrency for other purposes than circumventing the need to wait for IO?
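                                                        Dijkstra’s P() and V() map directly onto acquire() and release() in most threading libraries; a minimal Python sketch of spawning a flow without blocking and then waiting on its completion:

```python
import threading

sem = threading.Semaphore(0)  # starts at 0, so P() blocks until someone V()s
result = []

def worker():
    result.append(42)  # the actual work
    sem.release()      # V(): signal completion

t = threading.Thread(target=worker)
t.start()              # spawn a new execution flow without blocking this one
# ... the current flow is free to keep doing other things here ...
sem.acquire()          # P(): block until the worker signals
t.join()
print(result[0])  # 42
```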

                                                      2. 2

                                                        There are a lot of ways to shoot yourself in the foot with something like pthreads. The most common is probably trying to share memory between threads, something as simple as adding a value to the end of a dynamic array fails spectacularly if two threads try to do it at the same time and there’s no synchronization mechanism. The same applies for most of your go-to data structures.
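                                                        The lock-protected version (which is what the sibling comment asks about) is short. A Python sketch; note that in CPython the GIL happens to make a single list.append atomic, so this example uses a compound read-modify-write, which genuinely needs the lock:

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        # Without the lock, two threads could both read the old
        # value and write back the same result, losing an update.
        with lock:
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```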

                                                        1. 4

                                                          Shared memory has more to do with the language and its memory management model than sync/async though. You can have an async runtime scheduled N:M where it’s up to you to manage resource sharing.

                                                          That’s the case if you use, for example, libuv in C with a threadpool for scheduling. Erlang, on the other hand, which does pretty much all of its communication asynchronously, would not have the same issue.

                                                          1. 1

                                                            What’s the problem with adding a semaphore right before adding the value? Is it not how everyone does it? (honest question)

                                                        2. 2

                                                          The example with Postgres is pretty interesting because it made me realize that there’s an entire generation of programmers who got exposed to async I/O before the threaded/synchronous style, via JavaScript!

                                                          Your comment made me realize that! Crazy, but very interesting…

                                                          I wonder if that has any impact on how good these new developers are, or will be, at parallel or synchronous programming.

                                                          The problem is that JavaScript is such a loosey-goosey language that I’m fairly convinced people are writing incorrect async code in it; it just works well enough that the bugs go unnoticed, which might leave them worse off. Maybe I’m just being elitist, but I reviewed some of my own Node code recently and caught several mistakes I had made when modeling a “Notifier” object that had to manage its own state asynchronously. It never caused an issue “in the field,” so I only noticed because I was refactoring while removing a deprecated dependency.

                                                          EDIT: Also, I’m one of those who learned C++ before C (and I don’t claim that I “know” C by any stretch: I understand the differences between the languages in a technical sense, but I can’t write or read idiomatic C in real code bases). But I learned C++ before C++11, so I think that might not be what you are talking about. Learning C++98 probably wasn’t that bewildering compared to today, because we didn’t have smart pointers, or ranges, or variants, etc. The weirdest thing at the time was probably the STL iterators and algorithms stuff. But all of that just felt like an “obvious” abstraction over pointers and for loops.

                                                          1. 2

                                                            Yeah, JS (and Node backend code) has really interesting asynchronous behaviour; when folks start using other languages with better computational concurrency/parallelism, a lot of things that they relied on will no longer be true. Easiest example is the fact that there’s only ever one JS “thread” executing at any given time, so function bodies that would have race conditions don’t (because the function is guaranteed to continue executing before a different one starts).
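                                                            The same guarantee exists in Python’s asyncio, for comparison: only one coroutine runs at a time, and control can only change hands at an await, so code between awaits is effectively atomic. A sketch:

```python
import asyncio

balance = 100

async def withdraw(amount):
    global balance
    # No await between the check and the update, so no other
    # coroutine can interleave here: race-free on one event
    # loop, just like a JS function body.
    if balance >= amount:
        balance -= amount

async def main():
    # Five concurrent withdrawals of 30 from 100: three succeed,
    # two find insufficient funds; the balance never goes negative.
    await asyncio.gather(*(withdraw(30) for _ in range(5)))

asyncio.run(main())
print(balance)  # 10
```

                                                            Move the check and the update to opposite sides of an await and the race comes back, which is the trap awaiting folks who learned concurrency this way.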

                                                        1. 1

                                                          Couldn’t help but chuckle at:

                                                          We might have expected the USA to have a high percentage of Java users, but it also makes a lot of sense that they don’t … it could be that developers there don’t need the power or stability of Java and are using languages that allow them to build and test quickly..

                                                          1. 23

                                                            To be honest I find the whole idea of someone adjudicating the worth of another human’s life work to be morally questionable and even perhaps likely to come from a place of extreme privilege.

                                                            Most of my family spent their entire lives slaving over plastic molding machines or doing even more menial labor like digging ditches.

                                                            They were good people who provided for their family and were generally speaking happy to do so. Were their lives not well spent?

                                                            This topic really resonates with me since my choice of employer has resulted in a TON of negative feedback both from randoms on the internet and people I generally admire and aspire to emulate.

                                                            1. 11

                                                              I don’t think anyone is adjudicating the worth of another person’s life here. Even the least charitable interpretation of the comment this article is responding to would be that Percival could have solved more challenging problems than he has. The worst way of reading it would be that he could have done more valuable work, not that his life has any less worth as a result of his career choice.

                                                              And I think the real point isn’t even that. It’s a question of whether the economic conditions we live under discourage optimal allocation of human capital. While Percival seems to respond with “no”, at least in his particular case, I think it’s still a 100% valid question to ask. While the point that academia has its share of problems and limits on intellectual freedom is fair, it seems almost hard to argue that there aren’t a significant number of promising computer scientists, mathematicians, psychologists, etc. who spend their careers facilitating selling ads or products that the world would probably be better off without. We don’t need to say anything about the worth of these people to say that an economic system that makes the best interests of many highly trained scientists and mathematicians mildly harmful to the public at large has some issues.

                                                              1. 6

                                                                And I think the real point isn’t even that. It’s a question of whether the economic conditions we live under discourage optimal allocation of human capital. While Percival seems to respond with “no”, at least in his particular case, I think it’s still a 100% valid question to ask.

                                                                This is a really excellent point and a hard question I find myself chewing on often in other contexts.

                                                                Capitalism definitely has some really ugly by-products, and I often wonder what society would look like if scarcity wasn’t a thing and the raison d’etre for money eroded entirely.

                                                                I’d LIKE to think it’d be like Iain M. Banks’s Culture novels (my #1 voted future I’d love to live in, BTW) but I don’t think anyone can know.

                                                              2. 5

                                                                To be honest I find the whole idea of someone adjudicating the worth of another human’s life work to be morally questionable and even perhaps likely to come from a place of extreme privilege.

                                                                You criticise judging somebody, while poisoning the well with a judgement at the same time.

                                                                1. 3

                                                                  If this is the “judging judgemental people is also judgemental” paradox, the tie-breaker is easy. Whoever was first to be judgemental deserves to be judged for being judgemental. Fighting fire with fire is good.

                                                                  1. 3

                                                                    You criticise judging somebody, while poisoning the well with a judgement at the same time.

                                                                    A fair point. I suppose that if I zoom out past my own personal hangups around people casting aspersions at my own choices there’s nothing wrong with having the discussion.

                                                                1. 3

                                                                  I’m honestly thinking you’re trying to solve the wrong problems.

                                                                  People who CAN do this will have no problems reading detailed instructions, or run a docker-compose setup or run ansible.

                                                                  People who can’t do this will be angry if something stops working and they understand nothing at all anyway.

                                                                  But maybe I’m just old and grumpy and have experienced too many years of explaining absolutely basics to users who want to install software and not having a clue at all. It was actually a lot easier if all they had to do was upload a bunch of PHP files and input DB credentials.

                                                                  On the other hand I’ve had my fair share of problems with people providing Docker images that won’t fit into my infra at all (imagine simple things like hardcoded paths for webapps so you couldn’t either run them as /foo or not run them without /foo/).

                                                                  1. 2

                                                                    It was actually a lot easier if all they had to do was upload a bunch of PHP files and input DB credentials.

                                                                    On the other hand I’ve had my fair share of problems with people providing Docker images that won’t fit into my infra at all (imagine simple things like hardcoded paths for webapps so you couldn’t either run them as /foo or not run them without /foo/).

                                                                    Yes! In the world of web forums PHP is king and the thing that makes it difficult to say “hey, use my software” is that uploading a zip of PHP files to shared hosting and plugging in DB credentials that the host gives to you is just simple and easy for users. One of the beautiful things about it is that users don’t have to install some deployment tool intended for professionals to make it work.

                                                                  1. 4

                                                                    Provides the same basic programming metaphor as a dedicated server/VPS hosting, e.g. a filesystem, processes that can live beyond the request/response cycle

                                                                    Sounds to me like you’re about to re-invent two-thirds of a 90s OS, only without 30 years of security fixes. I don’t recommend it as something you then suggest other people use in anger. If you were just doing it for fun, sure. But as a solution to make things “easier” on third parties, just go with Docker.

                                                                    1. 2

                                                                      I’m not really proposing re-inventing anything. Docker does provide the primitives I’m looking for here, and I think it’s like 90% of what I want. Docker in and of itself doesn’t really solve the issue of “I have some software I want someone who’s not super technical to be able to provision” though. Actually arranging for a docker image to be run somewhere and orchestrated with other services isn’t trivial. docker-compose and k8s handle this orchestration, but handing config files for those tools to someone looking to self-host isn’t a frictionless experience either; they aren’t really tools meant to be used by non-programmers. I do think docker would be a great foundation to use, since images are most of what I’m looking for, but it seems like there needs to be an (end-user friendly) layer on top of docker if you want a piece of software to be deployable by someone who isn’t experienced with software development and/or ops stuff.

                                                                      1. 4

                                                                        Every cloud provider has varying degrees of DIY marketplaces that make it pretty easy for tech-savvy non-developers to get an application up quickly. Linode has StackScripts, AWS has an actual marketplace, and so on.

                                                                        1. 1

                                                                          There are several such layers, from cloud providers. It’s pretty much the definition of “the value add” and that’s why they charge for it.

                                                                      1. 13

                                                                        I like it but I notice the last commit was 9 months ago while it bills itself as early alpha. Does anyone know if it’s still being developed?

                                                                        1. 11

                                                                          This is from the same developer that brought us cryptocat (circa 2011) and used to argue angrily when security issues were pointed out. He eventually came around on that front IIRC, but as a result of my experience reporting early cryptocat bugs to him, I would treat this more as a source of ideas than as any kind of implementation that should be trusted for any reason. Obviously people grow, and I hope my assessment is a gross under-estimation, but it’s probably a healthy approach to anything unproven, regardless of provenance.

                                                                        1. 1

                                                                          I’ve only really used mssql (and a bit of MySQL). Has anyone else made the jump to postgres? How was the transition?

                                                                          1. 3

                                                                            I’ve gone the other way, from postgres to MySQL and a little bit of MSSQL. I found the transition was trivial. Tuning and permissions management are significantly different but that’s something I do once in a while and will consult the docs on anyway. I don’t have very elaborate demands on my DB, mostly your standard CRUD stuff so within that space there’s basically no perceptible difference between the popular traditional SQL DBs. Every now and then I find there’s some DB specific function or extension that’s different but there’s almost always an equivalent.

                                                                            I believe postgres has a few extras. GIS and text search (like with indexing, tokenizing, and lemmatization) are things I know are in postgres but I don’t think there’s the same support in the others. But I don’t use that functionality a lot anyway and I think a lot of people don’t (or at least don’t try to solve those kinds of problems in the context of a relational DB) so I really don’t feel like there’s a huge difference between postgres/mysql/mssql from the everyman’s perspective.

                                                                          1. 3

                                                                            FWIW I think most Python programmers prefer list comprehensions to map() and filter(). I personally don’t use the latter in my own code (after ~17 years and hundreds of thousands of lines of Python)

                                                                            https://www.oreilly.com/library/view/python-cookbook/0596001673/ch01s11.html

                                                                            1. 2

                                                                              Oddly this has been a minor point of contention in the past. That’s the majority view but there’s a substantial number of people who prefer it the other way around.

                                                                              I think early in py3’s life there was discussion of moving map/filter/reduce to functools, but it didn’t end up happening. There’s a bit of an ongoing narrative of Guido being a little hostile towards FP in Python (LCs over map/filter/reduce, no multi-line lambdas, some of the comments he’s made on these things). I’m not sure how much credence to really give it, but I think it’s worth pointing out that LCs over higher-order functions isn’t universally preferred.

                                                                              1. 2

                                                                                My personal view is that generator comprehensions should be preferred if the alternative is map(lambda x: and that list comprehensions should be preferred if the alternative is list(map(.... But if I’m just doing something like (foo(x) for x in y) it seems cleaner to me to use map(foo, y).

                                                                                I only really write Python code for my own amusement though.

                                                                                1. 1

                                                                                  Less of a problem for scripts, but for long-term maintenance, list comprehensions survive the “small change requires only a small change” metric a bit better.

                                                                                  Imagine you have [foo(x) for x in y] and decide “oh, let’s filter out x = 0.” You then have [foo(x) for x in y if x] (or x != 0, or whatever your condition is). If you started off with map(foo, y), you now need map(foo, filter(lambda x: x != 0, y)).

                                                                                  The unfair advantage is that list comprehensions handle both map and filter, so you don’t have to decide ahead of time. It also helps their case that anonymous functions in Python are limited to lambdas, which kinda stink (relative to if you were writing in Haskell for example)
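                                                                                  The two evolutions side by side (foo and y are placeholders; note that filter takes the predicate first, then the iterable):

```python
y = [0, 1, 2, 3]

def foo(x):
    return x * 10

# Comprehension: adding the filter is a small in-place change.
before = [foo(x) for x in y]
after = [foo(x) for x in y if x != 0]

# map/filter: the same change means nesting another call.
before_mf = list(map(foo, y))
after_mf = list(map(foo, filter(lambda x: x != 0, y)))

print(after == after_mf)  # True
```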

                                                                                  1. 2

                                                                                    I don’t see any problem changing map(foo, y) to (foo(x) for x in y if x) if that’s what I’d like to replace it with.

                                                                              2. 1

                                                                                I personally like comprehensions mainly because I learned how to use them before map/filter and I’ve had a lot more practical use with them.

                                                                                I do see the use cases for map/filter though.

                                                                              1. 6

                                                                                Original paper here. Unfortunately it appears that the naysayers may have had a point.

                                                                                1. 16

                                                                                  Eh, all the paper really says is that wasm enables mining and it’s being used for that. Which isn’t surprising.

                                                                                  The numbers are quite small and presumably the browsers will fight mining scripts.

                                                                                  If you were to do an analysis of JS scripts used, you would also find a bunch of malicious stuff.

                                                                                   I contend that the prevalence of malicious code has more to do with societal/structural issues than technical issues.

                                                                                  1. 2

                                                                                    presumably the browsers will fight mining scripts

                                                                                     Should they? I mean, in the form of a compromised site serving a script that mines for a malicious party, it’s obviously bad. But I’ve always liked the idea of exchanging computing resources for content rather than ads. I don’t know if the economics could work out long term (I kind of suspect not), but especially on a desktop machine I’d much rather lend some fraction of my CPU or GPU to a content provider, yielding value for them directly, than do basically the same thing to look at some obnoxious advertisement that’s eating up resources anyway and lining the pockets of an ad broker that isn’t really adding value.

                                                                                    1. 7

                                                                                      I think browsers will prevent pages from consuming excessive compute resources without users’ opting in. This is already done to some extent in Firefox.

                                                                                       If the tradeoff were exactly “ad-serving JS compute” versus equivalent “mining compute,” then I’m not sure which I’d prefer. All cryptocurrencies I know of are terrible for the environment and I don’t want to be doing any work for that. I’d much rather just pay for things I like. That offers a much better watts-per-dollar proposition (and browsers and payment providers could support this better).

                                                                                1. 1

                                                                                  /prog/ was one of my all time favorite programming communities. Even with the rampant shitposting, it felt like a community from a different era and I was heartbroken when it shut down.

                                                                                  1. 4

                                                                                    How common is this? Sounds really strange to expect something like that from employees.

                                                                                    1. 6

                                                                                       I work in a large organization that employs a lot of programmers. There’s definitely pressure to do professional development outside of work hours. No one has ever said it’s mandatory, but people are encouraged to do some Udemy courses or their ilk and are praised highly and publicly for completing them. No one has ever been fired for not doing it, but depending on how cynical you are, this can come off a lot like “putting in hours outside of work is how you advance”. It’s a little different from what’s described in the article, as my employer actually discourages open source contribution (they issued a, in my opinion, fraudulent copyright complaint against one of my github repos that was subsequently reversed), but the idea that you need to pick up new skills relevant to the company’s work on your own time is definitely there.

                                                                                      This definitely isn’t universal but I’ve heard similar stories from elsewhere fairly often.

                                                                                       You can see why it makes sense: training people at work means giving up productivity, it’s expensive, and it generally doesn’t work very well. If you can actually get people to do it on their own time, that’s a massive benefit you don’t have to pay a penny for.

                                                                                      And I’m someone who enjoys working on hobby projects and using new stuff for them at home, but even I loathe the de-facto policy. It makes something I do for fun feel like rendering unpaid services to my employer.

                                                                                      1. 4

                                                                                        When I worked as a consultant, the only hours that counted as work were the ones I logged at the client. Besides that, meetings were in my own time. We also had some mandatory evenings for information, and some semi-mandatory evenings for learning new technologies (I did attend them at first, but things didn’t really work out between me and that employer for various reasons, and I stopped attending them).

                                                                                        At one client, there were a few eager people who shouted that they put in extra time at home to learn. This can create an atmosphere in which it’s expected that you work some more at home (though I never really experienced it this way).

                                                                                         At other times, some of the management hinted that you should put in more time than what’s in your contract. Sometimes subtly (“you should only log the hours you worked, unless you messed something up and need to repair it”), sometimes more blatantly (at an intake for a potential new client: “tell them that you may not be familiar with all the technologies they use but you will spend the evenings learning them if this happens”).

                                                                                         All together, it’s not that common in my experience. I’ve heard much worse stories in other lines of work. I was more annoyed by managers with manipulative tendencies when I worked as a consultant (it might have to do with them literally getting paid for every hour I work, no matter the quality of my work).

                                                                                      1. 1

                                                                                        Most of this wishlist sounds reasonable (although I’m a little skeptical of some items) but I would be sad to see it end up implemented in Github Issues. I actually really like GH issues precisely because it’s bare bones. As someone who runs projects that tend to have between one and five contributors it works great as an easy todo list that facilitates discussion of issues. As an irregular contributor to a number of projects (i.e. submitting a bugfix or minor feature add to projects I use as my use case necessitates, or just in filing bug reports) triage systems and work estimates and kanban boards are all a big pain in the neck, the only thing I really need to know is “does a ticket for this exist yet”.

                                                                                        I get that these things are useful, but I’d rather let the default GH issues system be as simple as possible and let third parties offer software for more complex use cases.

                                                                                        1. 1

                                                                                          The lead dev asks for money/recognition through social networks? What a bunch of beggars! He needs money? Me too! Does this person have a Patreon? Who cares! This guy owes me to use his software, he loves coding for free, the sucker.

                                                                                           I think it’s great when people help out or donate to OSS projects, but I’ve been getting a little nervous about some of the attitudes towards project funding/sponsorship I’ve seen lately. Corporate sponsorship of projects gives those sponsors a fair amount of power over those projects. Personally I’d rather use software that was produced by someone who’s able to make it without a for-profit company subsidizing development. I think it also makes forking a project more difficult, as you won’t be able to carry that revenue stream with you to the fork, and it’s an extra hurdle to competition with the original software.

                                                                                          I’m not saying it’s wrong to accept money for work on an open source project, just that there’s a risk of harmful commercialization when we push for monetary support of OSS.

                                                                                          1. 8

                                                                                            I’m speaking as an open source author with contributions that were never sponsored, everything being entirely in my free time …

                                                                                            What you want is not sustainable due to these facts:

                                                                                            1. people need to put money on the table, so they need jobs
                                                                                            2. working in their free time cuts into their personal life, which is especially hard when you have a family; and let me tell you, OSS work may be fun, but it is NOT relaxing; in fact when you have users, it’s more stressful than your day job
                                                                                            3. not having a personal life is a recipe for burnout

                                                                                            You may ask yourself, given that I confessed to doing this in my free time, how I cope with it? The answer is that I don’t. And I don’t know for how long I’ll manage to keep it up.

                                                                                            Of course, highly popular projects can afford a fair amount of churn; however, such projects are also sponsored and have long-timers available to train new contributors. For medium-sized projects (e.g. most programming libraries), long-timers are very hard to replace due to having specialized knowledge that’s hard to pass on to the next person. And sometimes the problem domain is a difficult one, therefore the bus factor isn’t great.

                                                                                            The cold, hard truth is that projects eventually die without receiving sponsorship in some form.

                                                                                            So if you want healthy projects, either donate money or contribute. Because otherwise, frankly, the wishes of non-contributors aren’t very relevant.

                                                                                            1. 3

                                                                                              I’m not convinced that people have to choose between a personal life and OSS work. People’s capacity to work on OSS on top of a job varies: maybe instead of working 8 hours a day on OSS projects, some of us only have a couple of hours here and there, and the amount of work you can do in that time will differ from person to person. But personally I’d rather use a piece of software with a slower release cycle that’s produced and maintained by hobbyists than one that has full-time developers but is beholden to a sponsoring organization.

                                                                                              And realistically I’m skeptical that it makes economic sense for anyone involved. If the cost to a company of using OSS is to pay one or more developers of that software anything close to their market value, they’d be better off actually hiring someone to do it, then either keeping it in-house for a competitive advantage, or releasing it under their own copyright/license for the PR. For developers, donations for OSS work are probably the least you’ll ever be paid for your time.

                                                                                              1. 2

                                                                                                Out of curiosity, have you done meaningful open source work? Have you led any projects that have users?

                                                                                                Note that I am not talking of occasional bug fixes or personal code that you end up storing on GitHub and that nobody uses.

                                                                                                I’m asking because I believe that no OSS author with serious contributions would ever claim that working in your free time is doable long-term.

                                                                                                And if you’re new to this, just give it 2-3 more years, we can talk then :-)

                                                                                                1. 3

                                                                                                  I’m not sure what your criteria for “meaningful” is but I have run projects with a userbase in my free time.

                                                                                                  You’ve referred to your experience a couple of times now and questioned mine; that’s fine, but I feel like you haven’t really touched on the main point I raised: that corporate sponsorship of OSS development poses a risk of commercialization that we should be wary of.

                                                                                                  1. 1

                                                                                                    The point wasn’t to highlight my experience or anything, but I also talked to a lot of other contributors and the pattern is the same … burnout. It’s a thing that eventually happens to a majority of OSS contributors that use their free time.

                                                                                                    The other pattern that keeps happening is the self-entitlement complex of users that don’t contribute anything.

                                                                                                    Wariness sounds good in principle; it can’t hurt to be wary, right? But it is highly discouraging and, dare I say, in line with that self-entitlement. Notice how you’re suggesting that people should just sacrifice their free time rather than “sell out”; what’s the problem with that?

                                                                                                    You not being convinced that people have to choose between a personal life and OSS work is absolutely irrelevant, because it is not you who has to choose. Talk to the maintainers of the projects that you’re using that are actively maintained (instead of dying), and ask them what costs they bear. I do that very often.

                                                                                                    The point was, if you haven’t maintained a popular project and so have no idea of the personal costs it can incur, and unless you come up with an alternative plan to make OSS sustainable (e.g. crowdfunding), then your opinion is actively harmful to an ecosystem that already depends on too few people.

                                                                                                    The reality is that most popular OSS projects that people depend on are understaffed, and the bus factor is dangerously low. The danger of authors selling out to big corporations is the last thing you should worry about ;-)

                                                                                                    1. 1

                                                                                                      burnout. It’s a thing that eventually happens to a majority of OSS contributors that use their free time.

                                                                                                      Do you have anything beyond anecdotal evidence to support this claim? Or that corporate sponsorship prevents burnout?

                                                                                                      But it is highly discouraging and dare I say, is in line with the self-entitlement.

                                                                                                      It’s discouraging and self-entitled that I prefer to use software that isn’t funded by for-profit organizations? I’m sorry if that’s how you feel, but I don’t think it’s a very good argument for corporate sponsorship of OSS.

                                                                                                      If I were a developer who only worked on proprietary software, would I be justified in calling out people who prefer to use open source software as being “highly discouraging and self-entitled”?

                                                                                                      Notice how you’re suggesting that people should just sacrifice their free time, instead of selling out, what’s the problem?

                                                                                                      I think that’s a poor representation of my position here. At no point have I said anyone is obligated to work for free, I haven’t shamed anyone for “selling out” as you put it. I’ve simply said that I would prefer to use software that doesn’t require sponsorship from a for profit organization to exist.

                                                                                                      Again, I think the OSS/proprietary analogy works well. I don’t think people who work on closed source commercial software have “sold out”, I don’t think they’re doing something wrong. But I prefer to use open source software because I think there are advantages on both philosophical and practical grounds. Same thing here.

                                                                                                      The point was, if you haven’t maintained a popular project … then your opinion is actively harmful

                                                                                                      I’m sorry but this reads a lot like “you don’t have the same experiences as me so you’re wrong”. I haven’t couched my argument in my experience, so trying to say I don’t have X, Y and Z qualification to speak on the matter doesn’t really do anything for your case here.

                                                                                                      1. 1

                                                                                                        Asking for “evidence” in this is completely ridiculous, it’s not like you’re expecting some sort of study. Would a randomized controlled trial be enough? Let’s not do sealioning.

                                                                                                        As for my argument being about experience, yes, that’s exactly what I’m saying, it’s good that we understand each other.

                                                                                                        Have a good day.

                                                                                                        1. 1

                                                                                                          Asking for “evidence” in this is completely ridiculous, it’s not like you’re expecting some sort of study. Would a randomized controlled trial be enough?

                                                                                                          Some kind of systematic evaluation of data would seem reasonable. An observational analysis would be a fine place to start; I haven’t said anything about controls or randomization.

                                                                                                          Surely you see the issue with leaning entirely on anecdotal evidence, right? If I provide an anecdote about a successful project that didn’t involve corporate support, have I made a compelling case for my position in your mind? If not, how is what you’re doing any different?

                                                                                                          As for my argument being about experience, yes, that’s exactly what I’m saying, it’s good that we understand each other.

                                                                                                          Well, I guess it is good that we understand each other, although I’ve made it clear why I don’t think it’s reasonable for you to ask me or anyone else to simply take your word on the matter. And again, even if you think you’ve conclusively ruled out the possibility of volunteer software development at scale, you don’t seem to have addressed at any point the issues with corporate sponsorship of OSS, which was the central point I was making.

                                                                                          1. 5

                                                                                            Cool visualization. If, like me, you don’t want to dig through a huge image to find your favorite distro, you can use ctrl/cmd+F since it’s an SVG, which I greatly appreciated.

                                                                                            1. 1

                                                                                              More of a question about the commons clause than its application to these modules in particular, but the link says this:

                                                                                              if your product is an application that uses such a module to perform select functions, you can use it freely and there are no restrictions on selling your product

                                                                                              But the language of the clause prohibits selling of:

                                                                                              a product or service whose value derives, entirely or substantially, from the functionality of the Software.

                                                                                              Why does an application that uses Redis as its storage or caching layer not “substantially” derive from the functionality of the software? What does “substantial” mean here? If I write an HTTP wrapper around Redis plus the Redis Labs modules, can I sell that as a hosted service?