1. 3

    Who hates package managers? I’ve certainly had teammates frustrated by package management, or annoyed by implementations (e.g. “npm is haunted”), but I’ve never met anyone who has maligned package managers in general or maligned the concept of package management.

    Are these real people with software engineering experience? I can’t help but assume that anyone who says “I hate package managers” is maybe early in their career and unfamiliar with the problem-space that package managers fit in.

    1. 1

      “Hate” is a bit strong, but I really dislike some package managers because of how complex they are and how complicated they make it to package/maintain software. I don’t dislike all package managers though.

      I assumed after reading this article that the author was talking specifically about application developers who refuse to cooperate with package managers, and instead force users to get their warez through flatpak or something similar.

    1. 3

      I think the core of the refactor/rewrite conversation is how long you’re comfortable working on a branch without getting end-user feedback.

      I’ve only been working as a software engineer for about 10 years, but I’ve always known long-running branches as an anti-pattern that teams try to avoid. Instead, my teams always aimed to merge our code as soon as we have some problem solved — usually multiple times each day.

      (When I say “long-running branches” I mean both unmerged Git branches and code in the default branch that isn’t getting run in production. TODO: Find a better phrase to capture this.)

      On larger projects there are sometimes components that will take days or weeks to complete, so we try to break that work down into smaller pieces that can be worked on individually. Some things are hard to break down, so we make the trade-off to work in a big branch, but this is generally a last resort and a code/culture smell.

      I think of the decision to refactor or rewrite the same way:

      • If the system is small enough then it doesn’t matter because you aren’t working in a long-running branch.
      • Otherwise, try to refactor/rewrite one subsystem at a time in a way that users (and other engineers) can give you feedback. At every step you should have a working system, and if you break something then you find out immediately instead of when you launch (i.e. months later after you’ve forgotten how the broken thing was supposed to work).
      1. 1

        Fant Model… patch it until it utterly collapses under its own weight. I was not fond of this model, but I feel reality has encroached upon what would be a perfectly good time dreaming about how I will make the next iteration more efficient and better.

        That doesn’t mean you shouldn’t design things to be replaced and have well-modeled APIs. Once upon a time I witnessed a complete engine rewrite done piecemeal over a staggering period of time. Well, really, the rewrite never ended once the API gateway was silently proxying between implementations via conditionals. That fate also has potential issues, as calls zig and zag through the API gateway.

        1. 3

          What’s the Fant Model? I tried looking it up but couldn’t find anything relevant.

        1. 12

          Counter-argument: Automatic code formatting tools and rules make fixes easier to backport, not harder. Without consensus on code formatting you’ll end up with trivial syntax differences between branches, which breaks git cherry-pick.

          The trick is to apply any code formatting rule to every supported release. If you’re backporting a patch from Linux 5.13 to 4.4 then you shouldn’t have any issues with syntax because 4.4 should already have the same code style as 5.13. Now the diff between the two branches represents the semantic differences, none of the syntactic litter you’ve picked up between releases. (Most code formatters are non-deterministic, which isn’t ideal, but they’re a huge improvement over the Wild West of inconsistent syntax.)

          Do automated code formatters cause problems with git cherry-pick, or are large ‘rewrite every file’-type changes incompatible with email-based patch workflows? /semi-s

          1. 1

            I like this

          1. 1

            I’m quite curious about how semver snuck into the license name. Is that common? I love it.

            1. 6

              The article starts with a classy appeal to authority.

              At its core, all software is about manipulating data to achieve a certain goal. The goal determines how the data should be structured, and the structure of the data determines what code is necessary.

              Yes, exactly. Therefore, you can first define a logical data structure, and then add methods that operate on said structure. This idea does not conflict with OOP. In fact, prototype-inheritance languages take this a step further and separate data and code, just like you want it; it’s called a “traits object”, and it looks like this:

              "Assume this is traits yourObject."
              (|
                parent*= traits clonable.
                doSomethingWith: foo = (
                  bar: foo baz.
                  "Code to do something with foo goes here."
                ).
                "More code here as necessary."
              |)
              
              "Then, let's have a prototype for this object. Assume it is globals yourObject."
              (|
                parent* = traits yourObject.
                bar <- nil.
                "More constant or mutable slots go here."
              |).
              
              "Now, let's use this object."
              | obj |
              obj: yourObject copy.
              obj doSomethingWith: foo copy.
              "etc. etc."
              

              This “traits object” method allows you to keep code and data in two separate objects. You can even make the parent slot (the one with the star) assignable, so you can use the same object with a different set of methods.
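              For readers more at home in a mainstream language, here is a rough Python analogue of the traits-object idea (all names are invented for illustration): behaviour lives in a separate traits object, the data object holds only slots plus a reassignable parent, and unknown messages are delegated up to the parent.

```python
from types import SimpleNamespace

class Traits:
    """Behaviour only; holds no per-instance data."""
    def do_something_with(self, obj, foo):
        obj.bar = foo.baz  # "Code to do something with foo goes here."

class ShoutingTraits:
    """A different behaviour set for the same data layout."""
    def do_something_with(self, obj, foo):
        obj.bar = foo.baz.upper()

class DataObject:
    """Data only: a `bar` slot and a pointer to its traits parent."""
    def __init__(self, parent):
        self.parent = parent
        self.bar = None

    def __getattr__(self, name):
        # Delegate unknown messages to the parent, binding this object
        # as the receiver (loosely akin to Self's parent* slot lookup).
        method = getattr(self.parent, name)
        return lambda *args: method(self, *args)

obj = DataObject(parent=Traits())
foo = SimpleNamespace(baz="payload")
obj.do_something_with(foo)   # bar is now "payload"

# The parent slot is assignable, so the same data gets new behaviour:
obj.parent = ShoutingTraits()
obj.do_something_with(foo)   # bar is now "PAYLOAD"
```

              This is only a sketch of the delegation mechanism, not a faithful Self implementation, but it shows the code/data split and the swappable parent.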

              In my experience, the biggest problem with OOP is that it encourages ignoring the data model architecture and applying a mindless pattern of storing everything in objects, promising some vague benefits.

              An integer is an object, and so is a struct. Everything can be done wrong if applied poorly.

              Instead of building a good data architecture, the developer attention is moved toward inventing “good” classes, relations between them, taxonomies, inheritance hierarchies and so on. Not only is this a useless effort. It’s actually deeply harmful.

              Not if you use a top-down approach. With such an approach, you first think about what your application should look like, and then implement the layer below that, then the layer below that, and so on and so forth. Application architecture can be done poorly in non-OOP contexts as well.

              FizzBuzz Enterprise Edition

              Yes, it’s a meme that is purposefully verbose. I’m sure you could also spawn an abomination called FizzBuzz Academic Edition that implements it in something like Haskell or Idris and turns it into type spaghetti instead.

              OOP apologists will respond that it’s a matter of developer skill, to keep abstractions in check.

              It’s a human skill to visit the toilet when one has to discharge. Everyone expects it, and there is no excuse other than a medical condition. Similarly, if you can’t keep your architecture in check then this is not the fault of a programming paradigm, but rather your application of it.

              Your class Customer has a reference to class Order and vice versa. class OrderManager holds references to all Orders, and thus indirectly to Customer’s. Everything tends to point to everything else because as time passes, there are more and more places in the code that require referring to a related object.

              I don’t really see how you would improve upon this in a non-OOP setting? You will need to manage those relations somehow. Are you implying that having references to other objects is bad? Also, an orderManager need not hold references to all orders, why would it? It would simply deliver you the order instances. This sounds like a strawman.

              Another appeal to authority.

              Example: when class Player hits() a class Monster, where exactly do we modify data?

              (|
                hit = (|
                  parent* = traits clonable.
                  damage.
                  copyDamage: damage = (| c |
                    c: self copy.
                    c damage: damage.
                    c.
                  ).
                |).
              
                player = (|
                  parent* = traits clonable.
                  "Fields etc."
                  defaultDamage.
              
                  hitsMonster: monster = (
                    monster receivedHit: defaultDamage.
                    monster dead ifTrue: [
                      kills: kills succ.
                    ].
                  ).
                |).
              
                monster = (|
                  parent* = traits clonable.
                  receivedHit = (
                    "Do whatever you need to modify the state."
                  ).
                |).
              
                test = (| p. m. |
                  p: player copy.
                  p defaultDamage: damage copyDamage: 50.
                  m: monsters example.
                  p hitsMonster: m.
                ).
              |).
              

              encapsulation on a granularity of an object or a class often leads to code trying to separate everything from everything else (from itself).

              This can often have good reason. A property might have a very good reason to perform an operation when it’s set. I actually disagree with Python’s or C#‘s properties here, because they hide what’s actually going on. In Self, everything is a message pass; therefore, you can simply provide a message that performs the action you want.
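              A hypothetical Python sketch of that contrast (class names invented): with a property, the call site looks like a plain assignment and the side effect is hidden; with an explicit setter message, the action is visible exactly where it is invoked.

```python
class WithProperty:
    def __init__(self):
        self._volume = 0
        self.events = []

    @property
    def volume(self):
        return self._volume

    @volume.setter
    def volume(self, value):
        self.events.append(("volume-changed", value))  # hidden side effect
        self._volume = value

class WithMessage:
    def __init__(self):
        self.volume = 0
        self.events = []

    def set_volume(self, value):
        # Same side effect, but the call site reads `set_volume(...)`,
        # so it is clear that more than an assignment is happening.
        self.events.append(("volume-changed", value))
        self.volume = value

a = WithProperty()
a.volume = 5      # looks like plain data access; actually runs code
b = WithMessage()
b.set_volume(5)   # explicitly a message send
```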

              In my opinion classes and objects are just too granular, and the right place to focus on the isolation, APIs etc. are “modules”/“components”/“libraries” boundaries.

              That has its own share of problems. “Exporting” from a library is basically the same thing as making a member public. I don’t quite understand the difference between having public and private methods on a “class”, and having public and private functions in a module.

              If program data is stored e.g. in a tabular, data-oriented form, it’s possible to have two or more modules each operating on the same data structure, but in a different way.

              See above for Self’s dynamic polymorphism.
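              As a minimal sketch of the quoted idea (sample data invented): the program state is a plain table of rows, and two independent “modules” (here just functions) each operate on the same structure in their own way.

```python
# A plain, tabular data structure: a list of row dicts.
orders = [
    {"customer": "ada",   "total": 30.0, "shipped": True},
    {"customer": "ada",   "total": 12.5, "shipped": False},
    {"customer": "grace", "total": 99.9, "shipped": False},
]

def billing_total(table, customer):
    """Billing module: sums what one customer owes."""
    return sum(row["total"] for row in table if row["customer"] == customer)

def shipping_backlog(table):
    """Shipping module: counts rows still waiting to ship."""
    return sum(1 for row in table if not row["shipped"])
```

              Here `billing_total(orders, "ada")` yields 42.5 and `shipping_backlog(orders)` yields 2; neither module needs to know the other exists.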

              Combination of data scattered between many small objects, heavy use of indirection and pointers and lack of right data architecture in the first place leads to poor runtime performance. Nuff said.

              Not “nuff said”. Self’s generational GC concept has proven itself quite useful, and forms the basis of the Java HotSpot VM (one of the fastest VMs around, I think most would agree) along with the JIT. Object oriented programming languages do not create non-performant code; bad code creates non-performant code.

              The DataStore approach is just bizarre and seems like it’s implementing a slow, half-broken version of an RDBMS within the application. If you need to go brrrr fast then just use a query builder.

              All in all, 4/10, could use less inspirational quotes from figureheads.

              1. 1

                Aside: What language are you using for your examples? It doesn’t look familiar so I’m having a hard time understanding what it’s meant to convey.

                1. 1

                  I am using the Self programming language, which is a prototype inheritance-based object oriented programming language.

                  1. 1

                    Thanks!

              1. 14

                Meta: I’m shocked at the number of trolls in this thread. Lots of folks are contributing thoughtfully and in good faith, but a bunch of other people just want to stir the pot. I’m resisting the temptation to reply to them, so I’m commenting here to express my frustration.

                Anyone else kinda shocked to see such bleh behavior on such a [usually] kind site?

                1. 11

                  No.

                  1. 1

                    Could you maybe expound a bit?

                1. 7

                  Aside from regular work, I’m working on my code editor: a cross-platform, simple, modal, and opinionated terminal code editor.

                  It’s been my main editor for about 2 months now (just uninstalled neovim :D). It’s coded from scratch in Rust, and today I removed the last non-platform dependency (fuzzy-matcher); now the only dependencies are winapi and libc, which are used in the platform layers.

                  My next milestone is updating the screenshots/gifs on github and some more polish before going 1.0

                  1. 1

                    Well done! Now I need to go digging for a link :D

                      1. 1

                        Thanks! Is it okay to link it here? Anyways, the project’s name is Pepper which is named after my cat :D

                        1. 1

                          It’s probably fine. After all, we are here to share our thoughts on technology!

                          Edit: it looks like a cool project; the opinionatedness is very much in line with my opinions: leave everything to other tools and focus on the editor component.

                          1. 2

                            Thanks and thanks for taking a look! I took lots of inspiration from Kakoune on that

                            1. 2

                              That’s great, I love Kakoune’s take on vim commands. I wish it was supported in popular IDEs (there’s a buggy one for vscode, and that’s it I think).

                    1. 16

                      I think there’s an XY problem here. What’s the purpose of doing this? Is it because writing <a href> </a> is too much overhead to write? Too much overhead to read? Is “links can only be one word” a technical limitation, or a design goal?

                      1. 2

                        It’s for a 2D language. 2D langs work better when words are atomic. Matched delimiters (like parens, brackets, tags) are an anti-pattern in 2D langs. You gain a lot of nifty little features when you know words are atomic.

                        1. 9

                          Can you link to a resource that explains what you mean? I’m not familiar with 2D languages, but it sounds like you’re still using delimiters, they’re just white space instead?

                          1. 1

                            I have no idea how the listed syntax is associated with a 2D language, but https://en.wikipedia.org/wiki/Befunge is an example of a 2 dimensional language.

                      1. 2

                        Cool! I’d love an ortholinear board, but that’s okay. I’m happy to see them making cool stuff.

                        1. 1

                          interesting, are ortholinear keyboards better to type with?

                          1. 3

                            Ostensibly, yes. Ortholinear = less lateral finger movement = good, because lateral finger movement is bad.

                            1. 2

                              I like them a bit more, and I could see how they could be Objectively Better, but I don’t think I’ve seen it seriously studied. My understanding is that staggered keyboards are a byproduct of typewriters that couldn’t have vertically aligned keys, and since we don’t have that constraint then it’s reasonable to design around it.

                              I have an Ergodox that I enjoy a bunch, I can’t recommend it enough.

                          1. 1

                            I wonder which Docker images this was tested on. I seem to remember that the official python images are slower than ubuntu because of some detail around how it’s compiled. No link handy, though.

                              1. 1

                                You got it, thanks.

                              2. 1

                                fedora:33, in order to match the host operating system and remove that as a factor.

                              1. 4

                                I’d be interested in the output of perf record/report here since it would presumably show what’s causing a slowdown.

                                1. 5

                                  Yeah, I’m quite interested in why this is happening and would love to avoid the speculation in the article.

                                1. 1

                                  This is great, and along the same lines I’ve been thinking for the past few months. I started writing an AST editor (https://github.com/christianbundy/tri) but it’s honestly not a quick and easy side project.

                                  I’m looking forward to seeing more developments in this space, I’m convinced that text editors <<< code editors.

                                  1. 7

                                    Closed does not imply locked. I close inactive issues because when someone new looks at my repository I want them to see what’s currently being discussed — not every random bug or feature request everyone has ever considered. (Yes, the real problem is that GitHub’s issue UI sucks at sorting and displaying things that matter. Stale bots are just duct tape.)

                                    In my experience, there are two pain points with stale bots:

                                    1. Inactive maintainers whose stale bot is more active than they are. Meta: I tried to improve Probot’s stale bot and spent months fighting their bot while begging for a review.
                                    2. Participants who can’t separate the “issue closed” and “bug is solved” semantics. For most projects closing an issue is much closer to “I’m not convinced that this is a discussion I’d like to highlight for people browsing the issue list”. Or even “I have limited energy and having 900 open issues with no activity for three years stresses me out”. I think we have this problem because the semantics of open/closed differ and it affects the UI. (Consider how much we’d argue about whether something is a bug/feature if GitHub only showed bugs by default.)
                                    1. 3

                                      closed does not imply locked

                                      Yet every GitHub issue I find via web search for exactly the problem I have has been marked stale and closed to non-collaborators.

                                      1. 1

                                        Do you happen to remember which bot is closing them? The default configurations for the stale bot and the stale GitHub Action don’t automatically lock issues, but it’s possible that everyone is configuring them the same way.

                                        On the other hand, I could still see the “this issue is locked, please create a new one if you want to talk about this” being a justifiable option. (Personally, I would probably configure this for a longer period of time after closing.) There’s certainly a trade-off, as it raises the barrier to entry for necro-bumping, but it also means that each thread has less historical context baggage.

                                        My main point is regarding the semantics of open/closed; in retrospect I probably should have led with that.

                                    1. 4

                                      Thanks for posting this, I always look forward to your updates on Oil. How are you feeling burnout-wise? Are there enough folks working with you that you aren’t spread too thin?

                                      1. 4

                                        Good question, I have been meaning to write a project retrospective but haven’t had time :-P

                                        Right now I feel pretty good about things. I’ve taken a bunch of breaks lately, and after a break I feel pretty productive.


                                        A very recent development which I haven’t blogged about is “headless mode” [1], which is a scheme to separate the shell into a UI process and a language/interpreter process.

                                        This came out of discussions with a collaborator who is writing a shell GUI. If you want a mental picture you can imagine a shell UI that looks like a browser – it has a command line that’s like a URL bar, and then each command is run in its OWN terminal. That’s not the only thing you could build, but it’s one of them.

                                        The slogan is that shell is a language- and text-oriented interface, but there’s no reason it has to be a TERMINAL-based one! (At least not exclusively.)

                                        I’m excited about this for a few reasons:

                                        1. It cuts the scope of Oil proper. The shell UI is written in Go, and others can do the same in Rust, Swift, etc. All you need is a library that supports descriptor passing over Unix sockets, and everything works very nicely. I’m surprised this pattern hasn’t been explored before (AFAIK)! Basically you can create a new terminal or pipe for every command and pass it to the Oil process, which will use it.

                                        2. It enables parallel development and design. I hope that a bunch of different UIs will be made. Given all the activity here (https://github.com/oilshell/oil/wiki/Alternative-Shells) I think that will happen. Some of them like xonsh are wrappers around bash, but headless mode is a way to wrap a shell in a more principled (and faster) way.

                                        3. It gives people a reason to use Oil! It does seem like most people want something friendlier like fish.

                                        4. I think the people who are writing the GUIs will give the Oil language a good workout, because you need some help from Oil to write the GUI (completion, etc.)
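                                        A minimal sketch of the descriptor-passing pattern from point 1, in Python rather than Go, with both “processes” collapsed into one program purely to keep the sketch small (requires Python 3.9+ on Unix for `send_fds`/`recv_fds`):

```python
import os
import socket

# The "UI process" creates a pipe (standing in for a fresh terminal),
# keeps the read end, and passes the write end to the "interpreter
# process" over a Unix socket using SCM_RIGHTS.
ui_sock, interp_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# UI side: make a pipe and send the write end along with a command.
read_fd, write_fd = os.pipe()
socket.send_fds(ui_sock, [b"run: echo hi"], [write_fd])
os.close(write_fd)  # the receiver gets its own duplicate of the fd

# Interpreter side: receive the command and the descriptor, then write
# the command's output directly into the "terminal" it was handed.
command, fds, _, _ = socket.recv_fds(interp_sock, 1024, 1)
with os.fdopen(fds[0], "w") as terminal:
    terminal.write("hi\n")

# UI side: read what the command produced from its dedicated pipe.
with os.fdopen(read_fd) as r:
    output = r.read()
```

                                        In the real design the two sides would be separate processes, and each command could get its own freshly allocated terminal instead of a pipe.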


                                        So things keep improving, which keeps me motivated. There were times when I was banging my head against the garbage collector, and I got frustrated. But it’s still making progress, and now I know a lot more about garbage collectors, which is fun :)

                                        The project is inherently big, I’d divide it into roughly these equal-sized parts:

                                        1. OSH language (running existing scripts)
                                        2. Oil language
                                        3. Making the whole thing faster (mycpp, etc.). This has been going on for just over 2 years, and is probably the thing with the most unknowns/surprises. I took a wrong turn early in the project, but I learned something.
                                        4. Documentation
                                        5. The Interactive Shell (history, completion, etc.)

                                        So I think that pushing #5 outside the project is going to make things a lot more reasonable! That could be a years long effort in itself and I hope a lot of people will build on it. Most people don’t want to write an interpreter for .bashrc, but they want to use it!

                                        Feel free to join oilshell.zulipchat.com and look at #shell-gui for the discussions around it. It’s still early but there are working prototypes!

                                        [1] These links aren’t polished but may give you some background

                                        https://github.com/oilshell/oil/wiki/Headless-Mode

                                        https://github.com/oilshell/oil/issues/738

                                      1. 7

                                        I would also suggest that the existence of stale-bots implies deficits in Github’s Issue model and/or UI.

                                        1. 4

                                          This.

                                          “Open” and “closed” are completely arbitrary semantics that don’t map to whether the issue describes a problem that’s fixed in the default branch.

                                        1. 19

                                          Automatically closing stale issues is a useful signal that the project follows the CADT development model. https://www.jwz.org/doc/cadt.html

                                          1. 12

                                            That seems a bit harsh. People posting random non-issues can be a genuine issue for larger projects. People posting on long-since solved issues is also an issue, which tends to be >95% generic support or outright nonsense, and <5% useful.

                                            I don’t care much for auto-close bots, but I understand why people use them. Managing all of this requires a significant amount of time.

                                            I bet Angular had this exact problem; JavaScript tends to attract a lot of beginners and you’re forever cluttering the bug list with non-bugs unless you’re really diligent about maintaining this, and I can’t blame the maintainers on wanting to focus on actually maintaining the Angular project instead of guiding the endless stream of new users unfamiliar with Angular, JavaScript, etiquette, etc. It’s essentially the “Eternal September” problem.

                                            1. 4

                                              Can’t both of those issues be solved by, well, closing the issues manually?

                                              I’d assume long-since solved issues should be closed because solved. For “junk” issues, generic support and whatnot, is it really better to just let them sit open for a week or two (or however long the bot takes) rather than just manually marking them as “offtopic/support/wtfisgoingonhere” and closing them?

                                              1. 11

                                                I’d assume long-since solved issues should be closed because solved.

                                                Yeah but people will comment on them. With this I meant the “lock bots” that lock issues after being closed for n days which prevents adding new comments.

                                                As for manual closing/locking, sure, but that’s not “free” time-wise, and it can be emotionally draining. I don’t really want to tell people to ask their question somewhere else or that they’re making zero sense, but I also don’t necessarily want to provide mentoring to random newcomers, as I’ve got a life to lead and stuff to do. People can also get angry or even abusive about it, no matter how nicely and gently you phrase it (I’ve had that even with random strangers emailing me out of the blue because they saw me on Lobsters, Stack Overflow, GitHub, or wherever). It’s not super-common, but it sucks when it happens.

                                                A bot just makes all of this easier and avoids the emotional drain. Is it the “chicken way out”? I suppose it is, like I said I don’t use it myself and generally just manually lock old issues and such if they attract a lot of comments, but I also never maintained a project the size of Angular, and I can see the reasons why people would use it.

                                                I think the “emotional cost” of maintaining larger open source projects is often underestimated. Everyone is different and some people struggle with this more than others, but personally I find it hard. I want to be helpful, but that’s just not feasible or realistic beyond a certain scale so there’s some amount of (internal) tension there. It also leads to situations where you feel obligated to do things that you don’t really want to do, and this is how maintainers burn out.

                                                In Bristol there are many homeless people asking you for money; walking to the city centre or Tesco’s (~2km) can easily mean you’ll be asked 3 or 4 times. Sitting out on the harbourside for dinner or a drink will net you about one homeless person every 30 mins or so on average. Before I lived there I never hesitated to give some change if I had any, because these kind of things are a fairly rare event in Eindhoven. But if you’re asked multiple times every day it just becomes unrealistic. I found it difficult, because I don’t want to say “no”, but I also can’t say “yes” all the time. One of the many reasons I was happy to leave that place.

                                            2. 2

                                              I’m curious, which projects do you maintain?

                                              1. 1

                                                I see this is the ops world too. Not just devs.

                                              1. 1

                                                Neat! I’ve signed up for email updates.

                                                How does this compare to Secure Scuttlebutt?

                                                1. 1

                                                  A few questions from someone not close to either the Erlang world or the JS world:

                                                  1. The author uses Node and Javascript pretty interchangeably. Are all JS frameworks basically the same as far as concurrency goes?
                                                  2. If my main erlang process needs to wait for the ps process to finish, what’s the advantage of having a separate process? Isn’t the whole point of multithreading that your main thread can do stuff while waiting for the DB to finish?
                                                  1. 1

                                                    Are all JS frameworks basically the same as far as concurrency goes?

                                                    Pretty much. The only way to have concurrency in JavaScript is to use async. JS by definition is single-threaded with async capabilities.

                                                    If my main erlang process needs to wait for the ps process to finish, what’s the advantage of having a separate process?

                                                    Error handling, mostly. In Erlang, failure of one process does not propagate to others (unless explicitly requested).

                                                    isn’t the whole point of multithreading so that your main thread can do stuff while waiting for the DB to finish?

                                                    Erlang processes weren’t created for “multithreading” but for error isolation; the parallelisation was added later. But returning to your question about “doing other stuff while waiting for the DB to finish”: you still can, just spawn another process (it is super cheap, as Erlang processes are different from OS processes). You will still need some synchronisation at some point: in Erlang each process is synchronous and linear, and relies on messages for communication between processes. So “behind the scenes”, the function Postgrex.query/2 (in Erlang, functions are identified by module name, function name, and arity, so you can have multiple functions with the same name but different arity) is something like this (quasi-simplified, as there is more behind the scenes, for example a connection pool, and most of the code would be hidden behind functions, but at its core it reduces to this):

                                                    def query(pid, query) do
                                                      ref = generate_unique_reference()
                                                    
                                                      # Send message to process identified by `pid` with request to execute `query`
                                                      # `ref` is used to differentiate between different queries. `self/0` returns PID of
                                                      # current process, we need to send it, so the Postgrex process know whom
                                                      # send response to.
                                                      send(pid, {:execute, self(), ref, query})
                                                    
                                                      # Wait for response from the Postgrex process
                                                      receive do
                                                        {:success, ^ref, result} -> {:ok, result}
                                                        {:failure, ^ref, reason} -> {:error, reason}
                                                      after
                                                        5000 -> exit(:timeout) # if we get no response within 5s, exit with a timeout
                                                      end
                                                    end
                                                    

                                                    So in theory you could do other things while waiting for the response from the DB. This is, for example, how sockets are implemented in gen_tcp and gen_udp: they just send messages to the owning process, and the owning process can do other things in the meantime. In the Postgrex case, purely for the developer’s convenience, all that message passing back and forth is hidden behind a utility function, but in principle you can do it in a fully asynchronous way. This is almost literally how gen_server (short for “generic server”) works.
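                                                    To make that concrete, here is a sketch of splitting the helper above into a request half and an await half (the names async_query/await_query are made up for illustration, not the real Postgrex API), so the caller can do other work between the two:

                                                    def async_query(pid, query) do
                                                      # `make_ref/0` gives a unique reference to match the reply later
                                                      ref = make_ref()
                                                    
                                                      # Fire off the request without blocking
                                                      send(pid, {:execute, self(), ref, query})
                                                      ref
                                                    end
                                                    
                                                    def await_query(ref, timeout \\ 5000) do
                                                      receive do
                                                        {:success, ^ref, result} -> {:ok, result}
                                                        {:failure, ^ref, reason} -> {:error, reason}
                                                      after
                                                        timeout -> {:error, :timeout}
                                                      end
                                                    end
                                                    
                                                    # The caller is free to work between the two calls:
                                                    #   ref = async_query(db, "SELECT ...")
                                                    #   do_other_work()
                                                    #   result = await_query(ref)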

                                                    1. 1

                                                      So you’d spawn a new process that makes a DB call, then to handle the result that new process spawns other processes? And in the meantime the original process can keep going? What if you need the result in the original process? Or do you just not design your code that way?

                                                      1. 1

                                                        So you’d spawn a new process that makes a DB call, then to handle the result that new process spawns other processes?

                                                        No and no. You are always working within the scope of some Erlang process, and Postgrex starts a pool of connections when you call Postgrex.start_link/1. So you do not need to create any process during a request; all the processes are already there by the time you make it.

                                                        And in the meantime the original process can keep going?

                                                        You can, but there is rarely any other work for that particular process to do. When writing a network application over TCP, each connection is usually handled by a different Erlang process (in Cowboy’s case, each request is a different Erlang process). So in most cases you just block the current process, as there is no meaningful work for it anyway. That does not interfere with other processes, because from the programmer’s viewpoint Erlang uses a preemptive scheduler (internally it is cooperative).
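                                                        A sketch of that process-per-connection pattern with :gen_tcp (assuming a passive-mode listen socket; setup and error handling omitted, and handle/1 is just an echo loop for illustration):

                                                        def accept_loop(listen_socket) do
                                                          {:ok, socket} = :gen_tcp.accept(listen_socket)
                                                        
                                                          # Hand each connection its own process; blocking inside it is fine
                                                          pid = spawn(fn -> handle(socket) end)
                                                          :ok = :gen_tcp.controlling_process(socket, pid)
                                                          accept_loop(listen_socket)
                                                        end
                                                        
                                                        defp handle(socket) do
                                                          # This blocks only this connection's process, not the rest of the system
                                                          case :gen_tcp.recv(socket, 0) do
                                                            {:ok, data} ->
                                                              :gen_tcp.send(socket, data)
                                                              handle(socket)
                                                            {:error, :closed} ->
                                                              :ok
                                                          end
                                                        end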

                                                        What if you need the result in the original process?

                                                        My example above delivers the result to the original process; it doesn’t spawn any new processes. It is all handled by message passing.

                                                    2. 1
                                                      1. Yep, Node is just a wrapper around V8 (the JavaScript engine from Chromium) plus an API that provides access to system resources (filesystem, networking, OpenSSL crypto, etc.). The concurrency being discussed is common across all JavaScript runtimes and isn’t specific to any of the APIs that Node provides. I don’t know why the author says “Node” in the article; they’re talking about JavaScript.
                                                      2. No idea, I haven’t fully escaped JavaScript yet.
                                                    1. 15

                                                      (Comment mirrored from one I made under the post itself)

                                                      Whether or not the server-side code is visible isn’t very important. There are two reasons why:

                                                      1. (Anti-Signal) Signal is a closed platform; users can’t self-host a Signal server and expect to be able to talk to other Signal users. Users must accept whatever code the server runs, no modifications. This is an example of the difference between “free software” and “open-source”; this type of SaaS is open-source but not necessarily free.

                                                      2. (Pro-Signal) All three Signal apps (at the time of writing this comment) use E2EE with minimal metadata leakage. The server is unaware of the contents of the messages and cannot connect a sender to a recipient. As long as the apps don’t get an update that changes this situation, users don’t need to trust a Signal server to protect their privacy.

                                                      I wrote about the first reason in a bit more detail in a blog post. The follow-up article was posted here a bit over a week ago.

                                                      1. 5

                                                        Very good points. A lot of open-source software can hardly be described as free, in any greater sense of the word. For example, is the Firefox user really free? In practice, is he not just as subject to the will of Mozilla as a Chrome user is to the will of Google?

                                                        1. 3

                                                          As long as the apps don’t get an update that changes this situation

                                                          How would users verify this, exactly? Refuse all updates? Inspect on-wire behavior of new versions somehow (seems dicey)? Is there some easier way?

                                                          1. 2

                                                            Summarizing from the follow-up:

                                                            The easier way is to use an open platform/protocol and let users choose from many clients and server {implementations, providers}. These will all have to remain compatible with each other, ensuring some degree of stability. Simplicity of protocols and implementations can reduce the need to constantly push and keep up with updates. If an app gets a “bad” update, users can switch instead of being forced to accept it.

                                                            Having to get many implementations to agree on a compatible protocol slows down disruptive featuritis and discourages the “move fast and break things” mentality pervasive in Silicon Valley. Rather than constantly piling on new feature additions/removals, developers will be incentivised to prioritize stability, security, and bugfixes. Those updates are easier to keep track of.

                                                            1. 3

                                                              I just spent 20 minutes trying to make sense of your comment and I have trouble connecting the dots. What are these 3 apps that Signal has? Are they all by Whisper Systems? I didn’t see any others mentioned on Wikipedia. If so, can’t WhisperSystems coordinate releases to provide the illusion of stability while guarantees erode under the hood? I don’t understand why arguments about an open protocol with many clients and servers matter when they don’t apply to Signal.

                                                              1. 1

                                                                Perhaps Android, iOS, and Desktop (Web)? Not the OP, just speculating based on how I read the comments.

                                                                1. 2

                                                                  Yeah, that’s my assumption as well. In which case you can’t really think of the clients as independent. And so there’s no way to justify a closed-source end-to-end encrypted messaging app. There’s just no way to provide the desired guarantees without access to the sources, and without making it possible to verify that the sources correspond to what’s running on the server.

                                                          2. 1

                                                            This is an example of the difference between “free software” and “open-source”; this type of SaaS is open-source but not necessarily free.

                                                            I’m very familiar with both the OSD and FSD and am thoroughly confused by this comment. Can you explain what you mean, or what difference you’re pointing out?