1. 3

    I haven’t used this feature of Postgres in production before; I’d be curious what use-cases other people have found for it (ones that are actively used)?

    From reading the post and looking online, I gather that it’s possible for messages to be lost? I.e. if a client isn’t connected and listening when a NOTIFY fires, the message is simply dropped?

    Interesting. Thanks for sharing.

    1. 3

      I’ve used it in various situations to hook Postgres into other systems, for example to avoid repeatedly polling for new rows. One can even combine it with triggers to notify only under certain conditions, and even mention what has happened, for example that “row with ID X has been updated”.

      So a piece of software would start, check the database and do its thing, and then would only continue doing its thing when a notify comes in; otherwise it can idle instead of querying all the time.
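      A minimal sketch of that trigger-plus-notify setup in SQL (table, channel, and column names here are made up for illustration; `EXECUTE FUNCTION` needs Postgres 11+, older versions use `EXECUTE PROCEDURE`):

```sql
-- Fire a NOTIFY whenever a new pending job is inserted.
CREATE OR REPLACE FUNCTION notify_new_job() RETURNS trigger AS $$
BEGIN
  -- Notify only under a certain condition, and say what happened.
  IF NEW.status = 'pending' THEN
    PERFORM pg_notify('jobs', 'row with ID ' || NEW.id || ' has been inserted');
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER jobs_notify
AFTER INSERT ON jobs
FOR EACH ROW EXECUTE FUNCTION notify_new_job();

-- A worker subscribes with:
LISTEN jobs;
```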

      1. 2

        We use it for webhooks. We interact with a third-party which may not have the data right away, and will instead call a webhook. We save the data to cache it for later, and as part of that we trigger a notify that allows the original caller that was requesting the data to await the update to that row.

        1. 2

          I used it about a decade ago before things like RabbitMQ were in vogue. We had a bunch of workers that sat and listened for notifications. The web interface would insert rows into the work queue, the workers got notified, and then updated the work rows with a success/fail. Worked awesome and meant that the crusty old PHP web app didn’t need any new dependencies added to it.

          1. 1

            Same as @tonyarkles noted below: I use them for queues, which saves adding a whole new system.

            In addition I used them to monitor balances of users. If the balance gets under a certain threshold a trigger fires the notify, and some service picks that up.

            In both cases the payload is empty. The receiver of the notify will figure out what needs to happen, and the receivers are also woken up every once in a while with a timer, so it doesn’t matter much if a notify gets lost.

            Works very well, super easy to test, no dependencies to maintain.

            1. 1

              It’s super useful for refreshing configuration between multiple systems.

              We at https://hanami.run have a mail server and a web app. Users use the web app to configure their email-forwarding rules; the mail server listens to Postgres to refresh its configuration.

              Losing messages is totally fine in our case, because upon restarting the mail server queries Postgres and gets fresh data anyway.

              As in, listen/notify is there to notify about changes so something gets done. If a client isn’t connected, then upon reconnecting it knows what it has to do (reload the whole configuration).

            1. 3

              Something to keep in mind is that the NOTIFY payload is limited to 8000 bytes. However, in many scenarios this can easily be worked around by sending only event types and IDs. A listener then gets informed with metadata and queries the actual data based on it.
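              For example, instead of pushing row contents through the payload, you can send just event type and ID (channel and table names here are hypothetical):

```sql
-- Payload stays tiny and well under the ~8000-byte limit.
SELECT pg_notify('changes', json_build_object('event', 'update', 'id', 42)::text);

-- The listener then queries the actual data itself, e.g.:
-- SELECT * FROM orders WHERE id = 42;
```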

              1. 2

                That’s very good to know actually. I was not aware of that limitation.

              1. 34

                Theo will be disappointed to know that we also haven’t realized yet the mission of the Zig software foundation. All I can say is that we’re getting there, in part also thanks to the fact that we try real hard to avoid gatekeeping systems programming.

                The website was recently redesigned by me (blog post, previous discussion) to make the front page more accessible to newcomers with different backgrounds, most of whom (gasp) have no idea what privsep is.

                More generally, in my opinion this is a low-quality rant from somebody who hasn’t even bothered to open the website and/or do any research on Zig (in this talk Andrew does a great job of clarifying all the words in the motto, including two that Theo missed: general-purpose and maintaining).

                And more importantly the superficiality of the rant betrays what seems to me the real crux of the issue: Theo is dissatisfied that modern communication doesn’t try to appeal to him anymore.

                Perhaps these authors don’t understand that those of us attempting to use modern techniques like privsep in C (proper privsep is exceedingly rare outside of the C universe because memory-safe makes privsep irrelevant /sarc) remain uninspired by such lead-ins.

                Perhaps these authors do, and perhaps the center of the universe doesn’t sit in the dead center of the privsep illuminati.

                1. 10

                  hasn’t even bothered to open the website and/or do any research on Zig

                  Theo’s point is centered on how Zig was presented, not on the merits of Zig. “I doubt you wrote the following sentence: Zig is a general-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.” This is true as this is the motto from Zig’s website.

                  It’s good to have a concise description of what a project tries to do, since it tends to keep projects on track by providing a clear vision. Leading with it here unfortunately sounds a little like the kid in “A Christmas Story” who has memorized and repeatedly recites the marketing tagline for the toy he wants. This is sad, because it looks like Sebastien put in the effort on the technical side. I’m sure the response would be different if Sebastien had instead led with something like, “Projects X, Y and Z use Zig, and I think we can benefit from having it in ports.”

                  1. 4

                    Thank you, your post helped me understand how this discussion is IMO taken out of context: Theo was focusing mainly on the previous email’s curt way of arguing for the inclusion of Zig in OpenBSD. Honestly, if I were in Theo’s situation, I can easily imagine being similarly triggered by someone pushing for adding a language to my set of maintained packages in the way it was done here. If it had at least said “aims to be” instead of “is”, it would already be a very different message to me.

                    The same sentence on Zig’s website IMO makes sense as an aspirational goal; and given that a website is kind of a marketing site by necessity, I am (if unfortunately) already used to taking it with a huge grain of salt. But the same sentence sounds completely different in the context of that email. As such, I personally see this thread as unnecessary drama based on an email taken out of context. I strongly dislike that this happened; I believe it’s doing everyone a disservice.

                    1. 3

                      For what it’s worth, I agree with you regarding unneeded sh*t-stirring, and I flagged this submission as off-topic.

                  2. 9

                    Zig is a general-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.

                    He still has a point, this tagline is rather vapid. It smells like corporate bullshit.

                    Zig is a better C

                    That catches the attention and explains the value-proposition much better. Then people are going to argue whenever Zig is better than C, but that’s for another day :-p

                    1. 3

                      Perhaps these authors do, and perhaps the center of the universe doesn’t sit in the dead center of the privsep illuminati.

                      Hot take: I think OpenBSD is overly lauded for mitigations that are just plugging holes in the boat. Where’s the usage of, e.g., formal verification? That’s something that can be done with their existing C (knowing tdr’s opinion on memory-safe languages, i.e. he won’t use them), and it has been done with existing OSes; drivers on Windows can be formally verified.

                      1. 2

                        First off: I really like the concept of Zig.

                        Theo will be disappointed to know that we also haven’t realized yet…

                        But then it’s a goal and not a claim, so saying “Zig is…” feels odd. I mean, that’s the part of Zig Theo de Raadt commented on.

                        The rest was general dislike of that form of advocacy, not even directly targeted at Zig (he mentioned Rust, after all).

                        And to be honest, selling a theoretical (or maybe currently impossible) optimum as a claim about current functionality does projects and IT in general a disservice, I think. Sure, a project should have goals; I very much agree. But there are so many projects out there now that basically have nothing but a marketing website and maybe a skeleton. I really do find it rather annoying that project descriptions tend to have nothing to do with the actual state.

                        And yes, with Zig it might be a bit clearer that it’s a motto/goal, and I’d assume there’s also some hesitation about it being the package description. The current Wikipedia description is a lot better: “Zig is a general-purpose programming language designed for robustness, optimality, and maintainability”. The first sentence of the actual article is a lot better too: “Zig is an imperative, general-purpose, statically typed, compiled system programming language”. No claims that aren’t backed by any projects yet.

                        Yes, this is about wording, because it’s about marketing. There were no claims made about the project itself from what I read.

                        1. 1

                          The OpenBSD ecosystem benefits from having Zig on the platform, and from having Zig treat OpenBSD as a target platform.

                          In my mind there is no question about it.

                          Zig allows programmers (including low-level, real-time, or near-real-time programmers) to make the OS + hardware platform choice independently of the language.

                          Maybe I am a bit of an optimist when it comes to believing in efforts such as Zig or BetterC.

                          To me they represent definite progress in several (but not all) dimensions moving software engineering forward, and I admire the technical skill, organizational skills and perseverance of the leaders in these efforts.

                          I think Theo’s comment is rooted in the well discussed category of skepticism summarized by:


                          “… Extraordinary claims require extraordinary evidence” (a.k.a. the Sagan standard) was a phrase made popular by Carl Sagan. Its roots are much older, however, with the French mathematician Pierre-Simon Laplace stating that: “…the weight of evidence for an extraordinary claim must be proportioned to its strangeness.”[note 1] …

                          I personally value others’ diversity of opinion and personal biases – very often consciously, because they are different from mine.

                          These biases are the ‘sauce’ that makes a person’s character so valuable – their skepticism, their determination, their willingness to question claims and the status quo – in the domain where they have expertise.

                          It is the Balance, created by Skepticism and Optimism – that has higher value than either one.

                          As for the messaging that package maintainers put into the package description for OpenBSD (or any other OS): I think it should be what the upstream (the software authors) have proposed, rather than Theo’s opinion :-). Just like on lobste.rs – when we submit a link to a story, we should not be offering our own interpretation. I actually think that Theo would be totally OK with that.

                          Once in a while the sceptic in Theo shows up – so we should take it in the context of the Balance :-).

                          [1] https://rationalwiki.org/wiki/Extraordinary_claims_require_extraordinary_evidence

                        1. 9

                          Zig is a general-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.

                          I am sure both csh and php aspired to do the same.

                          Bashing languages for having similar goals to languages the author has a distaste for is pretty weak. The only word Zig and PHP’s slogans share is “general-purpose” (don’t all languages aspire to that?), and PHP markets itself as pragmatic, completely contrary to Zig’s perfectionism. I wonder who said csh was for building robust or optimal software.

                          This feels like Theo just came up with 2 successful languages he didn’t like without thought to their similarities to Zig.

                          1. 4

                            “general-purpose” (don’t all languages aspire to that?)

                            Some more than others. In the case of Zig, general-purpose means being able to make Zig work for the vast majority of platforms out there, including esoteric and embedded ones, where having a runtime and assuming you have a heap at your disposal might be a problem, hence why Zig has no runtime and explicit allocation, among other things.

                            1. 5

                              Sure it’s a low-effort poke, but in general he’s not wrong.

                              I have nothing against Zig, but afaik “maintaining robust, optimal, and reusable software” is pretty much unproven, because it’s simply too young for that, especially to someone maintaining a 20+ year old C code base.

                              1. 2

                                Bashing languages for having similar goals to languages the author has a distaste for is pretty weak

                                Does he, though? To me it sounds like he’s just making the point that the claim should come with examples. If you say “Zig is…”, that does not sound like a goal but a claim, and if it’s not backed by anything, it’s propaganda.

                              1. 18

                                it might be useful to tap into the wider Kubernetes ecosystem, e.g. operators - if you want to run PostgreSQL, Redis, Cassandra, ElasticSearch, Kafka with limited human resources, it might be easier to do so via Kubernetes Operators (whether or not such operational complexity, even abstracted, is worth it with a limited team, is an entirely different discussion)

                                 From personal experience: if you think this is why you want to use Kubernetes, think again. You have to deal with issues in the actual software you want to run, should they arise, and with problems that Kubernetes might throw at you. And now you add a whole new thing that touches both and is its own beast. The only ones really able to deal with that are people with deep knowledge of all three.

                                 Also, the idea of operators very much feels like workarounds for workarounds. Building an abstraction for an abstraction that abstracts the management of that abstraction hardly feels like good design, even if we grant that they are better abstractions.

                                 What I’ve seen at multiple companies now is that eventually one ends up with a sort of operator stack that is one big customized setup for that one specific company. Speaking of snowflakes and pets…

                                In other words, you should be very sure about this being the right approach if you build your production services on top of this.

                                Not to say Nomad is without flaws, but it’s easier to decide what you want or need.

                                 And with it being simpler but having similar concepts, even if you end up switching over to Kubernetes, the “lost” work will in most situations be less than the other way round.

                                 This is all subjective, personal experience, and of course situations change. Both projects are developing rather quickly, so the things mentioned might change as well.

                                 In short: don’t choose Kubernetes just because there are operators.

                                The Kubernetes ecosystem is massive. There are entire companies, tools and whole niches being built around it ( ArgoCD, Rook, Istio, etc. etc. etc.). In some cases tools exist only because Kubernetes is itself so complex - Helm, Kustomize, there are a bunch of web UIs and IDEs ( Octant, Kubevious, Lens, etc.), specialised tooling to get an overview into the state and security of your Kubernetes cluster ( Sonobuoy, kube-hunter, kube-bench, armosec, pixie). Furthermore, there are literally hundreds of operators that allow abstracting the running of complex software within Kubernetes.

                                 While this is true, I really wonder whether I am the only one who thinks that a lot of these are simply not great pieces of software. I don’t mean to pick on them, and having used some I really appreciate the effort, but for the sake of honesty: a lot of these are not nice to use productively and have very annoying rough edges.

                                 I don’t want to get into individual ones, but to give some examples: with operators you might have silent errors, which can be very creepy, especially when the configuration has minor variations compared to the software, or automatism that fights you. UIs and IDEs have the typical “smaller project” issues: logs that are hard to search, interfaces that are hard to adapt, stale information, bad or confusing naming, etc. It’s the kind of thing you get when an IDE first offers support for something new, like with Git a decade or so ago. When things are not polished they can at times be worse than not using them at all, and I switched back and forth a lot.

                                 Nomad and Consul have come a long way there over the last year as well. Their web interfaces used to be like that, but now they are starting to be quite nice to use. Also certainly not perfect, but they have actually made certain third-party tools obsolete.

                                In the end you still should know how to do stuff on the command line, no matter what you choose. It will come in handy.

                                1. 2

                                  IIUC, the promise of operators is that with just a simple API call, I can have, say, a database cluster that then maintains itself, replicates itself, backs itself up, recovers itself on a new node if something happens to the old master, etc. I already have that with AWS managed services like RDS. If a disaster happens while I’m asleep or on a plane (though the latter doesn’t happen much these days), I can be confident that the service will recover itself. Yet I doubt there are sysadmins at Amazon babysitting my specific AWS instance. That’s why it seems plausible, at least to me with my lack of expertise in this area, that a Kubernetes operator should be able to do the same thing.

                                  1. 7

                                    Yes. The difference is that if something ends up not working (which I guess is the reason there is DevOps, SREs, etc.) with Amazon you call support, whereas with operators you hopefully have enough overview of the insides of the operator.

                                    You also might end up fighting some automatism. So you should still really know what you are doing and don’t assume it will just do everything for you.

                                      Or coming from a different angle: if everything worked as intended, none of that would be required. So I always wonder what happens when stuff breaks; the operator is another thing that can break, and some of the bigger ones are pretty complex and have their own bugs. And since the initial thing you start from is a “disaster” you want to recover from, defaulting to the assumption that everything will go fine from then on might not be the best approach.

                                      Of course there are different operators out there. This is not to say you cannot have a simple operator for a piece of software that will make your life easier. There are, however, also giant ones, and if you just install one and something stops working while you rely on it, it might make your life a lot harder and outages a lot bigger. What I mean is: you should know what it implies to download some operator with all these nice features, run by some big corporation that had a whole team build it for some integral piece of software. If they have a problem in the operator, they will surely have someone capable of fixing it. The question is whether your team can do much more than filing a bug report and hoping it’s fixed soon. Don’t start building an understanding of it when the disaster is already happening.

                                    1. 5

                                      Strongly second this. At work we use an in-house operator to maintain thousands of database clusters - but the operator is like a force multiplier or a bag of safe automations. It allows a small team to focus on the outlier cases, while the operator deals with the known hiccups.

                                        The whole thing relies on the team understanding k8s, the databases, and the operator. The second feature we built was a way to tell the operator to leave a cluster alone so a human could un-wedge it.

                                      1. 4

                                        In my very limited experience with operators, they tend to be big old state machines that are hard to debug. It takes a very long time to get them working reliably and handle all the corner-cases.

                                      2. 4

                                        Beware of wishful thinking. It’s plausible, but in reality we’re not there yet. Operators might help, but they’ll also have crazy bugs where they amplify problems, because some natural barrier has been erased and is now “simply an API call”. One classical example is such operators breaking pod collocation constraints, because it’s so easy to mess up and miss the problem until an incident happens.

                                        Sysadmins at Amazon are not babysitting your specific AWS instance, but how did they build something super reliable out of RDS? I can only imagine, but I think they started with the widest possible range of known failures, so they could put failover mechanisms in place to fight them. Then they added monitoring on the health of each Postgres instance, on the service provided, and on the failover mechanisms themselves. Then they sat and waited for things to go red, fixed them, and refined this for years, at scale. The scale helps here because it shows problems faster. I’m 100% sure most Kubernetes operators are not built that way.

                                      3. 2

                                        I heartily agree with you about abstractions-on-abstractions. Most Operators are just ways to combine several Kubernetes native components into a single, proprietary package. It’s like a Helm chart, but different, so that only the developers of the Operator really know what’s going on. In my experience, using those kinds of Operators is more or less a waste of time, since you either have to learn the Operator and all of its constructs, or you could just learn the Kubernetes constructs and how they operate with one another. I am firmly in the latter camp; I am also in the camp that self-hosts and does not use Kubernetes at home, because it’s not a good tool.

                                        That being said, there is one place I can point to and give a two-thumbs-up recommendation for an Operator: where the Operator actually provides new functionality in the Kubernetes API, and not just an alternative abstraction over a Helm chart. The Operator in question is cert-manager. It provides functionality that Kubernetes does not provide natively and that cannot reasonably be shoehorned into whatever it already provides. The new constructs map readily to a usable pattern that is easy to grok.
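                                        As a sketch, requesting a certificate through cert-manager looks roughly like this (the names and the issuer here are placeholders; check the cert-manager docs for the current API version):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com-tls
  namespace: default
spec:
  # Where the issued certificate/key pair will be stored.
  secretName: example-com-tls
  dnsNames:
    - example.com
  issuerRef:
    # A ClusterIssuer you have configured separately.
    name: letsencrypt-prod
    kind: ClusterIssuer
```

                                        This is exactly the kind of construct that maps to a usable pattern: declare the certificate, and the operator handles issuance and renewal.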

                                        On the other hand, there is the RabbitMQ operator, which just takes all of the functionality of a Helm chart and hides it in things that can’t be viewed without a lot of kubectl magic… There is a place for everything and everything in its place. Use cert-manager; avoid all other Operators unless there is a firm understanding of the additional abstraction layer they necessitate.

                                      1. 5

                                        Despite a marked increase of signups on XMPP and Matrix servers, of these three it was Signal that won by far the largest share of new users.

                                        If I am not wrong, I think that Telegram has gained even more users than Signal over the last few months.

                                        Also, I am not sure if calling Signal a “product” is right, as they are not selling anything.

                                        1. 17

                                           Products can be free; services are products. Signal maintains some exclusivity and lock-in because not all of its software is free as in beer or free as in freedom. This closedness doesn’t necessarily make it bad, but it gives competing products that do have these qualities an advantage with those who value them.

                                          1. 2

                                            Signal maintains some exclusivity and lock-in because not all of its software is free as in beer or free as in freedom.

                                            May I ask what you are referring to here?

                                            1. 6

                                              It was my understanding some parts of the back-end of Signal are not publicly released yet, but maybe I’m behind on what’s been released. I see now that there’s Signal Server and the support docs seem not to indicate that anything is non-public anymore. So, I’m probably wrong and I’m pretty happy to be wrong in this case!

                                               There’s still network-effect lock-in with hosting the canonical instance, though. If someone starts their own Signal instance, I’m not sure how they’d market it, and there’s almost certainly no federation built in to allow Signal™ and the alternatives to intercommunicate.

                                              1. 11

                                                As far as I can see, the Signal Server repo hasn’t been updated for almost a year now. Development has clearly been ongoing but they’re apparently not publishing the source code any more. Not sure if there was any kind of announcement about this.

                                              2. 2

                                                I’m not “you”, but…

                                                1. Signal relies very much on the server that is reachable by a particular domain name. Whoever has control of the server that answers requests to that domain name has considerable control.
                                                2. The signal clients are open source (in name as well as in practice) but if you want to communicate with someone whose client needs to be using a patch you’ve written, your patch has to be merged into a branch controlled by Moxie, because that branch is what’s deployed to almost all of your correspondents.

                                                Both of these are similar to XMPP, except different. If you want to make a change and want it deployed, you can’t on Signal, and also not on XMPP.

                                                Most XMPP servers and clients are maintained by people you can’t persuade and most of the deployed clients and servers are updated by people you can’t persuade. But for Signal, one team, even one person, can, and that person isn’t you, so it’s very concentrated compared to XMPP, but similar in that you call precisely zero shots.

                                              3. 1

                                                 My point was that I think “service” or “platform” would be a better word for what Signal is offering.

                                              4. 8

                                                Also, I am not sure if calling Signal a “product” is right, as they are not selling anything.

                                                I understand the commercial connotations of the term “product”. Still, I’m mostly using it here in the broader sense of a package, a thing that is produced. The term fits what I’m trying to convey in practically every other way, so I think it’s a good choice overall. Not all products need to be sold and paid for.

                                              1. 1

                                                 I fail to see the use of Go for non-enterprise projects. It doesn’t seem to offer anything interesting compared to languages I already know (like C), apart from goroutines. I would love to be proved wrong, though.

                                                1. 11

                                                   I like using Go for (personal) server applications, because it has a comprehensive, well-designed standard library and generates static binaries that can easily be copied around. With the new embed directive, this should become even easier. I think Go doesn’t want to be interesting; it wants to be convenient, and that is certainly something it manages.

                                                  1. 4

                                                     I love using Go for personal projects because it costs nothing to host. I am talking about resource consumption: it fits on many “free plan” VMs, where Java would not.

                                                  2. 1

                                                    Personally, I just find it easy to write and understand and plus I enjoy the syntax.
                                                    When most people have a rough idea of something they want to make they do up a prototype in Python, but I usually do the same in Go just because I enjoy writing it.

                                                    1. 1

                                                      Compared to C for private and non enterprise projects Go offers:

                                                      • No requirement for header files
                                                      • Sometimes less verbosity
                                                      • Nicer Errors
                                                      • Faster build times

                                                      While Go has good tooling, C might still have more.
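                                                      To illustrate the “nicer errors” point, here is a minimal sketch of Go’s multi-value error returns (the function and its limits are made up for the example):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort shows the idiomatic Go pattern: the error travels with the
// result instead of living in errno or a sentinel value, as in C.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, fmt.Errorf("port %d out of range", p)
	}
	return p, nil
}

func main() {
	for _, s := range []string{"8080", "notaport"} {
		if p, err := parsePort(s); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("port:", p) // prints: port: 8080
		}
	}
}
```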

                                                    1. 3

                                                      Another great article! Love reading your blog.

                                                      Just wanted to add some bits, that might not be clear and hopefully are interesting.

                                                       doas is not related to the FreeBSD project; sudo works just fine. doas also exists for Linux. Both actually come from people active in the OpenBSD community.

                                                      Regarding adduser and system users: usually those are created non-interactively using pw, and often people make custom ports/packages so that user creation is done as part of that process. Depending on the type of daemon running, it’s actually pretty common to use the nobody user if no file system access is required.

                                                      There are pre-built Rust binaries, but only for x86_64. Also, it’s very uncommon to use pre-built binaries rather than FreeBSD ports/packages. One strength of FreeBSD, I’d say, is the ports collection, easily rivaling big Linux distributions (such as Red Hat or Ubuntu). So installing any programming language environment, whether you want to do Go, Rust, Lua, Crystal, C#, etc., is actually rather trivial.

                                                      Manually modifying /etc/rc.conf is not strictly necessary: there is sysrc to add/modify single lines, and usually there is a <servicename>_flags variable to append additional flags. Some services have nice service-specific convenience flags as well, ranging from specifying listen addresses to scaling instances, enabling individual jails, and, if you use the ipfw firewall, even specifying whole templates or individual ports to be opened. Depending on the software one uses, rc.conf might contain all configuration, which can make maintenance very easy.
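
                                                      For example, a sketch of what that looks like in practice (nginx and the user name here are just hypothetical placeholders; the exact rc.conf variable names depend on the service):

                                                      ```shell
                                                      # Enable the service and append extra flags; sysrc adds or edits the
                                                      # single matching line in /etc/rc.conf instead of you opening an editor.
                                                      sysrc nginx_enable=YES
                                                      sysrc nginx_flags="-q"

                                                      # Show the current value of a variable:
                                                      sysrc nginx_flags

                                                      # Create a dedicated system user non-interactively with pw, the way a
                                                      # port/package post-install script might:
                                                      pw useradd -n svcuser -c "service daemon" -d /nonexistent -s /usr/sbin/nologin
                                                      ```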

                                                      I think that it is fair to say that outside of x86_64 FreeBSD is probably not the best of choices though.

                                                      1. 1

                                                        This might be a bit off-topic, but the colors make this quite hard to go over and shift the focus to content-wise irrelevant parts, like links, dates and TODO/WIPs.

                                                        Using the reader view in Firefox, though not optimal, seems to help.

                                                        1. 3

                                                          The storytelling/relevant example is pretty important in my opinion. It also tends to help uncover situations where one accidentally applies “wrong best practices” that don’t fit a specific project or situation.

                                                          1. 5

                                                            We have official coding rules, but many developers don’t understand the reason for a rule. Annotating the rules with stories would help, in my opinion, and would also help to decide when to break such best practices.

                                                            1. 5

                                                              More importantly, it would highlight how many of them are “because I saw a blog post at ${hipCompany} that said so”

                                                              1. 1

                                                                Any practice without user discretion is doomed to become institutionalized bloat.

                                                            1. 2

                                                              This is great!

                                                              While I completely understand the sentiment of having that one tool that everyone uses and knows about, I think it comes at the price of approaching problems in fewer ways, and maybe missing out from a technological point of view. I don’t think that’s the case for Prometheus, but sometimes this also makes APIs specific to single client implementations.

                                                              Having more than one option, and therefore also more than one interest group, can be very beneficial, so I really appreciate this project.

                                                              That’s not to say that creating many Grafana clones is the right approach, but having some alternatives is certainly a good thing. Also this very much doesn’t seem to be a clone, as mentioned in the Readme.

                                                              Really nice. I think this could also be very useful for projects that measure things that aren’t your server instances, or for use in single-instance applications, like SoCi projects, where one simply wants to visualize some time series without user management, etc. I also really like the straight-forward minimalism on the server side.

                                                              Just one nitpick, maybe it would make sense to specify the protocol as part of the source instead of enforcing HTTPS?

                                                              1. 1

                                                                Thank you, I’m glad you find this useful!

                                                                I do like having more options, and I was pleasantly surprised that it is possible to implement these kinds of different frontends for complex projects with relative ease. There’s a definite tradeoff with this project (being more lightweight and quicker to write a new board vs. being feature-complete and easier to use), but I think it’s a neat thing to exist.

                                                                In that vein, we also wrote an alternative frontend to ElasticSearch with similar tradeoffs and design ideas in mind.

                                                                Just one nitpick, maybe it would make sense to specify the protocol as part of the source instead of enforcing HTTPS?

                                                                Good point! I created an issue to at least default to the protocol the frontend is using.

                                                                1. 2

                                                                  relative ease

                                                                  And that it’s possible to write them in modern, plain JavaScript. No need for ES6 compilers or too-insane browser hacks.

                                                              1. 4

                                                                I’ve seen linters that raise warnings about cyclomatic complexity in individual functions, and sometimes it seems worthwhile but I’m not convinced.

                                                                One measure of complexity I’d like to see is, on average, how many different files I need to look at to understand what a given line is doing. I’ve read some Golang code that makes pretty heavy use of interfaces that feels like it’d rate highly on that, and (because of dynamic typing) a lot of Python code takes tracing through many files to understand what an object actually could be: can it be None? Is it always one type? Better check all the call sites…

                                                                1. 1

                                                                  I have seen those too and I think cyclomatic complexity is a bad measure for standard linters.

                                                                  The reason is that I have seen more than one example where it has been used to increase complexity (in the sense of making it harder to reason about what the code does). You have some switch statement for mapping some values where it’s clear and obvious what’s going on, but your cyclomatic complexity check of course hates that switch statement.
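
                                                                  To make that concrete, here is a hedged Go sketch (the function and status codes are hypothetical, not from any project mentioned here) of exactly the kind of mapping that is trivial to read yet scores high on cyclomatic complexity, since every case adds one to the count:

                                                                  ```go
                                                                  package main

                                                                  import "fmt"

                                                                  // statusLabel maps an HTTP status code to a human-readable label.
                                                                  // Each case adds one to the cyclomatic complexity, so with ten cases
                                                                  // plus a default this function already sits around the threshold
                                                                  // (often 10) that many linters warn at, despite being obvious code.
                                                                  func statusLabel(code int) string {
                                                                  	switch code {
                                                                  	case 200:
                                                                  		return "ok"
                                                                  	case 201:
                                                                  		return "created"
                                                                  	case 301:
                                                                  		return "moved permanently"
                                                                  	case 302:
                                                                  		return "found"
                                                                  	case 400:
                                                                  		return "bad request"
                                                                  	case 401:
                                                                  		return "unauthorized"
                                                                  	case 403:
                                                                  		return "forbidden"
                                                                  	case 404:
                                                                  		return "not found"
                                                                  	case 500:
                                                                  		return "internal server error"
                                                                  	case 503:
                                                                  		return "service unavailable"
                                                                  	default:
                                                                  		return "unknown"
                                                                  	}
                                                                  }

                                                                  func main() {
                                                                  	fmt.Println(statusLabel(404)) // prints "not found"
                                                                  }
                                                                  ```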

                                                                  The problem here is not the check itself; it makes sense. The problem is that it’s not an exact measure for CCL, and there is, for example, the mentioned case where it’s obviously wrong for measuring CCL. It very much stands out for that type of code.

                                                                  However, once you make exceptions it leads to another problem. Early in my career we started to use linters to clear up some messes. We each took a project with the goal of applying style and linting changes so we could better reason about the code. We finished; we both had all checks pass. Only days later I realized that one of us had made them pass by simply changing all the settings to fit the code, not the other way around.

                                                                  What I want to say is that linting where warnings are sometimes okay leads to problems creeping in very quickly, because those warnings will be silenced, and then the linter changes from a very useful tool into more of a burden, because you cannot be sure whether you really don’t have that kind of code/problem anywhere.

                                                                1. 1

                                                                  I really like projects such as micro because, while I think it’s good to have a million editors that are basically the same, some diversity, with projects trying to do stuff differently, is good too.

                                                                  1. 1

                                                                    I was under the impression that micro aimed to get rid of diversity in the terminal editor space by moving to standard GUI keybindings?

                                                                  1. 7

                                                                    I always wonder how much all the privacy changes going into Firefox affect measured market share. Also adblocker usage, which I’d (blindly) assume to be higher on Firefox than Chrome.

                                                                    1. 13

                                                                      Mozilla has been placing ads in the German subway. (I’ve seen it first in Hamburg, but I’ve also seen it in Cologne, Berlin and Munich.) It says in German: “This ad has no clue who you are and where you’re coming from. Online trackers do. Block them! And protect your privacy. With Firefox.” (Not my tweet, but searching for “firefox werbung u-bahn” yielded this tweet.)

                                                                      I feel that Mozilla is going all-in on privacy. (Context: Germany is culturally a very privacy-conscious society, also due to its past. It is also one of the countries with the highest usage of Firefox.)

                                                                      1. 4

                                                                        But WhatsApp is still the main way to communicate.

                                                                        1. 2

                                                                          That’s probably true in every country. The Germans I know are all big on signal.

                                                                        2. 4

                                                                          Firefox isn’t a particularly aggressive browser on privacy, though; Safari and Brave are much further ahead on this and have been for a long time. I think at this point Mozilla’s claims to the contrary are false advertising, possibly literally, given that they apparently have a physical marketing campaign running in Germany. Even the big feature Mozilla is trumpeting in this release has already been implemented by Chrome!

                                                                          While I think privacy is a big motivator for lots of people and could be a big selling point of Firefox, I think consumers correctly see that Mozilla is not especially strong on privacy. Anyway, I don’t see this realistically arresting the collapse in Firefox’s market share, which has shrunk by something like 10% in the last six months alone (i.e. from 4.26% to 3.77%). On Mozilla’s current course they will probably fall to sub-1% market share in the next couple of years.

                                                                          1. 10

                                                                            You can dismiss this comment as biased, but I want to share my perspective as someone with a keen interest in strict privacy protections who also talks to the relevant developers first-hand. (I work on Security at Mozilla, not Privacy).

                                                                            Firefox has had privacy protections like Tracking Protection, Enhanced Tracking Protection and First Party Isolation for a very, very long time. If you want aggressive privacy, you will always have to seek it out for yourself; it’s seldom in the defaults. And regardless of how effective that is, Mozilla wants to serve all users, not just techies.

                                                                            To serve all users, there’s a balance to strike with site breakage. Studies have shown that the more websites break, the less likely it is that users are going to accept the protection as a useful mechanism. In the worst case, the user will switch to a different browser that “just works”, but we’ve essentially done them a disservice. By being super strict, a vast amount of users might actually get less privacy.

                                                                            So, the hard part is not being super strict on privacy (which Brave can easily do, with their techie user base), but making sure it works for your userbase. Mozilla has been able to learn from Safari’s “Intelligent Tracking Protection”, but it’s not been a pure silver bullet ready for reuse either. Safari also doesn’t have to cave in when there’s a risk of market share loss, given that they control the browser market share on iOS so tightly (aside: every browser on iOS has to use a WebKit webview. Bringing your own rendering engine is disallowed. Chrome for iOS and Firefox for iOS are using Webkit webviews)

                                                                            The road to a successful implementation required many iterations, easy “report failure” buttons and lots of baking time with technical users in Firefox Beta to support major site breakage and produce meaningful bug reports.

                                                                            1. 5

                                                                              collapse in Firefox’s market share which is reduced by something like 10% in the last six months alone (ie: from 4.26% to 3.77%)

                                                                              On desktop it’s actually increased: from 7.7% last year to 8.4% this year. A lot of the decrease in total web users is probably attributable to the increase in mobile users.

                                                                              Does this matter? I don’t know; maybe not. But things do seem a bit more complex than just a single 2-dimensional chart. Also, this is still millions of people: more than many (maybe even most) popular GitHub projects.

                                                                              1. 3

                                                                                That’s reassuring in a sense but also baffling to me, as Firefox on mobile is really good and can block ads via extensions, so I really feel like, if life were fair, it would have a huge market share.

                                                                                1. 5

                                                                                  And a lot of Android phones name Chrome just “Browser”; you really need to know that there’s such a thing as “Firefox” (or indeed, any other browser) in the first place. Can’t install something you don’t know exists. This is essentially the same as the whole Windows/IE thing back in the day, eventually leading to the browserchoice.eu thing.

                                                                                  On iOS you couldn’t even change the default browser until quite recently, and you’re still stuck with the Safari render engine of course. As far as I can tell, the only reason to run Firefox on iOS is the sync with your desktop if you use Firefox there.

                                                                                  Also, especially when looking at world-wide stats, you need to keep in mind that not everyone is from western countries. In many developing countries people are connected to the internet (usually on mobile only) and are, on average, less tech-savvy, and concepts such as privacy as we have them are also a lot less well known, partly for cultural reasons, partly for educational reasons (depending a bit on the country). If you talk to a Chinese person about the Great Firewall and the like, they usually don’t really see a problem with it. It’s hard to overstate how big the cultural divide can be.

                                                                                  Or, a slightly amusing anecdote to illustrate this: I went on a Tinder date last year (in Indonesia), and at some point she asked me what my religion was. I said that I have no religion. She just started laughing like I had said something incredibly funny. She then asked which God I believe in. “Well, ehh, I don’t really believe in any God.” I thought she was going to choke on laughter. Just the very idea that someone doesn’t believe in God was completely alien to her; she asked me all sorts of questions about how I could possibly not have a religion 🤷 Needless to say, I don’t talk much about my religious views here (also because blasphemy is illegal and people have been fined and even jailed over very minor remarks). Of course, this doesn’t describe all Indonesians; I also know many who hate all this religious bullshit here (those tend to be the fun ones), but it’s not the standard attitude.

                                                                                  So talking about privacy on the internet and “software freedom as in free speech” is probably not too effective in places where you don’t have privacy and free speech in the first place, and where these values don’t really exist in the public consciousness, which is the majority of the world (in varying degrees).

                                                                                  1. 3

                                                                                    And a lot of Android phones name Chrome just “Browser”; you really need to know that there’s such a thing as “Firefox” (or indeed, any other browser) in the first place. Can’t install something you don’t know exists. This is essentially the same as the whole Windows/IE thing back in the day, eventually leading to the browserchoice.eu thing.

                                                                                    Yes. And the good thing is: the EU commission is at it again. Google was fined in 2018, and new Android devices should now ask the user which browser to use.

                                                                                  2. 2

                                                                                    The self-destructing cookies plugin is the thing that keeps me on Firefox on Android. It’s the first sane cookie policy I’ve ever seen: when you leave a page, its cookies are moved aside, and the next time you visit, all of the cookies are gone. If you lost some state that you care about (e.g. a persistent login), there’s an undo button to bring them back, and at the same time you can add the site to a list that’s allowed to leave persistent cookies. I wish all browsers would make this the default policy out of the box.

                                                                          1. 12

                                                                            (very subjective thoughts, hopefully not too off-topic. I think that this is the right community though)

                                                                            I think the appearance of new “hobby OSs” is one of the nicest things to happen in recent years. There was a bit of a drought as some projects slowly died. That’s not to say there aren’t any; there certainly are quite a few that made constant progress over all these years.

                                                                            However, developing an OS mostly for fun is something that seems to be lacking lately. A part of that also seems to be that doing some things just for fun became harder in the mainstream world, if you want to call it that. Doing an app just for fun and distributing it to people is quite a hassle: one needs to pay certain fees for distribution, potentially even get a specific device to program for, and there are usually quite a few things involved in keeping things working on newer phones; not too rarely, certain rules change.

                                                                            Overall, things seem to move faster, and the time it takes an untouched project to become obsolete (unusable or close to it) seems to shrink, in some fields more than others.

                                                                            Maybe it’s just my perception, but it also feels as if the willingness, or let’s say the motivation, to do a bigger project as a hobby in one’s free time is going down. A lot of the time people only do so if compensated (thanks to Patreon, etc., this is easily possible though), or if it at least looks good on the resume.

                                                                            Please don’t get me wrong. I certainly have no intention of telling people what to do with their free time, and I completely understand things cost money. Please don’t take this as criticism.

                                                                            What I am getting at is that, with the growth of the number of people in IT, it seems that - for lack of a better word - the percentage of people doing “silly little things” is going down, especially when they cannot be achieved within a few days.

                                                                            I have been wondering why this is. To me a lot of it feels like an increase in “wanting to feel professional” (again, no criticism!), even when not acting so. Maybe it’s also a general society or psychology topic. Maybe it’s how time is given a value: projects that sit somewhere between work (taking effort, like programming) and hobby make people feel like they neither spent their time productively nor spent it relaxing, whereas watching a show on Netflix, playing a video game or listening to music is a much clearer hobby categorization.

                                                                            A lot of that I perceive as “taking the fun out of computing”, so to say. But OS development projects like these, just like the tildeverse, make me feel like a lot of it is returning after it was partly lost, or at least not perceived by me.

                                                                            Curious whether I am the only person seeing it like that, or if you have different views on this.

                                                                            1. 7

                                                                              I too am happy to see these projects again, as hobby OS dev has always been a favourite interest of mine. But I do find it rather depressing that they all seem to be just yet another UNIXalike, rather than any sort of attempt to do something new, or even something else old that was abandoned.

                                                                              1. 4

                                                                                … the percentage of people doing “silly little things” is going down, especially when they can not be achieved within a few days. I have been wondering why this is. To me a lot of it feels like an increase of “wanting to feel professional”

                                                                                Yes, I agree with the sentiment. I think that the invention and the spread of internet in the mainstream has been a double edged sword. On one hand, it is so much easier now to learn how to do things and to make your creations accessible to the world. On the other hand, this benefit applies to everyone, not just to you, so you suddenly find yourself “competing” with a horde of amateurs and hobbyists just like you.

                                                                                Because if we’re honest, very few people want to make and do things in perfect isolation. There is not always a desire for a monetary reward, but I think that in the overwhelming number of cases there is a desire for some kind of recognition from peers or others inside or outside the current social circle. But in this new era the bar to get that recognition is getting higher and higher. Not only does the quality of the work rise, but the expectation of what proper recognition is rises too. I might be looking back with nostalgia, but I like to think that 50 years ago, if your mother had some skill in knitting sweaters, her skill would be recognized and valued in her family/village/street. So if she was able to impress 20 people, she would gain some real status and respect. If you were to try to get the same level of respect these days, you would need at least a couple of thousand followers on youtube/instagram/pinterest/whatever. Ideally you should also make some nice extra cash on the side by selling the designs, or the sweaters themselves on etsy, or create tutorials on youtube, or..

                                                                                So the bar is much higher now and distractions are plentiful, so not as many people bother anymore. But that is relatively speaking: I think that in absolute numbers there are still way more people doing interesting stuff; they just get drowned out. But I don’t have any research to back that up.

                                                                                1. 1

                                                                                  That’s very insightful. So far I attributed that more to the effects of walled gardens, the bar being raised by complexity, and the wanting to feel professional (not doing “hacky” things out of love - despite the whole “Do you have a passion for X?” in many job ads).

                                                                                  But sure, when you look for online likes, comments, stars and subscribers, things are seen differently. And those measures typically don’t convey much. GitHub stars oftentimes don’t even convey user base or general interest (readme-only repos with thousands of stars because they were posted on some news page, with the implementation never even started). They mostly tell how many people have somewhat actively seen a headline or similar.

                                                                                  And of course, with short attention spans, new things constantly popping up, and the “newer is always better” assumption, one has a hard time.

                                                                                  The thing with research might turn out hard, or at least I don’t know what the right approach is. A longer time ago I actually got interested in different ways of measuring the impact of technologies (different kinds of impact, purely economical for example). The background was that things like measured programming language popularity seemed off when you compared how languages are perceived online with what you saw in the real world.

                                                                                  A lot of these measures are community- and philosophy-based. To stay with programming language popularity: a project with excellent documentation, clear guides, and its own widely used communication channels tends to have a lot fewer questions on Stack Overflow, etc. A language that is often taught at university, or has been hyped, has more. Also, the more centralized a community is, the fewer posts you’ll find with largely the same content.

                                                                                  This doesn’t make a huge difference at large, especially when putting in more factors; you still get a picture. But when it comes to finding patterns, it is very easy to end up researching only a specific subset, which might be interesting, but also might lead to, in a way, self-confirming assumptions. Or in other words, it’s hard to specify parameters and indicators to research without accidentally fooling yourself.

                                                                                2. 3

                                                                                  Personally, I disagree. I would conjecture that there are actually more people doing “silly little things” (including the “bigger projects”) than “before”, but there are also many times more people now doing things “for money/popularity” than “before”. It’s just that as a result, those in the first group lost visibility among the second group — esp. compared to the “before” times, when I believe the second group was basically an empty set.

                                                                                  As a quick example off the top of my head, I’d recommend taking a look at the Nim Conf 2020 presentations. Having attended this online conference, personally I was absolutely amazed how one after another of those were in my opinion quite sizeable “silly little things”. Those specific examples might not be OS-grade projects, but then there’s https://www.redox-os.org/, there’s https://genodians.org/, there’s https://www.haiku-os.org/, there’s https://github.com/akkartik/mu

                                                                                  I mean, to see that nerd craziness is alive and well, just take a look at https://hackaday.com/blog!

                                                                                  1. 4

                                                                                    Thank you for the shout out to Mu! @reezer, Mu in particular is all about rarely requiring upgrades and never going obsolete. It achieves this by avoiding closed platforms, only relying on widely available PC hardware (any x86 processor of the last 20 years), and radically minimizing dependencies.

                                                                                    My goal is a small subculture of programmers where the unit of collaboration is a full stack from the bootloader up, with the whole thing able to fit in one person’s head, and with guardrails (strong types, tests) throughout that help newcomers import the stack into their heads. Where everybody’s computer is unique and sovereign (think Starship Enterprise) rather than forced to look and work like everybody else’s (think the Borg). Fragmentation is a good thing, it makes our society more anti-fragile.

                                                                                    I’ve been working on Mu for 5 years in my free time, through periods when that was just 1 hour a day and yet I didn’t lose steam. I don’t expect ever to get paid for it, and the goal above resonates enough with me that I expect to work on it indefinitely.

                                                                                    1. 2

                                                                                      Just wanna say that despite Mu not being a tool I particularly want to use yet, I do read all your stuff about it that I encounter, and I’m very glad you’re out there doing it. And I’m certainly not alone.

                                                                                    2. 1

                                                                                      Thank you for your response. You give great examples; I actually meant to give some as well. Just to clarify: for me, Redox would be part of that new wave (maybe even the start of it), while Haiku is a project with continuous progress, but one of the old surviving ones, just like Genode.

                                                                                      AROS is another example of an old project.

                                                                                      What I meant with things that died during that period was for example Syllable and some projects of similar philosophy.

                                                                                      I also agree with the sentiment that there are more people, but it doesn’t feel like the number grew in proportion (that’s what I meant by percentage). But it feels like that is changing, which I really like. It feels like a drought being over. The Haiku community also had ups and downs.

                                                                                      But I also don’t think it’s just operating systems. That’s why I mentioned the Fediverse. A completely different thing seems to be the open source game scene, which also feels like it’s growing again, insanely so. Especially when looking at purely open source games, which feel like they have massive growth now.

                                                                                      However, I still have some worries about the closed-platform topic making things harder. Tablets and phones are becoming the dominant “personal computers” (as in things you play games on, do online shopping with, communicate on). And they are very closed. If you wanted to install an alternative OS on your average personal computer in the late 90s or early 2000s, you could - sometimes even on your non-average one. For your average smartphone or tablet that’s a lot less likely, and unlike back then the drive (at large, with some exceptions) seems to be toward things being even more shut off, giving less room to play.

                                                                                      I don’t know that area well, but it seems similar things are true for video game consoles. There is less homebrew, and at least I have not heard about OSes being ported there, which seems a bit odd, given that, from all I know, the hardware now seems closer to your average desktop computer.

                                                                                      I did not know about Mu. Also I will take a look at the Nim Conf. So thanks for that as well!

                                                                                      1. 1

                                                                                        Not much of a metric, but I guess you could try and graph the announcements on the OSDev.org forum thread year by year and see if there’s anything to read from them. Though on a glance, I’d say they’re probably too sparse to warrant any kind of trendline (but IANAStatistician). And the first one is 2007 anyway, so still no way to compare to “the late 90s or early 2000s”.

                                                                                        Yeah, I also kinda lament the closing of the platforms; but on the other hand, Raspberry Pi, ARM, RISC-V, PinePhone, LineageOS… honestly, I think I’m currently more concerned about Firefox losing to Chrome and a possible monoculture here. Truth be told, whether I like it and admit it to myself or not, the browsers are probably in a way the de facto OS now.

                                                                                        And as to Fediverse and in general non-OS exciting things, there’s so many of them… for starters just the (for me) unexpected recent bloom of programming languages (Go, Rust, Zig, Nim, but also Janet, Jai, Red, etc. etc. etc.); but also IPFS, dat/hyper, etc. etc; then Nix/NixOS; then just around the corner an IMO potential revolution with https://github.com/enso-org/enso; as you say with games there’s Unity, Godot, the venerable RPGMaker, there’s itch.io; dunno, for me there’s constantly so many exciting things that I often can’t focus exactly because there’s so many things I’d love to play with… but then even as a kid I remember elders saying “you should focus on one thing and not constantly change your interests”, so it’s not that it’s something new…

                                                                                    3. 2

                                                                                      I wonder how virtualization improvements over time might have also driven some of this?

                                                                                    1. 3

                                                                                      I used to use GitHub, but there the downside is that it’s really easy to land on hyped software that ended up on HackerNews and the like. There are projects with thousands of stars that effectively don’t exist yet or never will. So that’s not a good measure. What works a bit better is going over to users’ profile pages. Take someone who does a lot of GIS, for example. It’s likely that they have other GIS projects, or contributed to or starred them. Sometimes one finds something in READMEs.

                                                                                      But of course not everything is on GitHub. I also look at competitors.

                                                                                      While it’s a commercial(?) website (that I am completely unaffiliated with), alternative.to often works: when I know I want something that is similar enough to something else, it actually does a pretty decent job.

                                                                                      Sometimes Wikipedia helps, but for me that’s now largely replaced by the Arch Linux application list on their wiki. That one is really nice for things like “I want a TUI version of X” or “I want X, but not depending on a lot of GNOME/KDE/…”.

                                                                                      DuckDuckGo and Google seem to be getting worse and worse for that kind of research. I cannot pinpoint it exactly. Google sometimes has the edge when it knows your “interests”, but it can be really hard to exclude stuff and use it more like a full-text search. That’s something that used to be easier a decade or more ago. I assume fighting spam and the whole SEO thing could have stopped that from working properly. That assumption is based on there not being any other search engine doing that kind of thing.

                                                                                      I also save posts on lobste.rs if I think software (or software lists/comparisons) could be interesting at a later time. Searching through them doesn’t work so well, but I don’t think the search itself is bad; it’s more that I would have to know what’s on the pages linked to. One-line summaries are usually not enough, and in general summaries tend to be fairly bad for non-trivial tools. I am sure most of you have come across situations where something different was understood based on them.

                                                                                      On top of that it’s getting worse, because even very technical software is being marketed. Buzzwords, generic phrases (“be more efficient”, “automate X”, etc.) and sometimes blatant misuse of words can make it quite hard to find out what one is looking at.

                                                                                      1. 12

                                                                                        Here’s something I don’t understand. Microsoft in the late 90s was hit with an antitrust lawsuit after they bundled Internet Explorer with Windows and made it un-removable. Now Google is doing the same, with most Android devices coming with an un-removable (AFAIK not even the “Disable” button works) Chrome browser, and their Chromebooks also come with pre-installed Chrome. How does this not constitute an antitrust violation, considering a majority of the world runs Android (even if not Chromebooks)?

                                                                                        1. 10

                                                                                          I feel like we’ve been in a long period where antitrust laws have been fairly toothless. However, there are also some differences.

                                                                                          I wasn’t around at the time, but my understanding is that basically the entire world of computing was using Windows. Today, a normal person might realistically access the web from their iPhone, or their Android phone, or their Windows computer, or their Apple computer, or their Chromebook. That’s a very different world from the one where every person realistically could only access the web from their Windows computer. We’re in a world where Chrome is in a privileged position on Android/ChromeOS, Safari is in a privileged position on iOS/macOS, and Edge is in a privileged position on Windows, compared to the world where IE was in a privileged position on essentially every computing device.

                                                                                          1. 7

                                                                                            Back when I started to use Unix around the turn of the millennium it was a very different world indeed. Apple was as niche as “Linux on the desktop” is today, there were no other platforms, and Microsoft was the God Emperor Company when it came to desktop software.

                                                                                            Being sent a .doc file was a serious problem. You could kind-of open them in OpenOffice.org, but not really. There were some CLI tools as well (e.g. antiword) but they just dumped the text and everything else was lost. Saving .doc files was possible, but expecting someone in Microsoft Word to view the document the same way as you saved it was a leap of faith.

                                                                                            Making a site look great on both IE and Firefox was a real mission as they used different box models; the IE one made a lot more sense (and is also what everyone uses now via box-sizing: border-box) but it wasn’t “according to the spec” and the Mozilla people stuck to their guns on this, a mistake IMO, as it would have been far easier to just make a CSS 2.3 that changed this; it would have saved untold hours of web dev work and made the spec better, as it’s just a better model. But ah well.
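                                                                                            To make the box-model difference concrete, here is a small illustrative sketch (not from the comment above; the function name and numbers are made up) of how the same declared width renders under the two models:

```python
def rendered_width(width, padding, border, box_sizing="content-box"):
    """Total on-screen width of a CSS box, given per-side padding/border in px."""
    if box_sizing == "content-box":
        # W3C model (the old default): the declared width covers the content
        # only, so padding and border are added on top of it.
        return width + 2 * padding + 2 * border
    # IE-style border-box model: the declared width already includes
    # padding and border, so it is the on-screen width.
    return width

print(rendered_width(200, 10, 2, "content-box"))  # 224
print(rendered_width(200, 10, 2, "border-box"))   # 200
```

                                                                                            Under border-box the declared width is the on-screen width, which is why most stylesheets today opt into it globally.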

                                                                                            As much as people love to complain about Chrome now, the entire situation is a lot better. I rarely have issues in Firefox, and if I make something I tend to just test it in Firefox and then Chrome “just to be sure”, but it almost always just works well. Problems with .doc file formats and whatnot are mostly gone.

                                                                                            This doesn’t mean it’s all perfect or that we haven’t gotten new problems in return; I kind of resent that I need to own an Android or iOS device just to use WhatsApp for example, and that not using it can be quite debilitating. But overall, yeah, the “Chrome problem” is much less severe than the “IE problem” of 20 years ago.

                                                                                            1. 4

                                                                                              Disclaimer: I’m a Microsoft employee, but wasn’t during the antitrust trial.

                                                                                              It’s true that back in 1997 the market share of Windows was much higher than now, and that antitrust is really concerned with regulating monopolies. But note the antitrust trial was launched as a result of bundling IE with Windows, and in the end after the settlement, IE was still bundled with Windows.

                                                                                              Imagine an alternate universe where this didn’t happen. If Microsoft weren’t allowed to bundle IE with Windows, how would it have influenced Apple or Google’s behavior? Then again, if platforms didn’t bundle browsers, what would the user experience be today? I think part of the answer would be “we’d run a lot more native applications.”

                                                                                              1. 2

                                                                                                I think you are right. Sadly, the main difference between them is which company’s stock the revenues go to. Otherwise, despite all of the marketing, fanboyism, etc., they still act largely the same on both platforms. They essentially work the same: you need to have an account with them, you cannot use the device as intended without an account, and you have to pay them loads of money to get access to their customers.

                                                                                                And there is essentially no competition. Furthermore, this is slowly being set in stone. For example, European laws require one to use 2FA for money transfers/banking, and that 2FA in the majority of cases means you have to use an app provided by the bank. So if you want even the slightest chance to compete, you need to get all these banks to develop an app for your platform - a chicken-and-egg problem, where a big enough user base won’t happen unless you have app support, and vice versa. Banking apps are just one example.

                                                                                                I don’t think there are many ways out, short of something like forcing them to support, let’s say, WASM (or any kind of standard); otherwise there is little chance to get out of that. Even if you ran one of the biggest companies and this somehow became your major plan, I imagine it would be very hard to break into the market without basing it off open source Android, for example. In other words, you won’t achieve this with innovation alone.

                                                                                            1. 24

                                                                                              It is safe to say that nobody can write memory-safe C, not even famous programmers that use all the tools.

                                                                                              For me, it’s a top highlight. My rule of thumb is that if OpenBSD guys sometimes produce memory corruption bugs or null dereference bugs, then there is very little chance (next to none) that an average programmer will be able to produce secure/rock-solid C code.

                                                                                              1. -1

                                                                                                My rule of thumb is that if OpenBSD guys sometimes produce memory corruption bugs or null dereference bugs, then there is very little chance (next to none) that an average programmer will be able to produce secure/rock-solid C code.

                                                                                                Why do you think “the OpenBSD guys” are so much better than you?

                                                                                                Or if they are better than you, where do you get the idea that there isn’t someone that much better still? And so on?

                                                                                                Or maybe, let’s say you actually don’t know anything about programming - why would you try to convince anyone else of anything coming directly from a place of ignorance? Can your gods truly not speak for themselves?

                                                                                                I think you’re better than you realise, and could be even better than you think is possible, and that those “OpenBSD guys” need to eat and shit just like you.

                                                                                                1. 24

                                                                                                  Why do you think “the OpenBSD guys” are so much better than you?

                                                                                                  It’s not about who is better than who. It’s more about who has what priorities; OpenBSD guys’ priority is security at the cost of functionality and convenience. Unless this is average Joe’s priority as well, statistically speaking OpenBSD guys will produce more secure code than Joe does, because they focus on it. And Joe just wants to write an application with some features, he doesn’t focus on security that much.

                                                                                                  So, since guys that focus on writing safe code sometimes produce exploitable code, then average Joe will certainly do it as well.

                                                                                                  If that weren’t true, then it would mean that the OpenBSD guys’ security skill is below average, which I don’t think is true.

                                                                                                  1. 5

                                                                                                    OpenBSD guys’ priority is security at the cost of functionality

                                                                                                    I have heard that claim many times before. However, in reality I purely use OpenBSD for convenience. Having sndio instead of pulse, having no-effort/single command upgrades, not having to mess with wpa_supplicant or network manager, having easy to read firewall rules, having an XFCE desktop that just works (unlike Xubuntu), etc. My trade-off is that for example Steam hasn’t been ported to that platform.

                                                                                                    So, since guys that focus on writing safe code sometimes produce exploitable code, then average Joe will certainly do it as well.

                                                                                                    To understand you better: do you think average Joe will both use Rust and make fewer mistakes? Also, do you think average Joe will make more logic errors with C or with Rust? Do you think average Joe will use Rust to implement curl?

                                                                                                    I am not saying that you are wrong - not a C fan, nor against Rust, quite the opposite actually - but wonder what you base your assumptions on.

                                                                                                    1. 3

                                                                                                      I’d also add that there is deep & widespread misunderstanding of the OpenBSD philosophy by the wider developer community, who are significantly influenced by the GNU philosophy (and other philosophies cousin to it). I have noticed this presenting acutely around the role of C in OpenBSD since Rust became a common topic of discussion.

                                                                                                      C, the existing software written in C, and the value of that existing software continuing to be joined by new software also written in C, all have an important relationship to the Unix and BSD philosophies (most dramatically the OpenBSD philosophy), not merely “because security”.

                                                                                                      C is thus more dramatically connected to OpenBSD than projects philosophically related to the “GNU is Not Unix” philosophy. Discussions narrowly around the subject of C and Rust as they relate to security are perfectly reasonable (and productive), but OpenBSD folks are unlikely to participate in those discussions to disabuse non-OpenBSD users of their notions about OpenBSD.

                                                                                                      I’ve specifically commented about this subject and related concepts on the orange site, but have learned the lesson presumably already learned many times over by beards grayer than my own: anyone with legitimate curiosity should watch or read their own words to learn what OpenBSD folks care about. Once you grok it, you will see that looking to that source (not my interpretation of it) is itself a fundamental part of the philosophy.

                                                                                                      1. 1

                                                                                                        If that weren’t true, then it would mean that the OpenBSD guys’ security skill is below average, which I don’t think is true.

                                                                                                        At least not far above average. And why not? They’re mostly amateurs, and their bugs don’t cost them money.

                                                                                                        And Joe just wants to write an application with some features, he doesn’t focus on security that much.

                                                                                                        I think you’re making a straw man. OpenBSD people aren’t going to make fewer bugs using any language other than C, and comparing Average Joe to any Expert just feels sillier and sillier.

                                                                                                        1. 3

                                                                                                          What’s your source for the assertion ‘They’re mostly amateurs’?

                                                                                                          1. 2

                                                                                                            What a weird question.

                                                                                                            Most openbsd contributors aren’t paid to contribute.

                                                                                                            1. 2

                                                                                                              What a weird answer. Would you also argue that attorneys who accept pro bono work are amateurs because they’re not paid for that specific work?

                                                                                                              Most of the regular OpenBSD contributors are paid to program computers.

                                                                                                              1. 1

                                                                                                                because they’re not paid for that specific work?

                                                                                                                Yes. In part because they’re not paid for that specific work, I refuse to accept dark_grimoire’s insistence that “if they can’t do it nobody can”.

                                                                                                              2. 1

                                                                                                                You seem to be using the word “amateur” with multiple meanings. It can mean someone not paid to do something, aka “not a professional”. But when I use it in day-to-day conversation I mean something more similar to “hobbyist”, which does not say much about ability. Also, saying they are amateurs, and thus do not write “professional” code, implies anyone can just submit whatever patch they want and it will be accepted, which is very far from the truth. I assume with reasonable certainty that you never contributed to OpenBSD yourself, to say that. I am not a contributor, but whenever I look at the source code, it looks better than much of what I saw in “professional” work. This may be due to the focus on doing simple things, and also very good reviews by maintainers. And as you said, the risk of losing money may be a driver for improvement, but it is certainly not the only one (and not at all for some people).

                                                                                                                1. 1

                                                                                                                  You seem to be using the word “amateur” with multiple meanings.

                                                                                                                  I’m not.

                                                                                                                  as you said, the risk of loosing money may be a driver for improvement, but it is certainly not the only one

                                                                                                                  So you do understand what I meant.

                                                                                                          2. -1

                                                                                                            nailed it

                                                                                                      1. 3

                                                                                                        A hint for people that don’t wanna read through all of that and are not so interested in the bits that are organisational and more on the “chores” side of things.

                                                                                                        Jump to Projects, Miscellaneous and Third Party Projects to get an overview. On top of that Kernel, Userland and Ports contain typical changelog items. The rest is probably only interesting to you if you are more involved in the FreeBSD Project.

                                                                                                        This is not to forget all the great work that is mentioned in the other sections, and while it’s good to see what’s going on in the background - transparency is really great - I think that the above is what most people on lobste.rs are likely interested in.