1. 20

    From my perspective, the assumption that “copy pasta pirates” are the result of combating imposter syndrome, and that imposter syndrome is the result of being self-taught, is wrong. Well, maybe not wrong, but I don’t think it’s the only or even the major contributor.

    I’ve seen both imposter syndrome and copy pasta pirates in people who had bachelor’s and master’s degrees, far too often to see that as an exception. I’m not completely sure how that can be, because to my understanding they came from good universities, but that’s its own topic.

    I think a big cause is actually that, intentionally or unintentionally, copy pasta pirates are what companies look for. Another reason is that techniques and technologies are being replaced by products.

    To give an example of the latter: GitHub is happy when people don’t really know Git. It even profits when GitHub and Git are considered synonymous. The same is true for buzzwords; think of cloud computing and compute instances. Cloud providers don’t want you to think of vservers behind HTTP APIs; they don’t want you to think about outsourcing your infrastructure, even when that’s what you do. Another, more anecdotal example is a computer scientist with a master’s degree not knowing the difference between socket.io, WebSocket, and a socket in the computer-science sense, using the words interchangeably. This isn’t to make fun of anyone, but to give an example of why I don’t think being self-taught or not necessarily makes a difference.

    But while these examples might show symptoms, I don’t think companies wanting to profit from “magical thinking”, or people not knowing the differences between products and technologies, are the root cause. Even when companies profit and have incentives, I don’t think they are a major cause.

    I think a bigger cause is that many employers ask for something like a full-stack developer (don’t feel attacked, it’s just an example), and their performance indicator is whether that person can quickly hack something together that looks good on the surface. In most situations with semi-standard requests, being a copy pasta pirate is the perfect way to go about this: building quick-and-dirty prototypes, proof-of-concept solutions, “MVPs”.

    Does that mean the outcome is stable, secure, and flexible, that issues can be fixed quickly, that changing requirements can be met, or that the next employee will be able to clean up that mess without outages? Most likely not. But that’s okay, because customers are used to stuff not always working and to companies being “hacked” (which mostly means sensitive customer data becoming publicly available). Also, managers expect things to be torn down and remade with a new team anyway.

    Moreover, in many situations having a lot of people who can only copy and paste stabilizes the position of the ones who can do more. They either get special perks, like being left alone, being famous in the company, or earning more, or they work as consultants or similar.

    What I want to say is that even when it’s not a conscious choice, the industry has arranged itself around, and even incentivised, the copy pasta pirate.

    These are just some shallow, surface-level examples, and very subjective ones too. One could dig deeper, but the point I want to make is that right now copy pasta pirates are an integral part of how large parts of the tech industry work, and I don’t think that’s likely to change by only fixing the way gatekeeping and imposter syndrome are handled.

    1. 2

      I find ZSH super slow compared to fish. Fish in tmux, with a useful but fast prompt like starship and an awesome editor like vim, makes me super fast and boosts my productivity.

      1. 2

        I haven’t had any speed issues with zsh. In what way do you find it slow? On my slowest machine (which has a noticeably slow 1 GHz mobile processor), zsh takes ~450ms to fully initialize and display a prompt. ~70ms on my $3.50/mo Vultr VPS.

        Note: I don’t use oh-my-zsh or other big zsh overhaul monstrosities.
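
        If you want to measure your own setup, a rough benchmark (assuming zsh is on your PATH) is to time how long an interactive shell takes to start and exit:

        # time a single interactive startup
        time zsh -i -c exit
        # run it a few times to smooth out cache effects
        for i in 1 2 3; do time zsh -i -c exit; done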

        1. 1

          My bad, yes: with a set of oh-my-zsh plugins. That makes ZSH insanely slow, versus fish, which offers a similar feature set out of the box with no noticeable speed degradation.

          1. 1

            For most intents and purposes I recommend that people use grml’s zsh config. It’s a nice, feature-rich, yet smaller and saner option if you just want a quick way to make use of what zsh has to offer without spending too much time configuring things on your own. It’s also well documented, and you still get to pick and choose. Overall it emphasizes being functional over being pretty (or distracting).
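
            If you want to try it, grml publishes its zshrc as a single file; at the time of writing, their docs suggest fetching it with something like this (do double-check the URL against grml’s site first):

            # back up any existing config, then drop in grml's zshrc
            cp ~/.zshrc ~/.zshrc.bak 2>/dev/null
            wget -O ~/.zshrc https://git.grml.org/f/grml-etc-core/etc/zsh/zshrc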

      1. 5

        That’s not how you spell ZFS.

        Bcachefs is too new and unproven. It will eat your data if you let it, and using it is letting it.

        1. 10

          Bcachefs is intended to succeed the current go-to options: Btrfs and ZFS. All filesystems have to start somewhere.

          1. 4

            Is there a guide to the planned improvements over ZFS?

            1. 9

              Apparently it has a much smaller codebase than ZFS or btrfs (and in fact, it’s slightly smaller than ext4) while being cleaner and more flexible.

              1. 6

                Not a technical improvement, but if all goes right, it would be included in the mainline kernel with a known-valid licence - which I don’t think ZFS will ever achieve.

                1. 4

                  Not a technical improvement, but if all goes right, it would be included in the mainline kernel with a known-valid licence - which I don’t think ZFS will ever achieve.

                  …on Linux, at least ;)

                  1. 3

                    At the same time it seems to be GPLv2-licensed, which means that a merge into the BSDs is also something we will probably never see. So almost all current users of ZFS will not bother with it.

                    1. 2

                      Maybe that’s for the best. Both worlds can have great filesystems, and the two worlds don’t need to use the same great filesystems. I’m getting more and more convinced that the less BSD people and Linux people have to interact, the better.

                      1. 1

                        I’m still hopeful once HAMMER2 is more mature, it’ll get ported to other systems including Linux.

                2. 3

                  To me, two things stand out compared to ZFS (at least as promised):

                  1. True tiered storage support, unlike current L2ARC / slog in ZFS;

                  2. Better expansion than RAIDZ* when adding disks.

                  1. 1

                    Number one is 100% correct, and no one should use an slog thinking it’s tiered storage. The only tiered storage ZFS does is tier-0 in memory (ARC, not L2ARC) and tier-1 to your pool, rate-limited by the slowest vdev. The ZFS devs have turned down requests for true tiered storage many times, so I don’t think it’s going to happen anytime in the next 5 years. (You can get it working by using ZFS with a dm-writecache device as the underlying “disk” but I think all bets are probably off as to the integrity of the underlying data.)
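
                    To make the distinction concrete, here is roughly how the two device classes are attached in ZFS (device names are hypothetical); note that neither one migrates data between tiers:

                    # L2ARC: a read cache for blocks evicted from the in-memory ARC
                    zpool add tank cache /dev/nvme0n1
                    # slog: a separate ZIL that only absorbs synchronous writes;
                    # data still lands on the main vdevs, so it is not a write-back tier
                    zpool add tank log /dev/nvme1n1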

                    But for number two, draid is probably the correct answer. I think it’s superior to what bcachefs is offering.

                3. 2

                  But why not use what’s already there: ZFS, or porting HAMMER instead? If your focus is “reliability and robustness”, code that is used in production seems better than creating new code with new bugs.

                  1. 3

                    But why not use what’s already there: ZFS, or porting HAMMER instead? If your focus is “reliability and robustness”, code that is used in production seems better than creating new code with new bugs.

                    Because of licenses, to start somewhere (the Linux kernel can’t include BSD code, for example). It’s also hard to argue that HAMMER is proven, due to its extremely small user base, even if it might be interesting from a technical standpoint.

                    1. 6

                      Linux can include BSD licensed code - as long as there’s no advertising clause, it’s compatible with GPL.

                      See an old thread: http://lkml.iu.edu/hypermail/linux/kernel/0003.0/1322.html

                      1. 1

                        Linux can include BSD licensed code - as long as there’s no advertising clause, it’s compatible with GPL.

                        I stand corrected!

                        1. 1

                          This is true, but note that ZFS is CDDL, not BSD. See https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/

                          …but HAMMER would be fine from a license perspective.

                        2. 4

                          the Linux kernel can’t include BSD code, for example

                          What makes you think so? There’s quite a bit of code in the Linux kernel that started out BSD licensed.

                          1. 2

                            It’s totally legal to go either direction (BSD code mixed with GPL code, or GPL code mixed with BSD code). There is a cultural stigma against the other license, depending on which license camp one is in. I.e. most BSD people dislike the GPL and go out of their way to replace GPL’d code with BSD-licensed code (and the opposite for BSD code in Linux land).

                            As a more recent example, GUIX didn’t get invented out of a vacuum; it’s partly (mostly?) GPL people unhappy with the Nix license.

                            One extra gotcha, as @singpolyma points out: GPL code mixed with BSD code enforces the GPL license on the binaries, usually making BSD proponents unhappy.

                            1. 3

                              This is very misleading, because if you add GPL code to BSD code, the combined/derived work can no longer be distributed as BSD.

                          2. 2

                            the Linux kernel can’t include BSD code

                            BSD is GPL-compatible, but the reverse isn’t true: BSD code can’t include GPL’d code.

                            1. 2

                              Technically BSD codebases can include all the GPL’d code they want. They can even keep the BSD license on the parts that don’t depend on the GPL part. The binaries would be GPL though and there are other reasons purists might avoid this.

                            2. 1

                              due to its extremely small user base

                              So, like, bcachefs? or nilfs2 (which is in mainline…).

                      1. 1

                        It feels to me that the major feature is rollbacks. While I can understand that (after all, it’s why people like having backups and ZFS snapshots, and boot environments by extension), I just don’t really see why having this integrated into the package manager matters. Installing and uninstalling software seems to have worked fine over the past few decades.

                        Is state/data/configuration somehow managed in a special way?

                        When I think of such scenarios the burden is on getting stuff that is not managed by package managers back into order.

                        On the topic of running multiple versions: while I have very rarely wanted to do that (mostly for debugging, or for bad upgrade paths of software while not having backups), from the article I understand that certain services are run multiple times with different versions. If that is correct, I’m very curious how that is done in relation to sockets (Unix, TCP, etc.): how does other software decide where to connect to? Just the address, or is there something else in how packages are built or handled that helps with deciding?

                        1. 7

                          It feels to me that the major feature is rollbacks.

                          I’m a keen NixOS and Guix user, and I don’t consider this to be directly important, though I can see why it’s seen that way.

                          On the topic of running multiple versions.

                          I also don’t run multiple versions of things.

                          The biggest benefit that NixOS/Guix introduces for me is that I can treat my machine as code, with high fidelity and efficiently.

                          In Linux distributions such as Debian/Arch/CentOS, I consider my machine to be a 30GB mutable blob (the contents of /etc, /usr, etc.). Updates are mutations to this 30GB blob, and I have low confidence in how it’s going to behave. Cfgmgmt /automates/ this, but automating low-confidence steps still results in low confidence.

                          For NixOS/Guix, I consider my machine to be about 100kB (my Guix config). I can understand this 100kB, and when I change the system, I change this 100kB and accurately know what state my machine will be in after it’s changed: the update is actually a replace.
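
                          For a sense of the workflow, the whole cycle on NixOS looks roughly like this (Guix has equivalents, e.g. guix system reconfigure):

                          # edit the declarative config, then build and activate the new generation
                          sudo nixos-rebuild switch
                          # if the new generation misbehaves, activate the previous one again
                          sudo nixos-rebuild switch --rollback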

                          Is state/data/configuration somehow managed in a special way?

                          ~Everything in /etc is managed via NixOS/Guix.

                          ~nothing in /var/ is managed via NixOS/Guix. NixOS/Guix reduces my state space, but /var is still state I have to care about. (Actually, on my desktop no state is preserved on /var between boots: every boot has a fresh computer smell.)

                          1. 1

                            Hey, thanks a lot for your response. I have some naive questions then.

                            In your desktop system, in many cases that blob, the state one cares about, would probably be in $HOME. What about that?

                            What do you mean by “/etc is managed”? Say I have a configuration file that would usually live there. Where is it now? Say I want to customize it, what would I do? Say the configuration syntax changes, what would I do?

                            I understand your comparison with mutable blob vs declared state, after all that’s the same approach that other kinds of software often use, be it configuration management, some cloud/service orchestration tools and honestly a lot of software that has the word declarative in the first few sentences.

                            In practical use I see these systems fall apart very quickly, because a lot of the time it’s more about changing state, in the way one would define a suite of database migrations.

                            So, for a simple example, let’s take /etc. That’s the configuration. In many situations you can copy that to a new system and it’s fresh and the way you want it. Various package managers can also output a list of which packages are installed, in a format that can be read back in, so you usually have /usr covered as well. Because of that I don’t usually see this part as a big issue; after all, that’s in a way how many distro installers look at things. /boot is similar. /usr should not be touched, though sometimes touching it can be an emergency hack; I prefer to have it read-only except for changes by the package manager.
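
                            For instance, on Debian-family systems that package list can be captured and replayed with something like:

                            # on the old machine: record which packages are installed
                            dpkg --get-selections > packages.txt
                            # on the new machine: replay the selection
                            dpkg --set-selections < packages.txt
                            apt-get dselect-upgrade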

                            That leaves /var and /home, which sounds at least somewhat similar to what you are saying (correct me if I’m wrong). So in my understanding, what is done is more that the system makes sure that what should be there actually is? Talking about upgrades, removals, etc. not leaving stuff behind? I guess that makes quick hacks hard or impossible? Don’t get me wrong, I’d actually consider that a good thing.

                            /var on desktop might not have much needed state, but in many situations that state would be in /home.

                            Anyways, thank you again for your response. I guess at this point it might make sense if I took a closer look at it myself. I’m just curious about practical experiences, because I completely understand that declaratively describing a system tends to look very nice on paper, but in many situations (also because of badly designed software) it’s more like simply writing a setup shell script and maybe running it on each boot, except that shell scripts tend to be more flexible, for better and for worse.

                            Of course, having a solution that does that for you with a good abstraction is interesting.

                            That’s why lately I’ve been thinking about where else we handle big blobs that we sometimes want to modify in a predictable manner, and ended up thinking about database schemas and migrations.

                            Thanks again, have a nice day. :)

                            1. 4

                              What do you mean by “/etc is managed”? Say I have a configuration file that would usually live there. Where is it now? Say I want to customize it, what would I do? Say the configuration syntax changes, what would I do?

                              The contents live as part of your Nix configuration. It’s both awesome and frustrating at times. Nix tries to overlay its view of reality onto the config file format. So say it’s nginx: instead of writing nginx config like:

                              http {
                                      sendfile on;
                              }
                              

                              in nix you would write something like:

                              services.nginx.http.sendfile = true;
                              
                              

                              and then when Nix goes to build the system, it will generate an nginx config file for you. This allows for some nice things, like services.nginx.recommendedOptimisation = true; and it will fill in a lot of boilerplate for you.

                              Not all of nix is this integrated with the config file(s), so sometimes you get some oddities, or sometimes the magic that nix adds isn’t very clear and you have to go dig around to see what it’s actually doing.

                              Another downside is that it means a rebuild every time you want to change a minor thing in one application. The upside: Nix is usually really good about not restarting the entire world, and will try to restart just the one application that changed. This is an offshoot of the declarative process, and some will call it a feature, especially in production, but it can be annoying during development.
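
                              For iterating during development there are lighter variants than a full switch, e.g.:

                              # build and activate, but don't create a boot entry (gone after a reboot)
                              sudo nixos-rebuild test
                              # only build the new system, to check that it evaluates and compiles
                              sudo nixos-rebuild build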

                              You can turn all of that off and just say this app will read from /var/etc/nginx/nginx.conf and leave nginx.conf entirely in your control. This is handy when moving to nix, or maybe in development of a new service or something.

                              As far as the mutable state of applications, nix mostly punts on this, and makes it YOUR problem. There is some nix config options that packagers of apps can take advantage of, so say on upgrades, if you installed PG11 originally, it won’t willy nilly upgrade you to PG12. It makes you do that yourself. So you get all the new bits except PG11 will still run.

                              All that said this stuff isn’t perfect, so testing is your friend.

                              1. 1

                                You can turn all of that off and just say this app will read from /var/etc/nginx/nginx.conf and leave nginx.conf entirely in your control.

                                My goodness, do you have a guide or blogpost or something for this way of going about it? That’d be super helpful. I’ve tried Nix a few times and this is exactly where I go crazy. I can store real config files in git too; just let me do that! (and avoid the Nix language!)

                                1. 2

                                  https://search.nixos.org/options?channel=21.05&from=0&size=50&sort=relevance&type=packages&query=services.nginx is where I’d start to look for how to disable the config management part of the nginx service. If that wasn’t enough, I’d go to the corresponding file in nixpkgs.

                                  If a module doesn’t meet my needs, and I can’t easily make it do so, sometimes I will write my own module to have full control over it. I still reuse the nginx package, and would typically start my module by copy-pasting and trimming the existing one.

                                  95% of the time, the provided modules do exactly what I want.

                                  https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md aims to make the “please let me take control over the service” usecase easier.
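
                                  If you just want to inspect what a module option currently evaluates to, the nixos-option tool can help (assuming it’s available in your setup):

                                  # show the current value, default and description of an option
                                  nixos-option services.nginx.enableReload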

                                  1. 1

                                    Well, doing this in some ways defeats one of the big reasons for Nix, but there are valid use-cases:

                                    For nginx, this is what I do:

                                    # this sets the config file to /etc/nginx/nginx.conf, perfect for us using consul-template.
                                      services.nginx.enableReload = true;
                                      services.nginx.config = "#this should be replaced.";
                                    

                                    Now it’s on you to maintain /etc/nginx/nginx.conf

                                    For us, we use consul-template (run via systemd, as more Nix configuration) and it generates the config. But you are free to replace it (after deploy) manually; i.e. Nix will overwrite the /etc/nginx/nginx.conf file on every nixos-rebuild with the contents: #this should be replaced.

                                    Otherwise, what NixOS tends to do is symlink /etc/<configfile> to somewhere in /nix, which is read-only for you, so it’s up to you to erase the symlink and put a real file there. One could automate this with a systemd service that runs on startup.

                                    Another way to do this is to hack up the systemd service, assuming the service will accept the location of the config file as a command-line argument. This is non-standard and can be fiddly in Nix.

                                    I don’t know of a better way.

                                2. 2

                                  On Guix System it is recommended to manage anything in /etc via a “service”, which will be written/configured in Scheme and has deploy and rollback semantics.

                                  For config that lives in $HOME there is guix home and guix home services, which parallel the system for /etc and system services, and even work on other operating systems.

                                  1. 2

                                    In your desktop system, in many cases that blob, the state one cares about, would probably be in $HOME. What about that?

                                    I use https://github.com/nix-community/home-manager to manage $HOME. I use that to manage my git config and bashrc. I also use it to declare which directories should survive a reboot. e.g. I persist ~/.steam and ~/.thunderbird, “Documents/”, a few others. But everything else, e.g. ~/.vim (which I only use in an ad-hoc manner) is wiped.

                                    Even that leaves some blob-like state: I persist “.config/dconf”. Ideally that could be managed declaratively, but I haven’t seen a workable solution.

                                    Let’s take /etc. That’s the configuration. In many situations you can copy that to a new system and it’s fresh and the way you want it. Various package managers can also output a list of which packages are installed, in a format that can be read back in, so you usually have /usr covered as well.

                                    That works fine for building new machines, but a typical Linux machine is built far less frequently than it’s updated. For example, I’ve managed machines with packages/Puppet/Ansible in the past, and occasionally run into situations where the machine state according to packages/Puppet/Ansible no longer matches the actual machine state:

                                    • postinst scripts that worked well during install, but got updated such that upgrades work and fresh installs are broken.
                                    • cases where apt-get install $x followed by apt-get purge $x leaves live config (e.g. files in /etc/pam.d; see the sketch after this list)
                                    • cases where the underlying packages are changed in ways incompatible with the Puppet config: after all, the underlying packages typically don’t attempt to QA against Puppet config.
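
                                    As a sketch of that second case (the package name is hypothetical; the point is that purge doesn’t always undo what maintainer scripts did):

                                    apt-get install libpam-examplemod   # hypothetical PAM module package
                                    apt-get purge libpam-examplemod
                                    # entries enabled via pam-auth-update may still be live afterwards:
                                    grep examplemod /etc/pam.d/common-auth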

                                    The result is that even just covering /etc and /usr, machines are brittle, and occasionally need to be rebuilt to have confidence.

                                    Talking about upgrades, removals, etc. not leaving stuff behind? I guess that makes quick hacks hard or impossible? Don’t get me wrong, I’d actually consider that a good thing.

                                    Yes, it does make quick hacks hard/impossible. It is possible to do some quick hacks on the box (systemctl stop foo, for example), and nixpkgs is designed so that various parts can be overridden if needed.

                                    When we climb the ladder of abstraction and lose the ability to easily change the inner workings of lower levels, it looks (and is!) restrictive. In the same way I wouldn’t modify a binary in a hex editor to perform deployments, nor make live changes to a Docker image, I aim not to SSH to a machine to mutate it either. I prefer my interactions with lower-level abstractions to be mediated via tooling that applies checks and balances.

                                    Anyways, thank you again for your response. I guess at this point it might make sense if I took a closer look at it myself.

                                    I don’t make recommendations without understanding requirements, but NixOS/Guix is at least a novel approach to distributions, which might be interesting to OS folks.

                                    NixOS/Guix might have come too late for industry: containers also aim to manage system complexity, and do a good job of it. I think NixOS/Guix offers good solutions for low-medium scale, and as a way to build container images.

                                    I’m just curious about practical experiences, because I completely understand that declaratively describing a system tends to look very nice on paper, but in many situations (also because of badly designed software) it’s more like simply writing a setup shell script and maybe running it on each boot, except that shell scripts tend to be more flexible, for better and for worse.

                                    I only use NixOS/Guix for my personal infra, and manage all those machines in a declarative manner (other than out-of-scope things such as databases like ~/.config/dconf and postgres).

                                    That’s why lately I’ve been thinking about where else we handle big blobs that we sometimes want to modify in a predictable manner, and ended up thinking about database schemas and migrations.

                                    Yes, DB schema migrations are an interesting case where a declarative approach would be nice to have: it’s much easier to reason about a single SQL DDL than a sequence of updates.

                                    A similar problem I have is the desire for declarative disk partitions: ideally I could declare my partition scheme, and apply a diff-patch of mutations to make the declaration reality. It would only proceed if it was safe and preserved the underlying files. It’d likely only be possible under particular constraints (lvm/btrfs/zfs ?). Even then that’s hard to get right!

                                    Thanks again, have a nice day. :)

                                    You too!

                              1. 33

                                I don’t really agree with a lot of the claims in the article (and I say this as someone who was very actively involved with XMPP when it was going through the IETF process and who wrote two clients and continued to use it actively until 2014 or so):

                                Truly Decentralized and Federated (meaning people from different servers can talk to each other while no central authority can have influence on another server unlike Matrix)

                                This is true. It also means that you need to do server reputation things if your server is public and you don’t want spam (well, it did for a while - now no one uses XMPP so no one bothers spamming the network). XMPP, unlike email, validates that a message really comes from the originating domain, but that doesn’t stop spammers from registering millions of domains and sending spam from any of them. Google turned off federation because of spam and the core problems remain unsolved.

                                End-To-End Encryption (unlike Telegram, unless you’re using secret chats)

                                 This is completely untrue for the core protocol. End-to-end encryption is (as is typical in the XMPP world) provided by multiple, incompatible extensions to the core protocol, and most clients don’t support any of them. Looking at the list of clients, almost none of them support the end-to-end encryption XEP that the article recommends. I’d not looked at XEP-0384 before, but a few things spring to mind:

                                • It’s not encrypting any metadata (i.e. the stuff that the NSA thinks is the most valuable bit to intercept), this is visible to the operators of both party’s servers.
                                • You can’t encrypt presence stanzas (so anything in your status message is plaintext) without breaking the core protocol.
                                • Most info-query stanzas will need to be plain-text as well, so this only affects direct messages, but some client-to-client communication is via pub-sub. This is not necessarily encrypted and clients may or may not expose which things are and aren’t encrypted to the user.
                                 • The bootstrapping thing involves asking people to trust whatever new fingerprints appear. This is a security-usability disaster: users will click ‘yes’. Signal does a good job of ensuring that fingerprints don’t change across devices and manages key exchange between clients so that all clients can decrypt a message encrypted with a key assigned to a stable identity. OMEMO requires a wrapped key for every client.
                                • The only protection against MITM attacks is the user noticing that a fingerprint has changed. If you don’t validate fingerprints out-of-band (again, Signal gives you a nice mechanism for doing this with a QR code that you can scan on the other person’s phone if you see them in person) then a malicious server can just advertise a new fingerprint once and now you will encrypt all messages with a key that it can decrypt.
                                • There’s no revocation story in the case of the above. If a malicious fingerprint is added, you can remove it from the advertised set, but there’s no guarantee that clients will stop sending things encrypted with it.
                                • The XEP says that forward secrecy is a requirement and then doesn’t mention it again at all.
                                 • There’s no sequence counter or equivalent, so a server can drop messages without your being aware (or can reorder them, or can send the same message twice - there’s no protection against replay attacks, so if you can make someone send a ‘yes, it’s fine’ message once, then you can send it in response to a different question).
                                • There’s no padding, so message length (which provides a lot of information) is available.

                                This is without digging into the protocol. I’d love to read @soatok’s take on it. From a quick skim, my view is that it’s probably fine if your threat model is bored teenagers.

                                They recommend looking for servers that support HTTP upload, but this means any file you transfer is stored in plain text on the server.

                                Cross-Platform Applications (Desktop, Web, and Mobile)

                                True, with the caveat that they have different feature sets. For example, I tried using XMPP again a couple of years ago and needed to have two clients installed on Android because one could send images to someone using a particular iOS client and the other supported persistent messaging. This may be better now.

                                Multi-Device Synchronization (available on some servers)

                                 This, at least, is fairly mature. There are some interesting interactions between it and the security guarantees claimed by OMEMO.

                                Voice and Video Calling (available on most servers)

                                 Servers are the easy part (mostly they do STUN or fall back to relaying if they need to). There are multiple incompatible standards for voice and video calling on top of XMPP. The most widely supported is Jingle, which is, in truly fractal fashion, a family of incompatible standards for establishing streams between clients and negotiating a CODEC that both support. From the article, it sounds as if clients can now do encrypted Jingle sessions. This didn’t work at all last time I tried, but maybe clients have improved since then.

                                1. 8

                                  Strongly agree – claiming that XMPP is secure and/or private without mentioning all the caveats is surprising! There’s also this article from infosec-handbook.eu outlining some of the downsides: XMPP: Admin-in-the-middle

                                  The state of XMPP security is a strong argument against decentralization in messengers, in my opinion.

                                  1. 7

                                    Spam in XMPP is largely a solved problem today. Operators of open relays, servers where anyone can create an account, police themselves and each other. Anyone running a server that originates spam without dealing with it gets booted off the open federation eventually.

                                    Another part of the solution is ensuring smaller server operators don’t act as open relays, but instead use invites (like Lobste.rs itself). Snikket is a great example of that.

                                    but that doesn’t stop spammers from registering millions of domains and sending spam from any of them.

                                    Bold claim. Citation needed. Where do you register millions of domains cheaply enough for the economics of spam to work out?

                                    Domains tend to be relatively expensive and are easy to block, just like the IP addresses running any such servers. All I hear from server operators is that spammers slowly register lots of normal accounts on public servers with open registration, which are then used once for spam campaigns. They tend to be deleted by proactive operators, if not before, at least after they are used for spam.

                                    Google turned off federation because of spam and the core problems remain unsolved.

                                    That’s what they claim. Does it really seem plausible that Google could not manage spam? It’s not like they have any experience from another federated communications network… Easier for me to believe that there wasn’t much in the way of promotion to be gained from doing anything more with GTalk, so they shut it down and blamed whatever they couldn’t be bothered dealing with at the time.

                                    1. 3

                                       Your reasoning about most clients not supporting OMEMO is invalid, because no one cares about most clients: it’s all about market share. Most XMPP clients probably don’t support images, but that doesn’t matter.

                                       For replays, this may be dealt with by the double-ratchet algorithm, since the keys change fairly often. A replayed message would also have to make sense in an unknown conversation.

                                      Forward secrecy could be done with the double ratchet algorithm too.

                                      Overall OMEMO should be very similar to Signal’s protocol, which means that it’s quite likely the features and flaws of one are in the other.

                                      Conversations on Android also offers showing and scanning QR codes for validation.

                                       As for HTTP upload, that’s maybe another XEP, but there is encrypted upload with an AES key and a link using the aesgcm:// scheme (as you can guess: where to retrieve the file, plus the key).

                                      I concur that bootstrapping is often painful. I’m not sure it’s possible to do much better without a centralized system however.

                                       Finally, self-hosting leads to leaking quite a lot of metadata, because your network activity is not hidden in large amounts of network activity coming from others. I’m not sure that there’s really much more available by reading the XMPP metadata.

                                       Battery saving on mobile means the device needs to tell the server that it doesn’t care about status messages and presence from others, but who cares if that’s unencrypted to the server (on the wire, there’s TLS), since a) it’s meant for the server, and b) even if it were meant for other clients, you could easily spot the change in network traffic frequency. I mean, I’m not sure there’s a lot more that is accessible that way (not even mentioning that if you’re privacy-minded, you avoid stuff like typing notifications, and if you don’t, traffic patterns probably leak them anyway). And I’m fairly sure that’s the same with Signal for many of these.

                                      1. 3

                                        now no one uses XMPP so no one bothers spamming the network

                                         I guess you’ve been away for a while :) There is definitely spam, and we have several community groups working hard to combat it (and trying to avoid the mistakes of email: not doing server/IP reputation and blocking and all that).

                                        1. 3

                                           Cross-Platform Applications (Desktop, Web, and Mobile)

                                          True, with the caveat that they have different feature sets. For example, I tried using XMPP again a couple of years ago and needed to have two clients installed on Android because one could send images to someone using a particular iOS client and the other supported persistent messaging. This may be better now.

                                          Or they’ve also calcified (see: Pidgin). Last time I tried XMPP a few years ago, Conversations on Android was the only tolerable one, and Gajim was janky as hell normally, let alone on Windows.

                                          1. 3

                                            True, with the caveat that they have different feature sets. For example, I tried using XMPP again a couple of years ago and needed to have two clients installed on Android because one could send images to someone using a particular iOS client and the other supported persistent messaging. This may be better now.

                                            This was the reason I couldn’t get on with XMPP. When I tried it a few years ago, you really needed quite a lot of extensions to make a good replacement for something like WhatsApp, but all of the different servers and clients supported different subsets of the features.

                                            1. 3

                                              I don’t know enough about all the details of XMPP to pass technical judgement, but the main problems never were the technical decisions like XML or not.

                                               XMPP had a chance, 10-15 years ago, but either because of poor messaging (pun not intended) or not enough guided activism the XEP thing completely backfired and no two parties really had a proper interaction with all parts working. XMPP wanted to do too much and be too flexible. Even for people who wanted it to succeed, ran their own server, and championed it for use in the companies they worked for… it was simply a big mess. And then came the mobile disaster, with undelivered messages to several clients (originally a feature), apps using up too much battery, etc.

                                              Jitsi also came a few years too late, sadly, and wasn’t exactly user friendly either at the start. (Good people though, they really tried).

                                              1. 5

                                                I don’t know enough about all the details of XMPP to pass technical judgement, but the main problems never were the technical decisions like XML or not.

                                                XML was a problem early on because it made the protocol very verbose. Back when I started working on XMPP, I had a £10/month plan for my phone that came with 40 MB of data per month. A few extra bytes per message added up a lot. A plain text ‘hi’ in XMPP was well over a hundred bytes, with proprietary messengers it was closer to 10-20 bytes. That much protocol overhead is completely irrelevant now that phone plans measure their data allowances in GB and that folks send images in messages (though the requirement to base64-encode images if you’re using in-band bytestreams and not Jingle still matters) but back then it was incredibly important.

                                                XMPP was also difficult to integrate with push notifications. It was built on the assumption that you’d keep the connection open, whereas modern push notifications expect a single entity in the phone to poll a global notification source periodically and then prod other apps to make shorter-lived connections. XMPP requires a full roster sync on each connection, so will send a couple of megs of data if you’ve got a moderately large contact list (first download and sync the roster, then get a presence stanza back from everyone once you’re connected). The vcard-based avatar mechanism meant that every presence stanza contained the base64-encoded hash of the current avatar, even if the client didn’t care, which made this worse.

                                                A lot of these problems could have been solved by moving to a PubSub-based mechanism, but PubSub and Personal Eventing over PubSub (PEP) weren’t standardised for years and were incredibly complex (much more complex than the core spec) and so took even longer to get consistent implementations.

                                                The main lessons I learned from XMPP were:

                                                • Federation is not a goal. Avoiding having an untrusted admin being able to intercept / modify my messages is a goal, federation is potentially a technique to limit that.
                                                • The client and server must have a single reference implementation that supports anything that is even close to standards track, ideally two. If you want to propose a new extension then you must implement it at least once.
                                                • Most users don’t know the difference between a client, a protocol, and a service. They will conflate them, they don’t care about XMPP, they care about Psi or Pidgin - if the experience isn’t good with whatever client you recommend that’s the end.
                                                1. 2

                                                  XMPP requires a full roster sync on each connection, so will send a couple of megs of data if you’ve got a moderately large contact list (first download and sync the roster, then get a presence stanza back from everyone once you’re connected).

                                                   This is not accurate. Roster versioning, which means that only the (rare) roster deltas are transferred, is widely used and also specified in RFC 6121 (even though it’s not mandatory to implement; given how easy it is to implement, I am not aware of any mobile client that doesn’t use it).

                                                  1. 1

                                                    Also important to remember that with smacks people are rarely fully disconnected and doing a resync.

                                                     Also, the roster itself is fully optional. I consider it one of the selling points and would not want to do IM without it, but nothing prevents you.

                                                    1. 1

                                                      Correct.

                                                       I want to add that it may be a good idea to avoid XMPP jargon, to make the text more accessible to a wider audience. Here ‘smacks’ stands for XEP-0198: Stream Management.

                                                2. 2

                                                  XMPP had a chance, 10-15 years ago, but either because of poor messaging (pun not intended) or not enough guided activism the XEP thing completely backfired and no two parties really had a proper interaction with all parts working. XMPP wanted to do too much and be too flexible.

                                                   I’d argue there is at least one other reason. XMPP on smartphones was really bad for a very long time, partly due to limitations of those platforms; this only got better later. For this reason, having proper mobile messaging used to require spending money.

                                                   Nowadays you “only” need to pay a fee to put stuff into the app store and, in the case of iOS development, buy an overpriced piece of hardware to develop on. Oh, and of course deal with a horrible experience there and risk your app being banned from the store whenever they feel like it. But I’m drifting off. In short: doing what Conversations does used to be harder or impossible on both Android and iOS, until certain APIs were added.

                                                   I think that dealt it a pretty big blow just when it started to do okay on the desktop.

                                                  I agree with the rest though.

                                                3. 2

                                                   I saw a lot of those same issues in the article. Most people don’t realize (myself included until a few weeks ago) that when you stand up Matrix, it still uses matrix.org’s keyserver. I know a few admins who are considering standing up their own keyservers and looking into what that would entail.

                                                  And the encryption thing too. I remember OTR back in the day (which was terrible) and now we have OMEMO (which is ….. still terrible).

                                                  This is a great reply. You really detailed a lot of problems with the article and also provided a lot of information about XMPP. Thanks for this.

                                                  1. 2

                                                    It’s not encrypting any metadata (i.e. the stuff that the NSA thinks is the most valuable bit to intercept), this is visible to the operators of both party’s servers. You can’t encrypt presence stanzas (so anything in your status message is plaintext) without breaking the core protocol.

                                                    Do you know if this situation is any better on Matrix? Completely honest question (I use both and run servers for both). Naively it seems to me that at least some important metadata needs to be unencrypted in order to route messages, but maybe they’re doing something clever?

                                                    1. 3

                                                       I haven’t looked at Matrix, but it’s typically a problem with any federated system: at minimum, the envelope that tells you which server a message needs to be routed to has to be public. Signal avoids this by not having federation and by using their sealed-sender mechanism to prevent the single centralised component from knowing who the sender of a message is.

                                                      1. 1

                                                        Thanks.

                                                      2. 1

                                                         There is a bit of metadata leaking in Matrix, because of federation. But it’s something the team is working to improve.

                                                      3. 2

                                                        Fellow active XMPP developer here.

                                                         I am sure you know that some of your points, like metadata encryption, reflect a deliberate design tradeoff; systems that provide full metadata encryption have other drawbacks. Other “issues” you mention are generic and apply to most (all?) cryptographic systems. I am not sure why XEP-0384 needs to mention forward secrecy again, given that forward secrecy is provided by the building blocks the XEP uses and is discussed there, i.e., https://www.signal.org/docs/specifications/x3dh/. Some of your points are also outdated and no longer correct. For example, since the newest version of XEP-0384 uses XEP-0420, there is now padding to disguise the actual message length (XEP-0420 borrows this from XEP-0373: OpenPGP for XMPP).

                                                        From a quick skim, my view is that it’s probably fine if your threat model is bored teenagers.

                                                         That makes it sound like your threat model shouldn’t be bored teenagers. But I believe we should also raise the floor for encryption, so that everyone is able to use a sufficiently secured connection. Of course, this does not mean that raising the ceiling shouldn’t be researched and tried as well. But we, that is, the XMPP community of volunteers and unpaid spare-time developers, don’t have the resources to accomplish everything in one strike. And, as I said before, if you need full metadata encryption, e.g., because you are a journalist in a repressive regime, then the currently deployed encryption solutions in XMPP are probably not what you want to use. But for my friends, my family, and me, it’s perfectly fine.

                                                        They recommend looking for servers that support HTTP upload, but this means any file you transfer is stored in plain text on the server.

                                                        That depends on the server configuration, doesn’t it? I imagine at least some servers use disk or filesystem-level encryption for user-data storage.

                                                        For example, I tried using XMPP again a couple of years ago and needed to have two clients installed on Android because one could send images to someone using a particular iOS client and the other supported persistent messaging. This may be better now.

                                                         It got better. But yes, this is the price we pay for the modularity and extensibility of XMPP. I also believe it isn’t possible to have it any other way. Unlike other competitors, most XMPP developers are not “controlled” by a central entity, so they are free to implement what they believe is best for their project. But there is also a strong incentive to implement, for compatibility, the extensions that the leading implementations support. So there are some checks and balances in the system.

                                                      1. 5

                                                         Even though I’m not a game developer, that project makes me really happy. I think it does an excellent job of being a Go project that largely sticks to the Go philosophy while making some practical decisions that might be considered going against it.

                                                         There are many, sometimes widely used, Go projects out there, often ones calling themselves frameworks, where, when you take a glance at the source code or even just the documentation, you see that they actually want to be written in C++, Java, or JavaScript. There’s nothing wrong with those languages, but why program in Go then? I’m not going to name any, but when network protocols and big parts of the net standard library are needlessly reinvented, when interfaces are not used, when something is named send instead of write and works slightly differently so that people have to write wrappers, something has gone wrong with the project.

                                                         I know there are trends, and Go (and Rust and Zig and others) is hip right now, so lots of people currently use Go, but I always have to think that these are probably the people complaining that Go doesn’t do things like some other language.

                                                         Given that this is something I have seen a lot in game “engines” too (with web frameworks being the other big offender), I really enjoyed watching Ebiten develop like it did, sticking to being Go rather than becoming its own thing (a framework) that barely resembles regular Go development. I’m sure most people here can think of languages and projects that are only “technically” still using the language/implementation they are built on.

                                                         A lot can be learned from Ebiten’s example. It seems to have struck a great balance between being perfect and being practical. I also like where the boundary between C(++) and Ebiten (and Oto) is drawn. It’s just a joy to follow the project’s development, and as mentioned I do that without even developing games, though it really makes me itch to try some day.

                                                        So in short: Great work!

                                                        1. 5

                                                          The CLOUD Act isn’t black magic; it can only force Signal to turn over the data they actually possess. Which is, as demonstrated by a consistent paper trail of court records, almost nothing.

                                                           You do realize that the NSA themselves stated they don’t need the content; they only need to know when somebody talked to somebody else, i.e. IP traffic. And that people are bombed with drones based on that metadata? That doesn’t really help Threema, but it does make a case for hosting your stuff outside the US (and the CLOUD Act) if possible.

                                                          1. 15

                                                            Sure. If your threat model is “The NSA is going to bomb me if they know who I’m talking to”, Cwtch is better suited because its goal is metadata resistance.

                                                            1. 1

                                                              Have you looked into fun projects such as Vuvuzela or Pond? :-)

                                                              1. 2

                                                                 The Pond README recommends using Signal instead.

                                                          1. 17

                                                            This title rubs me the wrong way. You can be a successful IT person (what even is that?) without knowing any of this. You can even use OpenBSD without caring about any of the history. It is mildly interesting, but not something anyone must know.

                                                             I am comparing it to the famous Unicode article, which has a ton of information everyone should at least have heard about: https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/

                                                            1. 8

                                                              There seems to be a lot of wishful thinking in the BSD communities. In many ways I wish that (a) BSD was the free Unix everyone uses, but Linux is, for better or worse, and is the only Unix that “every IT person” needs to know something about.

                                                              1. 3

                                                                I think it depends on the BSD; I’ve found NetBSD people to be very relaxed about it, but OpenBSD to be almost cult-like. FreeBSD is somewhere in between; a legacy that never hit its full potential, but with the zealotry tempered by corporate success.

                                                              2. 6

                                                                You can be a successful IT person without even knowing about OpenBSD’s existence. Not to knock OpenBSD, but it’s not widely relevant.

                                                                That’s not to say it’s not being used – but if I go to Indeed.com and type in OpenBSD + my city, I get zero results. Expanded to my entire state, still zero results. More than 1700 results for Linux. 2,650 for Windows, but I expect some of those are false positives – like the Sales Rep job for Champion Window… (NetBSD also 0, FreeBSD gets 6.)

                                                                If I search all of California, I get 8 jobs that match OpenBSD. Eight. (Nearly 15,800 with Linux.)

                                                                1. 2

                                                                  FWIW, I’ve hired about a half dozen people who wound up working with OpenBSD for some non-trivial portion of their job. I never saw fit to specify that in a job listing. I just listed UNIX. (OpenBSD was not the only UNIX they were dealing with by any stretch.)

                                                                  1. 2

                                                                    Am I right to guess that prior OpenBSD experience was not considered an important hiring criterion and they picked it up on the job?

                                                                    1. 2

                                                                      Something like that. We wanted people who had learned a couple of (*BSD, Solaris, AIX, Irix, HP-UX) because we had all of these in our environment. And new weird stuff came in all the time. Being able to pick stuff up on the job was perhaps the most important hiring criterion.

                                                                      Listing UNIX turned out to be a better filter for that than enumerating all of the things we were interested in.

                                                                      1. 2

                                                                        I am curious: what environment has such a Unix zoo? My first job was in a Sun shop, so we had Solaris 8/9 on SPARC, and Linux (SuSE) was getting rolled out to x86 workstations too. We had no other Unix though.

                                                                        1. 2

                                                                          We were a medium sized shop that wrote bespoke software in addition to making sure our niche products ran at customer sites.

                                                                          This was before many customers could readily get you long-term, low-friction access to their environments, so we’d replicate them in-house. And letting us host a service was out of the question :)

                                                                          We dealt with a lot of banks (or credit card issuers, which were often but not always banks), several insurance companies (traditional and medical), one lottery operator, a couple of governments, a couple of retail companies, a few advertising companies, and several ISPs. The common thread was that each had their entrenched UNIX systems, we needed to fit in, and many needed to use oddball peripherals (especially for managing cryptographic keys or for accelerating cryptographic operations) that we knew how to integrate.

                                                                          It’s funny that you called it a zoo. We used to call it that too… and I had about 4 racks of equipment for it where I posted lovingly crafted replicas of these signs.

                                                                  2. 1

                                                                    You can be a successful IT person without even knowing about OpenBSD’s existence. Not to knock OpenBSD, but it’s not widely relevant.

                                                                    As an OpenBSD user I’ve come to realise this. I used to mention it on my resume. But not anymore. Quite frankly I don’t think many of the people who see my resume give a shit about it :)

                                                                    1. 1

                                                                      You can be a successful IT person without even knowing about OpenBSD’s existence. Not to knock OpenBSD, but it’s not widely relevant.

                                                                      To be fair, you could say that about anything though. You can be a successful IT person without even knowing about HTML/JavaScript/HTTP/Assembler/Floating Point Numbers/Compiler/Character Encodings/Unicode/C/Java/Git/Linux/macOS/…

                                                                      I have certainly seen successful IT people not knowing one or more of them. Sure, some jobs require knowledge about those, but then some jobs require knowledge about OpenBSD too.

                                                                      So I agree that the title is silly. However I’d argue the whole phrase isn’t much better.

                                                                      But then some people use that phrase because it’s silly, or at least famous, just as “The Good, the Bad and the Ugly” and similar titles are common. But at least that one is usually less wrong.

                                                                  1. 1

                                                                    Every time I’m on a Linux machine I miss that feature, but I heard it’s hard to port. It also hasn’t been ported to OpenBSD, because of the security implications mentioned at the end of the article.

                                                                    So I’m hoping for an OpenInvoke that’s more secure and portable. Oh, and has nicer syntax.

                                                                    1. 6

                                                                      I have been working really hard over the past years to make Perl less painful to work with. IMO the main problem with Perl is the same one JavaScript had prior to TypeScript: too much flexibility created unbounded complexity.

                                                                      I have tried lobbying for a TypePerl superset of Perl, similar to TypeScript for JavaScript, but the idea was not well received by our Perl experts. Without it, creating good developer tooling for Perl, like a proper LSP, would be really hard, and thus the barrier to entry remains high, killing the growth of the language.

                                                                      1. 3

                                                                        JavaScript tooling in VS Code is built on top of TypeScript technology. It may be worth building a typed Perl even if the majority of folks don’t use it.

                                                                        1. 1

                                                                          Why not build it instead of lobbying?

                                                                          That said, it kind of feels wrong. Not the idea, I’d still suggest doing it. However, Perl is about flexibility, about TIMTOWTDI, and about borrowing concepts from natural languages. Making something more rigid in that sense feels like taking the core of Perl away from it. Of course that’s not true if you just pick some parts. I’d be curious about it. Maybe some ideas from Raku would also help.

                                                                          I think PHP, Perl, Ruby and to some degree Python and JavaScript, and others like Smalltalk, started out with the idea of bringing programming languages closer to natural languages. A trend that has since been reversing. But honestly, I would be very curious how such a concept would be approached in today’s world, where things like implicit typing exist even in systems languages.

                                                                          There are lots of concepts that could be stolen. On top of that there are lots of great programming language backends basically ready to be used, and they are used, but not so many are going in that, for lack of a better word, “very high level” direction: “copying natural languages”, “do what I mean” rather than “do what I say”.

                                                                          I also think there’s a general trend towards languages being optimized for big projects rather than scripting. Scripting therefore feels a bit like that thing people did in the 90s. I think taking the learnings from the past few decades could be a big factor here. I just maybe wouldn’t go so much for copying languages, but rather go after their core concepts, or call it philosophies.

                                                                          1. 1

                                                                            Why not build it instead of lobbying?

                                                                            Because it’s not the type of job that makes sense to build alone by yourself; it’s one to discuss with teammates and colleagues, to collaborate on, and for the business to invest engineering hours into.

                                                                            Also, I was not working that close to Perl itself, but a lot more around CI/CD, code health, and the code review system around a big Perl code base. So I don’t have the language expertise to properly evaluate whether a solution is right.

                                                                            feels like taking the core of Perl away from it

                                                                            Yes, some of my colleagues share this sentiment as well.

                                                                            But leaving Perl in its current state makes it significantly harder to build concrete developer tooling (LSP, LSIF, etc.) and static analysis around it. It’s also much harder to read, let alone write, tests. Programmers from different backgrounds also have a much harder time onboarding onto a Perl code base.

                                                                            Overall, there will be tradeoff for sure.

                                                                            But if you look at Python, Ruby, JavaScript or even Erlang, you will find a movement of big companies (Facebook, Stripe, Microsoft, …) applying gradual typing to their repos to improve code health and developer experience. It looks like the tradeoffs have been working out for those languages, and there is a high chance it would also be successful with TypePerl.

                                                                        1. 9

                                                                          I love the honesty on their PinePhone Pro page. Truly refreshing.

                                                                          1. 4

                                                                            You should check out their other product pages. They’re all like that :-)

                                                                          1. 3

                                                                            I wonder if it would not have made sense to go after these goals by creating a standard that is a subset of HTTP (1.0 or 1.1), maybe doing some signaling using a header or a MIME type that tells browsers, or browser extensions, to apply strict rules and maybe change how and what things are rendered. That way it would still remain compatible with a lot of software already built to interact with HTTP, and switching back and forth between protocols might be easier from a user’s perspective.
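
                                                                            Purely as an illustration of the signaling idea (the header name below is invented, not part of any real standard), a server could flag responses as belonging to the strict subset, and conforming clients or extensions could key off that:

                                                                                package main

                                                                                import (
                                                                                    "fmt"
                                                                                    "net/http"
                                                                                )

                                                                                func handler(w http.ResponseWriter, r *http.Request) {
                                                                                    // Hypothetical header asking browsers/extensions to apply the
                                                                                    // strict rendering rules of the imagined HTTP-subset standard.
                                                                                    w.Header().Set("X-Strict-Subset", "1")
                                                                                    w.Header().Set("Content-Type", "text/plain; charset=utf-8")
                                                                                    fmt.Fprintln(w, "Served over plain HTTP, flagged for strict rendering.")
                                                                                }

                                                                                func main() {
                                                                                    http.HandleFunc("/", handler)
                                                                                    http.ListenAndServe(":8080", nil) // error handling omitted for brevity
                                                                                }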

                                                                            1. 8

                                                                              You don’t even have to make anything up. HTML 2.0 is an already existing spec that solves all the problems of Gemini in a much more useful way and still includes stuff like Unicode support. The entire purpose of Gemini is to be exclusive and cliquish; if you read the original rationale post I think this is pretty clear.

                                                                            1. 4

                                                                              I have played with Loki and think it’s a pretty good model. For small use cases you can totally just use the filesystem storage option. You only have to upgrade to an object store when your scale requires it. It does require some configuration on the ingestion side to specify the fields you are going to index on, but that doesn’t seem like that big of a problem to me.

                                                                              1. 1

                                                                                It may have been me configuring it poorly (probably), but my experience with Loki in a small-scale setting has been that it will do terrible, terrible things to your inodes when running with filesystem storage.

                                                                                Just something to look out for; it’s worth keeping an eye on. Besides the “Oops! All inodes!” issue, Loki+Grafana is a pretty nice setup.

                                                                                Related: https://github.com/grafana/loki/issues/364

                                                                                1. 2

                                                                                  I have not run into that issue in my setup. It may be a result of the amount of logs I’m pulling in, which is actually quite small, or something else to do with my setup.

                                                                                  1. 1

                                                                                    It also has to do with the file system you are using, so it might partly be about using the right tool for the job. But it would certainly make sense to structure the files in a better way, regardless.

                                                                                1. 1

                                                                                  I think Scheme and Lisp are more the exception to the rule, because they make you think in terms of the lambda calculus, which can be really eye-opening, and I think one should at least have looked into that anyway.

                                                                                  As someone who started out with higher-level languages, I’d argue a bit for lower-level languages, because they are on average simpler (but maybe not easier), and being lower level they tend to make you learn about computers in a more practical manner.

                                                                                  A lot of the CS you come across you may have learned in a theoretical way, and I think if you cover it on the side by understanding a language, you won’t at some point waste a lot of time mentally mapping that theory to practice.

                                                                                  But whatever you do, you cheat yourself if you only end up learning some library instead of the language. With lower-level languages you might still need to grasp more non-library stuff, but it depends.

                                                                                  So whatever you do, the important thing is to not go beyond the fundamentals without grasping them, as in really understanding them and not just “makes sense”. The reason is that if you build on misconceptions there, it can be very hard and annoying to unwind them, and at times it can really kill your motivation to learn anything. Keep in mind it’s all Turing machines, no matter how far up you are.

                                                                                  But minds work differently so what works for you might not work for another person.

                                                                                  1. 2

                                                                                    Next up: “Endless try/catch”?

                                                                                    You really should handle errors in every language. In Go it’s just more explicit and without special constructs. I wish more languages had that, to be honest. It would probably lead to a lot less bad code, where people catch the wrong thing because of multiple statements in one try block, or essentially ignore errors because they never think about them.

                                                                                    But also, if you don’t like it, just use a different language? There are huge numbers of languages doing error handling differently; choosing the rare exception sounds like a really odd thing to do. What’s the point in choosing, out of the thousands of languages there are, the one you disagree with?

                                                                                    If it’s “a constant struggle” as the article mentions it seems like a strange decision to stick with it.

                                                                                    1. 2

                                                                                      If you can handle all errors the same way, try/catch makes that easier:

                                                                                      try {
                                                                                          actionA();
                                                                                          actionB();
                                                                                          actionC();
                                                                                      } catch {
                                                                                        // handle error
                                                                                      }
                                                                                      
                                                                                      1. 6

                                                                                        Sure, but I would not exactly call splitting it up hard, and in Go, if you really have that case a lot, you could handle errors just like that. It’s easy to write such a wrapper if it bothers you that much (see the sketch below).
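
                                                                                        A minimal sketch of such a wrapper (illustrative only, not from any library) could look like this:

                                                                                            package main

                                                                                            import (
                                                                                                "errors"
                                                                                                "fmt"
                                                                                            )

                                                                                            // runAll executes the given actions in order and stops at the
                                                                                            // first error, mirroring what a single try/catch around the
                                                                                            // calls does.
                                                                                            func runAll(actions ...func() error) error {
                                                                                                for _, action := range actions {
                                                                                                    if err := action(); err != nil {
                                                                                                        return err
                                                                                                    }
                                                                                                }
                                                                                                return nil
                                                                                            }

                                                                                            func main() {
                                                                                                err := runAll(
                                                                                                    func() error { return nil },
                                                                                                    func() error { return errors.New("actionB failed") },
                                                                                                    func() error { fmt.Println("never reached"); return nil },
                                                                                                )
                                                                                                if err != nil {
                                                                                                    fmt.Println("handle error:", err) // one place, like catch
                                                                                                }
                                                                                            }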

                                                                                        In addition, looking at real projects, I have seen more than once that this pattern was used a lot and the assumption that you can/want to handle all errors the same way either was wrong or became wrong.

                                                                                        Depending on the language you might easily catch too much (JavaScript being a great example here), or, especially if logging or similar is happening, you usually end up wanting more context with your error anyway.

                                                                                        Of course it depends on what exactly you are doing, but at least in my experience splitting try/catch up is something I do more frequently than combining error handling.

                                                                                        That’s also a bit of what I mean by choosing another language. But it’s also really about project sizes. If you have a bigger project you might add helpers to do what makes sense for the project anyway, maybe doing something with the errors/exceptions, so you end up extracting that error value from the catch block and essentially end up the same as in Go after that helper function.

                                                                                        Of course for tiny, let’s say “scripts”, that might be a lot, but if it’s really a tiny thing I think in many cases people completely ignore error handling.

                                                                                        Don’t get me wrong though. Of course there’s a reason try/catch exists, but what gets me is when people choose a language that does a few things differently and then complain about that language having a faulty, quirky design because it is not pretty much exactly like hundreds of other languages. It’s a valid design decision to keep the language simple by treating errors as just another value/type, interacted with via the same means as all the other variables you have.

                                                                                        If someone programs in Go, or any other language, and is unhappy about it not being Java (or any other language), why not use Java (or any other language)?

                                                                                        It’s not like you have to use Go because it’s by some people currently employed by Google or something.

                                                                                        And I know there are situations where for one reason or another you have to use a certain language, but honestly that’s just part of the job, and often you can get around it. Ranting about language specifics and being like “they are doing it wrong”, mostly because they are doing it differently from your language of taste, feels like something going nowhere. I mean, it’s also not like nobody ever thought of try/catch, and I’m pretty sure that some of the Go core team developers not only know about try/catch but even have the knowledge to implement it if they wanted.

                                                                                        So what’s the point in being the millionth or so person to say they prefer try/catch over Go’s way?

                                                                                        There are other languages, some of them sharing other things with Go. LLVM spawned huge numbers of languages, in addition to the huge number of other languages. Wouldn’t time be better spent building whatever you are missing there and being happy and productive, instead of just repeating dislike and calling other designs “quirks”?

                                                                                        Unless that’s your hobby and what you want to be doing, of course. It just feels a bit redundant on lobste.rs.

                                                                                        1. 3

                                                                                          In addition, looking at real projects, I have seen more than once that this pattern was used a lot and the assumption that you can/want to handle all errors the same way either was wrong or became wrong.

                                                                                          I completely agree. I am personally just fine with Go’s error handling. I just wanted to point out that the contrived Go example would not translate into “endless try/catch”.

                                                                                    1. 7

                                                                                      If you follow the original bug report, its supposed fix was never applied.

                                                                                      Apparently the systems with issues were running an old version compared to the then-current stable version (0.6.x vs 0.7.x), where this issue had been alleviated by using a PRNG.
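
                                                                                      To get a feel for the gap (a userspace Go sketch purely for illustration; the actual issue and fix live in kernel code), compare querying a CSPRNG on every call with a fast PRNG seeded once from it:

                                                                                          package main

                                                                                          import (
                                                                                              "encoding/binary"
                                                                                              "fmt"
                                                                                              "time"

                                                                                              crand "crypto/rand"
                                                                                              mrand "math/rand"
                                                                                          )

                                                                                          func main() {
                                                                                              buf := make([]byte, 8)
                                                                                              const n = 1_000_000

                                                                                              start := time.Now()
                                                                                              for i := 0; i < n; i++ {
                                                                                                  crand.Read(buf) // CSPRNG: entropy-backed, comparatively expensive
                                                                                              }
                                                                                              fmt.Println("crypto/rand:", time.Since(start))

                                                                                              // Seed a fast, non-cryptographic PRNG once from the CSPRNG.
                                                                                              crand.Read(buf)
                                                                                              r := mrand.New(mrand.NewSource(int64(binary.LittleEndian.Uint64(buf))))

                                                                                              start = time.Now()
                                                                                              for i := 0; i < n; i++ {
                                                                                                  _ = r.Uint64() // plain PRNG: just arithmetic, much cheaper
                                                                                              }
                                                                                              fmt.Println("math/rand:", time.Since(start))
                                                                                          }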

                                                                                      And a lesson here - always follow up your bug reports :)

                                                                                      1. 19

                                                                                        Marked closed by “stale bot”. I think one could argue that the lesson is not to use a stale bot. ;-)

                                                                                        On a more serious note: for non-trivial, non-obvious (crashes, etc.), hard-to-reproduce or simply more edge-case bugs it can be really annoying when all the information provided, maybe over many years, is “hidden away” in a closed bug report, and the work to analyze it over the course of a week was completely meaningless.

                                                                                        Maybe, if “stale” is really necessary, it should be a separate state, one that people don’t actively look at but may want to include in searches to see if others have encountered the issue and maybe even already have a solution, like in this case, or at least a workaround.

                                                                                        Following up without new findings, just to prevent a bot from doing something silly and usually pinging everyone for no reason, doesn’t seem like the most sensible approach.

                                                                                        1. 5

                                                                                          It’s interesting that the cryptographically secure random number generator is so expensive. I believe the FreeBSD in-kernel Fortuna implementation is very cheap to query, all of the expensive operations happen when you add entropy to the pool. The first version acquired a lock and added things to the entropy pool every time an interrupt was received (this version was not merged - you can imagine the perf hit!) but it should be pretty cheap to query.

                                                                                          1. 3

                                                                                            You are right. There was an actual code change suggested, but it never got merged: https://github.com/openzfs/zfs/pull/6544

                                                                                            1. 1

                                                                                              Hmm, so you’re saying in current versions, ZFS still runs in the background and wastes CPU and power even when not in use, just with a faster RNG so it doesn’t waste as much CPU and power?

                                                                                              Doesn’t sound very fixed to me

                                                                                            1. 10

                                                                                              I can’t believe a company like Salesforce doesn’t eye licenses like a hawk. That’s really weird, and poor form.

                                                                                              1. 19

                                                                                                It doesn’t surprise me at all. The only places I have worked that cared about licences of dependencies were banks. Everywhere else, using a library has been entirely at the programmer’s discretion, and the programmer usually does not care.

                                                                                                This is how OpenWRT was born.

                                                                                                1. 3

                                                                                                  Maybe it’s a “software industry” thing? All three telecommunications businesses I’ve worked for have been very stringent about licensing and choosing the right licenses for our code and imported libraries etc.

                                                                                                  1. 7

                                                                                                    I think that mindset of:

                                                                                                    It is available on the internet, so it must be free to use

                                                                                                    is quite popular outside of the software industry as well. Unfortunately people are quite hard to educate about intellectual property.

                                                                                                    1. 2

                                                                                                      More of a company-size thing. At HP, unusual license approvals had to come from the legal dept. And that’s for a project which has MIT, Apache 2 and a few others pre-approved. I’m sure there were other projects which needed confirmation of everything.

                                                                                                    2. 1

                                                                                                      Google cares very much about licenses.

                                                                                                    3. 10

                                                                                                      I once told a room of OSS policy wonks that my Big Tech Co had no one in charge of licensing, or knowing what we use, or checking for compliance. They were flabbergasted, as though this were not the norm. I have worked at many sizes of company; it was always the norm. You want a dependency, you add it to the Gemfile or whatever and push, the end.

                                                                                                      1. 3

                                                                                                        In my experience unless an engineer makes the company lawyers aware of the risk here they won’t even know to think about it. I make a point of raising it everywhere I work and institute some amount of review on my teams for licensing. But it’s not even on the radar of most company lawyers.

                                                                                                        1. 1

                                                                                                          I worked at a company that had a policy, but there was no formal enforcement mechanism. Devs were supposed to check, but this didn’t happen consistently. As a practical matter, though, it really wasn’t a problem. Just before I left the lawyers started asking questions and I actually built the initial version of an enforcement system. As it turned out, basically all of our dependencies were Apache, BSD, or MIT licensed (IIRC).

                                                                                                        2. 2

                                                                                                          Keep in mind licensing isn’t the only part though.

                                                                                                          However, adding monkey patching to Go is not a reasonable goal while respecting anything close to “best practices.”

                                                                                                          If you start out with something unreasonable, such as working against the very programming language you use, why would you get into thinking about its license?

                                                                                                          If a company like Linksys didn’t care about licensing, why would a company like Salesforce?

                                                                                                        1. 1

                                                                                                          About being happy that Windows didn’t win: I hope this isn’t taken as a purely grumpy rant, but I think we see concepts paralleling the complexities of the Windows world start to take hold in the way people and companies use Linux today. Large parts of cloud environments and technologies, especially those from or sponsored by Red Hat and the CNCF, resemble mindsets that at least in the 2000s were more predominant in the Microsoft/Windows ecosystem and not in the BSDs, Solaris, and most Linux distributions (maybe to some degree in RHEL and SLES, but I think even there it started later).

                                                                                                          Some of the developments seem like a rerun. Also, looking at cloud providers: first the virtual server was reinvented as compute instances, and then managed web hosting got reinvented as serverless. Cloud functions come close to CGI and PHP.

                                                                                                          The Linux-on-the-desktop movement pushed a lot of copying from Windows, and now even servers frequently use D-Bus (the D stands for desktop). At the same time, Windows went in the opposite direction: starting with Interix around 2000, Windows became POSIX compatible, at least to some degree.

                                                                                                          I also think targeting UNIX/POSIX was kind of replaced with targeting Windows/Linux (mostly Ubuntu)/macOS, then maybe sometimes accepting patches for others. Nobody really expects anything else. Docker goes a step further, usually even kicking the largest part of Linux out. But maybe that will get replaced with WASI at some point; it might be the same solution when it’s all the same anyway.

                                                                                                          In other words, I think it’s easy to fool yourself into thinking Unix or even Linux won. If you only speak of system calls that might be true to some degree, but I don’t think that’s how it’s meant.

                                                                                                          1. 5

                                                                                                            Wifi support (or lack thereof) is arguably the major weakness of FreeBSD when it comes to desktop. Glad to read that there are plans to improve things on that front.

                                                                                                            Server-side, I adopted FreeBSD a few years ago, and I am very happy with it. Solid, coherent, understandable, well-documented. Keep it up, and keep it simple!

                                                                                                            1. 1

                                                                                                              What’s the init system story like? I heard the BSDs use a shell-script-primary model; this surprised me, considering how well-integrated and non-kludgy pretty much everything else about BSD is, at least compared to Linux. As someone running BSD server-side, do you feel there are any downsides that would make InitWare or systemd a better-streamlined fit for server maintenance? Or not much difference either way?

                                                                                                              1. 2

                                                                                                                Not the starter of that thread, but as someone who has used FreeBSD in production for well over a decade now, I want to point out that I think it’s common to imagine rc.d to be like the previously common init.d on Linux, when it really is not.

                                                                                                                rc.d is a lot closer to what, for example, OpenRC has been offering on Linux. It’s by no means the same, but it’s certainly more streamlined, to use the term you used. An rc.d script almost feels like writing a config file, but gives you more flexibility when needed. It also has some nice benefits, like being extensible: in rc.conf you can not only enable or disable services, but also create custom options, for example turning a slew of arguments to append to a command into a nice option that sets a listen address, like daemonname_address=localhost. Then there’s also a nice command line utility called daemon that can help with typical tasks like deciding where logs go, what the restart behavior is, which user to run a service as, and so on.

                                                                                                                All of that makes a new init system way less of a priority than it might have been on Linux distributions using init.d shell scripts. I really don’t think they have much in common other than being init systems written in shell. What you get with rc.d is way more structured, especially if you take a closer look, like one would have to do when starting with systemd or any other init system as well.

                                                                                                                1. 2

                                                                                                                  Yes, FreeBSD’s configuration is based on text files and scripts. AFAIK, there have been some proposals for a more unified approach (more à la launchd than systemd): for example, read “The Init System Debate Relaunched” article in the Jul/Aug 2016 issue of the FreeBSD Journal.

                                                                                                                  I haven’t mentioned that my experience with FreeBSD is not at a professional level—just for home servers. I like the feeling of being able to know exactly what the system is doing. With systemd, I feel like there is always something that I don’t understand. But that’s obviously just a consequence of my biased experience.

                                                                                                                  1. 2

                                                                                                                    It’s perhaps more accurate to say that the FreeBSD service management system is mostly implemented in shell scripts. For a long time, shell was the only memory-safe programming language that shipped in the base system (now Lua does as well), which meant that it was a good choice for anything that’s security-critical. rc.subr contains a lot of generic infrastructure for service management, including passing options to services, defining dependencies, and so on. Most RC scripts are almost entirely declarative, but because they’re also shell scripts the ones that need to do something custom are able to fall back to a more flexible programming language.

                                                                                                                1. 1

                                                                                                                  I have certainly gone through dreaming about code, and fixed it the next morning according to the findings. That happened a couple of times, and I’m still baffled by how little bits of large code bases seem to be able to stick around somewhere. I certainly wouldn’t have been able to recite the code, and even dreaming it felt like I had to look up whether that’s actually what was happening. Haven’t had that in a long time though. Probably for the better.

                                                                                                                  But I don’t think that’s what the Tetris effect/syndrome is about. Reading the article, it seems to be about applying thought patterns to other things. Maybe for programming that would be more like seeing the work at a restaurant or store as an algorithm, thinking about daily chores that need to be done as conditions and executing them as functions.

                                                                                                                  Something I’ve certainly done: at a time when, work-wise, I mostly had concurrency and parallelism on my mind, I was overly worried about doing things concurrently in an effective manner, using up all the time that would otherwise be idle waiting.

                                                                                                                  I only stopped that when I realized your brain isn’t usually idle waiting: during “relaxation” a lot of important stuff is going on, which is partly maintenance but also sparks creative processes, etc., and that seems more worthwhile than many other things that could be achieved by using up that time. I think it also helps with being more focused.