1. 2

    the irony is that many crustaceans desperately need to read this article and bear it in mind.

    1. 2

      Yeah we’re not HackerNews, but I’ve seen plenty of right over kind here. Definitely should be read by everyone. Repeatedly.

      1. 2

        I still think we’re better than some other people in the same space:

        https://news.ycombinator.com/item?id=21494483

        I personally try to call out people who are more interested in being right than being kind.

      1. 1

        Would be interested to hear why they decided to whip up a new language; it must have some salient features.

        1. 1

          I think Dfinity is supposed to compete against Ethereum, so presumably it needs distributed features. And anything is better than Solidity.

          1. 2

            We’re not competing with Ethereum. But you’re right about the distributed features part – we needed an approachable language that has the right semantics for the model DFINITY network exposes.

        1. 3

          Who generates the key in this case - the host or the token?

          1. 4

            Token.

            1. 4

              Thx. I’d much prefer to generate it on an (offline) computer.

              1. 3

                On the contrary, generating on the token is safer (if it’s implemented correctly — YubiKey had a bug in a chip once), since the key can’t be extracted from it.

                1. 3

                  YubiKey had a bug in a chip once

                  Not just once.

                  That’s why I think an air-gapped computer running an open source crypto implementation is better.
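
                  For what it’s worth, the usual shape of that workflow with GnuPG and an OpenPGP-capable token is roughly this (a sketch; KEYID and the backup handling are placeholders):

                  # on the air-gapped machine: generate the key in software
                  gpg --full-generate-key
                  # back up the secret key first, since keytocard moves it rather than copying
                  gpg --export-secret-keys --armor KEYID > secret-backup.asc
                  # then push the key onto the token
                  gpg --edit-key KEYID
                  gpg> keytocard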

                  1. 3

                    I think one does not exclude the other. E.g. ESP has eFuses for storing encryption keys, which can be read-protected (only readable by the hardware encryption support): https://github.com/espressif/esptool/wiki/espefuse

                    There are probably more secure elements that support this mode of operation.
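
                    For the ESP case it looks roughly like this, if I remember the tool correctly (a sketch; check the linked wiki for the real invocation):

                    # burn a key into the flash-encryption eFuse block; espefuse
                    # read-protects it so only the hardware encryption can use it
                    espefuse.py burn_key flash_encryption flash_encryption_key.bin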

                    1. 3

                      there is some concern that one has to “trust” Infineon, the maker of the cpu/chip that:

                      • there is no NIST/NSA style backdoor for their generated keys
                      • there is no way to exfiltrate or extract a private key without your knowledge (e.g. confiscated & copied by an evil agent at airport security, but on-key validation doesn’t show that this happened)

                      That said, I’m happier with an ECC yubikey than a filesystem password protected private key.

              1. 1

                Epic hacking! How long did this take you to do?

                1. 2

                  uh, good question. I don’t track time :) but roughly:

                  • debugging the i2c-hid driver bug and the screen brightness thing took a few days of occasional poking at things
                  • the TPM i2c driver took a couple days maybe
                  • the little ACPI things (keyboard backlight, tablet mode switch) took a couple hours, trivial stuff
                1. 37

                  Because I’d rather admin a CA, manage cert signing, handle revocation (how does this get pushed out to servers?), and all that jazz, than run some ansible scripts? Wait… no, I wouldn’t.

                  1. 11

                    Hah. I thought about this a lot when I read this article.

                    I think plenty of companies grow organically from a couple of dudes and as many servers, and before you know it you have 3 branch offices and 2 datacenters and a bunch of contractors, and it’s all well and good when everyone sort of trusts each other but then you get purchased and SOX’d and you have to scramble to make sure Larry who quit 3 years ago doesn’t have root on production still…

                    I assume your ansible scripts are well documented, and are run when you’re on vacation? ;)

                    I thought this article made a bunch of good points. Of course it’s an advertorial, but there’s enough meat in there to be interesting.

                    1. 6

                      I think plenty of companies grow organically from a couple of dudes and as many servers, and before you know it you have 3 branch offices and 2 datacenters and a bunch of contractors, and it’s all well and good when everyone sort of trusts each other but then you get purchased and SOX’d and you have to scramble to make sure Larry who quit 3 years ago doesn’t have root on production still…

                      Precisely this. My team went from 2 DCs with maybe a few dozen machines between them to 6 DCs in various stages of commission/decommission/use and hundreds (probably just over 1000) machines to manage. Running an ansible script to update creds on hundreds of machines takes a very long time even on a powerful runner. We’re moving to a cert-based setup, and for the machines where it’s enabled it’s incredibly quick, lets us do key rotation more efficiently, and is just generally a huge improvement. It’s an economy-of-scale problem, as most are: ansible was fine when it was a couple of us, but not even at our relatively small Xe3 scale. I can’t imagine trying to do that on larger scales. Managing a few servers for a CA and so on is a dream comparatively.
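
                      For the curious, the core of the cert-based setup is pleasantly small. A minimal sketch with ssh-keygen (names, principal, and validity window here are made up):

                      # one-time: create the CA keypair (guard the private half)
                      ssh-keygen -t ed25519 -f user_ca -C "user CA"
                      # sign an employee's existing public key; -I is a log identity,
                      # -n the principal(s), -V the validity window
                      ssh-keygen -s user_ca -I alice@example.com -n alice -V +13w alice.pub
                      # emits alice-cert.pub; servers then only need one line of sshd_config:
                      #   TrustedUserCAKeys /etc/ssh/user_ca.pub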

                      1. 3

                        What do you do with hundreds of machines?

                        1. 2

                          Currently? We wait.

                          In the hopefully near future – something like OP

                          EDIT: I feel like the brevity may be interpreted as snark, so I’m going to add some details to mitigate that as it wasn’t intended. :)

                          Right now it takes a weekend or so to fully update everything; we mitigate some of it by running the job in stages (running only on pre-prod environments by product, only legacy machines, etc.). It works out to running the same job a couple dozen times. That bit is automated. The real killer is the overhead of executing that many SSH connections from a single machine, basically. Running it in smaller chunks does mean we have a not entirely consistent environment for a while, but it’s pretty quick to run the job on a single machine if it fails or was missed. The runner has got flames painted on the side which helps, but it’s still quite slow.

                          I think this is probably representative of a big disadvantage that Ansible has compared to something agent-based like Chef or Puppet. On some level I’m okay with that, though, because I think Chef/Puppet would just hide the underlying issue: that direct key management is a little fraught.
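
                          (For anyone hitting the same wall: a couple of stock Ansible settings can soften the SSH fan-out cost before switching tools entirely; the values here are illustrative.)

                          # ansible.cfg
                          [defaults]
                          forks = 50          # hosts contacted in parallel (the default is only 5)
                          [ssh_connection]
                          pipelining = True   # fewer SSH operations per task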

                          1. 3

                            This is why I switched from Ansible to Saltstack - deploys are fast and it has a similar feel and structure to Ansible.

                            1. 1

                              So to piggyback on SaltStack: it’s also neat because you can do a distributed setup with multiple masters.

                              That makes it even faster to roll out changes to large fleets, as each master manages a subset of the fleet, with a top-level salt master farming tasks out to the other masters, which then farm them out to the minions/hosts.

                            2. 2

                              Another option may be to use a PAM module that updates the user’s authorized_keys file (from a central repo, such as LDAP) on attempts to look up an account.

                              I’ve done this in the past and it worked out okay for largish deployments.

                              1. 2

                                You don’t need to update the key file on disk from ldap; you can use ldap to produce the contents of the key file directly.

                                https://man.openbsd.org/sshd_config#AuthorizedKeysCommand

                                https://github.com/AppliedTrust/goklp
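
                                The sshd side of that is just two directives, something like the following (the goklp path is illustrative):

                                # /etc/ssh/sshd_config: sshd runs the command with the username as
                                # its argument and treats the command's stdout as authorized_keys
                                AuthorizedKeysCommand /usr/local/bin/goklp
                                AuthorizedKeysCommandUser nobody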

                                1. 1

                                  Also an option, but you need to ensure that there is a timeout and caching, etc. as well. Updating the on-disk copy makes these trivial and built-in (respectively).

                                  1. 2

                                    sssd does all that, and more

                              2. 1

                                Gah, sorry, let me rephrase: what sort of workload is it?

                                (also, why not kerberos or something similar?)

                                1. 2

                                  I added an edit. As for kerberos, I just found this idea first – there was a FB article about it I came across a while ago (last year sometime, before this became a real problem), and started pushing for it. I work for an International BeheMoth, so changing things can be slow.

                            3. 1

                              I’ve reached this point too - considering moving the base stuff to an os pkg and/or using something like cfengine to distribute these faster than ansible does. As an interim stage, I have a git pull-based ansible run on each box for the core, but I would prefer something that is more “reportable” than manually collating the status of packages on each system. Either way, I’m keen to store the CA info in an OS package, as a faster way to get boxes set up and updated.

                              1. 1

                                Precisely this. My team went from 2 DCs with maybe a few dozen machines between them to 6 DCs in various stages of commission/decommission/use and hundreds (probably just over 1000) machines to manage. Running an ansible script to update creds on hundreds of machines takes a very long time even on a powerful runner.

                                This is why you can keep your public key in a kind of centralised store, say, an LDAP server, and disable local storage of public keys entirely; sssd supports this model very nicely.

                                (what irks me a bit about the advertorial above is that it conflates using host certificates and user certificates; you can have one without the other)
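
                                A minimal sketch of that model, assuming the keys live in an LDAP attribute like sshPublicKey:

                                # /etc/sssd/sssd.conf (fragment)
                                [domain/example.com]
                                ldap_user_ssh_public_key = sshPublicKey
                                # /etc/ssh/sshd_config
                                AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
                                AuthorizedKeysCommandUser nobody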

                              2. 3

                                I’ve managed ldap systems to handle distributed ssh / user authentication. I have less fear of that than anything CA related. I think it’s because OpenSSL taught me that all the tooling around it is terrible. Though I feel that Vault and other tooling is changing that slowly.

                                1. 2

                                  Probably about as well as CRLs get pushed out to server fleets, and accounts are actually deleted along with certificates revoked. E.g. not bloody likely. ;)

                                  1. 1

                                    I think for every sysadmin who knows their sh*t, there are 10 who don’t. This article is meant for them.

                                    1. 2

                                      Fair enough; this probably also makes more sense for large (or very large) companies with a full team of ops/secops managing fleets of servers, coupled with some type of SSO solution (as mentioned in the article).

                                      1. 3

                                        I estimate that this becomes a problem once more than 3 users need SSH access and more than 30 machines accept SSH connections.

                                        Below that, it’s probably not worth the effort, but the moment you reach those numbers you will probably continue to grow beyond that rapidly and it’s still possible to make the change with relative ease.

                              1. 5

                                Author of said post here. There’s a fair bit of context that I had to elide at the time of writing, but as I’ve since left that org and it’s subsequently been sold, enough water has passed under the bridge to put a few more cards on the table… BTW lobsters has a rant tag; it should be used for this. So enjoy my use of florid language and gloss over the lack of perfection in the article’s flow.

                                I’ll endeavour to answer some of the comments as best as I can below. BTW I’m impressed at how much people have been able to assume from this short article about my general experience and competence. Nobody is perfect, and the older we get the more we appreciate the scale of our ignorance… live a little folks. It’s ok to rant on the internet about tech.

                                This post was written in 2016 after the experience of ~2 years of continuing severe openssl bugs & CVEs https://www.openssl.org/news/vulnerabilities.html - a massive increase in reported & critical OpenSSL library vulnerabilities, more than 20 each year - and while we could avoid some of them, or mitigate by tweaking ciphers, not all were avoidable and we needed to upgrade both kernel and userland regularly. The dev team was 3 people, no CI, no code reviews, no test suite, no design docs nor ops documentation, and no ethos of care in the dev team. Some people would yolo their patches in and disappear off for a few days, unreachable. Shortly after I started, the remaining original person disappeared, and the founder again went on a 3 month holiday (“call me if you need anything”). In short it was a complete mess, and the infrastructure reflected that.

                                For some reason debian unstable had been chosen as the base os, apparently “coz security”. This meant it was almost impossible to do reproducible builds, and any software deployed last month would end up with a different set of packages than today. The organisation’s main app was written in perl, and relied on a mix of OS-provided packages, and CPAN & bespoke ones. This meant that every deploy to a fresh system would be a slightly different mix than the last time. Because of OpenSSL’s rate of change, pinning wasn’t appropriate either.

                                The last straw that broke the camel’s back was another kernel update to pull in a critical OpenSSL fix (I forget which one now), which also pulled in a perl RC, including a bug in the HTTP socket layer that caused any transfer that was exactly 1024 bytes long to hang. I recall that took about 6 hours of serious debugging to get down from the top-level “our app isn’t working sometimes with 3rd party APIs” to the specific change, and the corresponding 1 line fix.

                                The obvious thing in this case is to simply roll back to the previous app version and figure this out in the morning, but there was no simple way of doing that in the current environment. At the time it was decided to push on through and try to track down the bug (the yolo hero culture), when in hindsight we should have backed the truck up and examined our poor life choices, including not having sufficient test suites.

                                This, more or less, was the mindset I wrote the rant in. IIRC the previous year included moving to the debian-unstable-with-systemd changes, which was a complete balls-up. I would have gladly stopped at that point, switched to jessie, and waited for the dust to settle, but that wasn’t my call.

                                2016 was an interesting time - docker was bringing lxc to the masses, and looked interesting but was evolving rapidly – too rapidly to rely on in a small dev team, and container performance was noticeably sucky at that time. For contrast, zfs & jails in FreeBSD had been available for a long time, long enough to be very stable. Boot Environments and snapshots were a killer feature, and IMO still are.

                                I’ve commented where it helps below. If you feel the need to troll, please show us your l33t p30p1e sk1llz.

                                1. 1

                                  This has got to be PR, right? This guy completely glosses over the existence of Docker.

                                  1. 3

                                    We don’t need docker; we’ve had jails since 1999.

                                    1. 3

                                      There is no Docker for FreeBSD.

                                      1. 2

                                        It’s in progress, as far as I know: https://reviews.freebsd.org/D21570

                                        1. 2

                                          I meant like, he’s complaining about how you can’t use Linux to have reproducible environments, and he completely glosses over Docker

                                          1. 1

                                            Was Docker already widely-used in 2016? (Genuine question)

                                            1. 2

                                              I don’t have as good of an eye on the space as others, but from what I remember, it started picking up steam in 2014-2015

                                        2. 2

                                          Early in 2016 somebody tried out kubernetes and got a long way on having it up & running, but couldn’t keep up with the maintenance burden of moving the environment onto it. At the time docker was too fast-moving a target for a 3 person dev team to pick up the burden of maintaining it. Now, it would be a different kettle of fish, with mainstream stable distros providing this. LXC would have been an option at the time, though.

                                          Don’t get me wrong, I’m violently in favour of containers, but violently against FLOSS using a commercially controlled service as the main way of building and consuming software. Feel free to enlighten me if Docker’s moved on now and isn’t beholden to the VC behemoths for a sweet exit & cash windfall?

                                          1. 1

                                            To be clear, I have never used Docker (other than messing around with it for an hour a few years back). I have no place to say if it’s good software or not.

                                            I just find it fishy that Docker didn’t get a mention at all.

                                        1. 5

                                          I have been a huge fan of the OSX project called homebrew. […] [FreeBSD ports] is based on standard BSD makefiles, which while not as nice as homebrew’s ruby-based DSL, are very powerful.

                                          I find it hard to swallow an article telling me that my infrastructure should be boring, and then praising a Makefile remake in Ruby as “nice”. It’s too bad, because I do agree with the gist of the article.

                                          1. 2

                                            Are you well versed in homebrew or is “makefile remake in ruby” your take from the outside?

                                            1. 1

                                                I noticed the author’s comparison between BSD Makefiles and “homebrew’s ruby-based DSL”, which I assume means that Homebrew uses its own Ruby-based DSL instead of Makefiles. I’d be interested to know if that is not correct, and what the author actually meant.

                                              1. 1

                                                  DSLs are subjective. Instead of assuming what the author meant, you could take a look at the homebrew DSL and make your own judgement. For example, here’s the erlang one:

                                                I know which one I prefer working with, but I also know which one I would rely on for reproducible builds.

                                                1. 1

                                                    I’m not commenting on Homebrew’s DSL vs Makefiles. I’m not commenting on the DSL at all. I’m commenting on Ruby vs Make. Ruby is big and unwieldy. Makefiles are boring. I think I’d rather have a boring dependency for a build system.

                                                    If you use Ruby (or Python for that matter) as a dependency for your build system, your operating system must ship with this scripting language. It may need to ship with a specific version of that scripting language. Special care must be taken when there is a need for another version of the scripting language in user land.

                                                    Which again reiterates my point: Makefiles are boring build dependencies. Ruby, albeit practical or “nice”, is not boring. Use boring dependencies for build systems.

                                                2. 1

                                                  Brew does indeed have a ruby eDSL as the descriptive language for its packages. This encapsulates how to fetch the software, how to configure it, what patches to apply (if any), what configure flags, what make flags, and a bunch of other metadata things for the integration on the brew side of things. It’s a consistent “interface” to describe a brew package, whilst the software itself might use make, cmake, automake, autoconf, ninja, or who knows what else.

                                                  It would be possible to do all of that in Make, I imagine; just like we don’t really need automake/autoconf, or cmake, or the wrapping around makefiles in distribution packaging (RPM spec files, Debian’s rules file, itself, normally a makefile, etc.) — possible, but possibly herculean effort required and/or quixotic to attempt.

                                                  I once submitted a patch to a brew package for which I was an upstream maintainer. I changed a sha1-commit so they built from a different point in my upstream git repo; I think I removed a patch that was fixed or applied since whoever had added my package to brew had done so. I was able to basically type “brew edit ” and then something similar to submit my change. I was really impressed with the experience.

                                            1. 11

                                              I really don’t want to spend my free time tracking down how the latest kernel pulls in additional functionality from systemd that promptly breaks stuff that hasn’t changed in a decade, or how needing an openssl update ends up in a cascading infinitely expanding vortex of doom that desperately wants to be the first software-defined black hole and demonstrates this by requiring all the packages on my system to be upgraded to versions that haven’t been tested with my application.

                                              I find it impossible to continue reading after this. Nobody is forced to run Gentoo or Arch Linux on a production server, or whatever the hipster distribution of the day is. There are CentOS and Debian when some years of stability are required. More than any of the BSDs offer.

                                              1. 3

                                                Well, the rest also mentions apt-hell with debian and package upgrading.

                                                Can you elaborate on the last sentence?

                                                1. 10

                                                  Well, the rest also mentions apt-hell with debian and package upgrading.

                                                   I read that section now… it seems to imply you are forced to update Debian every year to the latest version, otherwise you don’t get security updates. Does the author even know Debian? apt-hell? Details are missing. I’m sure you can get into all kinds of trouble when you fiddle with (non-official) repositories and/or try to mix & match packages from different releases. To attempt this in production is kinda silly. Nobody does that, I hope :-P

                                                  Can you elaborate on the last sentence?

                                                   I’m not aware of any BSD offering 10 years of (security) support for a released version; I’m sure OpenBSD does not, for good reason, mind you. It is not fair to claim, as the poster implies, that updates need to be installed “all the time” and will result in destroying your system or ending up in “apt-hell”. Also, I’m sure BSD updates can go wrong occasionally as well!

                                                  I’m happy the author is not maintaining my servers on whatever OS…

                                                  1. 18

                                                    I read that section now… it seems to imply you are forced to update Debian every year to the latest version otherwise you don’t get security updates.

                                                     We have many thousands of Debian hosts, and the cadence of reimaging older ones as they EOL is painful but IMO necessary. We just about wrapped up getting rid of Squeeze; some Wheezy hosts still run some critical shit. Jessie’s EOL is coming soon and that one is going to hurt and require all hands on deck.

                                                    Maybe CVEs still get patched on Wheezy, but I think the pain of upgrading will come sooner or later (if not for security updates, then for performance, stability, features, etc.).

                                                    As an ops team it’s better to tackle upgrades head on, than to one day realize how fucked you are, and you’re forced to upgrade but you’ve never had practice at it, and then you’re supremely fucked.

                                                     And, yes, every time I discover that systemd is doing a new weird thing, like overwriting pam/limit.d with its own notion of limits, I get a bit of acid reflux, but it’s par for the course now, apparently.

                                                    1. 3

                                                      This is a great comment! Thanks for a real-world story about Debian ops!

                                                      1. 5

                                                        I have more stories if you’re interested.

                                                        1. 3

                                                          yes please. I think it’s extremely interesting to compare with other folks’ experiences.

                                                          1. 7

                                                            So, here’s one that I’m primarily guilty for.

                                                            I wasn’t used to working at a Debian shop, and the existing tooling when I joined was written as Debian packages. That means that to deploy anything (a Go binary e.g. Prometheus, a Python Flask REST server), you’d need to write a Debian package for it, with all the goodness of pbuilder, debhelper, etc.

                                                             Now, I didn’t like that - and, I won’t pretend that I was instrumental in getting rid of it, but I preferred to deploy things quicker, without needing to learn the ins and outs of Debian packaging. In fact, the worst manifestation of my hubris is in an open source project, where I actually prefer to create an RPM and then use alien to convert it to a deb, rather than to natively package a .deb file (https://github.com/sevagh/goat/blob/master/Dockerfile.build#L27) - that’s how much I’ve maneuvered to avoid learning Debian packaging.

                                                            After writing lots of Ansible deployment scripts for code, binaries, Python Flask apps with virtualenvs, etc., I’ve learned the doomsday warnings of the Debian packaging diehards.

                                                             1. dpkg -S lets you find out what files belong to a package. Without that, there’s a lot of “hey, who does /etc/stupidshit.yml belong to?” all the time. The “fix” of putting {% managed by ansible %} on top is a start, I guess.
                                                             2. Debian packages clean up after themselves. You can’t undo an Ansible playbook; you need to write an inverse playbook. Doing apt-get remove horrendous-diarrhea-thing will remove all of the diarrhea.
                                                             3. Doing upgrades is much easier. I’ve needed to write lots of duplicated Ansible code to do things like stat: /path/to/binary, command: /path/to/binary --version, register: binary_version, get_url: url/to/new/binary when: {{ binary_version }} < {{ desired_version }}. With a Debian package, you just fucking install it and it does the right thing.

                                                            The best of both worlds is to write most packages as Debian packages, and then use Ansible with the apt: module to do upgrades, etc. I think I did more harm than good by going too far down the Ansible path.
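
                                                             To make points 1 and 2 concrete, this is the kind of thing the package manager gives you for free (package name is just an example):

                                                             $ dpkg -S /etc/ssh/sshd_config          # who owns this file?
                                                             openssh-server: /etc/ssh/sshd_config
                                                             $ apt-get remove --purge openssh-server # removes the package and its config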

                                                            1. 1

                                                               Yeah, this is exactly my experience. Creating Debian packages correctly is very complicated. Making RPM packages is quite easy, as there’s extensive documentation on packaging software written in various languages, from PHP to Go. On Debian there is basically no documentation, except for packaging software written in C that is no more complicated than hello_world.c. And there are 20 ways of doing something; I still don’t know the “right” way to build packages in a manner similar to e.g. mock on CentOS/Fedora. Aptly seems to work somewhat, but I didn’t manage to get it working on Buster yet… and of course it still doesn’t do “scratch” builds in a clean “mock” environment. All “solutions” for Debian I found so far are extremely complicated, no idea where to start…

                                                              1. 1

                                                                 FreeBSD’s ports system creates packages via pkg(8), which has a really simple format. I have lost many months of my life maintaining debian packages, and pkg is in most ways superior to .deb. My path to being a freebsd committer was submitting new and updated packages; the acceptance rate and the help in sorting out my contributions were so much more pleasant than the torturous process I underwent for debian packages. Obviously everybody’s experience is different, and I’m sure there are those who have been burned by *BSD ports zealots too.

                                                                Anyway it’s great to see other people who also feel that 50% of sysadmin work could be alleviated by better use of packages & containers. If you’re interested in pkg, https://hackmd.io/@dch/HkwIhv6x7 is notes from a talk I gave a while back.

                                                2. 1

                                                   I’ve been using the same apps on Ubuntu for years. They occasionally do dumb things with the interface, package manager, etc. Not much to manage, though. Mostly seamless: just using icons, search, and the package manager.

                                                1. 11

                                                   I think my favorite part is when they start off by complaining about all those newfangled features that Linux has, only to then sing the praises of filesystem-level snapshots in ZFS.

                                                  Second place would be pointing to FreeBSD having one way of doing things (there aren’t enough FreeBSD developers left to maintain two ways) as great design, followed by being irritated that different Linux distributions have adopted one way of doing things via systemd.

                                                  Ultimately it feels like they are bewildered by the ability of the Linux world to maintain multiple distributions, each with different audiences and functionality. They prop up the argument that this somehow means a given user would need to be knowledgeable of and interacting with all distributions at any given time, rather than just picking one that has the qualities they need and following its conventions. Turns out when you have lots of users they come with lots of use cases, that’s where diversity across distributions really shines.

                                                  Also if I never saw another BSD Makefile again it’d be too soon.

                                                  1. 10

                                                     The article isn’t very good, but neither is this counterargument. File system snapshots don’t break existing scripts or workflows. Replacing ‘ifconfig’ with ‘ip’, switching to systemd, and so on does.

                                                    1. 3

                                                      The article isn’t very good, but neither is this counterargument. File system snapshots don’t break existing scripts or workflows.

                                                       Of course they can. If your data sets are not fine-grained enough, then reverting to an older snapshot will result in a loss of data written since the snapshot. If you use fine-grained snapshots, you can snapshot the system separately from data (e.g. /var). However, this can have other bad side effects: e.g. a newer version may update the format of an application’s data files, alter the database schema of an application that you are hosting, etc. If you revert to an older snapshot, applications may stop working, because the data is incompatible with the older application.
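
                                                       To illustrate with ZFS (dataset names are made up, assuming system and data live in separate datasets):

                                                       # snapshot only the system dataset before an upgrade
                                                       zfs snapshot pool/ROOT@pre-upgrade
                                                       # roll back just the system if things go badly;
                                                       # data written to pool/var since then is untouched
                                                       zfs rollback pool/ROOT@pre-upgrade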

                                                       Replacing ‘ifconfig’ with ‘ip’, switching to systemd, and so on does.

                                                      That tired argument again. First of all, iproute2 (of which ip is part) has been around since 2001. It’s almost 20 years old! Secondly, most Linux systems still have net-tools (the package that contains ifconfig). E.g. on my machine:

                                                      % readlink $(which ifconfig)
                                                      /nix/store/3km31zw50hh5madank3ja4dvrq6rgvcl-net-tools-1.60_p20170221182432/bin/ifconfig
                                                      
                                                      1. 1

                                                        Of course they can. If your data sets are not fine-grained enough, then reverting to an older snapshot will result in a loss of data since the snapshot

                                                        If my workflow doesn’t involve snapshots, how are they getting reverted? Are you implying that there’s something that’s automatically reverting things without a change in my behavior?

                                                    2. 2

                                                       Sorry nickbp, you seem to have read far more into my rant than I wrote. I’m not “bewildered by linux” as you put it, I’m just bored by wasting my time (and customer money) on needlessly fiddling with the carpet that we run our business on. I frequently work with small orgs whose entire tech team is under ten people, and they simply don’t want to bother with “the stuff below the app”. zfs & jails are old tech, and were old tech for FreeBSD even in 2016. At that time, docker and containers outside the lxc world were evolving very very fast. The company had a choice between spending substantial time and effort keeping up to date with docker, or shipping valuable features to customers and the business. If you’re working for a company that has 200 people working on supporting devs, then that’s a totally different proposition.

                                                    1. 14

                                                       Great talk, just one note:

                                                      Because, barring magic, you cannot send a function.

                                                       This is trivial in erlang, even between nodes in a cluster, and commonly used; it’s not some obscure language feature. So there we go, I officially declare Erlang as Dark Magic.

                                                      1. 4

                                                        I suppose it’s not that “you cannot send a function”, but more like “you cannot send a closure, if the language allows references which may not resolve from other machines”. Common examples are mutable data (if we want both parties to see all edits), pointers, file handles, process handles, etc.

                                                        I’m not too familiar with Erlang, but I imagine many of these can be made resolvable if we’re able to forward requests to the originating machine.

                                                        1. 7

                                                          It’s possible to implement this in Haskell, with precise control over how the closure gets de/serialized and what kinds of things can enter it. See the transient package for an example. This task is a great example of the things you can easily implement in a pure language, but very dangerous in impure ones.

                                                        2. 1

                                                          I don’t know Erlang, but… I can speculate that it doesn’t actually send the functions. It sends their bytecode representation. Or a pointer to the relevant address, if the two computers are guaranteed to share code. I mean, the function has to be transformed into a piece of data somehow.

                                                          1. 13

                                                            it doesn’t actually send the functions. It sends their bytecode representation

                                                            How is that different from not actually sending an integer, but sending its representation?

                                                            In Erlang any term can be serialized, including functions, so sending a function to another process/node isn’t different from sending any other term. The nodes don’t need to share code.

                                                            1> term_to_binary(fun (X) -> X + 1 end).
                                                            <<131,112,0,0,2,241,1,174,37,189,114,105,121,227,76,88,
                                                              139,139,101,146,181,186,175,0,0,0,7,0,0,...>>
                                                            
                                                            1. 1

                                                              Would that also work if the function contains free variables?

                                                              That is, what’s the result of calling this function:

                                                              fun(Y) -> term_to_binary(fun (X) -> X + Y end) end.
                                                              

                                                              (Sorry for the pseudo-Erlang)

                                                              1. 5

                                                                Yep!

                                                                1> F1 = fun(Y) -> term_to_binary(fun (X) -> X + Y end) end.
                                                                #Fun<erl_eval.7.91303403>
                                                                2> F2 = binary_to_term(F1(2)).
                                                                #Fun<erl_eval.7.91303403>
                                                                3> F2(3).
                                                                5
                                                                

                                                                … or even

                                                                1> SerializedF1 = term_to_binary(fun(Y) -> term_to_binary(fun (X) -> X + Y end) end).
                                                                <<131,112,0,0,3,96,1,174,37,189,114,105,121,227,76,88,139,
                                                                  139,101,146,181,186,175,0,0,0,7,0,0,...>>
                                                                2> F1 = binary_to_term(SerializedF1).
                                                                #Fun<erl_eval.7.91303403>
                                                                3> F2 = binary_to_term(F1(2)).
                                                                #Fun<erl_eval.7.91303403>
                                                                4> F2(3).
                                                                5
                                                                

                                                                The format is documented here: http://erlang.org/doc/apps/erts/erl_ext_dist.html

                                                              2. 0

                                                                How is that different from not actually sending an integer

                                                                Okay, it’s not. It’s just much more complicated. Sending an integer? Trivial. Sending a plain old data structure? Easy. Sending a whole graph? Doable. Sending code? Scary.

                                                                Sure, if you have a bytecode compiler and eval, sending functions is a piece of cake. Good luck doing that however without explicit language support. In C for instance.

                                                                1. 6

                                                                  You can do it, for instance, by sending a DLL file over a TCP connection and linking it into the application receiving it. It’s harder, it’s flakier, it’s platform-dependent, and it’s the sort of thing anyone sane will look at and say “okay but why though”. It’s just that Erlang is designed to be able to do that sort of thing easily, and accepts the tradeoffs necessary, and C/C++ are not.

                                                                  1. 3

                                                                    The Morris Worm of 1988 used a similar method to infect new hosts.

                                                          1. 1

                                                            “Engineering Leadership at BitGo. Enjoy reading and writing about software development at scale.”

                                                            I really really feel for this guy. That’s a hell of a theft to shoulder personally.

                                                            However, I also note his role. Perhaps the attacker was more interested in company access than his access, but decided at the last minute to take what was already on the table? I’d love to see a follow-up post on how the company ensured that it’s not next on the list.

                                                            1. 1

                                                              joeerl was truly a father figure, not just in the erlang community. In my recollection he never shot down people’s ideas, but had a knack for riffing with you to end up with something neither of you had dreamed about. Well read, and with real-world experience to share, he brought humour and insight in equal proportions to any conversation he participated in. His enthusiasm and openness will be greatly missed.

                                                              Hallo Mike. Hallo Robert. Goodbye Joe.

                                                              ht to ferd for the line.

                                                              1. 6

                                                                I just wish they would spend 10 minutes and build official packages for FreeBSD and host the packages in their own repo. I did all the hard work, I tried to hand it over, but nobody there seemed interested. Instead they just release a tarball which is useless to everyone except me, who makes the packages and includes the rc script.

                                                                1. 4

                                                                  I really appreciate your debundling work! The thankless life of a porter

                                                                1. 26

                                                                  This law targets link aggregators like lobster.rs explicitly. Nobody has any idea how it should work in practice but presumably linking to the bbc now incurs an obligation to pay. A sad day for the internet as yet another promising future is crushed beneath those who use the law as a means to oppress the populace.

                                                                  1. 4

                                                                    How can a site like this one be affected by this law?

                                                                    Correct me if I’m wrong, but, lobste.rs:

                                                                    1. doesn’t make money off the content they host,
                                                                      2. it hosts links (not quotes, not summaries, …), giving the original author more visibility;
                                                                    3. it also hosts discussion, which I believe is still a human right.

                                                                      If someone were to accuse Lobsters of damaging content creators (which is what this law is all about, isn’t it?) how would that differ from taking me to court for sharing a link in a closed chat group, or even IRL?

                                                                      Lobsters is by the community, for the community; it’s not one large corp promoting their stuff (I could see the argument made against HN as it’s YCombinator’s site and it hosts some publicity from time to time). That does not differ IMO from sharing things IRL, and we surely won’t stop sharing links with our friends, will we?

                                                                      If this law goes against Lobsters for those reasons, then I will understand all the noise made around this directive.

                                                                    1. 2

                                                                      it hosts links (not quotes, not summaries, …)

                                                                      well, depends on the links you have. technically, torrent sites which host only magnet links should be fine, too, but aren’t.

                                                                      1. 3

                                                                          Torrent sites and the like are link aggregators for copyrighted material. It’s a link to something that’s already illegal to distribute, therefore torrent sites are redistributing copyrighted material, which goes against copyright.

                                                                          But lobste.rs’ submissions link to direct sources of information, where you can see the content how its creator wanted it to be. Sometimes paywalled articles are downvoted here because not everyone can read them. If lobste.rs were redistributing copyrighted material it wouldn’t be paywalled.

                                                                          A clear example of the opposite is https://outline.com, which displays the content of a third party without its consent, and without keeping the shape and form of how the author wanted it to be.

                                                                        Example:

                                                                        • This link is not illegal. It’s the primary source of the content, you are seeing the content how it was meant to be and from the creator’s own site.
                                                                        • This link is illegal, it’s a secondary source of copyrighted material, if the creator decides to paywall it, modify it, put ads on it, etc. They can’t.

                                                                        Lobsters links are to the direct (sometimes indirect, but it’s an unwritten rule to post the direct) source of the topic.

                                                                          I don’t know whether the EU-approved directive would put Lobsters in the “copyright infringement” bucket; if it does then I repeat my previous point: if sharing links online with other people is illegal, where do you draw the line so that sharing links with IRL friends isn’t, because that would be an obvious violation of free speech?

                                                                        1. 3

                                                                          Agreed. Threadstarter/OP is being way hyperbolic.

                                                                          This is probably not the best law that could have been enacted, but it’s also a fact that American companies like Google and Facebook have been lobbying heavily against it. A lot of online rhetoric is reflective of this.

                                                                        2. 1

                                                                          Such links are off-topic for this site.

                                                                            The only legitimate pointer to a torrent link under this site’s rules is for an open-source distribution. But in that case, it’s more appropriate to link to that project’s release page.

                                                                      2. 7

                                                                        actually lobste.rs is exempt because it earns less than 10 million euros, and if it’s for educational or research use it’s exempt as well.

                                                                        just as sites like Wikipedia are exempt

                                                                        1. 26

                                                                          No.

                                                                          According to Julia Reda you have to fulfill all (not any) of those three criteria:

                                                                          • Available to the public for less than 3 years
                                                                          • Annual turnover below €10 million
                                                                          • Fewer than 5 million unique monthly visitors

                                                                          Since lobste.rs is nearly seven years old, an upload filter is required.

                                                                          1. 4

                                                                            Reads like it was designed specifically to prevent newcomers to the field.

                                                                            Clever, and not subtle at all. I’m surprised to see this coming from the EU.

                                                                          2. 10

                                                                            You have to distinguish between former article 11 and former article 13 (now article 15 and article 17).

                                                                            Article 17 (requirement to try to get licenses and if you cannot get a license make sure that no licensed content gets uploaded) has the limitation regarding company size and age (as correctly stated by qznc) and Wikipedia is exempt from this.

                                                                            Article 15 however (requirement to license newspaper excerpts) does not have exemptions (according to my current knowledge). I guess however that all newspapers will again give Google a royalty-free license, because they fear that they will get less visitors without Google. Thus, in effect the only affected services are small services. Article 15 has these limits (imo not codified directly in the article, but in an annotation to the article): “The rights provided for in the first subparagraph shall not apply to private or non-commercial uses of press publications by individual users.”, but I am not sure how to interpret this “non-commercial uses by individual users” (it’s a similar grammatical construction in German).

                                                                            German Wikipedia stated that they are exempt from “article 13” (now 17). They mention their implications by article 11 (now 15), but do not mention that they are exempt from it. They state “[Article 11] could complicate our research for sources on current topics” (“Dies könnte auch unsere Quellenrecherche bei aktuellen Themen deutlich erschweren.”)

                                                                            1. 1

                                                                              I guess however that all newspapers will again give Google a royalty-free license, because they fear that they will get less visitors without Google.

                                                                              I can’t be certain, but a discussion between some newspaper owners on a BBC Radio 4 weekly programme on the state of the media pointed out that similar laws already existed in both Germany and Spain (I think I remember the countries right), rendering Google News illegal and therefore unavailable in those countries. I don’t know if these new EU directives differ from those countries’ initial versions, but their laws stated clearly that a license fee must be charged, so free licensing became illegal. The discussion revolved around how damaging it was to a number of publications which obtained the majority, if not all, of their revenue-generating traffic from Google.

                                                                            2. 7

                                                                          Found something: I think lobste.rs is exempt from article 17 (article 13), because of the definitions in article 2. An “online content-sharing service provider” according to that definition “promotes [the content uploaded by its users] for profit-making purposes”. I think lobste.rs does not want to make any money? And then there comes the list of “educational or research use” exemptions that yakamo refers to. However, that’s only for article 17.

                                                                              For article 15 (article 11) the relevant term is “information society service providers” and that is defined as: “information society service’ means a service within the meaning of point (b) of Article 1(1) of Directive (EU) 2015/1535”:

                                                                          Directive (EU) 2015/1535 in turn defines: “‘service’ means any Information Society service, that is to say, any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services.” (but I have trouble with “normally provided for remuneration”, because in Germany we have some terms that sound like they would mean “commercial”, but in fact they don’t).

                                                                          1. 2

                                                                            I would +2 this if I could. It’s funny and has a lot of useful comparisons across product and protocol lines. Sweet.

                                                                            1. 2

                                                                              I couldn’t help but read it in FPS Russia’s voice.

                                                                            1. 1

                                                                              Does Keep-Alive cost more memory?

                                                                              1. 0

                                                                                  You’ll need some memory to track the state of each idle keep-alive connection, so they’re definitely not free.

                                                                                1. 3

                                                                                    They use more memory than no connection at all, but they’re more efficient than having every new socket set up TLS from scratch; there’s also a cost at the end of a connection, when sockets linger before getting cleaned up. You need to look at the whole setup, particularly for repeated connections to the same server. TLS 1.3 includes further improvements for setting up TLS faster on resumed sessions; that session state needs to be cached too, but not using it also has a cost.
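
                                                                                    To make the memory trade-off concrete, here’s a minimal Go sketch (Go chosen purely for illustration, and the limits below are made-up numbers, not recommendations) of bounding what an HTTP client’s idle keep-alive pool can hold onto:

                                                                                    ```go
                                                                                    package main

                                                                                    import (
                                                                                        "net/http"
                                                                                        "time"
                                                                                    )

                                                                                    // newClient returns an http.Client whose keep-alive pool is explicitly
                                                                                    // bounded, capping the memory spent tracking idle connections while
                                                                                    // still reusing sockets (and their TLS sessions) for repeat requests.
                                                                                    func newClient() *http.Client {
                                                                                        return &http.Client{
                                                                                            Transport: &http.Transport{
                                                                                                MaxIdleConns:        100,              // idle connections kept across all hosts
                                                                                                MaxIdleConnsPerHost: 10,               // idle connections kept per host
                                                                                                IdleConnTimeout:     90 * time.Second, // close connections idle this long
                                                                                            },
                                                                                        }
                                                                                    }

                                                                                    func main() {
                                                                                        client := newClient()
                                                                                        resp, err := client.Get("https://example.com/") // placeholder URL
                                                                                        if err == nil {
                                                                                            resp.Body.Close() // returns the connection to the idle pool
                                                                                        }
                                                                                    }
                                                                                    ```

                                                                                    Each pooled connection holds buffers and TLS session state while idle but saves a full TCP+TLS handshake on the next request to the same host; caps like these are how you keep that memory bounded.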

                                                                                  1. 0

                                                                                    Yes, I understand all that. The question was specifically about memory usage, not any other efficiency dimension.

                                                                              1. 2

                                                                                Apparently Intel hasn’t released a CVE for this yet. Strike 3 for the Clowd? Running untrusted workloads these days is about as safe as leaving 50 bucks on a seat in the train station.

                                                                                1. 10

                                                                                    After using Erlang and Go, I don’t know why people keep choosing Go. Channels flip everything upside down whereas messages and inboxes are a much more natural way to manage concurrency. Coupled with Erlang’s first-class support for introspecting processes, it isn’t even a competition.

                                                                                    A few jobs back I was at a Go shop and, to put it mildly, it was mostly a mess. One example that comes to mind was a library that had no way of stopping a goroutine, because its channel protocol simply made no provision for it. So if you used this library and had timeout constraints, whatever goroutine it launched would leak: you wouldn’t get the result in time, and you couldn’t tell it to stop (sketched below). In Erlang this would not have been a problem, because we could have asked the runtime to kill the runaway/broken process. If you’re wondering why the library wasn’t fixed, it’s because that would have required changing all the call sites to pass in the extra pieces for telling the goroutine to stop, and it was just too much hassle to bother. In the end, for that specific use case, someone rolled a custom solution that bypassed the library (exactly as the Go designers intended, I guess).
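
                                                                                    A hedged sketch of that failure mode in miniature; the names, the 5-second “work”, and the context-based fix are all mine, not the actual library’s. The first function leaks on caller timeout because nothing can tell the goroutine to stop; the second threads a context.Context through, which is exactly the call-site churn described above:

                                                                                    ```go
                                                                                    package main

                                                                                    import (
                                                                                        "context"
                                                                                        "fmt"
                                                                                        "time"
                                                                                    )

                                                                                    // leakyCompute mimics the library's protocol: there is no way to tell
                                                                                    // the goroutine to stop. If the caller gives up waiting, the goroutine
                                                                                    // blocks on the send forever and leaks.
                                                                                    func leakyCompute() <-chan int {
                                                                                        out := make(chan int) // unbuffered: the send needs a receiver
                                                                                        go func() {
                                                                                            time.Sleep(5 * time.Second) // stand-in for slow work
                                                                                            out <- 42                   // blocks forever once the caller moves on
                                                                                        }()
                                                                                        return out
                                                                                    }

                                                                                    // compute is the fix, at the cost of a new parameter at every call site:
                                                                                    // the goroutine selects on ctx.Done() and exits when cancelled.
                                                                                    func compute(ctx context.Context) <-chan int {
                                                                                        out := make(chan int, 1) // buffered: the send can never block
                                                                                        go func() {
                                                                                            select {
                                                                                            case <-time.After(5 * time.Second): // stand-in for slow work
                                                                                                out <- 42
                                                                                            case <-ctx.Done():
                                                                                                return // caller timed out or cancelled; exit instead of leaking
                                                                                            }
                                                                                        }()
                                                                                        return out
                                                                                    }

                                                                                    func main() {
                                                                                        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
                                                                                        defer cancel()
                                                                                        select {
                                                                                        case v := <-compute(ctx):
                                                                                            fmt.Println(v)
                                                                                        case <-ctx.Done():
                                                                                            fmt.Println("timed out; the worker exits rather than leaking")
                                                                                        }
                                                                                    }
                                                                                    ```

                                                                                    The fix is mechanical but viral: every caller now has to construct and pass a context, which is why nobody wanted to retrofit it.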

                                                                                    Go is an almost adequate language built in the Unix tradition. There are better tools, but we just keep reaching for hacky, simplistic solutions.

                                                                                  1. 4

                                                                                      My intro to Erlang was awful. The library I happened to be using would return [x,y] or (x,y) or [x, (y)] or what have you, with no rhyme or reason. Of course there are no compile errors, because it’s all dynamic. And, bonus fun, it doesn’t actually crash where you expect, because various list and tuple patterns can unpack each other. So you plow forward and only die later, when it turns out that y was really [y].

                                                                                    1. 4

                                                                                      I think that’s a generic problem for all dynamically typed languages. Fortunately Erlang has dialyzer: http://erlang.org/doc/man/dialyzer.html.

                                                                                      1. 3

                                                                                          This is one of those areas where you get a feel for things over time. “Modern” Erlang uses type specifications (see the Dialyzer link above) and gets compile-time checking; sadly the error messages are not up to Elm or Rust standards, and it’s not full Hindley-Milner like OCaml etc., but it’s pretty useful. It sounds like the library you were using is not very functional in design. I generally like to keep these as functional as possible, possibly with a sum-type return if inline error handling is needed.

                                                                                          That aside, Erlang’s strong points are superb concurrency and robustness under load. If that’s what you need, it’s really good; if you need something else, choose something else. Single binaries? Inline assembler? Use another tool for those parts and let Erlang do the coordination and network work. Pragmatic.

                                                                                      2. 3

                                                                                          If you’re wondering why the library wasn’t fixed, it’s because that would have required changing all the call sites to pass in the extra pieces for telling the goroutine to stop, and it was just too much hassle to bother. In the end, for that specific use case, someone rolled a custom solution that bypassed the library (exactly as the Go designers intended, I guess).

                                                                                          So, you’re implying that Go is a bad language because you worked at a place where seemingly nobody cared about proper software design? Your argument works for every language; simply replace Go with, for example, Java.

                                                                                        1. 3

                                                                                            I’m implying Go is a simplistic language that paints programmers into corners, and that other languages don’t have the same issues because they don’t treat programmers like children. Surprisingly, Go is an instance where the usual social problems play second fiddle to its technical issues.

                                                                                            A quote from the horse’s mouth:

                                                                                          The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

                                                                                          http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/From-Parallel-to-Concurrent

                                                                                          1. 5

                                                                                              The contempt he has for Google’s own engineers is striking. Doesn’t Google hire the best, or doesn’t it?

                                                                                            1. 3

                                                                                              Google is a machine for printing money. You tell me what good engineers would do at Google.

                                                                                              But to answer your question directly I don’t think Google top brass cares about the skill of their engineers. I think Google now is mostly a resume signaling mechanism and my plan is to get hired and quit so I can claim to be ex-Google.

                                                                                              For anyone at Google that is totally not my plan so you should refer me and I promise not to quit within the first week.

                                                                                        2. 2

                                                                                          Channels flip everything upside down whereas messages and inboxes are a much more natural way to manage concurrency.

                                                                                          Please could you explain that a bit more? I have written a bit of Go but no Erlang. Until I read your comment, I thought both handled concurrency in a similar way, i.e. CSP-style message passing. What’s the difference? How do channels “flip everything upside down” compared to Erlang’s message passing? Thank you!

                                                                                        1. 3

                                                                                          I work entirely from home now, and have done at least 50/50 home office for almost 20 years. In order of what’s made the biggest difference comfort- and health-wise:

                                                                                          1. getting in some regular fresh air and exercise breaks. I try in summer to work a few hours on the balcony when the sun isn’t too bright, and get some biking and swimming done.

                                                                                          2. a decent mechanical keyboard. You can spend 1000s on one or get a 2nd hand gamer model to try out for a while. Chances are you know somebody who won’t mind you having a test week or so.

                                                                                          3. dual screens on adjustable arms. This proved surprisingly complicated, as I’m reasonably tall and most monitor stands aren’t sufficiently high. I’ve rarely needed to adjust the screens, though, so there’s no need to go all out on fancy arms.

                                                                                          4. decent office chair. Expensive and worth every penny. If you spend 8h sitting on your butt, you might as well make it comfy. Plenty of leg room, easy height adjustment.

                                                                                          5. a big desk, with enough room to spread all the stuff out. 2 years ago I got an adjustable standing desk, quite expensive. I use it occasionally, 2-3 times a week. I do like it, but I think the hype around standing desks outweighs the general benefits.

                                                                                          6. Decent speakers, headset & a good USB DAC for music.

                                                                                          What specifically? Obviously this depends on what you prefer, but I’ve found that good stuff doesn’t come cheap: chairs, desks, and monitor arms that aren’t quality don’t pan out well. Again, I’ve built this collection up over 7 years; don’t go all out at once.

                                                                                          • WASD V2 custom 105 keyboard. <3 this. I have a keyboard.io butterfly but it’s not as well suited to me. I had a carpal tunnel operation a year ago, so this is pretty important for me. Split keyboards don’t float my boat.
                                                                                          • generic 27” Dell monitors. Good but not overly pricey.
                                                                                          • Steelcase office chair, a large one with a foot rest. The most expensive item by far and worth it. If you have the time you can probably pick up a decent 2nd-hand one from office suppliers and refurbished lease items, but YMMV.
                                                                                          • Steelcase adjustable desk. These days I would just make my own out of a large piece of wood (I did one for the kids and it’s worked really well) and skip the adjustability unless you really need it. You can always spend more later.
                                                                                          • speakers: I picked up a pair of Bose desktop ones 2nd hand, then tossed up between a Chord Mojo and a DragonFly Red USB DAC. These are nicely portable so I can take them on vacation in the car, and I got the Mojo because, honestly, it looks so damn cool. The sound was similar, and the DragonFly sticks out of my desktop oddly, but it might work well for a laptop.