Threads for api

    1. 9

      The problem with memes like this is that they require context to understand, and the context is missing. This one gets reduced in many people’s minds to “performance doesn’t matter,” resulting in a lot of very slow code. What it really means is much more nuanced and boils down to several considerations: don’t compromise your ability to think about the solution by prematurely worrying about speed, don’t waste time micro-optimizing before you get the design right, use profiling to prioritize your optimization work, and optimize at the level of algorithm choice and design before worrying about code-level performance or micro-optimizations.

      Another one I dislike is “don’t roll your own encryption.” I now see it used all over the place in contexts where it doesn’t make sense, and it carries none of the important understanding of why encryption is harder than other kinds of code, or of how you should approach crypto if you do find yourself needing to develop some.

      1. 3

        Much like ‘don’t reinvent the wheel’ and cries of ‘NIH’ if you dare to write something yourself.

    2. 3

      Not a great article. Your DSL problem sounds like a non-problem; all nontrivial programs function to some degree like a DSL. And I mean seriously: you can’t choose a Python module to function like net/http? Again, a real non-problem. Who cares when the tooling came around, as long as you have it?

      Your “perfect language” is probably in the set {Python, Lua, Racket, Go}.

      1. 14

        I think it’s a really great article; it voices some things I wanted to write down but couldn’t find the time to.

        A few thoughts of my own on keeping languages small:

        • Do not only consider the cost of adding a feature, but also the cost of removing it.
        • If 10% of users would gain 20% more utility from a feature being added, that still means that the other 90% lose utility, because they still need to learn and understand the feature they didn’t ask for. It’s likely that the equation ends up being negative for most features if you account for that.
        • Don’t focus on being great at something. Focus on not being bad at anything.
        • Not every problem needs a (language-level) solution. If something is verbose, so be it.
        • Allow people to save code by being expressive, not by adding short-cuts for every individual annoyance.
        • Design things by writing the code you want your users to write. Then make that work.
        • Have a way to deprecate, migrate, and remove language and library elements from day one.

        And a few of the standard ones:

        • Eliminate special-cases.
        • Something that can be a library should never be a language feature.
        • Make sure all features are orthogonal to each other.
        • The 80/20 rule doesn’t apply to language design.
        • Make things correct first. Correct things are simple. Simple things are fast. – Focusing on “fast” first means sacrificing the other two.
        1. 3

          If 10% of users would gain 20% more utility from a feature being added, that still means that 90% lose utility. It’s likely that the equation ends up negative if you consider that those 90% still need to learn and understand the feature they didn’t ask for.

          You don’t lose utility from a feature being added. That’s nonsensical.

          1. 23

            You don’t lose utility from a feature being added. That’s nonsensical.

            You definitely can for some features. Imagine what would happen if you added the ability to malloc to Java, or the ability to mutate a data structure to Erlang.

            But of course this doesn’t apply to most features.

            1. 2

              I think it pretty much applies to all features.

              For whatever utility you get out of a feature, you have to take into account that when users had to learn 50 features before to use the language, they now need to understand 51.

              This issue is usually dismissed by those who propose new features (expert users), because they have already internalized the 50 existing features. Their effort is just “learn this single new thing”, because they know the rest already.

              But for every new user, the total amount of stuff to learn just increased by 2%.

              That doesn’t sound like much, until you consider that – whatever language you use – 99.99% of people out there don’t know your language yet.

              It’s hard to offset making things worse for 99.99% by adding a “single” great new feature for the 0.01%.

              1. 2

                For whatever utility you get out of a feature, you have to take into account that when users had to learn 50 features before to use the language, they now need to understand 51.

                Yes, but this is a completely different category from “this language had an important feature, and by adding this new feature, we destroyed the old feature”.

                Adding mutability to Erlang doesn’t just make the language more complicated; it destroys the fundamental feature of “you can depend on a data structure being immutable”, which makes the language dramatically worse.

                1. 1

                  but this is a completely different category

                  Yes, but this is the category I had in mind when I wrote the list.

                  The point the GP mentioned is listed above under “And a few of the standard ones”:

                  Make sure all features are orthogonal to each other.

            2. 1

              if you added the ability to malloc to Java

              Java has that already? Various databases written in Java do allocate memory outside the GC heap. You can get at malloc via JNI, as well as using the direct ByteBuffers thing that they kinda encourage you to stick to for this.

              1. 4

                Java has that already?

                Yes, and when it was added it was a huge mistake.

                Everyone I know who uses the JVM won’t touch JNI with a ten-foot pole.

          2. 7

            Don’t just think about the code you write; think about the code you need to read that will be written by others. A feature that increases the potential for code to become harder to read may not be worth the benefit it provides when writing code.

          3. 7

            C++ comes to mind. I think it was Ken Thompson who said it’s so big you only [need to] use a certain subset of it, but the problem is that everyone chooses a different subset. So it could be that you need to read someone else’s C++ but it looks like a completely different language. That’s no good!

          4. 8

            You don’t lose utility from a feature being added.

            That’s nonsense. Consider the case of full continuations, as in Scheme: supporting them rules out certain performance optimisations, which makes all code — even code that doesn’t use them directly — run more slowly. Granted, this can be somewhat mitigated with a Sufficiently Smart Compiler™, but not completely.

          5. 4

            “Lose utility” is not the right framing. It’s more like increased cognitive overhead.

          6. 3

            You certainly pay a cost, though. That’s indisputable.

          7. 2

            Maybe “utility” is the wrong word for the thing you lose but you definitely lose something. And the amount of that thing you lose is a function of how non-orthogonal the new feature is to the rest of the language: the less well integrated the feature is, the worse your language as a whole becomes.

      2. 8

        Thanks for the feedback. While I haven’t worked on any Common Lisp program large enough to have turned itself into a DSL, I do know that for any given task there are usually a few libraries, each of which covers no more than 80% of the use-cases for such a library. Whether this is caused by the language itself or its community, I don’t know, but I think it has more to do with the way that CL encourages building abstractions.

        As for the fact that Python doesn’t have a net/http equivalent in its standard library, I remember this being a somewhat major driver for Go’s adoption. You could build a simple website without having to choose any kind of framework at all. It was really easy to get something together quickly and test-drive the language, which is super important for getting people to use it. Also, having something that creates a shared base for “middleware” and frameworks on top of the standard library had to have led to better interoperability within the early Go web ecosystem.
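
        As a concrete reminder of how little was needed to test-drive it, a whole “site” with just the standard library looks roughly like this (a minimal sketch; the handler and port are arbitrary):

        ```go
        package main

        import (
            "fmt"
            "log"
            "net/http"
        )

        func main() {
            // One handler, no framework, no third-party imports.
            http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintln(w, "hello from net/http")
            })
            log.Fatal(http.ListenAndServe(":8080", nil))
        }
        ```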

        I will concede that good tooling shortly after launch is the least important point, but really spectacular tooling is a good enough selling point for me to use a language on its own, so I think it does matter, since it allows people to write larger programs without waiting so much for the language to mature.

        It appears that I did a poor job of communicating that my list of points was geared towards new languages today (or ones of a similar age to Go), but I will absolutely play with Tcl and continue to investigate other existing options.

        1. 1

          As for the fact that Python doesn’t have a net/http equivalent in its standard library

          Well, there technically is http, with its http.client and http.server modules; it’s just so old that its abstractions are no longer abstract. It seems that nowadays Python’s standard library needs updated abstractions, but those would see little use now, as there are third-party libraries providing them (e.g. requests).

    3. 38

      To err is human; to propagate error automatically to all systems is devops.

    4. 2

      SaaS product? Check. Directly handles your data without encryption or blinding? Check. So it spies on you.

    5. 2

      This looks really, really cool. I have a couple questions that I didn’t see covered in the blog post (but if I missed something feel free to just point that out!):

      1. What’s the timeline like for integrating this into ZeroTier? Has it already been done?
      2. What is the relationship between LF and ZeroTier root servers? Does this completely obsolete ZeroTier root servers, and if so, is that replacement optional? Or does it just augment them?
      1. 3
        1. Integration is happening.

        2. It’ll run behind a root server as its data store giving it full “situational awareness.” Basically it will let the roots that we run and the roots that you run be effectively the same node, like different instances of a microservice connected to the same database and event bus. So if you want to run your own roots they will act exactly like ours, but be on your stuff (even in the same building if you want really good low latency!).

        BTW LF has another neat property: you can disconnect from the network and it keeps working with its current data set, then when you reconnect your data will be merged. So you could set up roots that work across transient connections. You could also go air-gapped and your stuff would keep working. If you ever want to un-air-gap, it keeps working and just re-merges with the net.

        (I’m the author.)

    6. 3

      Global key-value stores without cryptocurrency crap are pretty cool, I wish one took off. dename didn’t, sadly.

      It’s mostly written in Go (1.11+ required)

      >_<

      LF’s intrinsic conflict resolution mechanism relies upon proof of work as a hard to forge proxy for time

      hmm, right, with a merkle dag it sort of is like time.

      I have some ideas though:

      • why not use multiple known public sources of signed timestamps as a(nother) proof of time? The node that wants to claim a name gets (a hash of) the name signed together with a timestamp by public servers:
        • For example, including the name in a Roughtime request nonce might accomplish that;
        • as well as including it as a request header when getting a Signed HTTP Exchange (and the cool thing here is that the certificate of the server that signed the exchange can be checked on Certificate Transparency..)
      • why not protect subsequent updates of a claimed name with a simple pubkey signature check? (updates must be signed simply by the same key as the initial claim) — sort of like TOFU in SSH, a simpler alternative to certificates (a toy sketch of this follows after the list)
        • this can be exposed to users in a convenient way via text passphrases (like BIP39 “brainwallets”)
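
      Here is that toy sketch of the second idea (TOFU-style updates), assuming nothing about LF’s actual record format: the name is bound to whichever key first claimed it, and later updates must verify against that same key. The registry map and names are made up for illustration.

      ```go
      package main

      import (
          "crypto/ed25519"
          "crypto/rand"
          "fmt"
      )

      // registry maps a claimed name to the public key that first claimed it (TOFU).
      var registry = map[string]ed25519.PublicKey{}

      // claimName succeeds only if the name is unclaimed.
      func claimName(name string, owner ed25519.PublicKey) bool {
          if _, taken := registry[name]; taken {
              return false
          }
          registry[name] = owner
          return true
      }

      // updateName accepts a new value only if it is signed by the original claimant's key.
      func updateName(name string, value, sig []byte) bool {
          owner, ok := registry[name]
          return ok && ed25519.Verify(owner, value, sig)
      }

      func main() {
          pub, priv, _ := ed25519.GenerateKey(rand.Reader)
          claimName("example", pub)

          v := []byte("new value")
          fmt.Println(updateName("example", v, ed25519.Sign(priv, v))) // true: same key

          _, otherPriv, _ := ed25519.GenerateKey(rand.Reader)
          fmt.Println(updateName("example", v, ed25519.Sign(otherPriv, v))) // false: different key
      }
      ```
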
      1. 2

        I’m the author, and here are some responses:

        Why not use multiple known public sources of signed timestamps…?

        Not a bad idea. LF is a work in progress. They could be another input in trust / conflict resolution.

        Why not protect subsequent updates…?

        It does this. Multiple entries with the same claimed name are treated as one in terms of trust verification and also carry the ‘weight’ (proof of work cumulative weight) of all versions, not just the latest version. That’s just not clear in the current docs. The query and scoring code is actually somewhat complex: look up all records by key, group by owner, add all revisions, …

    7. 10

      I was expecting a clueless anti-V6 rant. Instead I got a great history and an explanation of why the parts of V6 that suck suck.

    8. 27

      This person fundamentally misunderstands.

      http://www.ariel.com.au/jokes/The_Evolution_of_a_Programmer.html

      Go is not for beginners. Go is for people who are over their infatuation with gratuitous complexity and iamverysmart programming. Go is for programmers who want to get things done, not show off. It’s a low-cognitive-overhead language designed to make it easy to write and ship software and otherwise get the F out of the way. The ecosystem is full of projects written to do things, not to show off how smart the programmer is by their clever use of language features.

      I’ve been programming for over 20 years and I love Go. It’s not perfect by any stretch, and I am also eyeing Rust these days, but Go is a breath of fresh air after decades of languages like Java and C++ on the static front that encourage over-engineering, and sluggish dynamic languages like Ruby, Python, and JavaScript that crumble into messes of hidden bugs when you add more than a few coders to a project. For decades the only refuge has been the 1970s experience of C or the disciplined use of C++ and Java (intentionally avoiding over-engineering).

      1. 6

        I feel like you’ve accurately hit the nail on the head and captured my sentiments exactly. I’ve settled on Go because it’s productive and it’s surprisingly easy to write something and find that it works exactly as you intended first time. It’s not perfect—no language is—but I get more done and feel fulfilled more frequently than I do frustrated with it, so I take that as a win.

        1. 2

          I feel like if Go were for beginners it would not expose pointer internals, syscalls, unsafe, etc. It lets you go low level to the point of doing things that “are not guaranteed to work in future versions of the runtime.” It exposes its internals a ton more than e.g. Java or C#.

          It’s clearly a language designed for high productivity by programmers who know what they’re doing. Beginners can use it of course but there are ways to shoot yourself in the foot.

          The biggest language lawyer complaint about Go is that it lacks generics. I’ve found that I rarely miss generics and that 90% of the time I can come up with another solution that is just as fast and at least as clear.

          Here’s a great example:

          https://golang.org/pkg/sort/#Search

          Binary search in Go is implemented by passing a function that can see your data. No need to implement binary search on a generic.
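
          For instance (a minimal sketch, close to the example in the docs; the slice and target are arbitrary):

          ```go
          package main

          import (
              "fmt"
              "sort"
          )

          func main() {
              a := []int{1, 3, 6, 10, 15, 21, 28, 36, 45, 55}
              x := 15

              // Search returns the smallest index i in [0, n) for which the
              // predicate is true; the closure sees the concrete slice directly,
              // so nothing has to be generic.
              i := sort.Search(len(a), func(i int) bool { return a[i] >= x })
              if i < len(a) && a[i] == x {
                  fmt.Printf("found %d at index %d\n", x, i)
              } else {
                  fmt.Printf("%d not found; would insert at %d\n", x, i)
              }
          }
          ```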

          That kind of thing can be done most of the time. I’ve heard these kinds of idioms described as “Gooey” as in “the Go way.”

          The only real area where not having generics hurts is when you’re trying to implement really deep algorithms and heavy math and get high performance across both simple types like float64 and complex structured types. This is the area where C++ shines the brightest IMHO. In that case I’d be tempted to do the math kernel in C++ and call it from Go and do the more “boring” stuff in Go which is more productive.

    9. 4

      Distributed P2P key/value stores, more efficient distributed pub/sub multicast replication for virtual networks, boring corporate accounting junk.

    10. 3

      It seems to be a common theme these days… people rediscovering, time and time again, why properly normalised data, ACID, and a well-thought-through data model are important.

      I walked into a discussion recently where they were bemoaning the fragility, brittleness and complexity of a large json data structure…

      …my only comment was that I felt I had fallen into a time warp and was back in the late 1980s, when people were bemoaning the problems of hierarchical databases and explaining why an RDBMS was needed.

      Sigh.

      Sort of sad really.

      I’m still waiting for the pro-SQL types to wake up to what C.J. Date has been saying for decades and to up their game beyond nulls and auto-increment keys… but we can’t get there, because we keep having to rehash the basics of normalization and ACID.

      1. 7

        The problem is the lack of SQL databases that take less than days to set up replication in a robust manner. Schemas are not the problem. Arcane, hard-to-administer software is the problem. PostgreSQL replication requires a dedicated DBA. I’m keeping a close eye on CockroachDB.

        1. 4

          I use Amazon RDS at the day job. Unless you have enough data to justify a DBA for other reasons, RDS is inexpensive enough and solves PostgreSQL replication.

    11. 5

      If you like the Slack interface but don’t like the idea of a closed silo housing all communications, check out: https://about.mattermost.com

      I have no affiliation with this company but we do use this.

      The biggest things the Slack-style interface brings to the table are searchability, cross-device sync, and persistence. I like those, and being able to hop in, scroll back, and catch up. I still prefer aspects of IRC though, and all these Slack-style apps are a lot fatter than an IRC client.

      I’ve never had a problem with it disrupting work. I just close the damn thing if I don’t want it right now. If your org whines to you if you do this, you have a culture/management problem not a tech problem.

    12. 17

      People seem to be missing the forest for the trees in this thread. The whole point of multi-user OSes was to compartmentalize processes into unique namespaces, a problem we’ve solved again thanks to containers. The issue is that containers are a wrecking-ball solution to the problem when maybe a sledgehammer (one that removes some of our assumptions about resource allocation) would have sufficed.

      For example, running a web server. If you’re in a multi-tenant environment and you want to run multiple web servers on port 80, why not… compartmentalize that, instead of building this whole container framework?

      Honestly, I think this article raises a point that it didn’t mean to: the current shitshow that is the modern micro-service/cloud architecture landscape resulted from an overly conservative OS community. I understand the motivations for conservatism in OS communities, but we can see a clear result: process developers solving problems OSes should solve, badly. Because the developers working in userspace aren’t working from the same perspective as developers working in the OS space, they come up with the “elegant” solution of bundling subsets of the OS into their processes. The parts they need. The parts they care about. The real problem was that the OS should have been providing the services they needed, in which case the whole thing would have been solved with something like 10% of the total RAM consumption.

      1. 3

        This is reasonable… except when you mentioned RAM consumption:

        Although containers themselves have almost no overhead, Docker is not without performance gotchas. Docker volumes have noticeably better performance than files stored in AUFS. Docker’s NAT also introduces overhead for workloads with high packet rates. These features represent a tradeoff between ease of management and performance and should be considered on a case-by-case basis.

        Run containers with host networking on the base filesystem and there is no difference. Our wrecking balls weigh the same as our sledgehammers.

        http://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B/$File/rc25482.pdf

        1. 6

          The problem isn’t really RAM or CPU weight, though the article uses that aspect to get its catchy title. The problem is unnecessary complexity.

          Complexity is brittle, bug-prone, security-hole-prone, and imposes a high cognitive load.

          The container/VM shitshow is massively complex compared to just doing multi-tenancy right.

          Simpler solutions are nearly always better. Next best is to find a way to hide a lot of complexity and make it irrelevant (sweep it under the rug, then staple the rug to the floor).

          1. 1

            Complexity is brittle, bug-prone, security-hole-prone, and imposes a high cognitive load.

            The container/VM shitshow is massively complex compared to just doing multi-tenancy right.

            Citation needed. This article is itself proof that doing “multi-tenancy right” requires additional complexity too: designing a system where unprivileged users can open ports <1024. Doing “multi-tenancy right” also requires disk, memory, and CPU quotas and accounting (cgroups), process ID namespaces (otherwise you can spy on your colleagues), user ID namespaces so you can mount an arbitrary filesystem, etc., etc., etc.

            BSD jails have had all of these “complexities” for 20 years, and yet no one gripes about them. I suspect it’s just because linux containers are new and people don’t like learning new things.

    13. 6

      Erm, so I disable priv ports. I start a web server on port 80. Little Timmy comes along and starts a web server on port 80. What happens now?

      1. 3

        Timmy’s call to bind() fails, because the port is already in use by you.
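
        Concretely (a tiny sketch; the port number is arbitrary), the second bind simply returns an error:

        ```go
        package main

        import (
            "fmt"
            "net"
        )

        func main() {
            first, err := net.Listen("tcp", ":8080") // your server takes the port
            if err != nil {
                panic(err)
            }
            defer first.Close()

            _, err = net.Listen("tcp", ":8080") // Timmy's bind fails
            fmt.Println(err)                    // "... address already in use"
        }
        ```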

        1. 4

          Then how is this actually useful for running multiple web servers on the same box? Wouldn’t it end up in a free-for-all, with the first user who starts up their Wordpress install getting port 80, while the rest have to contend with another, non-standard port?

          1. 12

            What *nix really needs is the ability to assign users ownership to IP addresses. With IPv6 you could assign every machine a /96 and then map all UIDs onto IP space.

            This is probably a better idea than even getting rid of privileged ports. You can bind to a privileged port if you have rw access to the IP.

            The real issue here is that Unix has no permissions scheme for IPs the way it does for files, etc.
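
            A quick sketch of the /96-per-machine idea: the upper 96 bits are the machine’s prefix and the low 32 bits are the UID (the prefix and UID below are made up for illustration):

            ```go
            package main

            import (
                "encoding/binary"
                "fmt"
                "net"
            )

            // uidToIPv6 drops a 32-bit UID into the low 32 bits of a /96 prefix,
            // so every local user gets a stable address under the machine's prefix.
            func uidToIPv6(prefix net.IP, uid uint32) net.IP {
                ip := make(net.IP, net.IPv6len)
                copy(ip, prefix.To16())
                binary.BigEndian.PutUint32(ip[12:], uid)
                return ip
            }

            func main() {
                prefix := net.ParseIP("2001:db8:1234:5678:9abc:def0::") // made-up /96
                fmt.Println(uidToIPv6(prefix, 1000))                    // 2001:db8:1234:5678:9abc:def0:0:3e8
            }
            ```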

            1. 5

              It’s not so very much code to write a simple daemon that watches a directory of UNIX sockets, then binds to the port of the same name and forwards all traffic. Like UNIX-programming-101 homework easy. One can certainly argue it’s a hack, but it’s possible, and it’s been possible for 20 years if that’s what people wanted. No kernel changes required.
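
              Roughly something like this (a bare-bones sketch: no inotify watching, no privilege handling, and the /run/userports layout and .sock naming are made up for illustration):

              ```go
              package main

              import (
                  "io"
                  "log"
                  "net"
                  "os"
                  "path/filepath"
                  "strings"
              )

              const sockDir = "/run/userports" // e.g. /run/userports/80.sock, owned by some user

              // proxy listens on the TCP port named by the socket file and forwards
              // every connection to that unix socket.
              func proxy(port, sockPath string) {
                  ln, err := net.Listen("tcp", ":"+port)
                  if err != nil {
                      log.Printf("bind :%s: %v", port, err)
                      return
                  }
                  for {
                      c, err := ln.Accept()
                      if err != nil {
                          log.Printf("accept :%s: %v", port, err)
                          return
                      }
                      go func(c net.Conn) {
                          defer c.Close()
                          u, err := net.Dial("unix", sockPath)
                          if err != nil {
                              return
                          }
                          defer u.Close()
                          go io.Copy(u, c) // client -> socket
                          io.Copy(c, u)    // socket -> client
                      }(c)
                  }
              }

              func main() {
                  entries, err := os.ReadDir(sockDir)
                  if err != nil {
                      log.Fatal(err)
                  }
                  for _, e := range entries {
                      if name := e.Name(); strings.HasSuffix(name, ".sock") {
                          go proxy(strings.TrimSuffix(name, ".sock"), filepath.Join(sockDir, name))
                      }
                  }
                  select {} // run until killed
              }
              ```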

              I think there’s a corollary to “necessity is the mother of invention”: if it hasn’t been invented, it’s not necessary. To oversimplify a bit.

        2. 2

          Sounds like Timmy needs a VM, so now I’m unclear on exactly how we’ve solved the energy crisis.

          1. 2

            Well, what happens when I grab 10.0.0.2 too? And .3 and .4?

            There needs to be an address broker at some level, and I’m not convinced it’s impossible for that broker to be nginx.conf proxying a dozen different IPs to a dozen different unix sockets. There’s a fairly obvious solution to the problem that doesn’t involve redesigning everything.

            So why, then, does AWS offer VMs instead of jamming a hundred users onto a single Linux image? Well, what if I want to run FreeBSD? A VM offers a nice abstraction that lets me run a different operating system entirely. Now maybe this is an argument for exokernels and rump kernels and so forth, but I didn’t really see that being proposed.

            1. 6

              OK, sorry, didn’t mean to be argumentative. But it’s a really long article, so I could only keep some of it in my head, and it got a lot of upvotes, so I’m trying to mine out what the insights are. But don’t feel personally obligated to explain. :)

              There seemed to be a metapoint that things are inefficient because we’re using some old design from another era and it’s obsolete. But I didn’t see much discussion of why we can’t keep the design we have and use the tools we have in a slightly better way. Like nginx.conf to multiplex. Shared web hosting used to be a thing, right?

              1. 4

                I feel the metapoint was the opposite. The author wanted to go back to the old way things were done, but simply allow users to have their own IP address in the same way they have their own home directory.

                You can already add many IP addresses to a single machine in BSD and Linux. In Linux (I don’t know about BSD), you can even create virtual sub-interfaces that have their own info but reside on the same physical interface. The author wanted unix permissions on interfaces too, rwx = read, write, bind. So your hypothetical user Timmy would have /home/timmy and eth0:timmy, with rwx on /home/timmy and r-x on eth0:timmy. They would be able to read their IP, MAC, etc., and bind to it, but not change it.

              2. 2

                Shared web hosting used to be a thing. I think people have realised that hosting a website means running code, one way or another, and traditional unix was never really suited to the idea that there would be multiple people organizing code on the same machine: multiple users yes, but unix is very much single-administrator.

                More concretely, library packaging/versioning sucks: it’s astonishingly difficult to simply have multiple versions of a shared library installed and have different executables use the versions they specify. Very few (OS-native) packaging systems support installing a package per-user at all. Even something like running your website under a specific version of python is hard on shared hosting. And OS-level packaging really hasn’t caught up with the Cambrian explosion of ways to do data storage: people have realised that traditional square-tables-and-SQL has a lot of deficiencies but right now that translates into everyone and their dog writing their own storage engine. No doubt it will shake out and consolidate eventually, but for now an account on the system MySQL doesn’t cut it but the system has no mechanism in place for offering the user persistence-service-of-the-week.

                Personal view: traditional unix shared too much - when resources were very tight and security not very important it made sense to optimize for efficiency over isolation, but now the opposite is true. I see unikernels on a hypervisor as, in many ways, processes-on-a-shared-OS done right, and something like Qubes - isolation by default, sharing and communication only when explicitly asked for, and legacy compatibility via VMs - as the way forward.

                1. 1

                  Isn’t this exactly the problem solved by virtualenv and such? I’ve never found it especially difficult to install my own software. There was a big nullprogram post about doing exactly this recently.

                  There are some challenges for sure, but I get the sense that people just threw their hands in the air, decided to docker everything, and allowed the situation to decay.

                  1. 1

                    virtualenv has never worked great: a lot of Python libraries are bindings to system C libraries and depend on those being installed at the correct version. And there’s a bunch of minor package-specific fiddling because running in virtualenv is slightly different from running on native python.

                    People reached for the sledgehammer of docker because it solved their problem, because fundamentally its UX is a lot nicer than virtualenv’s. Inefficient but reliable beats hand-tuning.

                2. 1

                  You can’t quite use namespaces that way. Net namespaces are attached to a process group, not a user. But doing something like I described would truly assign one IP address to a user. That user would have that IP address always. They would ssh to it, everything they started would bind to it by default, and so on. It would be their home IP in the same way their home directory is theirs.

                3. 1

                  Docker is also mentioned as bloat because of the image required for each container.

                  Container and layer sprawl can be real. I can’t deny that :)

                  But you have two options to mitigate that:

                  1. Build your Dockerfile FROM scratch and copy in static binaries. If you’re doing C or Go, this works very well.

                  2. Pick a common root: Alpine Linux (FROM alpine) is popular since it is fairly small. Once that is fetched, any container that references it will reuse it, so your twenty containers will not all download the same Linux system.

      2. 1

        They have different IP addresses. There must be some way to use multiple addresses on the same Linux install, and if there isn’t, it would be easy to add.

  1. 2

    From the article: network service multi-tenancy. What does that even mean? Good question. I think that in his ideal world we’d be using network namespaces and assigning more IPs per machine.

    Honestly it sounds like despite his concerns about container overhead, his proposal is basically to use containers/namespaces. Not sure why he thinks they are “bloated”.

    1. 3

      A few numbers would certainly make the overhead argument more concrete. Every VM has its own kernel and init and libc. So that’s a few dozen megabytes? But a drop in the bucket compared to the hundreds of megabytes used by my clojure web app. So if I’m provisioning my super server with lots of user accounts, I can get away with giving each user 960MB instead of 1024MB like I would for a VM? Is that roughly the kind of savings we’re talking about?