1. 3

    Bonus points if they offer per-domain API keys, so I can automate renewing Let’s Encrypt wildcard certs without giving the server access to all of my domains :)

    1. 5

      It doesn’t matter at the scale of 2 cores/8GB RAM/100GB disk, but if you start wanting more resources, colocating can be significantly cheaper than a VPS or cloud provider, and provides more stability (no noisy neighbors or strange networking problems). It would be interesting to see that cost comparison as well (amortized over some number of years).

      1. 2

        For static workloads, yes this is absolutely true. But if scale starts moving up or down, you’re stuck with the hardware you bought until you buy (and deliver and rack and configure) more.

        1. 4

          People tell me this all the time, but when I ask them what their ratio of peak load to typical load is, it’s usually smaller than the premium one pays for running in the cloud - unless you’re large enough to negotiate a contract where you get much better than the public rates (which many people do, but that makes it pretty hard to price-compare). For workloads where you’re only using your hardware a very small percentage of the time (data analysis, etc), the cloud makes sense; but for a lot of companies with very seasonal traffic (ecommerce sites, etc), I’ve asked people about their peak load and it usually doesn’t seem to justify being in the cloud :/

          1. 1

            I both agree and disagree with you. The people I know at this scale (big enough to own hardware, but not one of the super-players) usually have a relatively steady load, and so keep resources in reserve so that they can provision extra/new apps when the need arises and still have time to get new hardware. On the other hand, if they actually want to colo (because they need to be out there on the internet), their load is much more volatile and harder to predict, so they overprovision a lot.

            Granted, this is based on personal experience, but I just wanted to bring it up.

            1. 1

              How much cheaper is it than (eg) 3 year non-convertible reserved instances?

              1. 1

                And people always undervalue time to delivery. I was in a position where we had to wait months to get newer nodes into a cluster because the rack was full.

                1. 1

                  And having people around who can deal with both admin and hardware stuff.

              2. 1

                Both are completely valid and it varies organization to organization. I’m an IT consultant and it’s always hit or miss whether the company will have the spare capacity in their vSphere cluster to provision servers when the project demands it. One product I deploy needs four cores and 16GB of RAM dedicated, and you might be surprised at the number of times we’ve put the entire project on hold for months because they need to purchase and install the extra capacity in their VM cluster. If they had a project that was a surprise smash hit and they hadn’t provisioned resources for it weeks in advance, they’d be throwing money down the drain.

                Some companies have the capacity to scale already (which may or may not be a waste of money like you said), while some can scale but it would take weeks or months. There are pros and cons to both approaches and it really depends on the company and product you’re talking to.

          1. 2

            Maybe it makes me a certain kind of developer to say this, but if there are like 8 idiomatic ways to do 1 extremely common thing in a language, I can’t see that as anything other than a total failure of process and design.

            Probably there are author/readers who are happy to learn new grammar, vocabularies, and/or alphabets for every book they work with, but to me it just seems like an abdication of responsibility.

            1. 3

              Are you familiar with the term “design space”, Peter? It’s entirely possible that your preferred PL hasn’t arrived at the optimal solution either. In an imperfect world, different sets of tradeoffs can be equally valid. It’s not that the ecosystem is terribly fragmented over this problem, and even Haskell practitioners have learned to navigate which idioms work in which situation (and which don’t).

              YMMV, but I tend to believe that experimentation is ultimately the only way to arrive at better solutions - even if that occurs only in small increments, and even if a lot of things get re-invented in different contexts along the way. I also feel old enough to make my choices on my own.

              1. 0

                I’m completely down with experimentation and exploring a design space. What I’m not down with is doing those things with, or in, a general-purpose, non-academic programming language. In my opinion, the authors of these languages have the responsibility to make specific decisions in all of the dimensions of the design space, so that users can build on an invariant foundation. Experimentation is better done at a different level, with a different language altogether, rather than a continuous stream of modifications to an existing one. And this is because success and failure, what works and what doesn’t, takes a long time to figure out, years at least. Users have to build several generations of artifacts on top of a language foundation to get a feel for it. Changing the foundation along with the artifacts subverts the process and largely invalidates the conclusions.

                Of course I’m saying all this from a particular perspective which not everyone shares. I join organizations and use languages to deliver value. While I benefit from the results of experiments, experimentation as a process isn’t useful to me per se; the churn makes my work a lot harder.

                Concretely, if I join a team whose Rust artifact spans enough time, I’ll have to learn not only the problem domain but likely many different idioms for many different things like error handling, as different techniques drifted into and out of fashion over the project’s lifetime. This overall condition sucks a lot more than whatever benefits a given experimental technique may have brought.

                1. 4

                  This post is largely describing libraries that people have written, and all of the changes to the standard library Error type it describes don’t change the foundational model at all. Isn’t that “doing experimentation at a different level” than the language?

                  Additionally, since Error is a trait, you can use libraries that use these different error types in a normal Rust program without needing to learn the new idioms that the library provides, if you want.

                  That seems pretty good to me, although I understand how getting it perfect on the first try would have been better :P

                  1. 1

                    Yeah, I may be overstating things here.

                  2. 1

                    What @wesleyac said. I assume the “creating value” proposition was meant as a polemic, too.

              1. 3

                This should really talk about the privacy side of writing this feature - it’s super hard to get right (what “right” even means depends a lot on your threat model), and if you want to build one of these, you need to know what the privacy implications are. It looks like the linked implementation returns the image URL to the client - unless you go out of your way to cache it on the server, linking the image like that will leak details about the user who sees the preview.

                See https://signal.org/blog/i-link-therefore-i-am/ for some thoughtful discussion of this.
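
                If you do cache the image on the server, the shape of it is roughly this - a minimal sketch of my own, assuming Node with express and node-fetch; a real version would also need to validate the URL to avoid SSRF:

                const express = require('express');
                const fetch = require('node-fetch');

                const app = express();
                const cache = new Map(); // url -> { type, body }; use a real store with eviction in practice

                app.get('/preview-image', async (req, res) => {
                  const url = req.query.url;
                  // NOTE: validate url against an allowlist here - fetching arbitrary URLs is an SSRF risk.
                  if (!cache.has(url)) {
                    // The *server* fetches the image, so the viewer's IP and headers never reach the linked site.
                    const upstream = await fetch(url);
                    cache.set(url, {
                      type: upstream.headers.get('content-type'),
                      body: await upstream.buffer(),
                    });
                  }
                  const img = cache.get(url);
                  res.set('Content-Type', img.type);
                  res.send(img.body);
                });

                app.listen(3000);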

                1. 2

                  Most modern web developers don’t consider these things, which is a huge issue. The notion is to “ship fast, deploy young”, and we are all paying the price.

                  Granted, one little link preview won’t break the camel’s back. But given how much of this complexity is used everywhere - hundreds and hundreds of little JavaScript snippets loaded on things as simple as blogs - the disaster is almost comically insurmountable.

                  1. 1

                    In a web app, it could be actually harmful to trust the client to generate it for every other user (versus in a messenger setting where there’s only one “other”)

                    1. 1

                      Yeah, this is true - possibly the Signal blog post isn’t the best thing to link to - for a web app, I’d be more concerned about leaking information about the people who view it (and in order to not do this, you need to think about caching the image on the server, etc, which this article doesn’t go into at all). Your threat model depends on your use case, but I think it’s bad that this article doesn’t talk about privacy at all.

                    2. 0

                      You can see this feature in use-cases other than private messages. Anyway, this article is about generating preview data from a URL; it’s not about building a private messenger.

                      1. 1

                        Sure - in the webapp usecase, it might be bad for the client to generate it, but it’s also bad to have the client directly request the image from the server that’s being linked to, for privacy reasons. You need to think about the threat model for the thing you’re building.

                    1. 4

                      This is super neat! That said, I think that ASCII/English has been really handy for giving all programmers a lingua franca for working together–I’m not sure throwing that away in the name of local identity is an unalloyed good.

                      1. 7

                        The author addresses this in their talk at Deconstruct this year (not published yet, but will be eventually), and, in a less compelling way in http://ojs.decolonising.digital/index.php/decolonising_digital/article/view/PersonalComputer/3

                        Basically - yes, privileging one language over another is not good, but that goes whether the language is English or Arabic. This is an art project, but a programming language that embodies these ideals would not have any canonical “name” for a given function, and instead would allow the programmer to map a human-friendly name in any language onto the computer-friendly identifier (possibly a content-addressable system like Unison). This also fixes lots of real problems with linking, etc (you don’t have to do name mangling, for instance).

                        1. 5

                          I guess I can see both sides. On the one hand, it’s nice to have as many people as possible using the same programming language, so that there are many libraries for those languages, and many users to find and fix bugs. From that point of view, non-English speakers having to learn a little bit of English to find docs and keywords and such is just a cost of doing business.

                          On the other hand, it’s nice to have a way for non-English speakers to get into programming without having to learn a foreign language. From that point of view, the cost of doing business is that they may get siloed into a small-time language without as much presence for libraries, docs, error message help, etc. But it might help people become interested enough in programming to explore other languages, when they might never bother without a first taste from this.

                          What does all that come down to, though? I wouldn’t want to actively suppress it or anything. Maybe just let users know what the legitimate concerns are, and let them make their own choice about what to learn, I guess.

                          1. 3

                            The Arabic script is not particularly well suited for use as a symbolic alphabet because of its highly cursive nature and the way that letter forms change based on location within a word. This is relevant to the script’s use as an ASCII-equivalent (and makes font layout rules for Arabic more complicated than for some other scripts), but also means that older technologies like the printing press were harder to adapt to Arabic-script languages.

                            It’s interesting to think about how an equivalent of ASCII (and precursor technologies like teletypewriters and telegraphs) would’ve been developed with respect to a non-Latin script. That said, I’m not personally particularly fond of the aesthetics of the Arabic script. There are other alphabets I would be more inclined to see an alternate-world ASCII of - the Mkhedruli script used by Georgian and a few other Caucasian languages, for instance, is really pretty and less well known than it should be.

                            1. 4

                              The story of the typewriter for Hangul (the Korean alphabet) is truly fascinating, but unfortunately mostly unavailable in languages other than Korean. For example, typewriters had two keys for “w” because of a technical limitation of typewriters (the two keys had different advances). Modern computer keyboard layouts inherited this even though the technical limitation no longer applies.

                              1. 1

                                Or cuneiform!

                              2. 2

                                It was a happy accident that the lingua franca for programming is also very well suited: English’s highly irregular spelling means more short strings are words, so short names can be more expressive.

                              1. 8

                                I got an itch, so I started working on designing a Z80-based laptop. I have most of the mainboard figured out (I think), but still need to work out the keyboard (ideally some kind of mechanical keyboard) and graphics (thinking 7 or 8” 320x240 LCD). And I need to design the case. I’m toying with the idea of an ESP8266-based “modem” for it, too.

                                I was debating between the 6502 and the Z80, but then I found this book and that decided that.

                                I have a bunch of ideas that I really just don’t know if they’re even doable because I don’t really have much experience with designing this kind of hardware.

                                I’ve also been learning a lot of math lately, and still tinkering around with some robots.

                                1. 2

                                  Please post more about this project as it progresses. I found some appropriate context (and nostalgia) in this Z-80 advertisement from May 1976.

                                  1. 1

                                    I’m tagging relevant posts with z80 on my site, if that’s what you’re looking for. Otherwise, I’ll probably post weekly here as I progress, assuming I do.

                                  2. 2

                                    What are you doing for the screen interface? I’ve been working on a Z80 computer, and it seems hard to find a screen with reasonable resolution that the Z80 is fast enough to actually drive without some sort of coprocessor.

                                    1. 2

                                      I don’t know yet; I haven’t ruled out the idea of making a GPU of sorts out of an AVR. Some options include an SSD1306 (I2C) or KS0108-based display; if I do 320x240 it’s going to have to be monochrome, simply on account of the memory that’s required for something like that.
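
                                      The framebuffer arithmetic is what forces that:

                                      320 × 240 = 76,800 pixels
                                      1 bpp (monochrome): 76,800 / 8 = 9,600 bytes
                                      4 bpp (16 colours): 76,800 / 2 = 38,400 bytes

                                      9.6 KB of video RAM is workable; 38 KB would eat well over half of the Z80’s 64 KB address space before any code or data.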

                                  1. 9

                                    I wouldn’t be so quick to write off static websites. The use cases you listed are entirely do-able with a static site, and modern generators (hugo, jekyll, pelican) are miles ahead of where static site generation was even a few short years ago.

                                    I serve my blog with Github Pages, so deploying is git push (wrapped in a script that does some other stuff like re-generating the static files). Things like comments (or photo galleries) can be embedded if you really want to go that route. Though there are often clever ways to accomplish stuff like this anyway (e.g staticman for comments).
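
                                    That wrapper really is tiny - something in the spirit of this sketch (assuming Jekyll as the generator; which branch GitHub Pages serves varies by setup):

                                    // deploy.js - regenerate the static files, then push to GitHub Pages
                                    const { execSync } = require('child_process');
                                    const run = (cmd) => execSync(cmd, { stdio: 'inherit' });

                                    run('jekyll build');                 // or hugo / pelican - whatever generates the site
                                    run('git add -A');
                                    run('git commit -m "rebuild site"'); // assumes there is actually something to commit
                                    run('git push origin master');       // GitHub Pages serves from this branch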

                                    1. 8

                                      modern generators (hugo, jekyll, pelican) are miles ahead of where static site generation was even a few short years ago.

                                      I’m curious what innovations you’ve seen in static site generators in the past few years? I haven’t seen anything fundamentally different, but I’m curious if there’s something I’m missing!

                                      1. 1

                                        Me too.

                                        I used to use and love Pelican for my blog, and I still have the utmost respect for that project and its maintainers (The code is solid and very approachable - something I really value.)

                                        But I have zero web design talent and couldn’t get it looking the way I wanted, and also posting from a mobile device is pretty painful, so I turned back to the dark side and now use wordpress.com.

                                    1. 5

                                      I’m unconvinced that the size of binaries is correlated at all with any metric people actually care about. Anecdotally, people used to write games in assembly, and then C++ - both languages that produce reasonably sized binaries, but nowadays it’s common to include interpreters (lua, etc), drivers for many different controllers, whatever crap the unity standard library includes, etc. This is great for dev productivity, but has no value to the consumer (or even negative value, since they need to download all of that).

                                      I know that this is mentioned in the post, but I think that it completely undermines the point of all of the analysis that uses binary size.

                                      1. 2

                                        It’s mostly the size of the assets. Binaries are nothing compared to them.

                                        1. 1

                                          C++ - both languages that produce reasonably sized binaries

                                          Wait, what?

                                          Including a single template in your C++ code can easily dwarf the size of the Lua interpreter.

                                        1. 4
                                          y = false, true; // returns true in console
                                          console.log(y); // false (left-most)
                                          

                                          Huh, that definitely tripped me up for a second. Is this because the comma is higher precedence than the assignment?

                                          1. 9

                                            The assignment to y belongs wholly to the expression on the left side of the comma operator: y = false. The left and right sides of the comma operator don’t interact. The comma operator is just a way to squeeze in two or more expressions where only one expression is valid, e.g. the first clause of a for loop. The list of expressions is treated as a single expression that always evaluates to the result of the right-most expression. For that reason y = (false, true) has your expected result. Along the same lines var x = 1, y = 2 expands to var x; var y; x = 1; y = 2; because of variable hoisting.
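
                                            A quick demo of the grouping, with my own annotations:

                                            let y;
                                            y = false, true;    // parsed as (y = false), true - the whole expression evaluates to true
                                            console.log(y);     // false

                                            y = (false, true);  // parens force the comma to evaluate first, so y gets the right-most value
                                            console.log(y);     // true

                                            // A legitimate use: squeezing two updates into the last clause of a for loop.
                                            for (let i = 0, j = 9; i < j; i++, j--) { /* ... */ }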

                                          1. 6

                                            Recently there’s been a lot of discussion of keyboard latency, as it is often much higher than reasonable. I’m interested in how much the self-built keyboard community is aware of the issue. Tristan Hume recently improved the latency of his keyboard from 30ms to 700µs.

                                            1. 2

                                              The Planck that Dan and I tested had 40ms of latency - not sure how much that varies from unit to unit though.

                                              1. 3

                                                I would expect very little, using the QMK firmware with a custom keymap. There’s typically only a handful of lines of C with a couple of ifs, no loops.

                                              2. 2

                                                Why are those levels of latency problematic? I would think anything under 50ms feels pretty much instantaneous. Perhaps for people with very high typing speeds or gamers?

                                                1. 1

                                                  The end-to-end latency on a modern machine is definitely noticeable (often in the 100s of ms). Many keyboards add ~50 ms alone, and shaving that off results in a much nicer UX. It is definitely noticeable comparing, say, an Apple 2e (~25ms end-to-end latency) to my machine (~170ms end-to-end latency, IIRC).

                                                2. 1

                                                  I recall reading about that. I’ll see about getting some measurements made, and see what it’s like on my Planck.

                                                  I’m interested in how much the self-built keyboard community is aware of the issue

                                                  I haven’t really seen much about it :/ If we could find an easy way of measuring latency without needing the RGB LEDs and camera, that would be good.

                                                  1. 2

                                                    A simple trick - use a contact microphone (piezo) and jack it into something like https://www.velleman.eu/products/view/?id=435532

                                                1. 2

                                                  I sort of disagree that this should be personal preference - Git itself, as well as many prominent figures, advocates for the imperative mood. Seeing as this is already relatively standard, I don’t see any benefit in taking other approaches. To be convinced that another approach is better, I’d have to be convinced that the benefits are worth doing something non-standard, which I think is unlikely for the vast majority of projects.

                                                  Like most style-guide things, I didn’t like it at first, but stockholm syndrome has set in and it’s all good now :P

                                                  1. 14

                                                    I am back from Recurse Center. I’m catching up with family and friends, and doing the hundred put-off chores that come with returning from a long trip and finishing a big project. Probably not much code in my future this week, but I hope to sneak in some Advent of Code.

                                                    1. 5

                                                      Thanks for writing that report! It sounds like it was a very productive and fun trip.

                                                      And thanks a lot for hosting lobste.rs too! It’s been a great resource for me while I develop my shell.

                                                      I was considering attending Recurse, as a change of environment to “finish” up my shell in 2018. One silly question: do they have computers there? Or is everyone coding on a laptop? Do they have monitors?

                                                      I looked here and couldn’t find the answer:

                                                      https://www.recurse.com/manual#sec-environment

                                                      It’s a little silly, but I’m most productive on Linux, while my laptop is a Mac. I have used VirtualBox but somehow it feels a little wrong. Probably something to do with the screen size. Also my tests take a fair amount of computing power.

                                                      Do they have a printer there? Another thing is that I frequently print out CS papers to read (I don’t like reading long docs on a laptop or tablet.)

                                                      They aren’t dealbreakers as I can make my own arrangements, but I’m just curious.

                                                      1. 4

                                                        Folks bring their own computers. Mostly that’s laptops, but one or two people brought desktop PCs. I think it was because they wanted the processing power for ML tasks, but you could bring one just because you prefer it, sure. There are a half-dozen monitors available for use. There are two printers, one of which can take print jobs via email (I spent an hour or two with CUPS but never got either working).

                                                        1. 4

                                                          Just going to chime in to say that RC is great! I finished up an 18-week stint there a month or so ago, and I really enjoyed my time there.

                                                          Re: printing out papers - as @pushcx mentioned, there are a couple printers, and one of the parts of RC that I enjoyed a lot was finding interesting papers that folks had printed out lying around the space and reading them :)

                                                          1. 3

                                                            Hi! RC is awesome – I’m not sure if they have a printer but there are probably between 5 and 10 monitors, and almost always some are free.

                                                            1. 1

                                                              There is at least one working laser printer there as of May of this year :)

                                                        1. 11

                                                          Hey @loige, nice writeup! I’ve been aching to ask a few questions of someone ‘in the know’ for a while, so here goes:

                                                          How do serverless developers ensure their code performs to spec (local testing), handles anticipated load (stress testing) and degrades deterministically under adverse network conditions (Jepsen-style or chaos testing)? How do you implement backpressure? Load shedding? What about logging? Configuration? Continuous Integration?

                                                          All instances of applications written in a serverless style that I’ve come across so far (admittedly not too many) seemed to offer a Faustian bargain: “hello world” is super easy, but when stuff breaks, your only recourse is $BIGCO support. Additionally, your business is now non-trivially coupled to the $BIGCO and at the mercy of their decisions.

                                                          Can anyone with production experience chime in on the above issues?

                                                          1. 8

                                                            Great questions!

                                                            How do serverless developers ensure their code performs to spec (local testing)

                                                            AWS, for example, provides a local implementation of Lambda for testing (SAM Local). Otherwise normal testing applies: abstract out business logic into testable units that don’t depend on the transport layer.
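
                                                            Something shaped like this, say (a sketch - the names are invented, not any framework’s convention):

                                                            // Pure business logic: unit-testable, knows nothing about Lambda or HTTP.
                                                            function applyDiscount(order) {
                                                              return order.total > 100 ? order.total * 0.9 : order.total;
                                                            }

                                                            // Thin transport layer: the only part that needs SAM Local or a staging environment.
                                                            exports.handler = async (event) => {
                                                              const order = JSON.parse(event.body);
                                                              return {
                                                                statusCode: 200,
                                                                body: JSON.stringify({ total: applyDiscount(order) }),
                                                              };
                                                            };

                                                            exports.applyDiscount = applyDiscount; // exposed so unit tests can hit it directly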

                                                            handles anticipated load (stress testing)

                                                            Staging environment.

                                                            and degrades deterministically under adverse network conditions (Jepsen-style or chaos- testing)?

                                                            Trust Amazon / Microsoft / Google. Exporting this problem to your provider is one of the major value adds of serverless architecture.

                                                            How do you implement backpressure? Load shedding?

                                                            Providers usually have features for this, like rate limiting for different events. But it’s not turtles all the way down: eventually your code will touch a real datastore that can overload, and you have to detect and propagate that condition, same as in any other architecture.
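
                                                            For example, with DynamoDB behind a Lambda, detection and propagation can look like this (a sketch - readItem is a placeholder, though the error code is DynamoDB’s real throttling error):

                                                            exports.handler = async (event) => {
                                                              try {
                                                                return { statusCode: 200, body: JSON.stringify(await readItem(event)) };
                                                              } catch (err) {
                                                                if (err.code === 'ProvisionedThroughputExceededException') {
                                                                  // The datastore is throttling us - shed load and tell clients to back off.
                                                                  return { statusCode: 429, headers: { 'Retry-After': '1' }, body: '' };
                                                                }
                                                                throw err; // everything else is a real error
                                                              }
                                                            };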

                                                            What about logging?

                                                            Also a provider value add.

                                                            Configuration?

                                                            Providers have environment variables or something spiritually similar.

                                                            Continuous Integration?

                                                            Same as local testing, but automated?

                                                            but when stuff breaks, your only recourse is $BIGCO support

                                                            If their underlying infrastructure breaks, yep. But every architecture has this problem, it just depends on who your provider is. When your PaaS provider breaks, when your IaaS provider breaks, when your colo provider breaks, when your datacenter breaks, when your electrical provider blacks out, when your fuel provider misses a delivery, when your fuel mines have an accident. The only difference is how big the provider is, and how much money its customers pay it to not break. Serverless is at the bottom of the money food chain; if you want fewer problems then you take on more responsibility and spend the money to do it better than the provider for your use case, or use more than one provider.

                                                            Additionally, your business is now non-trivially coupled to the $BIGCO and at the mercy of their decisions.

                                                            Double-edged sword. You’ve non-trivially coupled to $BIGCO because you want them to make a lot of architectural decisions for you. So again, do it yourself, or use more than one provider.

                                                            1. 4

                                                              And great answers, thank you ;)

                                                              Having skimmed the SAM Local doc, it looks like they took the same approach as they did with DynamoDB local. I think this alleviates a lot of the practical issues around integrated testing. DynamoDB Local is great, but it’s still impossible to toggle throttling errors and other adverse conditions to check how the system handles these, end-to-end.

                                                              The staging-env and CI solution seems to be a natural extension of server-full development, fair enough. For stress testing specifically, though, it’s great to have full access to the SUT, and to be able to diagnose which components break (and why) as the load increases. That approach is at odds with the opaque nature of the serverless substrate. You only get the metrics AWS/Google/etc. can provide you. I presume dtrace and friends are not welcome residents.

                                                              If their underlying infrastructure breaks, yep. But every architecture has this problem, it just depends on who your provider is. When your PaaS provider breaks, when your IaaS provider breaks, when your colo provider breaks, when your datacenter breaks, (…)

                                                              Well, there’s something to be said for being able to abstract away the service provider and just assume that there are simply nodes in a network. I want to know the ways in which a distributed system can fail – actually recreating the failing state is one way to find out and understand how the system behaves and what kind of countermeasures can be taken.

                                                                if you want fewer problems then you take on more responsibility

                                                              This is something of a pet peeve of mine. Because people delegate so much trust to cloud providers, individual engineers building software on top of these clouds are held to a lower and lower standard. If there is a hiccup, they can always blame “AWS issues”[1]. Rank-and-file developers won’t get asked why their software was not designed to gracefully handle these elusive “issues”. I think the learned word for this is the deskilling of the workforce.

                                                              [1] The lack of transparency on the part of the cloud providers around minor issues doesn’t help.

                                                              1. 3

                                                                For stress testing specifically, though, it’s great to have full access to the SUT, and to be able to diagnose which components break (and why) as the load increases.

                                                                It is great, and if you need it enough you’ll pay for it. If you won’t pay for it, you don’t need it, you just want it. If you can’t pay for it, and actually do need it, then that’s not a new problem either. Plenty of businesses fail because they don’t have enough money to pay for what they need.

                                                                This is something of a pet peeve of mine. Because people delegate so much trust to cloud providers, individual engineers building software on top of these clouds are held to a lower and lower standard. If there is a hiccup, they can always blame “AWS issues”[1]. Rank-and-file developers won’t get asked why their software was not designed to gracefully handle these elusive “issues”

                                                                I just meant to say you don’t have access to your provider’s infrastructure. But building more resilient systems takes more time, more skill, or both. In other words, money. Probably you’re right to a certain extent, but a lot of the time the money just isn’t there to build out that kind of resiliency. Businesses invest in however much resiliency will make them the most money for the cost.

                                                                So when you see that happening, ask yourself “would the engineering cost required to prevent this hiccup provide more business value than spending the same amount of money elsewhere?”

                                                            2. 4

                                                              @pzel You’ve hit the nail on the head here. See this post on AWS Lambda Reserved Concurrency for some of the issues you still face with Serverless style applications.

                                                              The Serverless architecture style makes a ton of sense for a lot of applications, however there are lots of missing pieces operationally. Things like the Serverless framework fill in the gaps for some of these, but not all of them. In 5 years time I’m sure a lot of these problems will have been solved, and questions of best practices will have some good answers, but right now it is very early.

                                                              1. 1

                                                                  I agree with @danielcompton on the fact that serverless is still a pretty new practice in the market and we are still lacking an ecosystem able to support all the possible use cases. Time will come and it will get better, but having spent the last 2 years building enterprise serverless applications, I have to say that the whole ecosystem is not so immature, and it can already be used today with some extra effort. I believe that in most cases the benefits (not having to worry too much about the underlying infrastructure, not paying for idle, higher focus on business logic, high availability and auto-scalability) far outweigh the extra effort needed to learn and use serverless today.

                                                              2. 3

                                                                Even though @peter already gave you some great answers, I will try to complement them with my personal experience/knowledge (I have used serverless on AWS for almost 2 years now building fairly complex enterprise apps).

                                                                How do serverless developers ensure their code performs to spec (local testing)

                                                                The way I do is a combination of the following practices:

                                                                • unit testing
                                                                • acceptance testing (with mocked services)
                                                                • local testing (manual, mostly using the serverless framework’s invoke local functionality, which is pretty much equivalent to SAM). Not everything can be tested locally, depending on which services you use.
                                                                • remote testing environment (to test things that are hard to test locally)
                                                                • CI pipeline with multiple environments (run automated and manual tests in QA before deploying to production)
                                                                • smoke testing

                                                                What about logging?

                                                                In AWS you can use CloudWatch very easily. You can also integrate third parties like Loggly. I am sure other cloud providers have their own facilities around logging.

                                                                Configuration?

                                                                In AWS you can use Parameter Store to hold sensitive variables and propagate them to your Lambda functions using environment variables. In terms of infrastructure as code (which you can include in the broad definition of “configuration”) you can adopt tools like Terraform or CloudFormation (the latter being the serverless framework’s default choice on AWS).
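
                                                                For example (a sketch - the parameter name is invented):

                                                                const AWS = require('aws-sdk');
                                                                const ssm = new AWS.SSM();

                                                                // Plain config arrives through environment variables set on the function;
                                                                // secrets come from Parameter Store, decrypted at read time.
                                                                async function loadConfig() {
                                                                  const res = await ssm
                                                                    .getParameter({ Name: '/myapp/db-password', WithDecryption: true })
                                                                    .promise();
                                                                  return {
                                                                    tableName: process.env.TABLE_NAME, // set in serverless.yml / CloudFormation
                                                                    dbPassword: res.Parameter.Value,
                                                                  };
                                                                }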

                                                                Continuous Integration?

                                                                I tried serverless successfully with both Jenkins and CircleCI, but I guess almost any CI tool will do it. You just need to configure your testing steps and your deployment strategy into a CI pipeline.

                                                                when stuff breaks, your only recourse is $BIGCO support

                                                                Sure. But odds are that your hand-rolled solution will be more likely to break than the one provided by any major cloud provider. Also, those cloud providers very often provide refunds if you have outages caused by the provider’s infrastructure (assuming you followed their best practices on high-availability setups).

                                                                your business is now non-trivially coupled to the $BIGCO

                                                                This is my favourite, as I have a very opinionated view on this matter. I simply believe it’s not possible to avoid vendor lock-in. Of course vendor lock-in comes in many shapes and forms and at different layers, but my point is that it’s fairly impractical to come up with an architecture so generic that it’s not affected by any kind of vendor lock-in. When you are using a cloud provider and a methodology like serverless, it’s totally true that you have very high vendor lock-in, as you will be using specific services (e.g. API Gateway, Lambda, DynamoDB, S3 in AWS) that are unique to that provider, and equivalent services will have very different interfaces with other providers. But I believe the question should be: is it more convenient/practical to accept the risk of vendor lock-in, rather than spending a decent amount of extra time and effort to come up with a more abstracted infrastructure/app that allows switching cloud providers if needed? In my experience, it’s very rarely a good idea to over-abstract solutions only to reduce vendor lock-in.

                                                                I hope this can add another perspective to the discussion and enrich it a little bit. Feel free to ask more questions if you think my answer wasn’t sufficient here :)

                                                                1. 6

                                                                  This is my favourite as I have a very opinionated view on this matter. I simply believe it’s not possible to avoid vendor lock-in. Of course vendor lock-in comes in many shapes and forms and at different layers, but my point is that it’s fairly unpractical to come up with an architecture that is so generic that it’s not affected by any kind of vendor lock-in.

                                                                  Really? I find it quite easy to avoid vendor lock-in - simply running open-source tools on a VPS or dedicated server almost completely eliminates it. Even if a tool you use is discontinued, you can still use it, and have the option of maintaining it yourself. That’s not at all the case with AWS Lambda/etc. Is there some form of vendor lock-in I should be worried about here, or do you simply consider this an impractical architecture?

                                                                  When you are using a cloud provider and a methodology like serverless it’s totally true you have a very high vendor lock-in, as you will be using specific services (e.g. API Gateway, Lambda, DynamoDB, S3 in AWS) that are unique in that provider and equivalent services will have very different interfaces with other providers. But I believe the question should be: is it more convenient/practical to pay the risk of the vendor lock-in, rather than spending a decent amount of extra time and effort to come up with a more abstracted infrastructure/app that allows switching the cloud provider if needed? In my experience, I found out that it’s very rarely a good idea to over-abstract solutions only to reduce the vendor lock-in.

                                                                  The thing about vendor lock-in is that there’s a quite low probability that you will pay an extremely high price (for example, the API/service you’re using being shut down). Even if it’s been amazing in all the cases you’ve used it in, it’s still entirely possible for the expected value of using these services to be negative, due to the possibility of vendor lock-in issues. Thus, I don’t buy that it’s worth the risk - you’re free to do your own risk/benefit calculations though :)

                                                                  1. 1

                                                                    I probably have to clarify that for me “vendor lock-in” is a very high-level concept that includes every sort of “tech lock-in” (which would probably be a better buzzword!).

                                                                    My view is that even if you use an open-source tech and host it yourself, you end up making a lot of complex tech decisions that will be difficult (and expensive!) to move away from.

                                                                    Have you ever tried to migrate from redis to memcache (or vice versa)? Even though the two systems are quite similar and a migration might seem trivial, in a complex infrastructure, moving from one system to the other is still going to be a fairly complex operation with a lot of implications (code changes, language-driver changes, different interface, data migration, provisioning changes, etc.).

                                                                    Also, another thing I am very opinionated about is what’s valuable when developing a tech product (especially in a startup context). I believe delivering value to the customers/stakeholders is the most important thing while building a product. Any abstraction that makes it easier for the team to focus on business value deserves my attention. In that respect I found serverless to be a very good abstraction, so I am happy to accept some tradeoffs in having less “tech-freedom” (I have to stick to the solutions given by my cloud provider) and higher vendor lock-in.

                                                                  2. 2

                                                                    I simply believe it’s not possible to avoid vendor lock-in.

                                                                    Well, there is vendor lock-in and vendor lock-in… Ever heard of Oracle Forms?

                                                                1. 4

                                                                I find the “doord” analogy incorrect. It makes systemd look like it is based on a lousy idea from the start. Opening doors faster in a car is not as important as booting an OS. While I’m not a systemd fan, I find the comparison unfair, which weakens the argument against it. Systemd was based on an important fact: existing init systems were a mess to manage. Sadly, the implementation grew into something that’s even more complex and huge.

                                                                I was expecting more focus on what Devuan is doing for the open-source community, like supporting software that does not depend on an init system, or encouraging simple ideas instead of overengineered ones (looking at you, systemd-hostnamed…).

                                                                  Instead, this article looks just like any other rant against systemd, with the same arguments everyone brings up that all fall in the “bugs” category.

                                                                After all, systemd brings some kind of “stability”, as its interface is consistent (even though it has bugs). For many people, the new shiny features of systemd are definitely not worth its complexity, and it is for these people that the work of the Devuan guys is important. By keeping the alternative to systemd alive, they keep the spirit of Linux, which aims to keep every piece of software running on top of the kernel swappable, instead of relying on a rigid and complex API.

                                                                  1. 5

                                                                    There’s a really good comment by someone who maintained Arch Linux’s init scripts pre-systemd about why they switched over. I’m as anti-systemd as the next person, but it’s important to understand why it became so successful.

                                                                    1. 7

                                                                      Having a standard init system is incredibly valuable for package maintenance and having full process control does require having code in init to track children, grandchildren and even detached child processes. You can’t do that without being the init process.

                                                                    All that being said, systemd is terrible from a usability standpoint. I honestly haven’t seen all the random/crashing bugs people complain about, but I do think systemctl is a terrible command: the bash completion is terribly slow, you can’t just edit a target file (you have to reload the daemon process for those changes to take effect), you have to call status after a command to see the limited log output, binary logs, etc. etc. etc.

                                                                    There have been so many attempts to take the one good thing (standardized init scripts) and make drop-in replacements (uselessd and others), and they all hit some pretty hard limits and are eventually abandoned. It’s sad that systemd is so integrated that replacements aren’t even remotely trivial.

                                                                    Without systemd, you need one of the udev forks, consolekit and a few other things to make things work. Void Linux, Gentoo and Devuan are pretty critical in keeping this type of architecture viable. Maybe one day someone will come up with an awesome supervisor replacement and get other distributions on board to have a real alternative.

                                                                      1. 5

                                                                        Having a standard init system is incredibly valuable for package maintenance

                                                                        The problem here is that Systemd can never be a standard init system, because it’s Linux only.

                                                                        Maybe one day someone will come up with an awesome supervisor replacement and get other distributions on-board to have a real alternative.

                                                                        I’m working on it :) https://github.com/davmac314/dinit

                                                                      This has been my pet project for some time, although I’m long due to write a blog post update on progress. (Not a lot of commits recently, I know - that’s because Dinit uses an event loop library, Dasynq, which I’ve been focussing on instead - that should be able to change now, as I’ve just released Dasynq 1.0.)

                                                                      2. 5

                                                                        it was impossible to say when a certain piece of hardware would be available […] this was solved by first triggering uevents, then waiting for udev to “settle” […] Solution: An system that can perform actions based on events - this is one of the major features of systemd

                                                                        udev is not a system that can perform actions based on events, like devd does on FreeBSD? What is it then?

                                                                        we have daemons with far more complex dependencies

                                                                        The question is… WHY?

                                                                        Sounds like self-inflicted unnecessary complexity. I believe that services can and should start independently.

                                                                        I run several webapps + postgres + nginx + dovecot + opensmtpd + rspamd + syncthing on my server… and they’re all started by runit at the same time, because none of them expect anything to be running before them. nginx doesn’t care if the webapps are up, it connects dynamically. webapps don’t care if postgres is up, they will retry connection as needed. etc. etc.
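
                                                                      The pattern is just connect-with-retry at startup - something like this sketch (using node-postgres as an example):

                                                                      const { Client } = require('pg');

                                                                      // Keep retrying until postgres is reachable - no init-system ordering required.
                                                                      async function connectWithRetry(config, delayMs = 1000) {
                                                                        for (;;) {
                                                                          const client = new Client(config);
                                                                          try {
                                                                            await client.connect();
                                                                            return client;
                                                                          } catch (err) {
                                                                            console.error('postgres not ready, retrying:', err.message);
                                                                            await new Promise((resolve) => setTimeout(resolve, delayMs));
                                                                          }
                                                                        }
                                                                      }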

                                                                        Why can’t Linux desktop developers design their programs in the same way?

                                                                    1. 4

                                                                      Does anyone know why Ada is seeing a bit of a resurgence (at least, among the HN/Lobsters crowd)? I’m quite surprised by it, so I’m wondering if there are any interesting lessons that can be taken from it in terms of what causes languages to become popular.

                                                                      Also, what terms I should search for to find out more about Ada’s type system? It seems quite interesting - I’d love to learn more about what tradeoffs it’s making.

                                                                      1. 12

                                                                      Personally, after shunning Ada when I was younger because it felt cumbersome and ugly, I have seen enough failures that I’ve thought, “gee, those decisions Ada made make more sense than I thought, for large projects”. I think some people are experiencing that; at the same time there’s this new wave of systems languages (often with stated goals like safety or being explicit about behavior), which is an opportunity for reflection on systems languages of the past; and SPARK is impressive and is part of the wave of new interest in more aggressive static verification.

                                                                        An earlier article posted on lobste.rs had some nice discussion of some interesting parts of Ada’s type system: http://www.electronicdesign.com/embedded-revolution/assessing-ada-language-audio-applications

                                                                        Also, the Ada concurrency model (tasks) is really interesting and feels, in retrospect, ahead of its time.

                                                                        1. 2

                                                                        I’m with you that it’s the new wave of systems languages that helped. The people they were helping were people like me and pjmpl on HN who were dropping its name [among other old ones] on every one of those threads, among others. There have been two main memes demanding such a response: (a) the idea that Rust is the first, safe, systems language; (b) the idea that low-level, esp OS, software can’t be written in languages with a GC. There are quite a few counters to the GC part, but Burroughs MCP in ALGOL and Ada are the main counters to the first. To avoid dismissals, I was making sure to drop references such as Barnes’ Safe and Secure Ada book in every comment, hoping people following along would try stuff or at least post other references.

                                                                        Many people contributed tidbits about obscure systems languages on threads on similar topics that had momentum. The Ada threads might be ripple effects of that.

                                                                          1. 3

                                                                            Now that I think about it, your posts are probably why I automatically associate Ada with Safe Computing these days.

                                                                        2. 7

                                                                          I think its part of the general trend of interest in formal methods and correctness (guarantees or evaluation of). We’ve also seen a lot on TLA+ recently, for example.

                                                                          1. 10

                                                                            I think lobsters, at least, is really swingy. A couple people interested in something can really overrepresent it here. For example, I either found or wrote the majority of the TLA+ articles posted here.

                                                                          And things propagate. I researched Eiffel and Design by Contract because of @nickpsecurity’s comments, which led to me finding and writing a bunch of stuff on contract programming, which might end up interesting other people…

                                                                            One focused person can do a lot.

                                                                          1. 10

                                                                            … how in the world is Microsoft going to extinguish SSH? How can Microsoft extinguish core infrastructure in widely-used OSes they have no control over?

                                                                            1. 14

                                                                              Don’t question the meme.

                                                                              1. 3

                                                                                There are lots of potential extensions to standards in CompSci that address pain points anywhere from individual use up to the enterprise. They could buy one of those, or tweak one a bit, and slap a patent on it. Obfuscate it a bit, too. Then they deliver that advantageous benefit in their version. It gets widespread after a Windows release or two. Once people are locked in, they can extend it in some new ways: maybe cool tricks like SSH proxies for Microsoft applications, VPNs into their cloud, or something for Xbox Live like people used to do with LogMeIn Hamachi. They might even be cross-licensing it to third parties, who might have already built stuff on it since it’s in Windows at no cost to them.

                                                                                You’re not on open-source SSH any more with those applications. Now, you’re depending on their tech that plays by their rules on their paid platforms. It’s also called SSH so anyone Googling for SSH on Windows might find the “genuine software.” ;)

                                                                                1. 3

                                                                                  It gets widespread after a Windows release or two.

                                                                                  Here’s the point that doesn’t add up: it won’t become “widespread” beyond the desktop world.

                                                                                  A lot of things are widespread in the Windows world and not really beyond it, so Microsoft could do this to those things, but SSH is not one of them, and is not going to become one of them.

                                                                                  I’m aware of Embrace, Extend, Extinguish. I was alive and aware in the 1990s. I’m also alive and aware in a world where Linux and Open Source in general is so widespread that it isn’t going away.

                                                                              2. 5

                                                                                Isn’t that the GNU philosophy? ;)

                                                                                1. 2

                                                                                  They are pretty similar in mechanism haha.

                                                                              1. 4

                                                                                I usually can’t speak about my work, but this week I’m giving a talk to a university robotics club about motion planning. Trying to do more talks in general.

                                                                                1. 1

                                                                                  I usually can’t speak about my work

                                                                                  Man, that sucks. I hope you’re keeping secrets for the right moral reasons. Too often people do unkind things when they have secrecy.

                                                                                  1. 1

                                                                                    Could I get a link to your materials? I’ve been writing a blog series on control theory, so I’m curious to see how others teach somewhat similar topics.

                                                                                    1. 2

                                                                                      Sure. I don’t have them collected yet, but I plan to provide a collection to the students so I’ll send it along.

                                                                                  1. 1

                                                                                    I’m curious about how prospective employers outside of web-frontend/app development would evaluate someone with her skillset and experience. First, let me add the disclaimer that I am a researcher and have never had, or hired for, a “traditional” programming job.

                                                                                    My hypothesis is that candidates with Computer Science (or equivalent) degrees from reputable universities can be expected to have some “stock” skills: basic to moderate programming skills, knowledge of data structures and algorithms, some systems programming experience, and perhaps some more specialized knowledge in webdev or graphics or compilers or machine learning etc. depending on what higher level courses they took. As a potential employer, I can be fairly confident that such a candidate could pick up whatever the current hip technology is in a couple weeks and work on a variety of projects across my system. With the proper initial training and guidance, such a candidate could work on a front-end app, or a back-end server, or maybe even on mission critical parts of my distributed system (under the guidance of a senior engineer), depending on how good they are and what prior experience they have.

                                                                                    By contrast, for someone like the author, unless my project specifically involves JavaScript, Angular, or ReactJS, I wouldn’t be comfortable hiring them, or I would have to put a lot of time and money into training them. A university degree does far more training and evaluation of a potential employee than a bootcamp does, or than I can easily glean from a GitHub résumé.

                                                                                    1. 1

                                                                                      There’s no reason that people can’t be self-taught at systems/data structures/algorithms topics, and while the author might not have that experience, I think that there are many people without degrees that do. That being said, showing that off in a résumé screen or interview is probably more important for candidates without CS degrees.

                                                                                      I can be fairly confident that such a candidate could pick up whatever the current hip technology is in a couple weeks and work on a variety of projects across my system.

                                                                                      I think people often underestimate the depth of knowledge that’s possible in things like this. React is extremely complicated, and while I’m sure I could throw together a webapp using it in a day or so, knowing all of the tools/patterns/idioms available, and knowing how to effectively debug and diagnose problems, are skills that I think would take a lot longer to learn. I’ve definitely made statements along these lines before about webapp development/React/etc., but the fact is, frameworks are skills just like any other, and there’s no reason to expect that you can learn all of a large, complicated framework in a couple of weeks.

                                                                                    1. 12

                                                                                      It’s good to see more people talking about skipping university/college as an option! A lot of people who I went to high school with went to college because it’s the next thing that they’re “supposed” to do, rather than thinking about costs/benefits and other options. So far, I’ve been pretty happy with my choice not to go, but I definitely had a lot of doubt at first, mostly because I wasn’t seeing anyone else pursue the same sort of path.

                                                                                      1. 8

                                                                                        I replaced an icon font that I was using on my site with inlined SVG logos a week or so ago. Got rid of a request and a ton of wasted bytes (icons I wasn’t using, plus the CSS for them), and improved accessibility at the same time.

                                                                                        SVG is great :)