1.  

    Wow. Just wow. Selected citations from comments:

    This destroyed 3 production servers after a single deploy!

    Make a pull request and help out!

    Not a single pull request was merged in the last 2 months that came from an outside contributor. There are currently over 70 PRs open and none of them have any activity from the npm team.

    How about we give the two person team more than 24 hours to run npm unpublish npm@5.7.0?

    I’m not sure if you’re joking, but that command only allows unpublishing versions published within 24 hours, and not older.

    1. 19

      A major reason I use Debian is that, as a user, I consider 90% of software lifecycles to be utterly insane and actively hostile to me, and Debian forces them into some semblance of a reasonable, manageable, release pattern (namely, Debian’s). If I get the option to choose between upstream and a Debian package, I will take the latter every single time, because it immediately has a bunch of policy guarantees that make it friendlier to me as a user. And if I don’t get the option, I will avoid the software if I possibly can.

      (Firefox is the only major exception, and its excessively fast release cadence and short support windows are by far my biggest issue with it as a piece of software.)

      1. 4

        I never really understood why short release cycles are a problem for people, though I don’t use Debian because of its overly long ones. For example, the majority of Firefox’s releases don’t contain user-visible changes.

        Could you elaborate what your problems with Firefox on Debian are? Or why software lifecycles can even be hostile to you?

        1. 6

          I’m with you. I update my personal devices ~weekly via a rolling release model (going on 10 years now), and I virtually never run into problems. The policies employed by Debian stable provide literally no advantage to me because of that. Maybe the calculus changes in a production environment with more machines to manage, but as far as personal devices go, Debian stable’s policies would lead to a net drain on my time because I’d be constantly pushing against the grain to figure out how to update my software to the latest version provided by upstream.

          1. 3

            I’ve had quite a few problems myself, mostly around language-specific package managers that break something under me. This is probably partly my fault because I have a lot of one-off scripts with unversioned dependencies, but at least in the languages I use most (Python, Perl, R, shell, etc.), those kinds of unversioned dependencies seem to be the norm. Most recent example: an update to R on my Mac somehow broke some of my data-visualization scripts while I was working on a paper (seemingly due to a change in ggplot, which was managed through R’s own package manager). Not very convenient timing.

            For a desktop I mostly put up with that anyway, but for a server I prefer Debian stable because I can leave it unattended with auto-updates on, not having to worry that something is going to break. For example I have some old Perl CGI stuff lying around, and have been happy that if I manage dependencies via Debian stable’s libdevel-xxx-perl packages instead of CPAN, I can auto-update and pull in security updates without my scripts breaking. I also like major Postfix upgrades (which sometimes require manual intervention) to be scheduled rather than rolling.

            1. 1

              Yeah I don’t deal with R myself, but based on what my wife tells me (she works with R a lot), I’m not at all surprised that it would be a headache to deal with!

          2. 7

            Every time a major update happens to a piece of software, I need to spend a bunch of time figuring out and adapting to the changes. As a user, my goal is to use software, rather than learn how to use it, so that time is almost invariably wasted. If I can minimize the frequency, and ideally do all my major updates at the same time, that at least constrains the pain.

            I’ve ranted about this in a more restricted context before.

            My problem with Firefox on Debian is that due to sheer code volume and complexity, third-party security support is impossible; its upstream release and support windows are incompatible with Debian’s; and it’s too important to be dropped from the distro. Due to all that, it has an exception to the release lifecycle, and every now and then with little warning it will go through a major update, breaking everything and wasting a whole bunch of my time.

            1. 4

              Due to all that, it has an exception to the release lifecycle, and every now and then with little warning it will go through a major update, breaking everything and wasting a whole bunch of my time.

              I had this happen with Chromium; they replaced the renderer in upstream, and a security flaw was found which couldn’t be backported due to how insanely complicated the codebase is and the fact that Chromium doesn’t have a proper stable branch, so one day I woke up and suddenly I couldn’t run Chromium over X forwarding any more, which was literally the only thing I was using it for.

              1. 2

                Ha, now I understand why I use emacs. It hasn’t changed the UX in years, if not decades.

              2. 4

                Because you need to invest too much of your time into upgrading. I maintain 4 personal devices with Fedora and I barely manage to upgrade yearly. I am very happy with RHEL at work. 150 servers would be insane. Even with automation. Just the investment into decent ops is years.

                1. 2

                  For me there is an equivalence between Debian stable releases and Ubuntu LTS ones; they both run at around 2 years.

                  But the advantage (in my eyes) that Debian has is the rolling update process for the “testing” distribution, which gets a good balance between stability and movement.

                  We are currently switching our servers from Ubuntu LTS to Debian stable, driven mostly by a lack of confidence in the future trajectory of Ubuntu.

              1. 6

                Done. Now, how do I sponsor a boiling water emoji campaign?

                1. 7

                  Slowly. You sponsor it slowly.

                  1. 2

                    In Bitcoin.

                  1. 5

                    I previously suggested it under the tag “embedded”, since that’s what the industry and FOSS sectors call it. I noted it has its own style of programming, CPUs (or MCUs), tooling, newsletters (e.g. the Embedded Muse), etc. It’s a $10+ billion market. That tag was rejected in favor of shoehorning such work into existing tags. Hardware + programming has been my standby even though those barely fit.

                    IoT devices are embedded-style devices with networking functions. Pretty much everything that applies to embedded applies to them. If anything, it’s just a new phrase for tiny, networked computers, which have been around a while. I re-suggest “embedded” or a similar tag like before, since it will cover both IoT and non-Internet-connected embedded.

                    1. 4

                      IoT devices are embedded-style devices with networking functions.

                      Every time I hear the term IoT, it reminds me of this:

                      When a user presses the open door button on the mobile app, the app accesses the cloud server.

                    1. 16

                      Nope. That’s why I quit tech. What to do next, I don’t know.

                      I worked at a popular streaming service for the past 7 years.

                      1. 5

                        Have you considered working a tech position for a successful non-profit, like a hospital? You’d probably face plenty of BS like anywhere else in tech, but at least you’d be doing net good, with a range of pay available. If you feel you need out, though, then good luck on your next move, whatever it is. :)

                        1. 8

                          First of all, I’m leaving the US and moving back to the Nordics (Iceland). I had a really hard time in the Bay Area: such incredible wealth, distributed so unevenly. I’d step over homeless people, who might not even be alive, on my walk to work, and feel less human every day. The treatment of my own mental health issues was also appalling, and I knew I’d be one of those people if it got bad enough (or, more realistically, I’d be deported). Once I’m back in a place that’s got the basics right (IMO), my quality of life won’t be tied to my salary/work benefits so much, so it will be easier to find something that feels meaningful, I think.

                          1. 5

                            Yeah, they do seem to take care of their own much better in the Nordic countries. I’ve enviously noticed that. Well, if you have a good, cheap place to live and reliable Internet, you should be able to find good remote opportunities. Maybe a mix of local and remote, too, if you’re trying to balance something that will definitely keep paying you with something fulfilling that’s uncertain over time.

                          2. 7

                            I second this. Get into non-profits.

                            For example libraries are facing an incredible challenge – to aggregate knowledge and make it widely accessible. Right now, when you research a topic, you evaluate scattered pieces of information from all over the web or maybe fire up an outdated library search engine to find some papers that never made it to the Google-verse.

                            Can’t we do any better?

                        1. 5

                          … we might introduce paid LKRG Pro…

                          – Openwall: bringing security into open environments

                          Sigh.

                          1. 12

                            We tried looking into it a few years back… It was not good. Basically an overengineered bigco consultant’s dream. Rackspace supposedly used it a lot, but not DreamHost or DigitalOcean. It tries to play catch-up with Amazon for some reason.

                            Has anything changed?

                            1. 1

                              When you think about it, whichever character gets used as the delimiter is probably also going to be allowed in the filename (anything other than \n or \0 is). So I’m not sure what the solution to this would be.

                              1. 6

                                Then put the filename alone on the first line, or put the filename last, so that you can read in the numeric parameters and the rest of the line is the filename. Right?
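
                                A quick sketch of the “filename last” idea in Python (the field layout here is made up): a bounded split reads the numeric fields and leaves the rest of the line, spaces and all, as the name.

                                    line = "4242 1337 my file with spaces.txt"  # hypothetical record
                                    pid, nthreads, name = line.split(" ", 2)    # split at most twice
                                    print(name)  # 'my file with spaces.txt' -- spaces survive intact
                                    # An embedded newline would still break line-oriented reading,
                                    # as noted below.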

                                1. 2

                                  Seems like the simplest solution. And it’s not like files in /proc don’t already use multiple lines. I guess it’s made that way for legacy support. However, as mort mentioned, you can also have \n in a filename.

                                2. 5

                                  File names can actually contain newlines. Try for example touch "$(printf "hello\nworld")".

                                  It would probably work to use a slash as a separator though, unless the executable name might be a path.

                                  EDIT: added quotes around $(printf "hello\nworld")

                                  1. 1

                                    touch $(printf "hello\nworld")

                                    This creates two files where I’m testing. But it seems like I’m able to do it without the printf.

                                    1. 3

                                      Sorry, I should’ve written touch "$(printf "hello\nworld")".

                                      If you just run ls, it will show you 'hello'$'\n''world', but if you pipe ls (for example to less), it will show up on two separate lines.
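
                                      The same thing is easy to see from Python, where the list repr makes the newline explicit – a tiny sketch:

                                          import os, pathlib

                                          pathlib.Path("hello\nworld").touch()  # one file whose name contains \n
                                          print(os.listdir("."))  # ['hello\nworld'] -- one entry, not two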

                                  2. 4

                                    Or use a “safer” format like netstring, tnetstrings, or maybe even Bencode.
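
                                    For illustration, a minimal netstring round-trip in Python – the length prefix is what makes every byte of the name safe, newlines included:

                                        def netstring_encode(data: bytes) -> bytes:
                                            # "<length>:<payload>," -- the receiver reads the length first,
                                            # so the payload may contain any byte, including \n and \0.
                                            return b"%d:%s," % (len(data), data)

                                        def netstring_decode(buf: bytes) -> bytes:
                                            length, _, rest = buf.partition(b":")
                                            n = int(length)
                                            payload, terminator = rest[:n], rest[n:n + 1]
                                            assert terminator == b",", "malformed netstring"
                                            return payload

                                        assert netstring_decode(netstring_encode(b"hello\nworld")) == b"hello\nworld"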

                                    1. 3

                                      Or expose structs over sysctls and ioctls instead of making these damn virtual filesystems…

                                      1. 2

                                        Yesterday I was testing getting that information through netlink. There’s a kernel configuration option called CONFIG_TASKSTATS (check whether it’s enabled in your kernel config first: /boot/config* or /proc/config.gz).
                                        The documentation can be found here: https://www.kernel.org/doc/Documentation/accounting/taskstats.txt There are a bunch of C headers for accessing the features, and there are even Go and Python libraries. I’ve been testing with the Python one, gnlpy (https://www.pydoc.io/pypi/gnlpy-0.1.2/autoapi/taskstats/index.html).
                                        However, I’ve been running into trouble with the permissions part, capabilities(7). This is something I haven’t found much documentation on. This RFC http://www.faqs.org/rfcs/rfc3549.html and this manpage https://linux.die.net/man/7/netlink say that users need the cap_net_admin capability, but it’s not working for me.

                                        5.  Security Considerations
                                        
                                           Netlink lives in a trusted environment of a single host separated by
                                           kernel and user space.  Linux capabilities ensure that only someone
                                           with CAP_NET_ADMIN capability (typically, the root user) is allowed
                                           to open sockets.
                                        

                                        I’ve been trying sudo setcap 'cap_net_admin=p cap_net_admin+i cap_net_admin+e' t.py but it still doesn’t execute as a normal user. But it works perfectly as root.

                                        Maybe someone here has more info on the topic.

                                        EDIT: As 1amzave suggested, copying the python interpreter and assigning the capabilities to it works fine.

                                        1. 1

                                          I’m gonna hazard a guess that your setcap not being effective might have something to do with it being interpreted (via a shebang line) rather than a directly-executed binary. Maybe create a copy of your python interpreter (presumably you don’t want to blindly grant CAP_NET_ADMIN to all python code), setcap that, and change your shebang line to use it instead.

                                          1. 1

                                            You’re right, that was the issue. Copying the python interpreter into a home directory and setting the capabilities on it did the trick. The python script itself doesn’t need capabilities.

                                            Overall, I think netlink is great but unlike procfs it’s not that easily accessible.
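
                                            For anyone landing here later, a sketch of that workaround in Python (the paths are hypothetical; cap_net_admin+ep grants the capability as both permitted and effective):

                                                import shutil, subprocess

                                                # Copy the interpreter so the capability applies only to this
                                                # one binary, not to every Python script on the system.
                                                shutil.copy("/usr/bin/python3", "/home/me/bin/python3-netadmin")
                                                subprocess.run(
                                                    ["sudo", "setcap", "cap_net_admin+ep", "/home/me/bin/python3-netadmin"],
                                                    check=True,
                                                )
                                                # Then point the script's shebang at it: #!/home/me/bin/python3-netadmin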

                                    2. 1

                                      ASCII actually has field-delimiter characters, which are very rarely used – it’s a shame, because they would make parsing trivial in cases like this.

                                      1. 5

                                        Not really, because file names can contain those field delimiter characters.

                                        It would’ve been nice if there were stricter rules about what characters are allowed in a file name. When would you ever want a newline, a field delimiter, a carriage return, or BEL in a file name?
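
                                        A small sketch of why the separators don’t help: parsing becomes trivial, but the ambiguity just moves, since Linux allows the separator bytes in names too.

                                            import pathlib

                                            US = "\x1f"  # ASCII Unit Separator
                                            record = US.join(["4242", "1337", "my file.txt"])
                                            print(record.split(US))  # trivially parsed back into fields

                                            pathlib.Path("evil\x1fname").touch()  # ...but a name may contain 0x1F too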

                                        1. 2

                                          BEL, in a file name

                                          When I want to play a small prank by making ls in a given directory make the terminal beep. :)

                                        2. 4

                                          Field delimiters are also valid file names…

                                      1. 1

                                        Pay people to store things.

                                        A cryptocurrency which mines not blockchain but content would encourage people to donate their disks to the cause – hoovering up all the world’s data – since anyone who wants it would have to pay a premium inverse to availability. Think of it as a tax-on-demand Library of Congress.

                                        1. 3

                                          This wouldn’t work by itself. Soon, most disks would be occupied by useless junk and someone will need to decide what to ditch and what to keep. Which is the other function of the Library of Congress.

                                          Some library evaluation methods include the checklists method, circulation and interlibrary loan statistics, citation analysis, network usage analysis, vendor-supplied statistics and faculty opinion.

                                          – Wikipedia on Collection development

                                          1. 1

                                            Soon, most disks would be occupied by useless junk and someone will need to decide what to ditch and what to keep.

                                            I’m envisioning a recurring storage fee that would eventually run out unless topped-up. Somewhat like Ethereum distributed apps that stop running when they run out of ‘gas’.

                                          2. 2

                                            FileCoin aims to be that. Its initial token sale raised over $200M, showing that a lot of big players want in on that market opportunity. Right now it seems they are massively expanding their team, and it’s not clear yet when it will be available to the general public.

                                            Considering P2P rewards, private torrent trackers have been doing this for a really long time, converting seed time into virtual community credits or something similar, enabling recognition and opportunities to contributing members. But like in other parts of the online world, spending time, money, and equipment for a cause rather than a product becomes less and less convenient for the average user. Many people lamented the downfall of what.cd, but it illustrates the two sides of the P2P coin pretty well: it can have huge potential if many people are willing to invest their resources, but it is still very much illegal for much of the shared content, and there is a powerful force behind the corporations and authorities to stop these things (namely, huge piles of money).

                                            1. 1

                                              A reward system like you describe appeals to me. A market can be an efficient way to allocate a finite supply of resources. This would also enable things like bounties for data that exists out of band. I wonder if valuing the data inversely proportional to its availability would eventually bring about an equilibrium where most things were within the same range of availability. I also agree with the sibling that storage space is a complicating factor. In theory, the value of the data would rise and attract more hosts until the supply met the demand. So the effect would be a general pay-wall.

                                            1. 5

                                              Brings me back to my LFS & Gentoo years. According to the BLFS handbook, Chromium takes a whole 94 SBU (with 4 threads) to build. LibreOffice takes 41 (with 8 threads), which is about the same. At the time, OpenOffice was the king of long build times – the only package Gentoo users actually downloaded precompiled instead of building themselves.

                                              Oh my, I still remember how happy I was when my OpenOffice build finished successfully after 13 hours. You never knew what could go wrong. It was a pretty cool memtest, too. :-)

                                              1. 2

                                                Maybe 10-12 years ago I ran Slackware on all my home machines, and liked to build all of the software I used on a daily basis from source. I remember Firefox being painful (lots of dependencies), but after a day or two of installing dependencies (also from source), I was up and running.

                                                OpenOffice was the only one I ever gave up on, and a decade later I still dislike it because of that. I remember it being a nightmare of extensive dependencies (often requiring specific versions) and requiring multiple languages (at least C++ and Java, I think also Perl and others?). And it required a TON of disk space and memory to build. After struggling for a while, I decided it just wasn’t worth it.

                                              1. 11

                                                Hey @loige, nice writeup! I’ve been aching to ask a few questions of someone ‘in the know’ for a while, so here goes:

                                                How do serverless developers ensure their code performs to spec (local testing), handles anticipated load (stress testing) and degrades deterministically under adverse network conditions (Jepsen-style or chaos-testing)? How do you implement backpressure? Load shedding? What about logging? Configuration? Continuous Integration?

                                                All instances of applications written in a serverless style that I’ve come across so far (admittedly not too many) seemed to offer a Faustian bargain: “hello world” is super easy, but when stuff breaks, your only recourse is $BIGCO support. Additionally, your business is now non-trivially coupled to the $BIGCO and at the mercy of their decisions.

                                                Can anyone with production experience chime in on the above issues?

                                                1. 8

                                                  Great questions!

                                                  How do serverless developers ensure their code performs to spec (local testing)

                                                  AWS, for example, provides a local implementation of Lambda (SAM Local) for testing. Otherwise normal testing applies: abstract out business logic into testable units that don’t depend on the transport layer.
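
                                                  As a hedged sketch of that separation (all names are made up): the handler stays a thin transport shim, and the business logic is a plain function you can unit-test without any Lambda runtime.

                                                      import json

                                                      def apply_discount(order: dict) -> dict:
                                                          # Pure business logic: no AWS types, trivially testable.
                                                          order["total"] = round(order["total"] * 0.9, 2)
                                                          return order

                                                      def handler(event, context):
                                                          # Transport shim: unwrap the event, delegate, wrap the response.
                                                          order = json.loads(event["body"])
                                                          return {"statusCode": 200, "body": json.dumps(apply_discount(order))}

                                                      # The unit test never touches Lambda:
                                                      assert apply_discount({"total": 100.0})["total"] == 90.0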

                                                  handles anticipated load (stress testing)

                                                  Staging environment.

                                                  and degrades deterministically under adverse network conditions (Jepsen-style or chaos-testing)?

                                                  Trust Amazon / Microsoft / Google. Exporting this problem to your provider is one of the major value adds of serverless architecture.

                                                  How do you implement backpressure? Load shedding?

                                                  Providers usually have features for this, like rate limiting for different events. But it’s not turtles all the way down, eventually your code will touch a real datastore that can overload, and you have to detect and propagate that condition same as any other architecture.

                                                  What about logging?

                                                  Also a provider value add.

                                                  Configuration?

                                                  Providers have environment variables or something spiritually similar.

                                                  Continuous Integration?

                                                  Same as local testing, but automated?

                                                  but when stuff breaks, your only recourse is $BIGCO support

                                                  If their underlying infrastructure breaks, yep. But every architecture has this problem, it just depends on who your provider is. When your PaaS provider breaks, when your IaaS provider breaks, when your colo provider breaks, when your datacenter breaks, when your electrical provider blacks out, when your fuel provider misses a delivery, when your fuel mines have an accident. The only difference is how big the provider is, and how much money its customers pay it to not break. Serverless is at the bottom of the money food chain, if you want less problems then you take on more responsibility and spend the money to do it better than the provider for your use case, or use more than one provider.

                                                  Additionally, your business is now non-trivially coupled to the $BIGCO and at the mercy of their decisions.

                                                  Double-edged sword. You’ve non-trivially coupled to $BIGCO because you want them to make a lot of architectural decisions for you. So again, do it yourself, or use more than one provider.

                                                  1. 4

                                                    And great answers, thank you ;)

                                                    Having skimmed the SAM Local doc, it looks like they took the same approach as they did with DynamoDB local. I think this alleviates a lot of the practical issues around integrated testing. DynamoDB Local is great, but it’s still impossible to toggle throttling errors and other adverse conditions to check how the system handles these, end-to-end.

                                                    The staging-env and CI solution seems to be a natural extension of server-full development, fair enough. For stress testing specifically, though, it’s great to have full access to the SUT, and to be able to diagnose which components break (and why) as the load increases. This approach runs up against the opaque nature of the serverless substrate. You only get the metrics AWS/Google/etc. can provide you. I presume dtrace and friends are not welcome residents.

                                                    If their underlying infrastructure breaks, yep. But every architecture has this problem, it just depends on who your provider is. When your PaaS provider breaks, when your IaaS provider breaks, when your colo provider breaks, when your datacenter breaks, (…)

                                                    Well, there’s something to be said for being able to abstract away the service provider and just assume that there are simply nodes in a network. I want to know the ways in which a distributed system can fail – actually recreating the failing state is one way to find out and understand how the system behaves and what kind of countermeasures can be taken.

                                                    if you want less problems then you take on more responsibility

                                                    This is something of a pet peeve of mine. Because people delegate so much trust to cloud providers, individual engineers building software on top of these clouds are held to a lower and lower standard. If there is a hiccup, they can always blame “AWS issues”[1]. Rank-and-file developers won’t get asked why their software was not designed to gracefully handle these elusive “issues”. I think the learned word for this is the deskilling of the workforce.

                                                    [1] The lack of transparency on the part of the cloud providers around minor issues doesn’t help.

                                                    1. 3

                                                      For stress testing specifically, though, it’s great to have full access to the SUT, and to be able to diagnose which components break (and why) as the load increases.

                                                      It is great, and if you need it enough you’ll pay for it. If you won’t pay for it, you don’t need it, you just want it. If you can’t pay for it, and actually do need it, then that’s not a new problem either. Plenty of businesses fail because they don’t have enough money to pay for what they need.

                                                      This is something of a pet peeve of mine. Because people delegate so much trust to cloud providers, individual engineers building software on top of these clouds are held to a lower and lower standard. If there is a hiccup, they can always blame “AWS issues”[1]. Rank-and-file developers won’t get asked why their software was not designed to gracefully handle these elusive “issues”

                                                      I just meant to say you don’t have access to your provider’s infrastructure. But building more resilient systems takes more time, more skill, or both. In other words, money. You’re probably right to a certain extent, but a lot of the time the money just isn’t there to build out that kind of resiliency. Businesses invest in however much resiliency will make them the most money for the cost.

                                                      So when you see that happening, ask yourself “would the engineering cost required to prevent this hiccup provide more business value than spending the same amount of money elsewhere?”

                                                  2. 4

                                                    @pzel You’ve hit the nail on the head here. See this post on AWS Lambda Reserved Concurrency for some of the issues you still face with Serverless style applications.

                                                    The Serverless architecture style makes a ton of sense for a lot of applications, however there are lots of missing pieces operationally. Things like the Serverless framework fill in the gaps for some of these, but not all of them. In 5 years time I’m sure a lot of these problems will have been solved, and questions of best practices will have some good answers, but right now it is very early.

                                                    1. 1

                                                      I agree with @danielcompton that serverless is still a pretty new practice in the market and we are still lacking an ecosystem able to support all the possible use cases. Time will pass and it will get better, but having spent the last 2 years building enterprise serverless applications, I have to say that the whole ecosystem is not that immature, and it can already be used today with some extra effort. I believe that in most cases the benefits (not having to worry too much about the underlying infrastructure, not paying for idle, higher focus on business logic, high availability and auto-scalability) outweigh by a lot the extra effort needed to learn and use serverless today.

                                                    2. 3

                                                      Even though @peter already gave you some great answers, I will try to complement them with my personal experience/knowledge (I have used serverless on AWS for almost 2 years now building fairly complex enterprise apps).

                                                      How do serverless developers ensure their code performs to spec (local testing)

                                                      The way I do is a combination of the following practices:

                                                      • unit testing
                                                      • acceptance testing (with mocked services)
                                                      • local testing (manual, mostly using the serverless framework’s invoke local functionality, but pretty much equivalent to SAM). Not everything can be tested locally, depending on which services you use.
                                                      • remote testing environment (to test things that are hard to test locally)
                                                      • CI pipeline with multiple environments (run automated and manual tests in QA before deploying to production)
                                                      • smoke testing

                                                      What about logging?

                                                      In AWS you can use CloudWatch very easily. You can also integrate third parties like Loggly. I am sure other cloud providers have their own facilities around logging.

                                                      Configuration?

                                                      In AWS you can use Parameter Store to hold sensitive variables, and you can propagate them to your Lambda functions using environment variables. In terms of infrastructure as code (which you can include in the broad definition of “configuration”) you can adopt tools like Terraform or CloudFormation (in AWS specifically, the predefined choice of the serverless framework).
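
                                                      A minimal sketch of the environment-variable side (the variable name is hypothetical; deployment tooling would populate it, e.g. from Parameter Store):

                                                          import os

                                                          TABLE_NAME = os.environ.get("TABLE_NAME", "tasks-dev")  # hypothetical name

                                                          def handler(event, context):
                                                              # The code stays config-free; values come from the environment.
                                                              return {"table": TABLE_NAME}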

                                                      Continuous Integration?

                                                      I have used serverless successfully with both Jenkins and CircleCI, but I guess almost any CI tool will do. You just need to configure your testing steps and your deployment strategy into a CI pipeline.

                                                      when stuff breaks, your only recourse is $BIGCO support

                                                      Sure. But it’s a fairly safe bet that your hand-rolled solution will be more likely to break than the one provided by any major cloud provider. Also, those cloud providers very often provide refunds if you have outages caused by the provider’s infrastructure (assuming you followed their best practices on high-availability setups).

                                                      your business is now non-trivially coupled to the $BIGCO

                                                      This is my favourite, as I have a very opinionated view on this matter. I simply believe it’s not possible to avoid vendor lock-in. Of course vendor lock-in comes in many shapes and forms and at different layers, but my point is that it’s fairly impractical to come up with an architecture so generic that it’s not affected by any kind of vendor lock-in. When you are using a cloud provider and a methodology like serverless, it’s totally true that you have very high vendor lock-in, as you will be using specific services (e.g. API Gateway, Lambda, DynamoDB, S3 in AWS) that are unique to that provider, and equivalent services will have very different interfaces with other providers. But I believe the question should be: is it more convenient/practical to accept the risk of vendor lock-in rather than spend a decent amount of extra time and effort on a more abstracted infrastructure/app that allows switching cloud providers if needed? In my experience, I’ve found that it’s very rarely a good idea to over-abstract solutions only to reduce vendor lock-in.

                                                      I hope this can add another perspective to the discussion and enrich it a little bit. Feel free to ask more questions if you think my answer wasn’t sufficient here :)

                                                      1. 6

                                                        This is my favourite, as I have a very opinionated view on this matter. I simply believe it’s not possible to avoid vendor lock-in. Of course vendor lock-in comes in many shapes and forms and at different layers, but my point is that it’s fairly impractical to come up with an architecture so generic that it’s not affected by any kind of vendor lock-in.

                                                        Really? I find it quite easy to avoid vendor lock-in – simply running open-source tools on a VPS or dedicated server almost completely eliminates it. Even if a tool you use is discontinued, you can still use it, and you have the option of maintaining it yourself. That’s not at all the case with AWS Lambda/etc. Is there some form of vendor lock-in I should be worried about here, or do you simply consider this an impractical architecture?

                                                        When you are using a cloud provider and a methodology like serverless, it’s totally true that you have very high vendor lock-in, as you will be using specific services (e.g. API Gateway, Lambda, DynamoDB, S3 in AWS) that are unique to that provider, and equivalent services will have very different interfaces with other providers. But I believe the question should be: is it more convenient/practical to accept the risk of vendor lock-in rather than spend a decent amount of extra time and effort on a more abstracted infrastructure/app that allows switching cloud providers if needed? In my experience, I’ve found that it’s very rarely a good idea to over-abstract solutions only to reduce vendor lock-in.

                                                        The thing about vendor lock-in is that there’s a quite low probability that you will pay an extremely high price (for example, the API/service you’re using being shut down). Even if it’s been amazing in all the cases you’ve used it in, it’s still entirely possible for the expected value of using these services to be negative, due to the possibility of vendor lock-in issues. Thus, I don’t buy that it’s worth the risk – you’re free to do your own risk/benefit calculations though :)

                                                        1. 1

                                                          I probably have to clarify that, for me, “vendor lock-in” is a very high-level concept that includes every sort of “tech lock-in” (which would probably be a better buzzword!).

                                                          My view is that even if you use an open-source tech and host it yourself, you end up making a lot of complex tech decisions which are going to be difficult (and expensive!) to move away from.

                                                          Have you ever tried to migrate from Redis to Memcached (or vice versa)? Even though the two systems are quite similar and a migration might seem trivial, in a complex infrastructure moving from one system to the other is still going to be a fairly complex operation with a lot of implications (code changes, language-driver changes, a different interface, data migration, provisioning changes, etc.).

                                                          Also, another thing I am very opinionated about is what’s valuable when developing a tech product (especially in a startup context). I believe delivering value to the customers/stakeholders is the most important thing while building a product, and whatever abstraction makes it easier for the team to focus on business value deserves my attention. In that respect I found serverless to be a very good abstraction, so I am happy to accept some tradeoffs in having less “tech freedom” (I have to stick to the solutions given by my cloud provider) and higher vendor lock-in.

                                                        2. 2

                                                          I simply believe it’s not possible to avoid vendor lock-in.

                                                          Well, there is vendor lock-in and vendor lock-in… Ever heard of Oracle Forms?

                                                      1. 17

                                                        I love the smell of outrage in the morning. Or afternoon. Whichever.

                                                        I like Firefox. I’m not terribly concerned about this. A sure way to get them to stop is to donate monthly.

                                                        1. 15

                                                          How much? How much would it cost to stop this, over and above Mozilla’s existing income? Why doesn’t Mozilla, as a user first organization, tell its users “we need this much money or we’re going to add the looking glass extension”?

                                                          1. 6

                                                            Oh, yes, a 1000 times this! I’d happily pay for Firefox more than I pay for every single web service I use.

                                                            1. 5

                                                              It would cost an infinite amount, because that’s the amount companies need to aim to earn in a capitalist economy.

                                                              1. 5

                                                                It’s a non-profit.

                                                                1. 3

                                                                  Even non-profits have expenses to cover. Over infinite time these approach infinity as well.

                                                                  The point still stands that them having a transparent budget would make it easier for us the users to “pay off” these kinds of threats.

                                                                  1. 3

                                                                    The Mozilla Corporation (a subsidiary of the non-profit Mozilla Foundation) is for-profit.

                                                                    1. 3

                                                                      Only legally. It exists to “keep the lights on”.

                                                                      The problem is that the software development and services the Corporation provides are not considered non-profit activities under most jurisdictions.

                                                                      This setup (a foundation and a corporation) is straight from the playbook for non-profits that have substantial non-eligible parts.

                                                              2. 4

                                                                A sure way to get them to stop is to donate monthly.

                                                                They made $360,000,000 last year: https://www.ghacks.net/2017/12/02/mozillas-revenue-increased-significantly-in-2016/

                                                                Why would you want to throw more money at the corporation who’s pissing on you while telling you that it’s raining?

                                                                1. 8

                                                                  When you are donating to Mozilla, you are donating to the Foundation, which is not involved much in Firefox, but in a lot of other tech and policy advocacy things. This includes net neutrality lobbying, discussing the copyright reform in Europe and support many many tech teaching projects all over the globe.

                                                                  You’d be hurting all those projects instead of Firefox development over your anger with the product.

                                                                  Making your anger known in a different fashion will have more impact.

                                                                  (FWIW: I don’t want to keep you from stopping your donations if you feel the Mozilla Foundation is no longer following its mission)

                                                                  1. 4

                                                                    Regrettably, the net neutrality thing didn’t pan out, I’m not sure about the copyright work, and the educational stuff is probably better left to local efforts (if my own experience is to be believed).

                                                                    I’d rather they focus on Firefox, Thunderbird, and documentation and free up people and resources to go do other things.

                                                                    1. 6

                                                                      Regrettably, the net neutrality thing didn’t pan out, I’m not sure about the copyright work, and the educational stuff is probably better left to local efforts (if my own experience is to be believed).

                                                                      Policy is no “put enough money here, it’ll work” game. The debate about net neutrality has been going back and forth in the recent years and Mozilla has always been involved. Losing Mozilla as a campaigner there would not be helpful in any way.

                                                                      The educational stuff is probably better left to local efforts (if my own experience is to be believed).

                                                                      This is obviously very personal, but in my experience Mozilla has reach to a lot of people and groups that other tech organizations can only dream of. I would highly recommend looking at who’s around at MozFest. Also, the Foundation does a lot of these things through partnerships, like the one with the Ford Foundation, which are usually quite productive, and the output brings a lot of worthwhile reading.

                                                                      I’d rather they focus on Firefox, Thunderbird, and documentation and free up people and resources to go do other things.

                                                                      Thunderbird aside, the Mozilla Corporation has most of its employees on precisely these products. They are its focus.

                                                                      The Corp is just not the Foundation and merging them also makes no sense, IMHO.

                                                                2. 2

                                                                  Or use a fork if all else fails. Sad day if it comes to that, after all the good work otherwise gone into Firefox.

                                                                1. 9

                                                                  These few HTTP verbs

                                                                  You can have as many HTTP verbs as you want.

                                                                  HTTP actually is an RPC system with verbs as the method names. REST, though, is about switching API design away from verbs and towards nouns. The problem outlined in this article is that the author is still designing verb-first RPC APIs but then trying to shoehorn those into “REST” – which is of course painful.
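
                                                                  To make the contrast concrete, a hypothetical pair of route tables – verb-first RPC coins a method name per operation, while noun-first REST reuses a few verbs across resources:

                                                                      rpc_style = {   # verb-first: each operation is a new method name
                                                                          "POST /createUser":     "create a user",
                                                                          "POST /deactivateUser": "disable a user",
                                                                      }
                                                                      rest_style = {  # noun-first: few verbs, many resources
                                                                          "POST /users":    "create a user",
                                                                          "PATCH /users/7": "disable a user (active=false)",
                                                                      }
                                                                      print(rpc_style, rest_style)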

                                                                  1. 7
                                                                    1. 9

                                                                      As it turns out, some applications aren’t exclusively CRUD operations on JSON objects. As soon as you have one vaguely interesting operation that works on more than one object type, REST falls apart. In other words, every interesting API will be “shoehorned” into REST.

                                                                      1. 2

                                                                        REST is not designed for CRUD operations specifically. It is based around the idea of performing some ACTION against some kind of ENTITY, and if you are running into issues like these you probably don’t have very well defined entities or you are trying to wedge everything into one bucket instead of knowing where to draw the lines.

                                                                        1. 6

                                                                          Okay, but what do you do when you need to perform some ACTION against three kinds of ENTITIES? It just doesn’t work.

                                                                          And that isn’t even true. REST is about transferring REpresentational State, i.e. CRUD. Where do you see “perform arbitrary actions on entities” in “transfer representational state”?

                                                                          1. 3

                                                                            I can think of a few options:

                                                                            1. Chain the three actions so they have to be performed in order, such as with a multi-page form. Mastodon’s API requires that photos be uploaded before they’re used in a message.
                                                                            2. Create a unified entity that wraps all three. Old-style multipart file uploads do this in a single request.
                                                                            3. Model a collection that provides space for three sub-resources. This SO answer explains it succinctly.

                                                                            This is not challenging at all. REST can handle all kinds of applications just fine, and the decisions required to make it work are exactly the same ones you’ll be making in your own server code. You might have to do some up-front modeling work to adapt your application model to REST, which can save your end-users a headache if they’re already comfortable with other APIs – if that’s a worthwhile trade-off for you. It’s hardly an example of REST falling apart!
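
                                                                            As a sketch of option 2, a hypothetical unified entity: one wrapper resource lets a single POST carry an operation that touches several entity types.

                                                                                import json

                                                                                post = {
                                                                                    "message":   {"text": "hello"},
                                                                                    "photo_ids": [12, 34],  # uploaded beforehand, as in option 1
                                                                                    "recipient": {"id": 7},
                                                                                }
                                                                                print(json.dumps(post))  # the body of a single POST /posts request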

                                                                            1. 7

                                                                              Cherry picking a few obvious examples is not particularly convincing.

                                                                              But if it’s “not challenging at all” then maybe you can solve this for me. I have a REST API that handles task scheduling for multiple teams. Tasks have different interactions, A blocks B, B can only be done Tuesday-Thursday, C can only be done on weekends but not concurrently with A or B, but doesn’t depend on either of them being done.

                                                                              Now we need to reschedule A. PUT /task/A {"start_time": "new time"}. This will reflow the entire schedule and require user approval of changes. So how do I make this RESTful? In practice, possibly hundreds of entities are affected, and that’s not an infrequent edge case.

                                                                              Hint, things that will not work:

                                                                              • action chain, this would potentially involve hundreds of RPCs and would need substantial business logic on the client side, and we must support 4 client platforms
                                                                              • unified entity, this is just the entire project
                                                                              • a virtual reschedule operation “resource”, this is just an RPC, if you suggest this you have failed
                                                                              1. 6

                                                                                Interesting problem! If I’m understanding correctly, you want to make it possible for Task A to be updated, but you need the result to be provisional in some form so that the implications can be approved? Does the application do the reflow on its own and then ask for a thumbs-up?

                                                                                Sounds like a branching operation, since you have that user approval in the middle. So, the client might POST a proposed new Task A and get a 303 response with changes for approval: “Had to move B to Wednesday to fit A, is this okay?”. Nothing would actually change in the master schedule until the user had followed the chain to approve the various constraint resolutions. The model (and associated resources) could look a lot like Git trees, with alternative branching histories of different options that later get merged depending on conflicts.

                                                                                If you’d rather require the user to resolve conflicts themselves ahead of time, the client might PUT a change to Task A and get a 409 response with a list of conflicting tasks. Then it’d be up to the user to figure out that moving Task B to Wednesday will make room for Task A, which would be a bummer for the user but would save the effort of attempting to represent alternative possibilities.

                                                                                In both cases, the client app is following prompts and links as it moves between requests and responses. The resources change along the way depending on whether it’s the user or the application that’s tracking alternative states, but the client need only walk the API piece by piece the same way you can make complex branching histories on Github by typing into boxes and pressing buttons on web pages.
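
                                                                                A hedged sketch of the second variant, with hypothetical endpoint and field names: the client PUTs the change and handles a 409 that lists the conflicting tasks.

                                                                                    import requests

                                                                                    resp = requests.put("https://api.example.com/task/A",
                                                                                                        json={"start_time": "2018-03-06T09:00:00Z"})
                                                                                    if resp.status_code == 409:
                                                                                        for task in resp.json()["conflicts"]:  # e.g. ["B", "C"]
                                                                                            print("conflicts with task", task)
                                                                                    elif resp.ok:
                                                                                        print("rescheduled cleanly")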

                                                                                1. 6

                                                                                  Good effort. We considered all of those ideas in some form but ultimately just gave up on REST.

                                                                                  you want to make it possible for Task A to be updated, but you need the result to be provisional in some form so that the implications can be approved? Does the application do the reflow on its own and then ask for a thumbs-up?

                                                                                  Yes and yes. Except potentially the client will send a batch of edits to update, create, and delete multiple tasks. That initially looked like a POST to our /tasks endpoint with a list of operations. I don’t think that’s particularly RESTful. We ended up with an RPC API that allowed you to submit multiple RPCs as a transaction. That made the change endpoints less special case-y, since the batched RPCs and regular RPCs were the same code.

                                                                                  In both cases, the client app is following prompts and links as it moves between requests and responses.

                                                                                  This would generate far too many requests. We really wanted to keep it to 1 request, 1 response per save attempt.

                                                                                  Unfortunately, in order for the conflicts to be meaningfully displayed, the entire result of the change had to be returned, which means returning every changed entity. We ended up returning property-level diffs of every entity, allowing the user to reflow manually, update, and get a new list of diffs from their previous attempt. Every request ended up being really stateful, for this particular UI action and for most other write UI actions.

                                                                                  You’re spot on that the model looked like git trees; the application ended up like a git repo: a user’s entire project cached locally, and most updates done with sync-from-revision-N style RPCs. We didn’t think it would be valuable to have git-like resources representing change sets, since the change sets were never actually stored in our data model. It makes more sense for e.g. GitHub, where those change sets are the data model. And I definitely don’t think this makes any sense as a general-purpose way to handle complex changes in REST applications. IMO it’s pretending you’re RESTful but really just placing a RESTful transaction layer under your application, rather than actually writing your application in a RESTful way.

                                                                                  It’s not like I hate REST or anything. It works great for applications where you really are transferring representation state around. But for us, the client effectively had no say in what the data would look like, it could only perform actions and get the new state. Modeling that as proposed changes to state just did not work, especially when faced with concurrent modification. Many of our actions potentially affected multiple entities based on complex business logic, and it would just be too much work for the client to perform even some of that logic and submit a new state that made any sense.

                                                                                  So I guess my real problem isn’t with multiple entities; it’s that REST pushes business logic into client-side code, and that logic can become exponentially more complicated as more entities get involved. For APIs meant to be used by third parties to create new functionality, the logic is supposed to live in the client, and REST provides a lot of flexibility. But for your own application code, you have the option of building the exact operations you want directly into the server. If your application is all about navigating through different resources, especially highly reusable ones like photos, REST would work just fine even for working on multiple entities as you described. Not all applications work like that.

                                                                    1. 8

                                                                      Although I live in the middle of the EU and speak Czech, I prefer to have my phone set to English, the same as my laptop and other electronics. I dislike reading crude translations, and most software I use has English as its primary language.

                                                                      Recently, Google decided that I cannot understand Czech local names in its Maps and started translating them to English. Adding Czech to the languages I understand in my Google preferences seems to have done the trick, as it has stopped translating metro station names.

                                                                      1. 24

                                                                        If your score is above your post count, you’re doing fantastic. I don’t think anyone in this community treats scores as a popularity contest.

                                                                        1. 7

                                                                          I don’t have that impression either. There are a couple of highly active users here, and they have a lot of karma. I can also tell that by seeing their names all the time.

                                                                          Personally, I rarely look up the total score of others, and I mostly read mine as a generic “people took interest in my comments this week”. I don’t care that much about the long-term sum, but I do somewhat care about the uptake after investing a lot of time in the platform.

                                                                          I sometimes look up the average score of others, but only out of curiosity. That happens maybe once every three months.

                                                                          It was useful info for me, though, when I was new to this community. Am I talking to a regular? A newbie? A lurker? This is all useful context.

                                                                          1. 3

                                                                            It was useful info for me, though, when I was new to this community. Am I talking to a regular? A newbie? A lurker? This is all useful context.

                                                                            This is the important part for me. It would be fine to hide the specific score and only show a classification like “newbie”, “link poster”, “active commenter”, or “senior”, which might fold in more data like account age and rank.

                                                                          2. 2

                                                                            This is actually a brilliant idea! I think showing the average would be way more helpful and would guide participants toward writing fewer, higher-quality comments.

                                                                            I think it’s way more useful: consider someone who has written 10 comments with a total score of 100 (an average of 10 per comment) versus a user with 1000 comments and a total score of 1000 (an average of 1). The first clearly adds more to a debate, yet by total score the user with the better contributions looks worse than the one with the lower-quality contributions.

                                                                            1. 5

                                                                              would guide participants toward writing fewer, higher-quality comments.

                                                                              That assumes that a comment’s votes equal quality. They really don’t. A vote means the comment is what one or more people in that thread, in that context, wanted to see, what a pile of people didn’t, or something in between. The metrics are inconsistent. Many comments with real information in them get either no votes or just one. There’s also the question of whether it’s a hot-button topic where taking a certain position always gets a vote boost.

                                                                              There are enough problems connecting comment votes to any objective metric of quality that I don’t use averages for it. My guess is that anything over 2 is probably OK. People’s responses to individual comments, or private messages, have been a more reliable indicator for me.

                                                                              1. 4

                                                                                I think showing the average would be way more helpful and would guide participants toward writing fewer, higher-quality comments.

                                                                                I don’t think so; a high average score often just shows who’s expressing popular opinions, because those receive a lot of upvotes.

                                                                                edit: s/get/receive

                                                                              2. 1

                                                                                Thanks for the reply. I will meditate on that.

                                                                              1. 7

                                                                                Using karma points for communities is the best practice for a reason. Don’t get me wrong, it’s far from perfect, but it’s probably the least annoying thing you can do. Collecting karma points tends to be motivating for a lot of users, which is great for the community, since those users bring in most of the interesting content (of course, it’s another story with comments). As for the rest of us, I don’t think it’s a popularity contest; at least I don’t see it that way. IMO Lobsters did a good job by not pushing karma points too much. (Yeah, I know mine’s terribly low.) :)

                                                                                1. 2

                                                                                  You have one more point now!

                                                                                  1. 2

                                                                                    Yay!

                                                                                  2. 1

                                                                                    I’ve never thought about it that way. I might have wished for a non-zero score in the beginning, which might have pushed me to post a comment. Much like one leaves their first torrent seeded a little while longer to build up a ratio, or something. :-)

                                                                                  1. 8

                                                                                    Next up: a complete flight simulator inside cat

                                                                                    1. 24

                                                                                      Or a text editor within emacs!

                                                                                      1. 16

                                                                                        Or a text editor within emacs!

                                                                                        It’s been done: https://www.emacswiki.org/emacs/Evil

                                                                                      2. 8

                                                                                        You jest, but I only realised in the last year that cat can read from unix sockets as well as files. (I was reimplementing it to learn a new language and read the manpage carefully.) I’d never known that before.

                                                                                        1. 4

                                                                                          Well, bash can read from a TCP socket too:

                                                                                          # Open fd 3 as a two-way TCP connection (a bash feature, not a real file)
                                                                                          exec 3<>/dev/tcp/lobste.rs/80
                                                                                          # Send a minimal HTTP/1.0 request; printf is more portable than echo -e
                                                                                          printf 'GET / HTTP/1.0\r\nHost: lobste.rs\r\n\r\n' >&3
                                                                                          # Read the response from the same fd
                                                                                          cat <&3
                                                                                          
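                                                                                          (Worth noting: /dev/tcp isn’t a real file; bash itself interprets that path in redirections, so this trick only works in bash, and only in builds compiled with network redirections enabled.)
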
                                                                                          1. 4

                                                                                            A similar feature is also implemented in GNU awk - and someone wrote a web server for it.
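
                                                                                            For the curious, here’s a minimal sketch using gawk’s documented /inet special files (reusing lobste.rs purely to mirror the bash example above):

                                                                                            # gawk treats /inet/tcp/LOCAL-PORT/HOST/REMOTE-PORT as a two-way coprocess;
                                                                                            # a local port of 0 means "pick any free port".
                                                                                            gawk 'BEGIN {
                                                                                              srv = "/inet/tcp/0/lobste.rs/80"
                                                                                              print "GET / HTTP/1.0\r\nHost: lobste.rs\r\n\r" |& srv   # send request
                                                                                              while ((srv |& getline line) > 0)                        # print response
                                                                                                print line
                                                                                              close(srv)
                                                                                            }'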

                                                                                            1. 2

                                                                                              That’s fascinating, and so new to me that I don’t even know how to look it up. Could you give me some more info about what’s going on there? Interestingly, it doesn’t work on the Mac (it just crashes my terminal ¯\_(ツ)_/¯). I suppose it could be an outdated version of bash, but, again, I can’t even look that up.

                                                                                            2. 2

                                                                                              That is cool; I never knew that and have been using socat. I’ll try cat the next time.

                                                                                            3. 1

                                                                                              I read that as “a complete flight simulator inside a cat”, and I was worried you were some kind of mad scientist.

                                                                                              1. 1

                                                                                                I thought I’d created something great for the world, a flight simulator inside a cat! But little did I know, I’d created a flying man-eating MONSTER!!

                                                                                            1. 8

                                                                                              Poor, but not in exactly the same sense as laid out by the OP. Most of my balancing issues stem from the fact that I am poor at time management and don’t take care of myself well enough to have some energy reserves left for the busier days.

                                                                                              I am frequently tired, but unwilling to rest at the same time, until I finally crash and get sick (for example).

                                                                                              I am working on that…

                                                                                              1. 2

                                                                                                There may be a cure for this, but I haven’t found it. I have periods of mania when I work on something, then comes the denouement and I feel burned out for a week or so; I’ve had this all my life. What I’m recently learning to do is something different during the burnout, like writing or learning a new topic, and that seems to work.

                                                                                                1. 2

                                                                                                  I’ve had the same problem, and doing something different worked for me too, although “something different” usually winds up being a different coding project.

                                                                                                  1. 2

                                                                                                    I think merely giving yourself some quiet time with music and swimming once a week might help a lot. I’ll see.

                                                                                                    1. 3

                                                                                                      +1 to physical exercise!

                                                                                                1. 3

                                                                                                  Anyone else have trouble writing module docs due to the Haddock syntax being so weird?

                                                                                                  1. 3

                                                                                                    I don’t really get this. Creating a canonical data model of anything implies getting everyone involved onto the same page, language-wise. Just look at chemistry and biology.

                                                                                                    Saying that it’s not a good idea to build an understanding between people using the same terms is a complete abandonment of the belief that people can actually, in time, understand each other. If we can’t, why bother doing anything?

                                                                                                    On the other hand, yes, you have to start building such an understanding by having people actually talk to each other, and then codify it for the newcomers, much like we do in linguistics. It wouldn’t make sense to try to push an English 2.0 by force.