1. 3

    This is the general pattern that most of Apple’s recent security features follow: add new levels of security that limit freedom (in exchange for security) as the default, but keep the old methods around. See Gatekeeper, Boot Security, Sandboxing, Kernel extension approval.

    1.  

      This time though, it’s hardware level, and it’s hard to get the freedom (of using the internal SSD) back. You can access the SSD on Linux, but the machine shuts down in a few seconds if you try that.

    1. 4

      A Turin turambar turún’ ambartanen. Another shell that isn’t shell; shells that aren’t shells aren’t worth using, because shell’s value is its ubiquity. Still, interesting ideas.

      This brought to you with no small apology to Tolkien.

      1. 13

        I’ve used the Fish shell daily for 3-4 years and find it very much worth using, even though it isn’t POSIX compatible. I think there’s great value in alternative shells, even if you’re limited in copy/pasting shell snippets.

        1. 12

          So it really depends on the nature of your work. If you’re an individual contributor, NEVER have to do devops type work or actually operate a production service, you can absolutely roll this way and enjoy your highly customized awesomely powerful alternative shell experience.

          However, if you’re like me, and work in environments where being able to execute standardized runbooks is absolutely critical to getting the job done, running anything but bash is buying yourself a fairly steady diet of thankless, grinding, and ultimately pointless pain.

          I’ve thought about running an alternative shell at home on my systems that are totally unconnected with work, but the cognitive dissonance of using anything other than bash keeps me from going that way even though I’d love to be using Xonsh by the amazing Anthony Scopatz :)

          1. 5

            I’d definitely say so – I’d probably use something else if I were an IC – and ICs should! ICs should be in the habit of trying lots of things, even stuff they don’t necessarily like.

            I’m a big proponent of Design for Manufacturing, an idea I borrow from the widgety world of making actual things. The idea, as defined by an MFE I know, is that one should build things such that: “The design lends itself to being easily reproduced identically in a reliable, cost-effective manner.”

            For a delivery-ops guy like me, working in the tightly regulated, safety-critical world of Healthcare, having reproducible, reliable architecture that’s cheap to replace and repair is critical. Adding a new shell doesn’t move the needle towards reproducibility, so its value has to come from reliability or cheapness, and once you add the fact that most architectures are not totally homogeneous, the cost goes up even more.

            That’s the hill new shells have to climb: they have to get over ‘sh is just easier to use, it’s already there.’ That’s a very big hill.

            1. 2

              “The design lends itself to being easily reproduced identically in a reliable, cost-effective manner.” “That’s the hill new shells have to climb,”

              Or, like with the similar problem posed by C compilers, they just provide a method to extract to whatever the legacy shell is for widespread, standard usage.

              EDIT: Just read comment by @ac which suggested same thing. He beat me to it. :)

              1. 2

                I’ve pondered transpilers a bit before. For me personally, I’ve learned enough shell that a transpiler doesn’t really provide much benefit, but I like that idea a lot more than a distinct, non-compatible shell.

                I very much prefer a two-way transpiler. Let me make my old code into new code, so I can run the new code everywhere and convert my existing stuff to the new thing, and let me go back to old code for the machines where I can’t afford to figure out how to get new thing working. That’s a really big ask though.

                The way we solve this at $work is basically by writing lots of very small amounts of shell, orchestrated by another tool (ansible and Ansible Tower, in our case). This covers about 90% of the infrastructure, with the remaining bits being so old and crufty (and so resource-poor from an organization perspective) that bugs are often tolerated rather than fixed.

            2. 4

              The counter to alternative shells sounds more like a reason to develop and use alternative shells that coexist with a standard shell. Maybe even with some state synchronized so your playbooks don’t cause effects the preferred shell can’t see and vice versa. I think a shell like newlisp supporting a powerful language with metaprogramming sounds way better than bash. Likewise, one that supports automated checking that it’s working correctly in isolation and/or how it uses the environment. Also maybe something on isolation for security, high availability, or extraction to C for optimization.

              There’s lots of possibilities. Needing to use stuff in a standard shell shouldn’t stop them. So, they should replace the standard shell somehow in a way that still lets it be used. I’m a GUI guy who’s been away from shell scripting for a long time. So, I can’t say if people can do this easily, already are, or whatever. I’m sure experts here can weigh in on that.

            3. 7

              I work primarily in devops/application architecture – having alternative shells is just a big ol’ no – tbh I’m trying to wean myself off bash 4 and onto pure sh because I have to deal with some pretty old machines for some of our legacy products. Alternative shells are cool, but don’t scale well. They also present increased attack surface for potential hackers to privesc through.

              I’m also an odd case: I think shell is a pretty okay language. Warty, sure, but not as bad as people make it out to be. It’s nice having a tool that I can rely on being everywhere.

              1. 13

                I work primarily in devops/application architecture

                Alternative shells are cool, but don’t scale well.

                Non-ubiquitous shells are a little harder to scale, but the cost should be controllable. It depends on what kind of devops you are doing:

                • If you are dealing with a limited number of machines (machines that you probably pick names yourself), you can simply install Elvish on each of those machines. The website offers static binaries ready to download, and Elvish is packaged in a lot of Linux distributions. It is going to be a very small part of the process of provisioning a new machine.

                • If you are managing some kind of cluster, then you should already be doing most devops work via some kind of cluster management system (e.g. Kubernetes), instead of ssh’ing directly into the cluster nodes. Most of your job involves calling into some API of the cluster manager, from your local workstation. In this case, the number of Elvish instances you need to install is one: that on your workstation.

                • If you are running some script in a cluster, then again, your cluster management system should already have a way of pulling in external dependencies - for instance, a Python installation to run Python apps. Elvish has static binaries, which is the easiest kind of external dependency to deal with.

                Of course, these are ideal scenarios - maybe you are managing a cluster but it is painful to teach whatever cluster management system to pull in just a single static binary, or you are managing some old machines with an obscure CPU architecture that Elvish doesn’t even cross-compile to. However, those difficulties are by no means absolute, and when the benefit of using Elvish (or any other alternative shell) far outweighs the overheads, large-scale adoption is possible.
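
                To make the first scenario above concrete, here is a rough provisioning sketch. The download URL is a placeholder (take the real per-platform link from the Elvish website), and the install paths are just conventional ones:

                  set -eu
                  # Placeholder URL: substitute the real static-binary link for your platform/version.
                  url="https://example.com/elvish-linux-amd64.tar.gz"
                  curl -fsSL "$url" -o /tmp/elvish.tar.gz
                  tar -xzf /tmp/elvish.tar.gz -C /tmp
                  install -m 0755 /tmp/elvish /usr/local/bin/elvish
                  # Optional: register it as a login shell.
                  grep -qx /usr/local/bin/elvish /etc/shells || echo /usr/local/bin/elvish >> /etc/shells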

                Remember that bash – like every shell other than the original Bourne shell – also started out as an “alternative shell”, and it still hasn’t reached 100% adoption, but that doesn’t prevent people from using it on their workstations, servers, or whatever computers they work with.

                1. 4

                  All good points. I operate on a couple different architectures at various scales (all relatively small, Xe3 or so). Most of the shell I write is traditional, POSIX-only Bourne shell, and that’s simply because it’s everywhere without any issue. I could certainly install fish or whatever, or even a standardized version of bash, but it’s an added dependency that only provides moderate convenience at the cost of another ansible script to maintain, and increased attack surface.

                  The other issue is that the ~1000 servers or so have very little in common with each other. About 300 of them support one application; that’s the biggest chunk: 4 environments of ~75 machines each, all more or less identical.

                  The other 700 are a mishmash of versions of different distros, different OSes, different everything; that’s where /bin/sh comes in handy. These are all legacy applications, none of them get any money for new work, they’re all total maintenance mode, any time I spend on them is basically time lost from the business perspective. I definitely don’t want to knock alternative shells as a tool for an individual contributor, but it’s ultimately a much simpler problem for me to say, “I’m just going to write sh” than “I’m going to install elvish across a gagillion arches and hope I don’t break anything”

                  We drive most cross-cutting work with ansible (that Xe3 is all vms, basically – not quite all, but like 98%), bash really comes in as a tool for debugging more than managing/maintaining. If there is an issue across the infra – say like meltdown/spectre, and I want to see what hosts are vulnerable, it’s really fast for me (and I have to emphasize – for me – I’ve been writing shell for a lot of years, so that tweaks things a lot) to whip up a shell script that’ll send a ping to Prometheus with a 1 or 0 as to whether it’s vulnerable, deploy that across the infra with ansible and set a cronjob to run it. If I wanted to do that with elvish or w/e, I’d need to get that installed on that heterogenous architecture, most of which my boss looks at as ‘why isn’t Joe working on something that makes us money.’
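
                  For illustration, a rough sketch of that kind of throwaway check, pushing a 1 or 0 per vulnerability to a Prometheus Pushgateway. The gateway host and job name are made up; the sysfs files exist on reasonably recent Linux kernels:

                    #!/bin/sh
                    host="$(hostname)"
                    for v in meltdown spectre_v1 spectre_v2; do
                      f="/sys/devices/system/cpu/vulnerabilities/$v"
                      if [ -r "$f" ] && grep -Eqi 'not affected|mitigation' "$f"; then
                        val=0  # patched or not affected
                      else
                        val=1  # vulnerable, or the kernel is too old to report
                      fi
                      echo "cpu_vulnerable{vuln=\"$v\",host=\"$host\"} $val"
                    done | curl -s --data-binary @- http://pushgateway.example.com:9091/metrics/job/vuln_check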

                  I definitely wouldn’t mind a better sh becoming the norm, and I don’t want to knock elvish, but from my perspective, that ship has sailed till it ports, sh is ubiquitous, bash is functionally ubiquitous, trying to get other stuff working is just a time sink. In 10 years, if elvish or fish or whatever is the most common thing, I’ll probably use that.

                  1. 1

                    The other 700 are a mish mash of versions of different distros, different OSes, different everything, that’s where /bin/sh comes in handy.

                    So, essentially, whatever alternative is built needs to use cross-platform design or techniques to run on about anything. Maybe using cross-platform libraries that facilitate that. That or extraction in my other comment should address this problem, eh?

                    Far as debugging, alternative shells would bring both a cost and potential benefits. The cost is unfamiliarity might make you less productive since it doesn’t leverage your long experience with existing shell. The potential benefits are features that make debugging a lot easier. They could even outweigh cost depending on how much time they save you. Learning cost might also be minimized if the new shell is based on a language you already know. Maybe actually uses it or a subset of it that’s still better than bash.

                2. 6

                  My only real beef with bash is its array syntax. Other than that, it’s pretty amazing actually, especially as compared with pre bash Bourne Shells.
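
                  For anyone who hasn’t hit it, a small illustration of the syntax in question (file names are made up):

                    files=("a file.txt" "another file.txt")
                    files+=("third file.txt")          # append an element
                    printf '%s\n' "${files[@]}"        # all elements, spaces preserved
                    printf '%s\n' "${files[@]:1}"      # slice: elements from index 1 on
                    echo "count: ${#files[@]}"         # number of elements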

                  1. 4

                    Would you use a better language that compiles to sh?

                    1. 1

                      Eh, maybe? Depends on your definition of ‘better.’ I don’t think bash or pure sh are all that bad, but I’ve also been using them for a very long time as a daily driver (I write more shell scripts than virtually anything else; ansible is maybe a close second), so I’m definitely not the target audience.

                      I could see that if I wanted to do a bunch of math, I might need to use something else, but if I’m going to use something else, I’m probably jumping to a whole other language. Shell is in a weird place: if the complexity is high enough to need a transpiler, it’s probably high enough to warrant writing something else and installing dependencies.

                      I could see a transpiler being interesting for raising that ceiling, but I don’t know how much value it’d bring.

                3. 10

                  Could not disagree more. POSIX shell is unpleasant to work with and crufty; my shell scripting went through the roof when I realized that: nearly every script I write is designed to be launched by myself; shebangs are a thing; therefore, the specific language that an executable file is written in is very, very often immaterial. I write all my shell scripts in es and I use them everywhere. Almost nothing in my system cares because they’re executable files with the path to their interpreter baked in.
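
                  A tiny illustration of the shebang point; the interpreter and file name here are arbitrary (an es or fish path works the same way):

                    #!/usr/bin/env elvish
                    # prune-logs: callers just run ./prune-logs and never need to know
                    # which shell it is written in.
                    echo pruning old logs ...

                  Make it executable with chmod +x and nothing else on the system has to care that it isn’t POSIX sh.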

                  I am really pleased to see alternative non-POSIX shells popping up. In my experience and I suspect the experience of many, the bulk of the sort of scripting that can make someone’s everyday usage smoother need not look anything like bash.

                  1. 5

                    Truth; limiting yourself to POSIX sh is a sure way to write terribly verbose and slow scripts. I’d rather put everything into a “POSIX awk” that generates shell code for eval when necessary than ever be forced to write semi-complex pure sh scripts.

                    bash is a godsend for so many reasons, one of the biggest being its process substitution feature.
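
                    For readers who haven’t used it, a tiny example of process substitution (file names are made up; <(...) is a bash/zsh/ksh feature, not POSIX sh):

                      # Compare two command outputs without creating temp files.
                      diff <(sort hosts-before.txt) <(sort hosts-after.txt)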

                    1. 1

                      For my part, I agree – I generally try to write “mostly sh-compatible bash”, defaulting to sh-compatible constructs until performance or maintainability warrant using the other thing. Most of the time this works.
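
                      As a rough illustration of that habit, the POSIX spelling next to the bashisms it avoids (variable and file names are arbitrary):

                        # POSIX sh: works in dash, busybox sh, and older vendor shells.
                        if [ "$status" = "ok" ]; then
                          echo "healthy"
                        fi
                        # bash-only equivalents that break under plain /bin/sh:
                        #   if [[ $status == ok ]]; then ...; fi
                        #   mapfile -t lines < hosts.txt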

                      The other mitigation is that I write lots of very small scripts and really push the worse-is-better / lots of small tools approach. Lots of the scripting pain can be mitigated by progressively combining small scripts that abstract over all the details and just do a simple, logical thing.

                      One of the other things we do to mitigate the slowness problem is to design for asynchrony – almost all of the scripts I write are not time-sensitive and run as crons or ats or whatever. We kick ‘em out to the servers and wait the X hours/days/whatever for them to all phone home w/ data about what they did, work on other stuff in the meantime. It really makes it more comfortable to be sh compatible if you can just build things in a way such that you don’t care if it takes a long time.

                      All that said, most of my job has been “How do we get rid of the pile of ancient servers over there and get our asses to a disposable infrastructure?”, where I can just expect bash 4+ to be available and not have to worry about sh compatibility.

                    2. 1

                      A fair cop. I work on a pretty heterogeneous group of machines, and /bin/sh works consistently on all of them: AIX, IRIX, BSD, Linux, all basically the same.

                      Despite our (perfectly reasonable) disagreement, I am also generally happy to see new shells pop up. I think they have a nearly impossible task of ousting sh and bash, but it’s still nice to see people playing in my backyard.

                    3. 6

                      I don’t think you can disqualify a shell just because it’s not POSIX (or “the same”, or whatever your definition of “shell” is). The shell is a tool, and like all tools, its value depends on the nature of your work and how you decide to use it.

                      I’ve been using Elvish for more than a year now. I don’t directly manage large numbers of systems by logging into them, but I do interact quite a bit with services through their APIs. Elvish’s native support for complex data structures, and the built-in ability to convert to/from JSON, makes it extremely easy to interact with them, and has allowed me to build very powerful toolkits for doing my work. Having a proper programming language in the shell is very handy for me.
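
                      For contrast, doing the same thing from a POSIX-ish shell usually means bolting on an external tool such as jq, whereas Elvish parses JSON into its own data structures with built-ins. The URL and field names here are made up:

                        # bash + jq: list the names of services reported as "down".
                        curl -s https://api.example.com/v1/services \
                          | jq -r '.[] | select(.status == "down") | .name'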

                      Also, Elvish’s interactive experience is very customizable and friendly. Not much that you cannot do with bash or zsh, but much cleaner/easier to set up.

                      1. 4

                        I’ve replied a bunch elsewhere; I don’t mean to necessarily disqualify the work – it definitely looks interesting, and for an individual contributor somewhere who doesn’t have to manage tools at scale or interact with tools that don’t speak the JSON-y API it offers, it makes sense. It’s when you do have to that it starts to get tricky.

                        I said elsewhere in thread, “That’s [the ubiquity of sh-alikes] the hill new shells have to climb, they have to get over ‘sh is just easier to use, it’s already there.’ That’s a very big hill.”

                        I’d be much more interested if elvish were a superset of sh or bash. I think that part of the reason bash managed to work was that sh was embedded underneath; it was a drop-in replacement. If you’re a guy who, like me, uses a lot of shell to interact with systems, adding new features to that set is valuable; removing old ones is devastating. I’m really disqualifying it (as much as I am) on that ground: not just that it’s not POSIX, but that it is less-than-POSIX with the same functionality. That keeps it out of my realm.

                        Now this may be biased, but I think I’m the target audience in terms of adoption – you convince a guy like me that your shell is worth it, and I’m going to go drop it on my big pile of servers wherever I’m working. Convincing ICs who deal with their one machine gets you enough adoption to be a curiosity; convince a DevOps/Delivery guy and you get shoved out to every new machine I make, and suddenly you’ve got a lot of footprint that someone is going to have to deal with long after I’m gone and onto Johnny Appleshelling the thing at whatever poor schmuck hires me next.

                        Here’s what I’d really like to see: a shell that offers some of these JSON features as an alternative pipe (maybe ||| is the operator, IDK), adds some better number-crunching support, and maybe some OO features, all while remaining a superset of POSIX. That’d make the cost of using it very low, which would make it easy to justify adding to my VM building scripts. It’d make the value very high: not having to dip out to another tool to do some basic math would be fucking sweet, and having OO features so I could operate on real ‘shell objects’ and JSON to do easier IO would be really nice as well. Ultimately though you’re fighting uphill against a lot of adoption and a lot of known solutions to these problems (there are patterns for writing shell to be OOish, there’s awk for output processing; these are things which are unpleasant to learn, but once you do, the problem JSON solves drops to a pretty low priority).

                        I’m really not trying to dismiss the work. Fixing POSIX shell is good work; it’s just not likely to succeed by replacing it. Improving it (like bash did) is a much better route, IMO.

                      2. 2

                        I’d say you’re half right. You’ll always need to use sh, or maybe bash; they’re unlikely to disappear anytime soon. However, why limit yourself to just sh when you’re working on your local machine? You could even take it a step further and ask why you’re using curl locally when you could use something like HTTPie instead. Or any of the other “alternative tools” that make things easier, but are hard to justify installing everywhere. Just because a tool is ubiquitous does not mean it’s actually good; it just means that it’s good enough.

                        I personally enjoy using Elvish on my local machines; it makes me faster and more efficient at getting things done. When I have to log into a remote system I’m forced to use Bash, which is fine and totally functional, but there are a lot of stupid things about it that I hate. For the most ridiculous and trivial example, bash doesn’t actually save its history until the user logs out, unlike Elvish (or even IPython) which saves it after each input. While it’s a really minor thing, it’s really, really, really useful when you’re testing low-level hardware things that might force an unexpected reboot or power cycle on a server.
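
                        As an aside, bash can be coaxed into behaving similarly; a common ~/.bashrc workaround is:

                          shopt -s histappend           # append to the history file instead of overwriting it
                          PROMPT_COMMAND='history -a'   # flush each command to the file as soon as it runs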

                        I can’t fault you if you want to stay POSIX, that’s a personal choice, but I don’t think it’s fair to write off something new just because there’s something old that works. With that mindset we’d still be smashing two rocks together and painting on cave walls.

                      1. 3

                        Just want to point out (since I couldn’t find it on the site at first) “the license is in development” while it’s in beta, but the plan is not to make this free software. AFAICT the community edition will be Creative Commons BY-NC-SA. So that’s a bummer.

                        1. 1

                          What exactly does it mean that the community edition is BY-NC-SA? That you have to attribute and publicly share any code that you write with Alan?

                          1. 1

                            The BY bit means you have to provide attribution if you make a derivative work. The SA means that derivative works have to be under the same license, similar to the GPL. NC means that commerical use is forbidden (the NC stands for non-commercial), which violates freedom 0 and makes this not free software. They don’t specify what version they’d use but presumably it would be the latest, which would make https://creativecommons.org/licenses/by-nc-sa/4.0/ the license in question.

                            1. 1

                              What does derivative work mean here? An app you build using their framework, or just making changes to the framework itself?

                              1. 1

                                I don’t know. There may be an answer out there, but Creative Commons is not designed for software. If we were discussing, say, the GPL, there might be a clearer answer. I have a very hazy guess, but it’s based on no research and overall I’m so uninformed about your question that I don’t want to speculate in public :P

                                I don’t know how prevalent software using CC licenses is, but it’s possible no one would know and we’d have to wait for a court to decide.

                        1. 1

                          I’ve really been enjoying this podcast. I’ve been following along since it started, and it’s helped me pick up some tips for my Emacs dabbling.

                          1. 1

                            Cool!

                            I’m honestly quite surprised that experienced Emacs users listen to my show :)

                            Thank you.

                          1. 4

                            A friend and I dug up most of our gravel driveway with a tractor so that we can put grass down instead. There’s about 60m2 of unnecessary gravel driveway that we’re going to use to extend the kids grassy play area. When it’s done it’ll be great, but it’s looking pretty messy half-way through.

                            1. 4

                              Microsoft’s perceived lack of clarity in the roadmap (.NET Standard, .NET Core, .NET Framework, etc) and history of killing off or deprecating frameworks (Silverlight, Winforms, should I use WPF or UWP?) are a couple more reasons why startups don’t turn to .NET. Add what others have mentioned, the closed source and history of high cost, the lack of ecosystem, and the long history of being actively against open source and copyleft licenses, and Microsoft just doesn’t look like a startup choice. Microsoft was also relatively late as a cloud computing choice. Maybe something will emerge from their Bizspark program and their open source efforts to change their perceived position.

                              I didn’t include PHP because there were a lot of startups that had nothing but PHP and Apache Server. That’s partly why I looked at 100 startups and ended up with 23. Startups with just PHP are probably e-commerce websites or non-software at all.

                              I wonder if this is reasonable to exclude PHP? I could see the point of excluding it because there’s a Wordpress blog hanging off the domain or if, as the author states, an e-commerce startup kicked things off with Magento or the like. On the other hand, is PHP just being excluded because, well, PHP?

                              1. 3

                                I read it as the author saying that they couldn’t distinguish between shops using PHP for a webshop/CMS and doing new software development with it, so it was excluded from the analysis.

                              1. 3

                                This is great for debugging, but I’m not so keen to see it in production on user machines. From the gist it sounds like it would work on prod code.

                                save user recordings when an exception is fired

                                I’m not keen on surveillance of other users browsing sessions, especially without their consent. Building this kind of feature into a browser normalises it, especially if it’s not something that the user has to opt in to.

                                1. 3

                                  No mention of app engine?

                                  1. 2

                                    Sorry, I should have mentioned that I only reviewed services I used. Due to load balancer upload limits on App Engine I wasn’t able to use App Engine as the application server so I didn’t look into it too deeply. It definitely looks good though.

                                    1. 4

                                      If you can make your use-case fit into AppEngine’s constrained data & runtime model then it is absolute nirvana. If you can’t then you’re stuck using something else.

                                  1. 3

                                    It’s probably way out of the intended scope, but could Mitogen be used for basic or throwaway parallel programming or analytics? I’m imagining a scenario where a data scientist has a dataset that’s too big for their local machine to process in a reasonable time. They’re working in a Jupyter notebook, using Python already. They spin up some Amazon boxes, each of which pulls the data down from S3. Then, using Mitogen, they’re able to push out a Python function to all these boxes, and gather the results back (or perhaps uploaded to S3 when the function finishes).

                                    1. 3

                                      It’s not /that/ far removed. Some current choices would make processing a little more restrictive than usual, and the library core can’t manage much more than 80MB/sec throughput just now, limiting its usefulness for data-heavy IO, such as large result aggregation.

                                      I imagine a tool like you’re describing with a nice interface could easily be built on top, or maybe as a higher level module as part of the library. But I suspect right now the internal APIs are just a little too hairy and/or restrictive to plug into something like Jupyter – for example, it would have to implement its own serialization for Numpy arrays, and for very large arrays, there is no primitive in the library (yet, but soon!) to allow easy streaming of serialized chunks – either write your own streaming code or double your RAM usage, etc.

                                      Interesting idea, and definitely not lost on me! The “infrastructure” label was primarily there to allow me to get the library up to a useful point – i.e. permits me to say “no” to myself a lot when I spot some itch I’d like to scratch :)

                                      1. 3

                                        This might work, though I think you’d be limited to pure python code. On the initial post describing it:

                                        Mitogen’s goal is straightforward: make it childsplay to run Python code on remote machines, eventually regardless of connection method, without being forced to leave the rich and error-resistant joy that is a pure-Python environment.

                                        1. 1

                                           If it’s just simple functions you run, you could probably use pySpark in a straightforward way to go distributed (although Spark can handle much more complicated use cases as well).

                                          1. 2

                                            That’s an interesting option, but presumably requires you to have Spark setup first. I’m thinking of something a bit more ad-hoc and throwaway than that :)

                                            1. 1

                                              I was thinking that if you’re spinning up AWS instances automatically, you could probably also configure that a Spark cluster is setup with it as well, and with that you get the benefit that you neither have to worry much about memory management and function parallelization nor about recovery in case of instance failure. The performance aspect of pySpark (mainly Python object serialization/memory management) is also actively worked on transitively through pandas/pyArrow.

                                              1. 2

                                                Yeah that’s a fair point. In fact there’s probably an AMI pre-built for this already, and a decent number of data-science people would probably be working with Spark to begin with.

                                        1. 7

                                          This version takes Clojars’ playbook runtimes from 16 minutes to 1 minute 30 seconds. It is my favourite piece of software in recent years. Highly recommended if you use Ansible.
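
                                           For anyone curious, enabling it is just an ansible.cfg change; the plugin path below is illustrative and depends on where you unpacked which Mitogen release (see the Mitogen for Ansible docs):

                                             cat >> ansible.cfg <<'EOF'
                                             [defaults]
                                             strategy_plugins = /path/to/mitogen/ansible_mitogen/plugins/strategy
                                             strategy = mitogen_linear
                                             EOF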

                                          1. 4

                                            Adding/testing support for Clojure’s tools.deps CLI for Deps. Theoretically there shouldn’t be much needed if anything but I need to document it for customers, and will probably write a guide for how to use it in CI.

                                            I’m also accumulating instructions for enough different build tools that I need to add tabs or some other information hiding mechanism on the setup page.

                                            1. 3

                                              I like the Moderation Log for this post:

                                              Story: Rails Asset Pipeline Directory Traversal Vulnerability (CVE-2018-3760)
                                              Action: changed tags from “ruby” to “ruby security web”
                                              Reason: Adding a couple tags… after checking the Lobsters production.rb.

                                              1. 1

                                                Unfortunately, while the headline is clever, it’s not true.

                                                Palantir’s worst is done with code written in house, with the same open source codebase we all start with. So long as there are people willing to work there, bad things are going to be written into code and deployed.

                                                1. 15

                                                  One note, the specific company wasn’t Palantir, but was in a similar space.

                                                  I agree that not serving this company has a very small effect on them, but it was better than the alternative. Additionally, if enough companies refuse to work with companies like Palantir, it would begin to hinder their efforts.

                                                  1. 8

                                                    not serving this company has a very small effect on them

                                                     It has a big effect, instead. On the system. On their employees. On your employees and your customers…

                                                     Capitalism fosters a funny belief through its propaganda (aka marketing): that humans’ goals are always individualistic and social improvements always come from collective fights. This contradiction (deeply internalized, like many other dysfunctional ones) fools many people: why be righteous (whatever that means to me) if it doesn’t change anything for me?

                                                    It’s just a deception, designed to marginalize behaviours that could challenge consumerism.

                                                    But if you cannot say “No”, you are a slave. You proved to be not.

                                                    And freedom is always revolutionary, even without a network effect.

                                                    1. 1

                                                      Sounds like it was https://www.wired.com/story/palmer-luckey-anduril-border-wall/ ? Palantir at least has some positive clients, like the SEC and CDC.

                                                    2. 4

                                                      But….that wasn’t his moral question? He was being offered a chance to be a vendor of services to a palantir-like surveillance outfit engaged in ethnic cleansing, not offered a job with a workstation. So yeah, the headline was absolutely true. It is up to individuals to refuse, and by publicly refusing to engage, not necessarily internally, they will inspire others to not profit by these horrors.

                                                      1. 0

                                                        It wasn’t. But the quip implies that we can act like a village, when the sad truth is that the low barrier to entry in software development means we can’t really act like a village, and stop people with our skillset from putting vile stuff into code.

                                                        1. 3

                                                          yeah, not really understanding this from the original post. and for the record the low barrier to entry is absolutely not what is allowing people to put vile stuff in code. extremely talented, well educated, highly intelligent people do horrifying stuff every single day.

                                                          1. 1

                                                            This is the best attitude one can desire from slaves. Don’t question the masters. It’s pointless.

                                                            1. 1

                                                              We can act like a village, we just can’t act like the entire population. Choosing not to work at completely unethical places when we can afford it does at the very least increase the cost and decrease the quality of the evil labor. Things could even reach a point where the only people willing to work there are saboteurs.

                                                        1. 6

                                                          The main comment themes I found were:

                                                          • Error messages: still a problem
                                                          • Spec: promising, but how to adopt fully unclear
                                                          • Docs: still bad, but getting a little better
                                                          • Startup time: always been a problem, becoming more pressing with serverless computing
                                                          • Marketing/adoption: Clojure still perceived as niche/unknown by non-technical folk
                                                          • Language: some nice ideas for improvement
                                                          • Language Development Process: not changing, still an issue
                                                          • Community: mostly good, but elitist attitudes are a turnoff, and there is a growing perception CLJ is shrinking
                                                          • Libraries: more guidance needed on how to put them together
                                                          • Other targets: a little interest in targeting non JS/JVM targets
                                                          • Typing: less than in previous years, perhaps people are finding spec meets their needs?
                                                          • ClojureScript: improving fast, tooling still tricky, NPM integration still tricky
                                                          • Tooling: still hard to put all the pieces together
                                                          • Compliments: “Best. Language. Ever.”

                                                          Lots of room for improvement here, but I still love working with Clojure and am thankful that I get to do so.

                                                          1. 3

                                                            I’m running on Google Cloud Platform, but there’s enough similarities to AWS that hopefully this is helpful.

                                                             I use Packer to bake a golden VM image that includes monitoring, logging, etc. based on the most recent Ubuntu 16.04 update. I rebuild the golden image roughly monthly unless there is a security issue to patch. Then when I release new versions of the app I build an app-specific image based on the latest golden image. It copies in an Uberjar from Google Cloud Storage (built by Google Cloud Builder). All of the app images live in the same image family.
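
                                                             The bake step itself is just a couple of Packer commands; the template and variable names here are illustrative:

                                                               packer validate app-image.json
                                                               packer build -var "app_version=1.2.3" app-image.json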

                                                            I then run a rolling update to replace the current instances in the managed instance group with the new instances.

                                                            The whole infrastructure is managed with Terraform, but I only need to touch Terraform if I’m changing cluster configuration or other resources. Day to day updates don’t need to go through Terraform at all, although now that the GCP Terraform provider supports rolling updates, I may look at doing it with Terraform.

                                                            It’s just me for everything, so I’m responsible for it all.

                                                            1. 3

                                                              I just backed this project on Kickstarter. If it can be made to work like it promises, it would be a huge productivity boost for me on several projects. Currently with Deps, I bake an image with Packer and Ansible for every new deployment (based on a golden image). That has been getting a bit slow, so I was looking at other deployment options. Having super fast Ansible builds would be great, and make that not as necessary.

                                                              1. 2

                                                                 Hi Daniel, I keep forgetting to reply here – thanks so much for your support! For every neat complimentary comment I’ve been receiving 5 complex questions elsewhere. I’ve just posted a short update, and although it is running a little behind, it looks like the campaign still has legs. I’m certainly here until the final hour. :) Thanks again!

                                                              1. 3

                                                                 At work we’ve adopted ADRs - Architecture Decision Records. These are similar to RFCs but a little bit lighter-weight. We generally use them for any architectural decision we make which is likely to affect more than one person, which took a while to understand, or which will be impactful over a long time.

                                                                The great thing about them is that they’re structured to be able to be written in a stream of consciousness, to articulate the context (this is usually the most important thing), the decision, and its impact.
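
                                                                 For reference, the classic Nygard-style ADR skeleton looks roughly like this (the exact headings any given team uses may differ):

                                                                   # Title
                                                                   ## Status (proposed / accepted / superseded)
                                                                   ## Context
                                                                   ## Decision
                                                                   ## Consequences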

                                                                If we don’t have a decision immediately we can leave it open as a PR for discussion before finishing it.

                                                                1. 5

                                                                  I’m really pleased with the quality of projects that were submitted to Clojurists Together, and my only regret is that we couldn’t pick more of them. A huge thanks to our awesome members, we couldn’t do it without y’all.

                                                                  1. 26

                                                                    https://hackerone.com/reports/293359#activity-2203160 via https://twitter.com/infosec_au/status/945048806290321408 seems to at least shed a bit more light on things. I don’t find this kind of behavior to be OK at all:

                                                                    ”Oh my God.

                                                                    Are you seriously the Program Manager for Uber’s Security Division, with a 2013 psych degree and zero relevant industry experience other than technical recruiting?

                                                                    LULZ”

                                                                    1. 6

                                                                      The real impact with this vulnerability is the lack of rate limiting and/or IP address blacklisting for multiple successive failed authentication attempts, both issues of which were not mentioned within your summary dismissal of the report. Further, without exhaustive entropy analysis of the PRNG that feeds your token generation process, hand waving about 128 bits is meaningless if there are any discernible patterns that can be picked up in the PRNG.

                                                                      Hrm. He really wants to be paid for this?

                                                                      1. 3

                                                                        I mean, it’s a lot better than, say, promising a minimum of 500 for unlisted vulnerabilities and then repeatedly not paying it. Also, that’s not an unfair critique–if you’re a program manager in a field, I’d expect some relevant experience. Or, maybe, we should be more careful about handing out titles like program manager, project manager, product manager, etc. (a common issue outside of security!).

                                                                        At the core of it, it seems like the fellow dutifully tried to get some low-hanging fruit and was rebuffed, multiple times. This was compounded when the issues were closed as duplicate or known or unimportant or whatever…it’s impossible to tell the difference from the outside between a good actor saying “okay this is not something we care about” and a bad actor just wanting to save 500 bucks/save face.

                                                                        Like, the correct thing to have done would have been to say “Hey, thanks for reporting that, we’re not sure that that’s a priority concern right now but here’s some amount of money/free t-shirt/uber credits, please keep at it–trying looking .”

                                                                        The fact that the company was happy to accept the work product but wouldn’t compensate the person for what sounded like hours and hours of work is a very bad showing.

                                                                        1. 9

                                                                          Also, that’s not an unfair critique–if you’re a program manager in a field, I’d expect some relevant experience.

                                                                          No-one deserves to be talked to in that way, in any context, but especially not in a professional one.

                                                                          Or, maybe, we should be more careful about handing out titles like program manager, project manager, product manager, etc. (a common issue outside of security!).

                                                                          There is no evidence that the title was “handed out”, especially since we don’t even know what the job description is.

                                                                          1. 3
                                                                            1. open the hackerone thread
                                                                            2. open her profile to find her name
                                                                            3. look her up on linkedin

                                                                            I don’t presume to know what her job entails or whether or not she’s qualified, but titles should reflect reality or they lose their value. She certainly has a lot of endorsements on linkedin, which often carry more value than formal education.

                                                                            It’s “Program Manager, Security” btw.

                                                                            1. 2

                                                                              There is no evidence that the title was “handed out”, especially since we don’t even know what the job description is.

                                                                              There’s no evidence that it wasn’t–the point I’m making is that, due to practices elsewhere in industry, that title doesn’t really mean anything concrete.

                                                                        1. 11

                                                                          Hey @loige, nice writeup! I’ve been aching to asks a few questions to someone ‘in the know’ for a while, so here goes:

                                                                          How do serverless developers ensure their code performs to spec (local testing), handles anticipated load (stress testing) and degrades deterministically under adverse network conditions (Jepsen-style or chaos- testing)? How do you implement backpressure? Load shedding? What about logging? Configuration? Continuous Integration?

                                                                          All instances of applications written in a serverless style that I’ve come across so far (admittedly not too many) seemed to offer a Faustian bargain: “hello world” is super easy, but when stuff breaks, your only recourse is $BIGCO support. Additionally, your business is now non-trivially coupled to the $BIGCO and at the mercy of their decisions.

                                                                          Can anyone with production experience chime in on the above issues?

                                                                          1. 8

                                                                            Great questions!

                                                                            How do serverless developers ensure their code performs to spec (local testing)

                                                                            AWS e.g. provides a local implementation of Lambda for testing. Otherwise normal testing applies: abstract out business logic into testable units that don’t depend on the transport layer.
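
                                                                             For example, AWS’s SAM CLI can exercise a function locally against a canned event (the function and file names here are made up):

                                                                               sam local invoke MyHandler -e events/create-user.json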

                                                                            handles anticipated load (stress testing)

                                                                            Staging environment.

                                                                            and degrades deterministically under adverse network conditions (Jepsen-style or chaos- testing)?

                                                                            Trust Amazon / Microsoft / Google. Exporting this problem to your provider is one of the major value adds of serverless architecture.

                                                                            How do you implement backpressure? Load shedding?

                                                                            Providers usually have features for this, like rate limiting for different events. But it’s not turtles all the way down, eventually your code will touch a real datastore that can overload, and you have to detect and propagate that condition same as any other architecture.

                                                                            What about logging?

                                                                            Also a provider value add.

                                                                            Configuration?

                                                                            Providers have environment variables or something spiritually similar.

                                                                            Continuous Integration?

                                                                            Same as local testing, but automated?

                                                                            but when stuff breaks, your only recourse is $BIGCO support

                                                                            If their underlying infrastructure breaks, yep. But every architecture has this problem, it just depends on who your provider is. When your PaaS provider breaks, when your IaaS provider breaks, when your colo provider breaks, when your datacenter breaks, when your electrical provider blacks out, when your fuel provider misses a delivery, when your fuel mines have an accident. The only difference is how big the provider is, and how much money its customers pay it to not break. Serverless is at the bottom of the money food chain, if you want less problems then you take on more responsibility and spend the money to do it better than the provider for your use case, or use more than one provider.

                                                                            Additionally, your business is now non-trivially coupled to the $BIGCO and at the mercy of their decisions.

                                                                            Double-edged sword. You’ve non-trivially coupled to $BIGCO because you want them to make a lot of architectural decisions for you. So again, do it yourself, or use more than one provider.

                                                                            1. 4

                                                                              And great answers, thank you ;)

                                                                              Having skimmed the SAM Local doc, it looks like they took the same approach as they did with DynamoDB local. I think this alleviates a lot of the practical issues around integrated testing. DynamoDB Local is great, but it’s still impossible to toggle throttling errors and other adverse conditions to check how the system handles these, end-to-end.

                                                                               The staging-env and CI solution seems to be a natural extension of server-full development, fair enough. For stress testing specifically, though, it’s great to have full access to the SUT, and to be able to diagnose which components break (and why) as the load increases. That kind of access runs up against the opaque nature of the serverless substrate: you only get the metrics AWS/Google/etc. can provide you. I presume dtrace and friends are not welcome residents.

                                                                               If their underlying infrastructure breaks, yep. But every architecture has this problem, it just depends on who your provider is. When your PaaS provider breaks, when your IaaS provider breaks, when your colo provider breaks, when your datacenter breaks, (…)

                                                                              Well, there’s something to be said for being able to abstract away the service provider and just assume that there are simply nodes in a network. I want to know the ways in which a distributed system can fail – actually recreating the failing state is one way to find out and understand how the system behaves and what kind of countermeasures can be taken.

                                                                              if you want less problems then you take on more responsibility

                                                                              This is something of a pet peeve of mine. Because people delegate so much trust to cloud providers, individual engineers building software on top of these clouds are held to a lower and lower standard. If there is a hiccup, they can always blame “AWS issues”[1]. Rank-and-file developers won’t get asked why their software was not designed to gracefully handle these elusive “issues”. I think the learned word for this is the deskilling of the workforce.

                                                                              [1] The lack of transparency on the part of the cloud providers around minor issues doesn’t help.

                                                                              1. 3

                                                                                For stress testing specifically, though, it’s great to have full access to the SUT, and to be able to diagnose which components break (and why) as the load increases.

                                                                                It is great, and if you need it enough you’ll pay for it. If you won’t pay for it, you don’t need it, you just want it. If you can’t pay for it, and actually do need it, then that’s not a new problem either. Plenty of businesses fail because they don’t have enough money to pay for what they need.

                                                                                This is something of a pet peeve of mine. Because people delegate so much trust to cloud providers, individual engineers building software on top of these clouds are held to a lower and lower standard. If there is a hiccup, they can always blame “AWS issues”[1]. Rank-and-file developers won’t get asked why their software was not designed to gracefully handle these elusive “issues”

                                                                                I just meant to say you don’t have access to your provider’s infrastructure. But building more resilient systems takes more time, more skill, or both. In other words, money. Probably you’re right to a certain extent, but a lot of the time the money just isn’t there to build out that kind of resiliency. Businesses invest in however much resiliency will make them the most money for the cost.

                                                                                So when you see that happening, ask yourself “would the engineering cost required to prevent this hiccup provide more business value than spending the same amount of money elsewhere?”

                                                                            2. 4

                                                                              @pzel You’ve hit the nail on the head here. See this post on AWS Lambda Reserved Concurrency for some of the issues you still face with Serverless style applications.

                                                                              The Serverless architecture style makes a ton of sense for a lot of applications; however, there are lots of missing pieces operationally. Things like the Serverless framework fill in the gaps for some of these, but not all of them. In 5 years' time I'm sure a lot of these problems will have been solved, and questions of best practices will have some good answers, but right now it is very early.

                                                                              1. 1

                                                                                I agree with @danielcompton that serverless is still a pretty new practice in the market and that the ecosystem doesn't yet support every possible use case. It will get better with time, but having spent the last 2 years building enterprise serverless applications, I have to say the ecosystem is not all that immature: it can already be used today with some extra effort. I believe that in most cases the benefits (not having to worry much about the underlying infrastructure, not paying for idle, a sharper focus on business logic, high availability, and auto-scaling) far outweigh the extra effort needed to learn and use serverless today.

                                                                              2. 3

                                                                                Even though @peter already gave you some great answers, I will try to complement them with my personal experience/knowledge (I have used serverless on AWS for almost 2 years now building fairly complex enterprise apps).

                                                                                How do serverless developers ensure their code performs to spec (local testing)

                                                                                The way I do it is with a combination of the following practices (a minimal sketch of the unit-testing side follows the list):

                                                                                • unit testing
                                                                                • acceptance testing (with mocked services)
                                                                                • local testing (manual, mostly using the serverless framework's invoke local functionality, which is pretty much equivalent to SAM). Not everything can be tested locally, depending on which services you use.
                                                                                • remote testing environment (to test things that are hard to test locally)
                                                                                • CI pipeline with multiple environments (run automated and manual tests in QA before deploying to production)
                                                                                • smoke testing
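
                                                                                Here is a minimal sketch of the unit-testing practice from the list above, not the commenter's actual setup: the handler, the "orders" table, and the event shape are hypothetical, and the AWS client is stubbed out with unittest.mock so no real infrastructure is touched.

                                                                                ```python
                                                                                # Hypothetical Lambda handler plus a unit test with a mocked DynamoDB table.
                                                                                import unittest
                                                                                from unittest.mock import MagicMock, patch

                                                                                import boto3


                                                                                def handler(event, context):
                                                                                    """Store the incoming order and echo its id back (hypothetical logic)."""
                                                                                    table = boto3.resource("dynamodb").Table("orders")  # made-up table name
                                                                                    table.put_item(Item={"id": event["id"], "total": event["total"]})
                                                                                    return {"statusCode": 200, "body": event["id"]}


                                                                                class HandlerTest(unittest.TestCase):
                                                                                    # In a real project you would patch the handler's own module path instead.
                                                                                    @patch("__main__.boto3")
                                                                                    def test_stores_order_and_returns_id(self, mock_boto3):
                                                                                        mock_table = MagicMock()
                                                                                        mock_boto3.resource.return_value.Table.return_value = mock_table

                                                                                        result = handler({"id": "42", "total": 10}, context=None)

                                                                                        mock_table.put_item.assert_called_once_with(Item={"id": "42", "total": 10})
                                                                                        self.assertEqual(result["statusCode"], 200)


                                                                                if __name__ == "__main__":
                                                                                    unittest.main()
                                                                                ```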

                                                                                What about logging?

                                                                                In AWS you can use CloudWatch very easily. You can also integrate third parties like Loggly. I am sure other cloud providers have their own logging facilities.
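
                                                                                As a rough illustration (mine, not from the comment): inside a Lambda, anything written through Python's standard logging module, or to stdout/stderr, ends up in CloudWatch Logs automatically, so a minimal setup can look like this.

                                                                                ```python
                                                                                # Minimal logging sketch for a Python Lambda; output lands in CloudWatch Logs.
                                                                                import json
                                                                                import logging

                                                                                logger = logging.getLogger()
                                                                                logger.setLevel(logging.INFO)  # the Lambda runtime already attaches a handler


                                                                                def handler(event, context):
                                                                                    logger.info("received event with keys: %s", list(event.keys()))
                                                                                    try:
                                                                                        body = json.loads(event.get("body") or "{}")
                                                                                    except json.JSONDecodeError:
                                                                                        # logger.exception includes the traceback in the CloudWatch log stream
                                                                                        logger.exception("malformed request body")
                                                                                        return {"statusCode": 400}
                                                                                    logger.info("parsed body: %s", body)
                                                                                    return {"statusCode": 200}
                                                                                ```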

                                                                                Configuration?

                                                                                In AWS you can use Parameter Store to hold sensitive values and propagate them to your Lambda functions through environment variables. In terms of infrastructure as code (which you can include in the broad definition of “configuration”) you can adopt tools like Terraform or CloudFormation (the latter being the default choice of the serverless framework on AWS).
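
                                                                                A minimal sketch of that pattern, assuming boto3 and the SSM Parameter Store API; the parameter path and the DB_HOST variable are made up for illustration.

                                                                                ```python
                                                                                # Non-secret settings arrive as Lambda environment variables (e.g. set via the
                                                                                # serverless framework's `environment:` section); secrets live in Parameter Store.
                                                                                import os

                                                                                import boto3

                                                                                DB_HOST = os.environ.get("DB_HOST", "localhost")  # hypothetical plain setting

                                                                                ssm = boto3.client("ssm")


                                                                                def get_secret(name: str) -> str:
                                                                                    """Fetch and decrypt a SecureString parameter from SSM Parameter Store."""
                                                                                    response = ssm.get_parameter(Name=name, WithDecryption=True)
                                                                                    return response["Parameter"]["Value"]


                                                                                # Example usage with a made-up parameter path:
                                                                                # db_password = get_secret("/myapp/prod/db_password")
                                                                                ```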

                                                                                Continuous Integration?

                                                                                I have used serverless successfully with both Jenkins and CircleCI, but I guess almost any CI tool will do. You just need to wire your testing steps and your deployment strategy into a CI pipeline.

                                                                                when stuff breaks, your only recourse is $BIGCO support

                                                                                Sure. But it's fairly safe to say that your hand-rolled solution is more likely to break than the one provided by any major cloud provider. Also, those cloud providers very often provide refunds for outages caused by the provider's infrastructure (assuming you followed their best practices for high-availability setups).

                                                                                your business is now non-trivially coupled to the $BIGCO

                                                                                This is my favourite, as I have a very opinionated view on this matter. I simply believe it’s not possible to avoid vendor lock-in. Of course vendor lock-in comes in many shapes and forms and at different layers, but my point is that it’s fairly impractical to come up with an architecture so generic that it’s not affected by any kind of vendor lock-in. When you are using a cloud provider and a methodology like serverless, it’s totally true that you have very high vendor lock-in, as you will be using specific services (e.g. API Gateway, Lambda, DynamoDB, S3 in AWS) that are unique to that provider, and equivalent services will have very different interfaces with other providers. But I believe the question should be: is it more convenient/practical to accept the risk of vendor lock-in, rather than spending a decent amount of extra time and effort to come up with a more abstracted infrastructure/app that allows switching cloud providers if needed? In my experience, it’s very rarely a good idea to over-abstract solutions just to reduce vendor lock-in.

                                                                                I hope this can add another perspective to the discussion and enrich it a little bit. Feel free to ask more questions if you think my answer wasn’t sufficient here :)

                                                                                1. 6

                                                                                  This is my favourite, as I have a very opinionated view on this matter. I simply believe it’s not possible to avoid vendor lock-in. Of course vendor lock-in comes in many shapes and forms and at different layers, but my point is that it’s fairly impractical to come up with an architecture so generic that it’s not affected by any kind of vendor lock-in.

                                                                                  Really? I find it quite easy to avoid vendor lock-in - simply running open-source tools on a VPS or dedicated server almost completely eliminates it. Even if a tool you use is discontinued, you can still use it, and you have the option of maintaining it yourself. That’s not at all the case with AWS Lambda/etc. Is there some form of vendor lock-in I should be worried about here, or do you simply consider this an impractical architecture?

                                                                                  When you are using a cloud provider and a methodology like serverless, it’s totally true that you have very high vendor lock-in, as you will be using specific services (e.g. API Gateway, Lambda, DynamoDB, S3 in AWS) that are unique to that provider, and equivalent services will have very different interfaces with other providers. But I believe the question should be: is it more convenient/practical to accept the risk of vendor lock-in, rather than spending a decent amount of extra time and effort to come up with a more abstracted infrastructure/app that allows switching cloud providers if needed? In my experience, it’s very rarely a good idea to over-abstract solutions just to reduce vendor lock-in.

                                                                                  The thing about vendor lock-in is that there’s a small probability that you will pay an extremely high price (for example, the API/service you’re using being shut down). Even if it’s been amazing in all the cases you’ve used it in, it’s still entirely possible for the expected value of using these services to be negative, due to the possibility of vendor lock-in issues. Thus, I don’t buy that it’s worth the risk - you’re free to do your own risk/benefit calculations though :)

                                                                                  1. 1

                                                                                    I probably have to clarify that for me “vendor lock-in” is a very high-level concept that includes every sort of “tech lock-in” (which would probably be a better buzzword!).

                                                                                    My view is that even if you use open-source tech and host it yourself, you end up making a lot of complex tech decisions that will be difficult (and expensive!) to move away from.

                                                                                    Have you ever tried to migrate from Redis to Memcached (or vice versa)? Even though the two systems are quite similar and a migration might seem trivial, in a complex infrastructure moving from one to the other is still a fairly involved operation with a lot of implications (code changes, language-driver changes, different interfaces, data migration, provisioning changes, etc.).
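
                                                                                    To make that concrete, here is a minimal sketch (mine, not part of the original comment) assuming the redis-py and pymemcache client libraries: even the “same” get/set operation has different signatures, and some Redis calls have no Memcached counterpart at all.

                                                                                    ```python
                                                                                    # Side-by-side use of two similar-looking cache clients to show where a
                                                                                    # "trivial" migration starts touching code. Hosts/ports are placeholders.
                                                                                    import redis
                                                                                    from pymemcache.client.base import Client as MemcacheClient

                                                                                    r = redis.Redis(host="localhost", port=6379)
                                                                                    m = MemcacheClient(("localhost", 11211))

                                                                                    # Same logical operation, different keyword arguments for the TTL.
                                                                                    r.set("session:42", b"alice", ex=300)
                                                                                    m.set("session:42", b"alice", expire=300)

                                                                                    print(r.get("session:42"))  # b'alice'
                                                                                    print(m.get("session:42"))  # b'alice'

                                                                                    # No direct Memcached equivalent: the data model itself differs.
                                                                                    r.lpush("recent_logins", "alice")
                                                                                    r.expire("recent_logins", 3600)
                                                                                    ```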

                                                                                    Also, another thing I am very opinionated about is what’s valuable when developing a tech product (especially in a startup context). I believe delivering value to customers/stakeholders is the most important thing while building a product, so whatever abstraction makes it easier for the team to focus on business value deserves my attention. In that respect I found serverless to be a very good abstraction, so I am happy to accept some tradeoffs: less “tech freedom” (I have to stick to the solutions offered by my cloud provider) and higher vendor lock-in.

                                                                                  2. 2

                                                                                    I simply believe it’s not possible to avoid vendor lock-in.

                                                                                    Well, there is vendor lock-in and vendor lock-in… Ever heard of Oracle Forms?