1. 5

    Very nice, thank you. I am a long time Kinesis Advantage user, but I’m always interested in finding alternatives. I like the look of the ErgoDox, but I am having trouble with the idea of spending that sort of money without getting finger time in. I wonder if there’s anyone in the Toronto area with an ErgoDox who’d like to try a Kinesis. Hmmm.

    1. 1

      I’m also a long time Kinesis Advantage user. I’ve tried the ErgoDox and the Keyboard.io. Both are lovely keyboards, but neither has the bowl shape for the finger keys that the Advantage has. I found both of them less comfortable to use than the Advantage due to all the keys being on the same plane.

      On the ErgoDox, I found it hard to reach all of the thumb keys in the cluster. I have pretty normal-sized hands (I think??) but found it a little painful to reach the furthest away ones.

      On the Keyboard.io I liked the sculpted keys but found my fingers slipping off the thumb keys, or pressing the wrong one. Perhaps with time, I could have gotten used to it, but for me, the Advantage is already perfect. I’m not super interested in programmable keyboard layouts, so that side of things didn’t hold much appeal for me either.

      1. 1

        I switch between a Keyboard.io M01 (home) and a Kinesis Advantage 2 (work). If I had to use just one, I’m honestly not sure which I’d choose – both are great keyboards.

      2. 1

        Ooh, the Advantage! Touched it once in a Dvorak layout when I was young (read: seven years ago). Built like a tank: quality and sturdy.

        It has a thumb cluster as well, I thought.

        1. 2

          Yeah, the Advantage was the first keyboard I used with a thumb cluster, and it just makes so much sense. Also, the negative camber of the main keys is a crucial feature. If I could design a perfect keyboard for myself, it would basically be an Advantage with better function keys, in a wood case, split like the ErgoDox. And then it’d also have Super and Hyper keys, and we’d redesign the USB HID spec to have fewer golf controllers and more programmable modifier keys.

          1. 2

            The Dactyl looks pretty great and fits a lot of your criteria, but it’s DIY only at the moment.

      1. 2

        The one slightly surprising change to me was the removal of options from formulae in homebrew-core. The discussion isn’t linked in the post, but you can see more about it at https://github.com/Homebrew/homebrew-core/issues/31510. Note that you can still install from HEAD, though.

        1. 5

          Quite a few of the links in the post have URL encoded byte order marks in the URLs, breaking them, e.g. https://openconnect.netflix.com/en/software/%EF%BB%BF.
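
          For anyone curious where that junk comes from: the UTF-8 byte order mark is the three bytes EF BB BF, and percent-encoding them byte-by-byte produces exactly the %EF%BB%BF seen in the broken links. A quick illustrative check:

          ```shell
          # The UTF-8 BOM is the byte sequence EF BB BF; dump it as hex:
          printf '\357\273\277' | od -An -tx1 | tr -d ' \n'
          # -> efbbbf

          # Percent-encode each byte the way a naive URL serializer would:
          printf '\357\273\277' | od -An -tx1 | tr ' ' '\n' \
            | sed '/^$/d; s/^/%/' | tr -d '\n' | tr 'a-f' 'A-F'
          # -> %EF%BB%BF
          ```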

          1. 1

            For anyone using Clojure professionally: do a lot of JVM issues crop up? I mean, do Clojure devs regularly experience the same kinds of pains Java devs do?

            I’ve never worked with a JVM-based language except Java.

            1. 5

              I’ve been developing Clojure programs for quite a while and I wouldn’t say I have many problems with the JVM. There are some breaking changes when upgrading Java versions, but I wouldn’t say I get a lot of JVM issues. Clojure is a very flexible language, so you can easily express some kinds of things which would be awkward to express in Java.

              This is a good demonstration of how Clojure lets you explore Java APIs, which might be a good entry-point for your work?

              1. 4

                It was rock-solid until Java 9, when Oracle threw a lot of the compatibility guarantees out the window and started introducing breaking changes, none of which were actually beneficial to anyone using Clojure (except maaaaybe compact strings?). Now they’ve ramped the pace of breaking changes way up; non-LTS releases go unsupported before anyone has a chance to try out their successor; it’s kinda nuts.

              1. 3

                This is the general pattern that most of Apple’s recent security features follow: add new levels of security that limit freedom (in exchange for security) as the default, but keep the old methods around. See Gatekeeper, Boot Security, Sandboxing, Kernel extension approval.

                1. 2

                  This time though, it’s hardware level, and it’s hard to get the freedom (of using the internal SSD) back. You can access the SSD on Linux, but the machine shuts down in a few seconds if you try that.

                1. 4

                  A Turin turambar turún’ ambartanen. Another shell that isn’t shell; shells that aren’t shells aren’t worth using, because shell’s value is its ubiquity. Still, interesting ideas.

                  This brought to you with no small apology to Tolkien.

                  1. 13

                    I’ve used the Fish shell daily for 3-4 years and find it very much worth using, even though it isn’t POSIX compatible. I think there’s great value in alternative shells, even if you’re limited in copy/pasting shell snippets.

                    1. 12

                      So it really depends on the nature of your work. If you’re an individual contributor, NEVER have to do devops type work or actually operate a production service, you can absolutely roll this way and enjoy your highly customized awesomely powerful alternative shell experience.

                      However, if you’re like me, and work in environments where being able to execute standardized runbooks is absolutely critical to getting the job done, running anything but bash is buying yourself a fairly steady diet of thankless, grinding, and ultimately pointless pain.

                      I’ve thought about running an alternative shell at home on my systems that are totally unconnected with work, but the cognitive dissonance of using anything other than bash keeps me from going that way even though I’d love to be using Xonsh by the amazing Anthony Scopatz :)

                      1. 5

                        I’d definitely say so – I’d probably use something else if I were an IC – and ICs should! ICs should be in the habit of trying lots of things, even stuff they don’t necessarily like.

                        I’m a big proponent of Design for Manufacturing, an idea I borrow from the widgety world of making actual things. The idea, as defined by an MFE I know, is that one should build things such that: “The design lends itself to being easily reproduced identically in a reliable, cost-effective manner.”

                        For a delivery-ops guy like me, working in the tightly regulated, safety-critical world of healthcare, having reproducible, reliable architecture that’s cheap to replace and repair is critical. Adding a new shell doesn’t move the needle towards reproducibility, so its value has to come from reliability or cheapness, and once you add the fact that most architectures are not totally homogeneous, the cost goes up even more.

                        That’s the hill new shells have to climb: they have to get over ‘sh is just easier to use, it’s already there.’ That’s a very big hill.

                        1. 2

                          “The design lends itself to being easily reproduced identically in a reliable, cost-effective manner.”

                          “That’s the hill new shells have to climb.”

                          Or, like with the similar problem posed by C compilers, they just provide a method to extract to whatever the legacy shell is for widespread, standard usage.

                          EDIT: Just read comment by @ac which suggested same thing. He beat me to it. :)

                          1. 2

                            I’ve pondered transpilers a bit before. For me personally, I’ve learned enough shell that it doesn’t really provide much benefit, but I like that idea a lot more than a distinct, non-compatible shell.

                            I very much prefer a two-way transpiler. Let me make my old code into new code, so I can run the new code everywhere and convert my existing stuff to the new thing, and let me go back to old code for the machines where I can’t afford to figure out how to get new thing working. That’s a really big ask though.

                            The way we solve this at $work is basically by writing lots of very small amounts of shell, orchestrated by another tool (ansible and Ansible Tower, in our case). This covers about 90% of the infrastructure, with the remaining bits being so old and crufty (and so resource-poor from an organization perspective) that bugs are often tolerated rather than fixed.

                        2. 4

                          The counter to alternative shells sounds more like a reason to develop and use alternative shells that coexist with a standard shell. Maybe even with some state synchronized so your playbooks don’t cause effects the preferred shell can’t see and vice versa. I think a shell like newlisp supporting a powerful language with metaprogramming sounds way better than bash. Likewise, one that supports automated checking that it’s working correctly in isolation and/or how it uses the environment. Also maybe something on isolation for security, high availability, or extraction to C for optimization.

                          There’s lots of possibilities. Needing to use stuff in a standard shell shouldn’t stop them. So, they should replace the standard shell somehow in a way that still lets it be used. I’m a GUI guy who’s been away from shell scripting for a long time, so I can’t say if people can do this easily, already are, or whatever. I’m sure experts here can weigh in on that.

                        3. 7

                          I work primarily in devops/application architecture – having alternative shells is just a big ol’ no – tbh I’m trying to wean myself off bash 4 and onto pure sh, because I have to deal with some pretty old machines for some of our legacy products. Alternative shells are cool, but don’t scale well. They also present increased attack surface for potential hackers to privesc through.
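
                          As a concrete (and hypothetical) example of the kind of bash-4-ism that has to go when targeting pure sh, consider case conversion:

                          ```shell
                          var=HELLO

                          # bash 4+ has case-converting parameter expansion:
                          #   echo "${var,,}"    # lowercase -- but a syntax error in plain sh
                          # The portable sh spelling spawns tr instead:
                          echo "$var" | tr 'A-Z' 'a-z'
                          # -> hello
                          ```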

                          I’m also an odd case, I think shell is a pretty okay language, wart-y, sure, but not as bad as people make it out to be. It’s nice having a tool that I can rely on being everywhere.

                          1. 13

                            I work primarily in devops/application architecture

                            Alternative shells are cool, but don’t scale well.

                            Non-ubiquitous shells are a little harder to scale, but the cost should be controllable. It depends on what kind of devops you are doing:

                            • If you are dealing with a limited number of machines (machines whose names you probably pick yourself), you can simply install Elvish on each of those machines. The website offers static binaries ready to download, and Elvish is packaged in a lot of Linux distributions. It is going to be a very small part of the process of provisioning a new machine.

                            • If you are managing some kind of cluster, then you should already be doing most devops work via some kind of cluster management system (e.g. Kubernetes), instead of ssh’ing directly into the cluster nodes. Most of your job involves calling into some API of the cluster manager, from your local workstation. In this case, the number of Elvish instances you need to install is one: that on your workstation.

                            • If you are running some script in a cluster, then again, your cluster management system should already have a way of pulling in external dependencies - for instance, a Python installation to run Python apps. Elvish has static binaries, which is the easiest kind of external dependency to deal with.

                            Of course, these are ideal scenarios - maybe you are managing a cluster but it is painful to teach whatever cluster management system to pull in just a single static binary, or you are managing some old machines with an obscure CPU architecture that Elvish doesn’t even cross-compile to. However, those difficulties are by no means absolute, and when the benefit of using Elvish (or any other alternative shell) far outweighs the overheads, large-scale adoption is possible.

                            Remember that bash – or every shell other than the original Bourne shell – also started out as an “alternative shell”, and it still hasn’t reached 100% adoption, but that doesn’t prevent people from using it on their workstation, servers, or whatever computer they work with.

                            1. 4

                              All good points. I operate on a couple different architectures at various scales (all relatively small, Xe3 or so). Most of the shell I write is traditional, POSIX-only Bourne shell, and that’s simply because it’s everywhere without any issue. I could certainly install fish or whatever, or even a standardized version of bash, but it’s an added dependency that only provides moderate convenience at the cost of another ansible script to maintain and increased attack surface.

                              The other issue is that the ~1000 servers or so have very little in common with each other. About 300 of them support one application (that’s the biggest chunk): 4 environments of ~75 machines each, all more or less identical.

                              The other 700 are a mishmash of versions of different distros, different OSes, different everything; that’s where /bin/sh comes in handy. These are all legacy applications, none of them get any money for new work, they’re all total maintenance mode, and any time I spend on them is basically time lost from the business perspective. I definitely don’t want to knock alternative shells as a tool for an individual contributor, but it’s ultimately a much simpler problem for me to say “I’m just going to write sh” than “I’m going to install elvish across a gagillion arches and hope I don’t break anything”.

                              We drive most cross-cutting work with ansible (that Xe3 is all vms, basically – not quite all, but like 98%), bash really comes in as a tool for debugging more than managing/maintaining. If there is an issue across the infra – say like meltdown/spectre, and I want to see what hosts are vulnerable, it’s really fast for me (and I have to emphasize – for me – I’ve been writing shell for a lot of years, so that tweaks things a lot) to whip up a shell script that’ll send a ping to Prometheus with a 1 or 0 as to whether it’s vulnerable, deploy that across the infra with ansible and set a cronjob to run it. If I wanted to do that with elvish or w/e, I’d need to get that installed on that heterogenous architecture, most of which my boss looks at as ‘why isn’t Joe working on something that makes us money.’
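
                              A sketch of that kind of one-shot check (the stand-in file path and the pushgateway address below are invented for illustration, not taken from the actual setup):

                              ```shell
                              # Print 1 if a kernel vulnerability report says "Vulnerable", else 0.
                              check_vuln() {
                                grep -q '^Vulnerable' "$1" 2>/dev/null && echo 1 || echo 0
                              }

                              # Stand-in for /sys/devices/system/cpu/vulnerabilities/meltdown:
                              echo 'Vulnerable' > /tmp/fake_meltdown
                              check_vuln /tmp/fake_meltdown
                              # -> 1

                              # Shipping the result off (hypothetical pushgateway address):
                              # echo "meltdown_vulnerable $(check_vuln /tmp/fake_meltdown)" \
                              #   | curl --data-binary @- http://pushgateway.example:9091/metrics/job/vulncheck
                              ```

                              Deployed via ansible and run from cron, each host phones home a single 1 or 0.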

                              I definitely wouldn’t mind a better sh becoming the norm, and I don’t want to knock elvish, but from my perspective that ship has sailed: sh is ubiquitous, bash is functionally ubiquitous, and trying to get other stuff working is just a time sink. In 10 years, if elvish or fish or whatever is the most common thing, I’ll probably use that.

                              1. 1

                                The other 700 are a mish mash of versions of different distros, different OSes, different everything, that’s where /bin/sh comes in handy.

                                So, essentially, whatever alternative is built needs to use cross-platform design or techniques to run on about anything. Maybe using cross-platform libraries that facilitate that. That or extraction in my other comment should address this problem, eh?

                                Far as debugging, alternative shells would bring both a cost and potential benefits. The cost is unfamiliarity might make you less productive since it doesn’t leverage your long experience with existing shell. The potential benefits are features that make debugging a lot easier. They could even outweigh cost depending on how much time they save you. Learning cost might also be minimized if the new shell is based on a language you already know. Maybe actually uses it or a subset of it that’s still better than bash.

                            2. 6

                              My only real beef with bash is its array syntax. Other than that, it’s pretty amazing actually, especially as compared with pre-bash Bourne shells.
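
                              For readers who haven’t hit it, the array syntax in question looks like this (a bash-ism, hence the explicit bash -c):

                              ```shell
                              bash -c '
                                arr=(one two three)   # declare an array
                                echo "${arr[1]}"      # zero-indexed: prints "two"
                                echo "${#arr[@]}"     # length: prints "3"
                                echo "${arr[@]:1}"    # slice from index 1: prints "two three"
                              '
                              ```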

                              1. 4

                                Would you use a better language that compiles to sh?

                                1. 1

                                  Eh, maybe? Depends on your definition of ‘better.’ I don’t think bash or pure sh are all that bad, but I’ve also been using them for a very long time as a daily driver (I write more shell scripts than virtually anything else; ansible is maybe a close second), so I’m definitely not the target audience.

                                  I could see that if I wanted to do a bunch of math, I might need to use something else, but if I’m going to use something else, I’m probably jumping to a whole other language. Shell is in a weird place: if the complexity is high enough to need a transpiler, it’s probably high enough to warrant writing something else and installing dependencies.

                                  I could see a transpiler being interesting for raising that ceiling, but I don’t know how much value it’d bring.

                            3. 10

                              Could not disagree more. POSIX shell is unpleasant to work with and crufty; my shell scripting went through the roof when I realized that: nearly every script I write is designed to be launched by myself; shebangs are a thing; therefore, the specific language that an executable file is written in is very, very often immaterial. I write all my shell scripts in es and I use them everywhere. Almost nothing in my system cares because they’re executable files with the path to their interpreter baked in.
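
                              The shebang point is easy to demonstrate, and nothing about it is specific to es (the path below is just an example):

                              ```shell
                              # The kernel reads the first line to pick the interpreter, so callers
                              # never need to know what language the script is written in.
                              printf '#!/bin/sh\necho "hello from $0"\n' > /tmp/hello_script
                              chmod +x /tmp/hello_script
                              /tmp/hello_script
                              # -> hello from /tmp/hello_script
                              ```

                              Swap the first line for #!/usr/bin/env es (or anything else on $PATH) and nothing at the call site changes.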

                              I am really pleased to see alternative non-POSIX shells popping up. In my experience and I suspect the experience of many, the bulk of the sort of scripting that can make someone’s everyday usage smoother need not look anything like bash.

                              1. 5

                                Truth; limiting yourself to POSIX sh is a sure way to write terribly verbose and slow scripts. I’d rather put everything into a “POSIX awk” that generates shell code for eval when necessary than ever be forced to write semi-complex pure sh scripts.

                                bash is a godsend for so many reasons, one of the biggest being the process substitution feature.
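
                                Process substitution, for reference (a bash-ism, hence the explicit bash -c; the file names are invented):

                                ```shell
                                printf 'b\na\n' > /tmp/ps_one.txt
                                printf 'a\nb\n' > /tmp/ps_two.txt

                                # <(cmd) runs cmd and substitutes a /dev/fd path to its output,
                                # so two command outputs can be compared without temp files:
                                bash -c 'diff <(sort /tmp/ps_one.txt) <(sort /tmp/ps_two.txt)' \
                                  && echo "identical after sorting"
                                # -> identical after sorting
                                ```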

                                1. 1

                                  For my part, I agree – I try to generally write “Mostly sh compatible bash” – defaulting to sh-compatible stuff until performance or maintainability warrant using the other thing. Most of the time this works.

                                  The other mitigation is that I write lots of very small scripts and really push the worse-is-better / lots of small tools approach. Lots of the scripting pain can be mitigated by progressively combining small scripts that abstract over all the details and just do a simple, logical thing.

                                  One of the other things we do to mitigate the slowness problem is to design for asynchrony – almost all of the scripts I write are not time-sensitive and run as crons or ats or whatever. We kick ‘em out to the servers and wait the X hours/days/whatever for them to all phone home w/ data about what they did, work on other stuff in the meantime. It really makes it more comfortable to be sh compatible if you can just build things in a way such that you don’t care if it takes a long time.

                                  All that said, most of my job has been “How do we get rid of the pile of ancient servers over there and get our asses to a disposable infrastructure?”, where I can just expect bash 4+ to be available and not have to worry about sh compatibility.

                                2. 1

                                  A fair cop; I work on a pretty heterogeneous group of machines, and /bin/sh works consistently on all of them: AIX, IRIX, BSD, Linux, all basically the same.

                                  Despite our (perfectly reasonable) disagreement, I am also generally happy to see new shells pop up. I think they have a nearly impossible task of ousting sh and bash, but it’s still nice to see people playing in my backyard.

                                3. 6

                                  I don’t think you can disqualify a shell just because it’s not POSIX (or “the same”, or whatever your definition of “shell” is). The shell is a tool, and like all tools, its value depends on the nature of your work and how you decide to use it.

                                  I’ve been using Elvish for more than a year now. I don’t directly manage large numbers of systems by logging into them, but I do interact quite a bit with services through their APIs. Elvish’s native support for complex data structures, and the built-in ability to convert to/from JSON, makes it extremely easy to interact with them, and has allowed me to build very powerful toolkits for doing my work. Having a proper programming language in the shell is very handy for me.

                                  Also, Elvish’s interactive experience is very customizable and friendly. Not much that you cannot do with bash or zsh, but much cleaner/easier to set up.

                                  1. 4

                                    I’ve replied a bunch elsewhere; I don’t mean to necessarily disqualify the work – it definitely looks interesting for an individual contributor somewhere who doesn’t have to manage tools at scale. It’s when you have to manage at scale, or interact with tools that don’t speak the JSON-y API it offers, etc., that it starts to get tricky.

                                    I said elsewhere in thread, “That’s [the ubiquity of sh-alikes] the hill new shells have to climb, they have to get over ‘sh is just easier to use, it’s already there.’ That’s a very big hill.”

                                    I’d be much more interested if elvish were a superset of sh or bash. I think that part of the reason bash managed to work was that sh was embedded underneath; it was a drop-in replacement. If you’re a guy who, like me, uses a lot of shell to interact with systems, adding new features to that set is valuable; removing old ones is devastating. I’m really disqualifying it (as much as I am) on that ground: not just that it’s not POSIX, but that it is less-than-POSIX with the same functionality. That keeps it out of my realm.

                                    Now this may be biased, but I think I’m the target audience in terms of adoption – you convince a guy like me that your shell is worth it, and I’m going to go drop it on my big pile of servers wherever I’m working. Convincing ICs who deal with their one machine gets you enough adoption to be a curiosity; convince a DevOps/Delivery guy and you get shoved out to every new machine I make, and suddenly you’ve got a lot of footprint that someone is going to have to deal with long after I’m gone and onto Johnny Appleshelling the thing at whatever poor schmuck hires me next.

                                    Here’s what I’d really like to see: a shell that offers some of these JSON features as an alternative pipe (maybe ||| is the operator, IDK), adds some better number-crunching support, and maybe some OO features, all while remaining a superset of POSIX. That’d make the cost of using it very low, which would make it easy to justify adding to my VM building scripts. It’d make the value very high (not having to dip out to another tool to do some basic math would be fucking sweet); having OO features so I could operate on real ‘shell objects’ and JSON to do easier IO would be really nice as well. Ultimately, though, you’re fighting uphill against a lot of adoption and a lot of known solutions to these problems (there are patterns for writing shell to be OOish, there’s awk for output processing; these are things which are unpleasant to learn, but once you do, the problem JSON solves drops to a pretty low priority).

                                    I’m really not trying to dismiss the work. Fixing POSIX shell is good work, it’s just not likely to be successful by replacing. Improving (like bash did) is a much better route, IMO.

                                  2. 2

                                    I’d say you’re half right. You’ll always need to use sh, or maybe bash; they’re unlikely to disappear anytime soon. However, why limit yourself to just sh when you’re working on your local machine? You could even take it a step further and ask why you’re using curl locally when you could use something like HTTPie instead, or any of the other “alternative tools” that make things easier but are hard to justify installing everywhere. Just because a tool is ubiquitous does not mean it’s actually good; it just means that it’s good enough.

                                    I personally enjoy using Elvish on my local machines; it makes me faster and more efficient at getting things done. When I have to log into a remote system I’m forced to use bash, which is fine and totally functional, but there are a lot of stupid things that I hate. For the most ridiculous and trivial example, bash doesn’t actually save its history until the user logs out, unlike Elvish (or even IPython), which saves it after each input. While it’s a really minor thing, it’s really, really, really useful when you’re testing low-level hardware things that might force an unexpected reboot or power cycle on a server.
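
                                    For what it’s worth, bash can be coaxed into flushing history the same way; this is a common ~/.bashrc workaround (a config fragment, nothing Elvish-specific):

                                    ```shell
                                    # ~/.bashrc: write history after every command instead of at logout
                                    shopt -s histappend              # append to $HISTFILE, don't overwrite it
                                    PROMPT_COMMAND='history -a'      # flush new entries before each prompt
                                    ```

                                    It doesn’t fix the underlying annoyance, but it does mean an unexpected power cycle only loses the current line.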

                                    I can’t fault you if you want to stay POSIX, that’s a personal choice, but I don’t think it’s fair to write off something new just because there’s something old that works. With that mindset we’d still be smashing two rocks together and painting on cave walls.

                                  1. 3

                                    Just want to point out (since I couldn’t find it on the site at first) “the license is in development” while it’s in beta, but the plan is not to make this free software. AFAICT the community edition will be Creative Commons BY-NC-SA. So that’s a bummer.

                                    1. 1

                                      What exactly does it mean that the community edition is BY-NC-SA? That you have to attribute and publicly share any code that you write with Alan?

                                      1. 1

                                          The BY bit means you have to provide attribution if you make a derivative work. The SA means that derivative works have to be under the same license, similar to the GPL. NC means that commercial use is forbidden (the NC stands for non-commercial), which violates freedom 0 and makes this not free software. They don’t specify what version they’d use, but presumably it would be the latest, which would make https://creativecommons.org/licenses/by-nc-sa/4.0/ the license in question.

                                        1. 1

                                          What does derivative work mean here? An app you build using their framework, or just making changes to the framework itself?

                                          1. 1

                                            I don’t know. There may be an answer out there, but Creative Commons is not designed for software. If we were discussing, say, the GPL, there might be a clearer answer. I have a very hazy guess, but it’s based on no research and overall I’m so uninformed about your question that I don’t want to speculate in public :P

                                            I don’t know how prevalent software using CC licenses is, but it’s possible no one would know and we’d have to wait for a court to decide.

                                    1. 1

                                        I’ve really been enjoying this podcast. I’ve been following along since it started, and it’s helped me pick up some tips for my Emacs dabbling.

                                      1. 1

                                        I’m honestly quite surprised that experienced Emacs users listen to my show :)

                                        Thank you.

                                      1. 4

                                          A friend and I dug up most of our gravel driveway with a tractor so that we can put grass down instead. There’s about 60 m² of unnecessary gravel driveway that we’re going to use to extend the kids’ grassy play area. When it’s done it’ll be great, but it’s looking pretty messy half-way through.

                                        1. 4

                                            Microsoft’s perceived lack of clarity in the roadmap (.NET Standard, .NET Core, .NET Framework, etc.) and history of killing off or deprecating frameworks (Silverlight, WinForms, should I use WPF or UWP?) are a couple more reasons why startups don’t turn to .NET. Add what others have mentioned – the closed source and history of high cost, the lack of ecosystem, and the long history of being actively against open source and copyleft licenses – and Microsoft just doesn’t look like a startup choice. Microsoft was also relatively late as a cloud computing choice. Maybe something will emerge from their BizSpark program and their open source efforts to change their perceived position.

                                          I didn’t include PHP because there were a lot of startups that had nothing but PHP and Apache Server. That’s partly why I looked at 100 startups and ended up with 23. Startups with just PHP are probably e-commerce websites or non-software at all.

                                            I wonder if it’s reasonable to exclude PHP? I could see the point of excluding it because there’s a WordPress blog hanging off the domain or if, as the author states, an e-commerce startup kicked things off with Magento or the like. On the other hand, is PHP just being excluded because, well, PHP?

                                          1. 3

                                            I read it as the author saying that they couldn’t distinguish between shops using PHP for a webshop/CMS and doing new software development with it, so it was excluded from the analysis.

                                          1. 3

                                            This is great for debugging, but I’m not so keen to see it in production on user machines. From the gist it sounds like it would work on prod code.

                                            save user recordings when an exception is fired

                                              I’m not keen on surveillance of other users’ browsing sessions, especially without their consent. Building this kind of feature into a browser normalises it, especially if it’s not something that the user has to opt in to.

                                            1. 3

                                                No mention of App Engine?

                                              1. 2

                                                Sorry, I should have mentioned that I only reviewed services I used. Due to load balancer upload limits on App Engine I wasn’t able to use App Engine as the application server so I didn’t look into it too deeply. It definitely looks good though.

                                                1. 4

                                                  If you can make your use-case fit into AppEngine’s constrained data & runtime model then it is absolute nirvana. If you can’t then you’re stuck using something else.

                                              1. 3

                                                It’s probably way out of the intended scope, but could Mitogen be used for basic or throwaway parallel programming or analytics? I’m imagining a scenario where a data scientist has a dataset that’s too big for their local machine to process in a reasonable time. They’re working in a Jupyter notebook, using Python already. They spin up some Amazon boxes, each of which pulls the data down from S3. Then, using Mitogen, they’re able to push out a Python function to all these boxes, and gather the results back (or perhaps uploaded to S3 when the function finishes).
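
                                                The shape of that workflow (scatter a function out to a pool of workers, then gather the results back) can be sketched with just the stdlib. To be clear, this is an analogue rather than Mitogen itself: a thread pool stands in for the SSH-connected boxes, and `analyze` and the chunking scheme are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(chunk):
    # Stand-in for the real per-box computation.
    return sum(chunk)

def scatter_gather(data, n_workers=4):
    # Split the dataset into one chunk per "box", fan the
    # function out, and gather per-chunk results back in order.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(analyze, chunks))
```

                                                With Mitogen the pool would instead be a set of SSH-connected contexts, and serialization of the results back to the notebook becomes the hard part for large arrays.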

                                                1. 3

                                                  It’s not /that/ far removed. Some current choices would make processing a little more restrictive than usual, and the library core can’t manage much more than 80MB/sec throughput just now, limiting its usefulness for data-heavy IO, such as large result aggregation.

                                                  I imagine a tool like you’re describing with a nice interface could easily be built on top, or maybe as a higher level module as part of the library. But I suspect right now the internal APIs are just a little too hairy and/or restrictive to plug into something like Jupyter – for example, it would have to implement its own serialization for Numpy arrays, and for very large arrays, there is no primitive in the library (yet, but soon!) to allow easy streaming of serialized chunks – either write your own streaming code or double your RAM usage, etc.

                                                  Interesting idea, and definitely not lost on me! The “infrastructure” label was primarily there to allow me to get the library up to a useful point – i.e. permits me to say “no” to myself a lot when I spot some itch I’d like to scratch :)

                                                  1. 3

                                                    This might work, though I think you’d be limited to pure python code. On the initial post describing it:

                                                    Mitogen’s goal is straightforward: make it child’s play to run Python code on remote machines, eventually regardless of connection method, without being forced to leave the rich and error-resistant joy that is a pure-Python environment.

                                                    1. 1

                                                      If it’s just simple functions you’re running, you could probably use pySpark in a straightforward way to go distributed (although Spark can handle much more complicated use-cases as well).

                                                      1. 2

                                                        That’s an interesting option, but presumably requires you to have Spark set up first. I’m thinking of something a bit more ad-hoc and throwaway than that :)

                                                        1. 1

                                                          I was thinking that if you’re spinning up AWS instances automatically, you could probably also have a Spark cluster set up alongside them, and with that you get the benefit that you neither have to worry much about memory management and function parallelization nor about recovery in case of instance failure. The performance side of pySpark (mainly Python object serialization/memory management) is also being actively worked on through pandas/pyArrow.

                                                          1. 2

                                                            Yeah that’s a fair point. In fact there’s probably an AMI pre-built for this already, and a decent number of data-science people would probably be working with Spark to begin with.

                                                    1. 7

                                                      This version takes Clojars’ playbook runtimes from 16 minutes to 1 minute 30 seconds. It is my favourite piece of software in recent years. Highly recommended if you use Ansible.
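
                                                      For anyone who wants to try it, enabling Mitogen for Ansible is a two-line ansible.cfg change (the plugin path below is a placeholder for wherever you extracted the release):

```ini
[defaults]
strategy_plugins = /path/to/mitogen/ansible_mitogen/plugins/strategy
strategy = mitogen_linear
```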

                                                      1. 4

                                                        Adding/testing support for Clojure’s tools.deps CLI for Deps. Theoretically there shouldn’t be much needed if anything but I need to document it for customers, and will probably write a guide for how to use it in CI.

                                                        I’m also accumulating instructions for enough different build tools that I need to add tabs or some other information hiding mechanism on the setup page.

                                                        1. 3

                                                          I like the Moderation Log for this post:

                                                          Story: Rails Asset Pipeline Directory Traversal Vulnerability (CVE-2018-3760)
                                                          Action: changed tags from “ruby” to “ruby security web”
                                                          Reason: Adding a couple tags… after checking the Lobsters production.rb.

                                                          1. 1

                                                            Unfortunately, while the headline is clever, it’s not true.

                                                            Palantir’s worst is done with code written in house, with the same open source codebase we all start with. So long as there are people willing to work there, bad things are going to be written into code and deployed.

                                                            1. 15

                                                              One note, the specific company wasn’t Palantir, but was in a similar space.

                                                              I agree that not serving this company has a very small effect on them, but it was better than the alternative. Additionally, if enough companies refuse to work with companies like Palantir, it would begin to hinder their efforts.

                                                              1. 8

                                                                not serving this company has a very small effect on them

                                                                It has a big effect, instead. On the system. On their employees. On your employees and your customers…

                                                                Capitalism fosters a funny belief through its propaganda (aka marketing): that humans’ goals are always individualistic and social improvements always come from collective fights. This contradiction (deeply internalized, like many other dysfunctional ones) fools many people: why be righteous (whatever that means to me) if it doesn’t change anything for me?

                                                                It’s just a deception, designed to marginalize behaviours that could challenge consumerism.

                                                                But if you cannot say “No”, you are a slave. You proved you are not.

                                                                And freedom is always revolutionary, even without a network effect.

                                                                1. 1

                                                                  Sounds like it was https://www.wired.com/story/palmer-luckey-anduril-border-wall/ ? Palantir at least has some positive clients, like the SEC and CDC.

                                                                2. 4

                                                                  But… that wasn’t his moral question? He was being offered a chance to be a vendor of services to a Palantir-like surveillance outfit engaged in ethnic cleansing, not offered a job with a workstation. So yeah, the headline was absolutely true. It is up to individuals to refuse, and by publicly refusing to engage, not merely declining internally, they will inspire others not to profit from these horrors.

                                                                  1. 0

                                                                    It wasn’t. But the quip implies that we can act like a village, when the sad truth is that the low barrier to entry in software development means we can’t really act like a village, and stop people with our skillset from putting vile stuff into code.

                                                                    1. 3

                                                                      yeah, I don’t really understand this from the original post. and for the record, the low barrier to entry is absolutely not what is allowing people to put vile stuff in code. extremely talented, well-educated, highly intelligent people do horrifying stuff every single day.

                                                                      1. 1

                                                                        This is the best attitude one can desire from slaves. Don’t question the masters. It’s pointless.

                                                                        1. 1

                                                                          We can act like a village, we just can’t act like the entire population. Choosing not to work at completely unethical places when we can afford it does at the very least increase the cost and decrease the quality of the evil labor. Things could even reach a point where the only people willing to work there are saboteurs.

                                                                    1. 6

                                                                      The main comment themes I found were:

                                                                      • Error messages: still a problem
                                                                      • Spec: promising, but how to adopt fully unclear
                                                                      • Docs: still bad, but getting a little better
                                                                      • Startup time: always been a problem, becoming more pressing with serverless computing
                                                                      • Marketing/adoption: Clojure still perceived as niche/unknown by non-technical folk
                                                                      • Language: some nice ideas for improvement
                                                                      • Language Development Process: not changing, still an issue
                                                                      • Community: mostly good, but elitist attitudes are a turnoff, and there is a growing perception CLJ is shrinking
                                                                      • Libraries: more guidance needed on how to put them together
                                                                      • Other targets: a little interest in targeting non JS/JVM targets
                                                                      • Typing: less than in previous years, perhaps people are finding spec meets their needs?
                                                                      • ClojureScript: improving fast, tooling still tricky, NPM integration still tricky
                                                                      • Tooling: still hard to put all the pieces together
                                                                      • Compliments: “Best. Language. Ever.”

                                                                      Lots of room for improvement here, but I still love working with Clojure and am thankful that I get to do so.

                                                                      1. 3

                                                                        I’m running on Google Cloud Platform, but there’s enough similarities to AWS that hopefully this is helpful.

                                                                        I use Packer to bake a golden VM image that includes monitoring, logging, etc., based on the most recent Ubuntu 16.04 update. I rebuild the golden image roughly monthly unless there is a security issue to patch. Then when I release new versions of the app I build an app-specific image based on the latest golden image. It copies in an uberjar from Google Cloud Storage (built by Google Cloud Builder). All of the app images live in the same image family.
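
                                                                        As a rough sketch, the golden-image bake looks something like this in a Packer template (the project ID, zone, and image family names here are made up, and the real provisioner would also install the monitoring and logging agents):

```json
{
  "builders": [{
    "type": "googlecompute",
    "project_id": "my-project",
    "source_image_family": "ubuntu-1604-lts",
    "zone": "us-central1-a",
    "image_family": "golden-base",
    "image_name": "golden-base-{{timestamp}}",
    "ssh_username": "packer"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get -y upgrade"
    ]
  }]
}
```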

                                                                        I then run a rolling update to replace the current instances in the managed instance group with the new instances.
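
                                                                        On GCP that rolling replacement is a single command against the managed instance group (the group, template, and zone names below are placeholders):

```shell
gcloud compute instance-groups managed rolling-action start-update my-app-group \
    --version=template=my-app-template-v2 \
    --zone=us-central1-a
```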

                                                                        The whole infrastructure is managed with Terraform, but I only need to touch Terraform if I’m changing cluster configuration or other resources. Day to day updates don’t need to go through Terraform at all, although now that the GCP Terraform provider supports rolling updates, I may look at doing it with Terraform.

                                                                        It’s just me for everything, so I’m responsible for it all.

                                                                        1. 3

                                                                          I just backed this project on Kickstarter. If it can be made to work like it promises, it would be a huge productivity boost for me on several projects. Currently with Deps, I bake an image with Packer and Ansible for every new deployment (based on a golden image). That has been getting a bit slow, so I was looking at other deployment options. Having super fast Ansible builds would be great, and would make that less necessary.

                                                                          1. 2

                                                                            Hi Daniel, I keep forgetting to reply here – thanks so much for your support! For every nice complimentary comment I’ve been receiving 5 complex questions elsewhere. I’ve just posted a short update, and although it is running a little behind, it looks like the campaign still has legs. I’m certainly here until the final hour. :) Thanks again!