1. 8

    Concomitant with the development of Swift are a new package manager (Swift Package Manager), an open-source drop-in replacement for the Foundation library, and an open-source port of XCTest, Apple’s unit-testing framework. So they’re also going all-out on the infrastructure around the language, which is hugely encouraging.

    1. 5

      I was happily surprised to see that Max Howell (Homebrew creator) has been working on the Swift Package Manager, too.

    1. 5

      To save my editorialization for the comments: I thought this was a really kind, insightful look at how to improve the human factors of a software project.

      1. 2

        This is fantastic! Thanks for sharing!


        (Edit to add substance):
        I think this is a great example of the kind of general usability self-reflection even non-designers can provide their projects. Many open source projects could benefit from considering UX and what the onboarding process for new users and contributors looks like.

      1. 1

        This is where nix & guix shine. If you require the docker daemon to be installed and running, just make it a nix-daemon instead and avoid this mess.

        1. 2

          I love the ideas behind nix and guix, but my experience trying to use nix on OS X hasn’t been great yet. A lot of the packages I wanted to use seemed to only work on Linux (maybe because of libc dependencies or something?), and things took a very long time to compile because there weren’t cached builds for my platform. It’s definitely not something I’d want our designers to have to wrestle with at this point.

          As I said above, a statically linked binary (e.g. in Go) would probably be the ideal here (especially if I also had the time to give it a nice GUI), but short of that, Docker isn’t that awful of a dependency. Docker Toolbox has a pretty nice installer and they’re incentivized to maintain a certain degree of usability.

          Things in the nix/guix world may have improved since last I checked, so please tell me if my experience is out of date. :)

        1. [Comment removed by author]

          1. 5

            The ideal is a binary that does not even require the language installed. Go really wins here.

            I strongly agree! I think this is one of the best things Go has going for it. I wish Rust made static linking and cross compilation as easy as Go does.

            If I found a CLI app that I wanted to use and it required Docker, I would probably re-write it.

            I definitely did consider a re-write. In this case, I was starting with not so much a monolithic CLI app as a collection of Ruby, Node.js, and native libraries and tools that were being driven from a complex Rakefile. Switching to a GLI-based Ruby CLI app front end cleaned up the user interface significantly, but I still had the hard problem of providing the same behavior across different machines. More portable libraries that purported to support the same functionality repeatedly fell short of expectations (e.g. libsass). I opted to use this Docker image technique as a means of incremental improvement. In the future if I were to find, for example, a set of Go libraries that can exactly mimic the functionality of the current app, it would now not be hard to swap out the messy internals.

            A rewrite often sounds like the best idea, but when users already depend on the particulars of an existing implementation, there may be a lot of details that turn out to make rewrites more difficult than one might expect.
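
            For anyone curious what that wrapper shape looks like in practice, here’s a rough Python sketch of the general idea (the image name, mount point, and function names are all stand-ins, not the actual tool, which is a Ruby/GLI app):

```python
import os

IMAGE = "example/build-tools:1.0"  # hypothetical pinned toolchain image

def docker_argv(tool_args, workdir="/workspace"):
    """Build the argv for running a tool inside the pinned image,
    with the current project directory mounted into the container."""
    return ["docker", "run", "--rm",
            "-v", os.getcwd() + ":" + workdir,
            "-w", workdir,
            IMAGE] + list(tool_args)

# A front-end command like `mycli build` would then do something like:
#   subprocess.run(docker_argv(["rake", "build"]), check=True)
```

            Because everyone runs the same pinned image, the “works on my machine” surface shrinks to “do you have Docker installed.”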

            This is a crude hack that papers over serious issues with the build.

            I mostly agree, and I think the fact that this was a practical way forward is telling about the tunnel vision language communities can have when it comes to software distribution.

            1. 1

              I’d be curious to see where you think libsass falls short. I’ve been considering using it to avoid a Ruby tool chain, but it does seem to lag quite a lot.

              1. 1

                Good question!

                I wish I could provide more details, but we did not exhaustively document the issues we encountered at the time. If I recall correctly, one of the issues was that Susy didn’t work with libsass, though a quick Google search indicates that may have changed since we last tried. In general though, we got a sense that the switch was going to place an undue burden on our design team to remove dependencies on several Compass extensions they were used to relying on.

                Things may be better now, and I should probably re-evaluate libsass for use on this project in the future.

          1. 2

            At work, I’m currently working on an internal-use command line tool that uses Docker images to isolate and standardize a set of build and deploy tools (mostly Ruby and JavaScript) for a custom Craft CMS-based website. It sounds a little out there, and it’s been a fair amount of work, but it’s actually helping with some “works on my machine” issues we had been having, and I get to improve the interface quite a bit, too. I’ll probably blog about it at some point, so if you’re interested at all let me know what you’d like to hear about.

            Separately, in fleeting moments of “free time,” I’ve started playing around with some of the Ruby exercises on exercism.io. I’ve known about the project for a while, but this is the first I’ve tried it. I really like the idea of getting feedback on some of these more subjective aspects of code like how to make it more readable, more succinct, or more idiomatic. I’ve also found it’s quite satisfying to write small bits of working code and be able to focus on the aesthetic pleasure of it. I hope to try out some OCaml exercises soon, too, to see how it works for a language I’m less familiar with.

            1. 1

              Update: I blogged about it (lobste.rs thread).

            1. 9

              It looks like the designer has also contributed to several other GNU and OSS projects. They’re lucky to have him. I was struck by how much more appealing Guile appeared with just this visual refresh. My subconscious picked up on some visual cues and said “ooo, new programming language! Let’s try it out!”

                1. 1

                  sigh Yeah, it’s kind of awful that published papers are always behind paywalls. This one doesn’t sound interesting enough, especially without a description of why it’s relevant to this audience. I know I’m going to change my mind ten times on whether it deserves the spam flag though.

                  1. 2

                    Sorry, when I posted the link it was still going through to the paper. This paper calls for a higher standard of evidence used as the basis for decision making in the design of programming languages. It also contains a number of citations of currently existing empirical research in this area.

                    It relates to the video I posted here: https://lobste.rs/s/dtqwtq/evidence-oriented_programming_by_andreas_stefik

                  2. 1

                    Argh. Sorry!

                    shakes fist at ACM

                    I even tested this a few times before posting and it went through to the PDF each time. :(

                    1. 2

                      No worries. It just means you have an employer or educator who can afford a site license for you, which is a nice situation to be in. :)

                      Similarly, I try not to submit stories with obnoxious pop-ups (99% of which ask for an email address - I know the business case for these but they still bother me) but my ad blocker catches the ones served from the big email services, so…

                      1. 1

                        Actually, we don’t have a site license. I think it may have to do with the way ACM allows authors to use direct links from their bibliographies. See my comment here: https://lobste.rs/s/g6qugv/the_programming_language_wars_questions_and_responsibilities_for_the_programming_language_community/comments/vtywan#c_vtywan

                        Frustrating. :-/

                        1. 1

                          Anyway, no worries. Accidents happen.

                  1. 2

                    Sorry for the bad link!

                    It may still be possible to get to this paper if you click the link from the author’s homepage first. ACM’s “Author-Izer” system must check referrers or something.

                    1. 4

                      Meta: could we add a unikernel tag? Or maybe just a general “operating systems” tag?

                      Edit: I did almost add a virtualization tag, too, but mainly because no operating systems tag exists. Erred on the side of too few tags this time.

                      1. 2

                        Agreed that an operating systems tag would be nice. This post could also have the virtualization tag.

                        1. 2

                          A systems or kernel tag has been suggested before, but presumably fell out of attention span before being added.

                        1. 1

                          Still recovering from all those gifs.

                          1. 1

                            Discussing that with my colleagues on our Slack now. I have mixed feelings about the gifs.
                            All the animation made the text harder to focus on, but they also communicated some essential ideas.

                            I’m reminded of Darmok.

                          1. 2

                            I hate that my “already posted” flag counts against @englishm! Sorry!

                            1. 2

                              No worries! My mistake.

                              1. 1

                                Whoops! Didn’t see this earlier, and I had stripped ‘index.html’ from the URL so it didn’t automatically match when I submitted. Thanks for the catch!

                                Anyways, very cool project.

                              1. 4

                                If you think a command-line interface is not an API, then you are ignoring the millions of lines of shell scripts that keep the Internet running. You are wrong. Please consult your nearest sysadmin for an attitude readjustment.

                                No U. There are indeed millions of shell scripts - and I have never seen one that wasn’t a disaster waiting to happen. Remember that bash function expansion bug? Particularly in a security context, any use of shell scripts for production systems is simply terrible engineering. I am genuinely angry that someone would try to defend it.

                                Heartbleed and many other vulnerabilities are direct results of crypto and keys living in the same process space as protocol and application logic.

                                No, they’re a direct result of using non-memory-safe languages. Any Heartbleed attacker could have achieved arbitrary code execution, at which point process separation doesn’t offer much in the way of protection - if the attacker controls the process that talks to the process that handles the key, then they can still use the key to e.g. sign their own nefarious messages.

                                That said, there is some value in process separation. But it’s much less than the cost of parsing a text interface. If you want to do process separation then great, but use shared memory or protobuf or heck JSON if you have to (a suggestion the original post did make).
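
                                To make the serialization point concrete, process separation plus a structured format is cheap to prototype. A toy sketch (the HMAC “signature,” key, and names here are placeholders for illustration, not real crypto engineering): a key-holding child process speaks line-delimited JSON over a pipe, so the network-facing parent never touches the key material.

```python
import hmac, hashlib, json, subprocess, sys

# Child process: the only process that ever sees the key. It reads JSON
# requests on stdin and writes JSON responses on stdout.
CHILD = r'''
import hmac, hashlib, json, sys
KEY = b"demo-key-never-leaves-this-process"
for line in sys.stdin:
    req = json.loads(line)
    sig = hmac.new(KEY, req["msg"].encode(), hashlib.sha256).hexdigest()
    print(json.dumps({"sig": sig}), flush=True)
'''

def sign(msg):
    """Ask the key-holding child to sign a message over a JSON pipe."""
    p = subprocess.Popen([sys.executable, "-c", CHILD], text=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = p.communicate(json.dumps({"msg": msg}) + "\n")
    return json.loads(out)["sig"]
```

                                Even if the parent is fully compromised, the attacker can only ask for signatures; they can’t exfiltrate the key itself (which is the limited but real value mentioned above).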

                                We looked at the libraries, encountered an appalling lack of documentation and hoped the command-line interface would be nicer to work with.

                                Sure, I understand the pragmatic business decision here. But as I said, this makes you part of the problem. If all the companies that have wrapped GPG over the years had instead put that effort into either a) the JSON format interface to GPG the article talks about or b) a well-documented library, we would have solved these problems many times over. But as long as it’s marginally easier for each individual company to maintain a horrible wrapper, the tragedy of the commons continues.

                                Treat the command-line interface of gpg as if it were an API; keep it stable and machine friendly. There should be command-line alternatives to all of the interactive dialogs.

                                This is the wrong approach. GPG must have the freedom to make its interface more user-friendly, or we’ll never get anywhere. One binary can’t serve two masters; by all means write a tool that offers GPG functionality in a stable, machine-friendly interface, but make it a different tool from gpg itself.

                                To reiterate and expand the suggestion I made on the HN thread:

                                • Pick up one of the libraries and bring it up to production quality. Alternately, split GPG itself into library and UI. Compare something like ffmpeg or ImageMagick, each of which ships a frontend tool but is mostly used as a backend for other programs that link to it as a library. (And yes, ffmpeg has a stable set of command line arguments, but it’s not an interactive program. gpg should be the equivalent of e.g. superadduser or, hell, openssl).
                                • If you want a separate process that communicates via JSON then great, make a gpgme-tool or similar that uses this library as a backend.
                                1. 5

                                  A lot of Unix programs, dare I say, most programs intended for Unix, are explicitly written expecting their CLI to also be their API. It’s part of the reason why git’s CLI has to stay the way it is, weird and inconsistent, because its CLI is used as an API. It’s explicitly part of Mercurial’s compatibility rules, and bear in mind that hg was built by the same group of kernel hackers who built Linux and git. (There may also be such explicit rules for git, but I don’t know their dev community as well, so I don’t know where to find them. I do know that Linus yells very loudly if people break userspace in Linux, so I assume a similar attitude carries over to git.)

                                  Having worked with Mercurial development, I can tell you: it’s annoying to have to make sure the CLI is stable-as-an-API, but I guarantee that it’s part of the reason why hg hasn’t completely died out. People using it in 2008 as part of some bigger contraption still use hg in 2015, thanks to this stability. GnuPG would do well to start treating its CLI as an API as well.

                                  1. 2

                                    Quoting from Werner’s reply to the first Mailpile post about GnuPG:

                                    GnuPG output. I have dutifully read, memorized chunks of, and bookmarked this file for posterity. It is immensely helpful. For example, it gives […] Now here comes issue the first: this is essentially a colon separated value (CSV!) data structure, but the data being provided is a) inconsistent, and b) structured.

                                    You mean that there is no top-down design? Right that is how life is.

                                    GnuPG started as a PGP 2 replacement and soon I figured that a machine readable interface is useful to avoid problems with localization and changing output intended for humans. Over the years more and more status information has been added to this interface - in a compatible way. This makes it a bit hard to use but it does not break existing applications. If you can start from scratch, you can do a nice API design but we were not able to do that.

                                    So it seems that Werner is already well aware of the fact that GnuPG’s CLI is used as an API and consequently must remain stable.

                                    Edit to add: thanks for the link to the Mercurial compatibility rules. I really like that explicit documentation.
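
                                    For the curious, the colon-delimited format Werner describes is at least mechanical to consume. A rough sketch (field names follow my reading of GnuPG’s doc/DETAILS file, and the sample line is invented for illustration):

```python
# First seven field positions for a "pub" record, per GnuPG's
# doc/DETAILS file (the real format has more trailing fields).
PUB_FIELDS = ["type", "validity", "length", "algo", "keyid",
              "created", "expires"]

def parse_pub(line):
    """Split one colon-delimited record into named fields."""
    return dict(zip(PUB_FIELDS, line.rstrip("\n").split(":")))

# Made-up sample line in the documented shape:
sample = "pub:u:4096:1:0123456789ABCDEF:1420070400::"
record = parse_pub(sample)
```

                                    The catch the Mailpile post complains about is exactly what this sketch glosses over: the positional meaning of each field varies by record type, so every consumer ends up re-encoding that table by hand.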

                                  2. 5

                                    No U. There are indeed millions of shell scripts - and I have never seen one that wasn’t a disaster waiting to happen. Remember that bash function expansion bug? Particularly in a security context, any use of shell scripts for production systems is simply terrible engineering. I am genuinely angry that someone would try to defend it.

                                      Bash having a security issue is a separate matter from whether a CLI is an API. There are other shells out there, and most non-shell programs call other programs directly, bypassing any shell.
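
                                      To illustrate that last point: a program that execs another program from an argv list never involves a shell at all, so shell bugs (and shell injection) are simply out of the picture. A minimal sketch (the helper name is mine):

```python
import subprocess, sys

def run_direct(argv):
    """Execute a program directly from an argv list: no shell parses
    the arguments, so shell metacharacters stay literal."""
    return subprocess.run(argv, capture_output=True, text=True).stdout

# `$(whoami)` survives as plain text because no shell ever sees it:
out = run_direct([sys.executable, "-c", "print('hi; $(whoami)')"])
```

                                      This is the mode in which a CLI serves as an API for most callers, as opposed to shell scripts, where the shell itself is part of the attack surface.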

                                    1. 2

                                      A shell script implies a shell; the article defends not just calling a shell program directly but using shell scripts as part of a production system.

                                  1. 1

                                    I’m glad to see more projects that make use of GnuPG starting to participate on the mailing list. (GPGTools is another that recently started participating in discussion there.)

                                    I’ve been following the list for the past year or two, and I’ve been surprised how infrequently issues and complaints I hear about elsewhere actually make it on to the list – let alone the bug tracker.

                                    I’m hopeful that better communication between projects will result in better integrations and overall safer software systems.

                                    1. 1

                                      I recently brought an unreliable Mac mini from our office into the Apple store and left it with them to run diagnostics. I can confirm that I was asked for the password. I instead gave them permission to wipe the hard drive. They do have many network bootable diagnostic utilities (I’ve seen them used), but seem to also want access to the system as configured to rule out software issues.

                                      1. 2

                                        What does this have to do with programming, security or design?

                                        1. 3

                                          I don’t know, but it is a great film.

                                          1. 3

                                            “Put on the goddamn glasses!”

                                          2. 1

                                            Perhaps it’s not clear without the context that this is a post by Moxie Marlinspike, former head of product security at Twitter, now working on developing usable secure communication tools with Open Whisper Systems (TextSecure, RedPhone, Signal, etc.). He’s reflecting on his work at a high level, and blogging, he jests, “at knifepoint.”

                                            The post is about how identifying as someone able to use “Privacy enhancing technology” (someone who has mastered the massive incidental complexity of current tools) can make it difficult to actually design and develop usable “Privacy enhancing technology” for others. I think it’s a trap that we can fall into when it comes to a lot of technical skills and very much worth reflecting on.

                                            See these two paragraphs in particular:

                                            “Privacy enhancing technology” has always been inaccessible, which means that the few of us who are weirdly motivated enough to figure it out are left in real danger of creating an identity around having figured it out.

                                            The stickers on the backs of our laptops can sometimes read like merit badges in things we’ve mastered. Maybe that means we care, but there’s also a risk that “mastery” is only worth touting when the skills are difficult. Just like any subculture, if we set ourselves up to feel special for identifying with something obscure, it is potentially in our interest to maintain that thing’s obscurity. If we identify with the depths of privacy technology, we might hesitate (subconsciously or not) to make internet privacy simple and ubiquitous, since we’d be putting our projections of ourselves at risk.

                                            1. 1

                                              To me it still seems too “high level” to justify using those tags. I believe the ‘programming’ tag indicates the article touches on programming practices and possibly shows off code. The ‘design’ tag indicates the article touches on design concepts whether they be for user interfaces or software architecture or typography. The ‘security’ tag usually indicates the article touches on software security from a technical perspective.

                                              The only tags I would put on this kind of article would be ‘culture,’ ‘privacy’ and perhaps ‘security.’ This concerns me and likely many others because this kind of content is what turned Hacker News into the cesspool it is today (an unhealthy mix of interests, communities and egos).

                                              1. 1

                                                That’s fair. My thought process when posting was that the content of this post applies to all these different domains, but I can see now how narrowly scoped tags would be better and cross-cutting content might better be relegated to a tag like ‘culture’. I’ll try to be more conservative in tagging this type of content in the future.

                                          1. 8

                                            OCaml Weekly News (latest) summarizes messages posted to the OCaml mailing list.

                                            1. 2

                                              So… if I tried out the really early versions of Rust, had all the memory management stuff changed up on me, got frustrated with all the churn and set it aside for a while… now is a good time to come back?

                                              1. 14

                                                If you’re frustrated by churn, you should wait until at least the beta release, if not the 1.0 release itself.

                                                A sort of TL;DR:

                                                 • now -> alpha: tons of churn. A total rework of I/O was announced this morning.
                                                 • alpha -> beta: we’re going to try not to break things, but we make no guarantees
                                                 • beta -> release: bugfixes only, except in extreme circumstances

                                                That said, the ‘memory management stuff’ is mostly stable, and has been for a while.

                                                1. 4

                                                  I’ve been working on rust-mqtt since 0.9 and I’ve had to basically rewrite the entire lib due to major changes to memory management. With that said, the latest incarnation of memory management is much more straightforward than it used to be. Also, cargo seems to force me into a better project layout. The Rust I’m using now definitely feels a lot more mature. The documentation seems to be pretty solid (examples in comments actually have to compile before documentation can be generated). Since being on the latest version (a 0.13-prealpha nightly) I only asked one question on the IRC channel; and I probably shouldn’t have even bothered asking there since it wasn’t even a good question. I’d definitely recommend giving it another look.

                                                  1. 2

                                                    That matches what I’ve heard from the folks I know who use Rust. It sounds like they have largely stabilized the memory management stuff (though AFAIU it’s still subject to potential last-minute changes before 1.0.0 proper).

                                                    I’ll be trying it out again when they release the alpha.

                                                  1. 1

                                                    I’ve been working on protocol.club and putting together slides for some upcoming talks (on “DevOps” tools and GPG).