1. 30

    Per their email to customers:

    We’re sending this note because people are now asking if this could happen with Keybase teams. Simple answer: no. While Keybase now has all the important features of Slack, it has the only protection against server break-ins: end-to-end encryption.

    This is a facile false equivalence on Keybase’s part. Slack’s incident in question was caused by code injection in their client application. If an attacker achieved code injection in a Keybase client, the breach would be exactly as bad as Slack’s.

    End-to-end encryption is worth little if the client doing the encryption/decryption is compromised, and Keybase’s implicit claim that end-to-end encryption protects against compromised clients is dangerously inaccurate.

    1. 5

      where did you see that the injection was client side? I’m wondering if I’m parsing the disclosure incorrectly, but I’m not seeing that spelled out explicitly.

      From Slack’s post:

      In 2015, unauthorized individuals gained access to some Slack infrastructure, including a database that stored user profile information including usernames and irreversibly encrypted, or “hashed,” passwords. The attackers also inserted code that allowed them to capture plaintext passwords as they were entered by users at the time.

      You’re of course correct about the vulnerability of keybase clients. They talk about that here: https://keybase.io/docs/server_security

      EDIT: After a reread of the Keybase post, I’m not seeing anywhere that they claim Keybase can 100% protect against client side attacks, but their assertion about server side attacks is true. Where did you see that they’re claiming e2e crypto protects against client attacks?

      1. 1

        I’m not seeing that spelled out explicitly

        capture plaintext passwords as they were entered by users

        You can’t do that without injecting code into the client. Plus, modification of server-side code is usually not called “injection” at all.

        1. 9

          A) Yes, you can (by modifying server code) - basically no sites hash passwords before sending them over the wire.

          B) Modifying running code on the server without changing the code on disk is usually called injection, in my experience. Happened at Twitter (remote code execution in the Rails app exploited to add an in-memory-only middleware that siphoned off passwords).

          1. 2

            basically no sites hash passwords before sending them over the wire.

            Is there a good scheme for doing that?

            You can’t just hash on the client, because then the hash itself becomes the credential the client sends; as far as the server side is concerned, you are back to merely storing passwords in the clear.

            You can implement a scheme where the client proves knowledge of the password by using it to sign a token sent by the server (as in APOP, HTTP Digest Auth, etc.). But then the server needs to have the plaintext password stored, otherwise it can’t check the client’s proof of its knowledge.

            So either way, the server is storing in the clear whatever thing the client needs to authenticate itself.
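
            A minimal sketch of that challenge-response variant in Python (function names and parameters here are mine, not from APOP or Digest specifically): the server issues a random nonce, and the client answers with an HMAC of it keyed by the password. Note the server must keep the password (or an equivalent secret) to verify.

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    # Server side: a fresh random nonce per login attempt.
    return secrets.token_bytes(16)

def client_proof(password: str, challenge: bytes) -> bytes:
    # Client side: prove knowledge of the password without sending it.
    return hmac.new(password.encode(), challenge, hashlib.sha256).digest()

def server_verify(stored_password: str, challenge: bytes, proof: bytes) -> bool:
    # Server side: recompute the proof -- which requires the plaintext password.
    expected = hmac.new(stored_password.encode(), challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

challenge = make_challenge()
proof = client_proof("hunter2", challenge)
assert server_verify("hunter2", challenge, proof)    # correct password passes
assert not server_verify("wrong", challenge, proof)  # wrong password fails
```

            The password never crosses the wire, but it sits on the server in a recoverable form, which is exactly the trade-off described above.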

            The advantage of the usual scheme where the client sends the actual password and the server then stores a derivative of it is that the client sends one thing, and then the server stores another, and the thing the client needs to send cannot be reversed out of the thing the server has stored. That yields the property that exfiltrating the stored credentials data from the server doesn’t allow you to impersonate its users.
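
            A minimal sketch of that usual scheme (parameters here are illustrative; real systems tune the work factor): the server keeps only a salt and a slow hash, so exfiltrating the stored record does not reveal what the client must send.

```python
import hashlib
import hmac
import os

def store(password: str) -> tuple[bytes, bytes]:
    # Server side, at registration: derive a slow salted hash and keep only that.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest  # the password itself is discarded

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    # Server side, at login: re-derive from the submitted password and compare.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store("hunter2")
assert verify("hunter2", salt, digest)
assert not verify("wrong", salt, digest)
```

            Here the client sends one thing (the password) and the server stores another (the derived hash), and the stored value cannot be reversed into the thing the client sends.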

            But to get this property, the server must know what the actual password is – at least at one instant in time – because the client needs to prove knowledge of this actual password. So you cannot avoid ever sending the actual password to the server.

            Well, that’s not the only way to get that property. The other way is public key cryptography.

            Of course, going in that direction runs into entirely different trust issues: if you ship the crypto code to the client, you might as well not bother. Notably, “send the actual password to the server” avoids that whole issue too.

            1. 2

              If you don’t want the server to know the password, you can use client certs, which have worked since the nineties. The browser UI for it is universally horrid, though, and whatever terminates your TLS needs to provide info to your application server.

              This is a hurdle in both development and production, which, coupled with the bad browser UX, has left client certs criminally underused.

              1. 2

                Oh, right. I mentioned public key crypto myself… but I still didn’t even think of client certificates.

                1. 1

                  The missing feature is some mechanism to create the TLS certificate.

                  Like a “Do you want to use secure passwordless authentication with this site?” prompt that creates a user@this.site CSR, uploads it for signing in a POST request, gets the signed cert back, and stores it for the next time this.site asks for it.

                  1. 2

                    … at which point you need a mechanism to extract your credential from the browser and sync it across devices and applications. Hmm.

                    1. 1

                      Yes, unless you remember them all (and how many is that?!), mind space wasted…

                      I use a USB key on which my passwords are stored, and add that to my physical keyring.

                      I am now more vulnerable to physical access and less exposed to remote attackers.

                      It is not perfect, but it works.

              2. 1

                Private key derivation from the password gives you a private key, from which you can derive a public key, getting you public-key crypto in JavaScript in the browser.

                So… you are trusting JavaScript code downloaded from the server while doing that (uh oh). If that code is compromised, it can upload the password somewhere else in the clear: if you trust the server, just send the password to it in the clear (within TLS, using TLS certificates).
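
                To illustrate just the derivation half (an assumed sketch, not any particular product’s actual scheme): stretch the password into a deterministic per-site seed, which could then feed a keypair generator such as Ed25519 (not in the Python standard library, so only the derivation is shown).

```python
import hashlib

def derive_seed(password: str, site: str) -> bytes:
    # Use the site name as the salt so each site gets independent key material.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), site.encode(),
                               200_000, dklen=32)

seed1 = derive_seed("hunter2", "example.com")
seed2 = derive_seed("hunter2", "example.com")
assert seed1 == seed2                                  # deterministic
assert derive_seed("hunter2", "other.site") != seed1   # per-site keys
```

                The password never needs to leave the machine; only signatures made with the derived key do. But as noted above, if the derivation code itself is served by the party you are authenticating to, this buys you nothing against a compromised server.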

            2. 1

              capture plaintext passwords as they were entered by users

              You can’t do that without injecting code into the client..

              Even today, Slack sends credentials from the client to the server in plaintext (just like almost every other website).

              Try it yourself: https://tmp.shazow.net/screenshots/screenshot_2019-07-21_3d7d.png

              Having a remote code execution to modify the server-side code to consume the plain-text passwords that their server receives and exfiltrate them would work just fine.

              Who knows what else they might have modified.

              1. 2

                my assumption was that “as they were entered by users” meant “letter by letter, keylogger style”

              2. 1

                That seems like a huge assumption about the architecture of Slack; unless you work there, I wouldn’t assert that. It doesn’t even seem very plausible: why only infect a subset of clients? Do Electron apps get served off of a server somewhere? (Unless I’m massively misunderstanding them, no.) And if popping a shell on a database server gave an attacker lateral access to push malicious code to clients, Slack has HUGE problems.

                1. 1

                  electron apps

                  Ah, I didn’t even think about these. Slack is accessible as a normal web site too, I was only thinking about that.

                  Also, my assumption was that “as they were entered by users” meant “letter by letter, keylogger style” :D

            1. 2

              There has been a lot of discussion about the scope of microservices. When is a microservice too big? When is a microservice too small? I think a microservice that is “forgettable” might be too small.

              Although at first glance a microservice that you can set up and forget about sounds great, all software rots, so the remaining struggle is how fast we’re noticing and dealing with that rot. If the typical workflow is to set up a teeny, tiny microservice that’s finished, start it, and not look at it again until five months down the line – when it’s too expensive, or it doesn’t report stats the way your other services do now, or the memory leak is fast enough that you’re not comfortable with monit just restarting it when it goes down – then that problem is that much harder to fix, because you haven’t even glanced at the service in months, or years. We run into similar problems even if a service has been looked at recently, if the one person who knew how to operate it leaves the company.

              A few things fall out of this. No service should be scoped so tightly that it’s “done” until there’s a team that is confident in handling long-term support for it. No service should be scoped so tightly that it’s owned by an individual. These policies have operational costs and benefits, but the benefit is organizational, and in keeping code alive and adaptable.

              1. 2

                A few things fall out of this. No service should be scoped so tightly that it’s “done” until there’s a team that is confident in handling long-term support for it. No service should be scoped so tightly that it’s owned by an individual. These policies have operational costs and benefits, but the benefit is organizational, and in keeping code alive and adaptable.

                Couldn’t agree more. Writing the code itself is almost always the easiest part, and what you’ve described (maintenance and socializing/communication) is the hardest.

              1. 1

                is there any more info on what libraries, features, or design patterns were used? (or point me to the source if I missed it because of my tiny mobile screen) I have been interested in rolling some services in rust, but good complete case studies have been hard to find.

                1. 1

                  No, because this is not a high-content article.

                  1. 1

                    is there any more info on what libraries, features, or design patterns were used? (or point me to the source if I missed it because of my tiny mobile screen)

                    This uses the metrics_distributor library to accomplish the aggregate-and-forward design pattern.

                    I have been interested in rolling some services in rust, but good complete case studies have been hard to find.

                    I’m working on a longer blog post with more in-depth examples and discussion of doing exactly that!

                  1. 1

                    Neat to see more Rust in production! How did you pitch doing it in Rust to management? Is Everlane pretty progressive on PL choice?

                    1. 3

                      We’re pretty conservative in our PL choice (eg. we’re steadfast Ruby on Rails users). Rust was actually an experiment in this regard, which is why we used it for a small service outside of our main application. Given the requirements of this service—high performance and strong-ish reliability guarantees—Rust seemed like a very good candidate. And so far it’s proven itself to have been a good choice!

                      I’m nowhere near as fast at building features in Rust as I am in Ruby, but in some cases—such as this—speed of delivery takes a back seat to other concerns (eg. safety and reliability).

                    1. 11
                      Crates.io disallows wildcards

                      If you maintain a crate on Crates.io, you might have seen a warning: newly uploaded crates are no longer allowed to use a wildcard when describing their dependencies. …

                      A wildcard dependency means that you work with any possible version of your dependency. This is highly unlikely to be true, and causes unnecessary breakage in the ecosystem. We’ve been advertising this change as a warning for some time; now it’s time to turn it into an error.
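
                      For anyone who hasn’t run into the warning, the difference looks like this in Cargo.toml (the crate name and version numbers here are just illustrative):

```toml
[dependencies]
# Disallowed on crates.io: a wildcard matches any version at all.
# rand = "*"

# Allowed: a semver ("caret") requirement, i.e. >= 0.3.0 and < 0.4.0.
rand = "0.3"
```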

                      Delighted to see this step occur in the maturation and stabilization of Rust and Crates.io. (It’s also cool seeing the download counts of popular crates grow and the versions of those crates start to near/enter v1.x.x.)

                      1. 36

                        Reminds me of this commit in SQLite. Similar issue with people’s names showing up in open source software, and end users harassing them over the vendor’s product.

                        1. 2

                          I had forgotten about that #define in the SQLite source! You just made my morning by linking to it, thank you.

                        1. 5

                          @brinker, @steveklabnik, et al: This is truly excellent! Really appreciate what you guys and the Rust community at large are doing to continually improve the resources available for learning and understanding Rust.

                          1. 5

                            Glad you like it! This was actually my first contribution to Rust documentation, and it’s so encouraging to see it received so warmly. I definitely plan on doing more, on both the core Rust documentation and on community documentation (much of which could use some work).

                          1. 2

                            Work:

                            • Catching up with PRs (we’re in online retail so we don’t ship code during the second half of December to avoid rocking the boat in the holidays).
                            • Doing performance work on the browser2 gem and features on the async_cache gem.

                            Personal:

                            • Implementing the actual bytecode interpreter for hivm2 (the spiritual successor to hivm). Spent the holidays writing the syntax tree and the parser for the assembly language and a compiler to turn that higher-level assembly into lower-level bytecode instructions.
                            • Later this week I plan to ship a redesign to the personal site that I did over the holidays.
                            1. 1

                              Work: Published the async_cache gem we’re using at Everlane to always keep the site fast during the busy holiday season.

                              Non-Work: Fooling around with assemblies, intermediate languages, and VMs: hopefully with the end result of having a very generic, high-level, parallelizable, and live-code-reloadable virtual machine (we’ll see!). The repository is here and my design docs are here for anyone that is interested.

                              1. 3

                                Wrote a Markov chain ingester and generator for the Lita chat bot (in essence it listens to everyone in public channels and builds a Markov chain for each user, you can then query it to get a random generated “sentence” based on what the bot has seen that user say): https://github.com/dirk/lita-markov (and a blog post explaining it in a bit more detail)
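
                                For anyone curious how the per-user chain works, here’s a rough sketch in Python (the real lita-markov plugin is Ruby and backed by a database, so this is heavily simplified):

```python
import random
from collections import defaultdict

class Chain:
    def __init__(self):
        # Maps each word to the list of words observed to follow it.
        self.follows = defaultdict(list)

    def ingest(self, sentence: str) -> None:
        # Record every adjacent word pair from an observed message.
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            self.follows[a].append(b)

    def generate(self, start: str, max_words: int = 10) -> str:
        # Random-walk the chain until we hit a dead end or the length cap.
        out = [start]
        while len(out) < max_words and self.follows[out[-1]]:
            out.append(random.choice(self.follows[out[-1]]))
        return " ".join(out)

chain = Chain()
chain.ingest("the cat sat on the mat")
chain.ingest("the dog sat down")
print(chain.generate("the"))  # random walk starting at "the"
```

                                The bot keeps one such chain per user, so a generated “sentence” echoes that user’s own word-adjacency habits.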

                                This week I hope to work on:

                                • Faster production fixture downloading and loading by serializing to SQL: Everlane/fixtural#dirk/fixture-formats
                                • Making some more Rust community (and possibly core) contributions
                                • Catching up on blogging about work I’ve done over the past few months
                                1. 9

                                  Someone already reported this as a bug and it seems to already be fixed here: https://github.com/Microsoft/WinObjC/issues/36

                                    1. 1

                                      Am I mistaken, or does GenerateRandomNumber always return 0 when it fails? If that’s the case, it looks like https://xkcd.com/221/

                                  1. 2

                                    Setting up the master package index system for Roost, my alpha-stage package manager for building and managing Swift projects without Xcode.

                                    Also experimenting with improving Jbuilder performance, especially with regard to working with caching systems.

                                    1. 8

                                      Getting the MVP of Roost, my package manager and build system for Swift, ready for prime time. If Apple really is going to free Swift from the limits of Mac and iOS apps then the community will need an open, extensible platform for developing, building, and deploying Swift programs.

                                      1. 2

                                        The Venmo iOS team will love this. Do you want an intro?

                                        1. 1

                                          Sure! Would love to get other people’s input on this and make something really useable for the community.

                                      1. 1

                                        According to the title—"[…] From Ruby to Go and Saved Our Sanity"—using Go will alleviate severe mental illness? Why isn’t Golang all the buzz in professional psychologist and psychiatrist circles?

                                        1. 55

                                          Why isn’t Golang all the buzz in professional psychologist and psychiatrist circles?

                                          Obviously, it is due to the lack of Generics! Those brand name drugs aren’t cheap.
                                          I kid, I kid.

                                          1. 12

                                            That was awful. :)

                                            1. 1

                                              Oh, hehehe, I see what you did there. ;)

                                          1. 4

                                            This is another article that holds up “innovation” as some platonic goal we should all strive for, and chastises government for stifling innovation, which must be some great crime indeed.

                                            Does government move slowly? Yes, but that is by design. When dealing with huge contracts being paid for with public money, the government wants to make sure that the company they are hiring can deliver, and that they can do it exactly within the parameters specified, and for the lowest possible cost. The process of figuring these issues out takes time. Does that long time stymie the ability of fast-paced start-ups to vie for government contracts? Yes it does. But it also avoids undue risk in the expenditure of the people’s money, which is taken via taxation with the express purpose of improving the lives of the citizenry.

                                            Now, could the government improve? Absolutely. I’d imagine there are very few people who would claim that the current glacial pace of government contracting is exactly the right one. In the end, a slower pace is better for the avoidance of risk, and the protection of public funds.

                                            1. 2

                                              Did you read the whole piece? You might be surprised to find we agree on this. The last two paragraphs literally say exactly what you just said: 1. The public bidding & vetting process is by design & very much worthwhile and 2. Some improvements could be made to speed it up to allow more competition.

                                            1. 3

                                              Working on implementing the LLVM-targeting compiler for my new Hummingbird language. The branch for that is on GitHub. It’s been a very interesting build-out so far, as last week I wrote the JavaScript target compiler for it, so I’m effectively getting to implement the same language features twice: once in JS and then the same one a week later in LLVM. It’s fun stuff and a great brain/learning exercise!

                                              1. 8

                                                Not saying that Apple doesn’t have a problem (slow dir merge of /usr/local?! wtf), but this is yet another reason why I don’t put homebrew in /usr/local. I dump it in ~/.brew/ and it works great.

                                                Putting homebrew in /usr/local and setting the whole mess owned as your user is just a bad practice and a very poor default on homebrew’s part.

                                                1. 1

                                                  ~/.brew does sound smart! I think I might try that out. Have you run into any issues with it in there versus in /usr/local, or does everything generally work?

                                                  1. 4

                                                    a few bottles specify /usr/local, so it results in a bit of extra compiling here and there, but generally not too much (for what I install at least).

                                                    In my .profile/.bash_profile/.bashrc I have a couple extra things added.

                                                    # added to profile file
                                                    [ -d $HOME/.brew/sbin ] &&
                                                        export PATH="${HOME}/.brew/sbin:${PATH}"
                                                    [ -d $HOME/.brew/bin ] &&
                                                        export PATH="${HOME}/.brew/bin:${PATH}"
                                                    [ -d $HOME/.brew/share/man ] &&
                                                        export MANPATH="${HOME}/.brew/share/man:$MANPATH"
                                                    
                                                    # added to rc file
                                                    [ -z "${BASH_COMPLETION}" ] && [ -f $HOME/.brew/etc/bash_completion ] &&
                                                        . $HOME/.brew/etc/bash_completion
                                                    [ -z "${BASH_COMPLETION}" ] && [ -f $HOME/.brew/share/bash-completion/bash_completion ] &&
                                                        . $HOME/.brew/share/bash-completion/bash_completion
                                                    

                                                    If I am building something outside of homebrew that requires a lib installed with homebrew, I generally just do this first:

                                                    export CFLAGS="-I$(brew --prefix)/include"
                                                    export LDFLAGS="-L$(brew --prefix)/lib"
                                                    

                                                    Then compile as normal.

                                                    I also exclude $HOME/.brew/ from time machine backups. Those are about the only differences I can recall offhand.

                                                    1. 2

                                                      So one thing I’m kinda leaning towards is to do something like /usr/homebrew vs /usr/local.

                                                      I used to do ~/homebrew, but given the differences in upgrading I think it might be better to gate homebrew via the OS X version too.

                                                      So, say, instead of $BREW_PREFIX I do $BREW_PREFIX/{10.10,10.9} and key everything off that. I don’t normally upgrade anyway, but it would mean it’s simple to just blitz /usr/homebrew, rebuild in vagrant, and tar the thing up.

                                                      Also, this post here is why I hate /usr/local in principle. I get that this dude’s pissed, but 1.8 million files is kinda crazy, especially if you can rebuild it, tar.xz it, and nuke and restore it after install. I’m annoyed right now because stuff like MacTeX installs into /usr/local/texlive, so my old “nuke /usr/local and retry” trick no longer works.

                                                      Maybe /usr/local/brew. I just don’t like having my homebrew in my home directory, but I guess I don’t really share it, so it’s somewhat of a wash.

                                                      1. 1

                                                        So this is a WIP (aka: I work on it when I get bored) but it works out pretty well so far.

                                                        https://github.com/mitchty/bootstrap/commit/663ad456e0ad7c77614fb7f48550d9b75e091808

                                                        https://github.com/mitchty/bootstrap/blob/663ad456e0ad7c77614fb7f48550d9b75e091808/bootstrap.sh#L7-L8

                                                        Is how I hacked it in. Will try this way out for a while and see how it shakes out. One thing I noticed: brew cask doesn’t work at all if it’s not installed in /usr/local. Eh, oh well.

                                                  2. 1

                                                    I’m not one to argue with anyone claiming that Apple are screwing the pooch lately (my household has a small list of shit-points about recent iOS updates), but I do challenge the veracity of this claim:

                                                    “Putting homebrew in /usr/local and setting the whole mess owned as your user is just a bad practice and a very poor default on homebrew’s part.”

                                                    I don’t agree. This is, finally, precisely what /usr/local/ is for, and anyway: Apple did the recent merge/cleanup thing precisely because they are relinquishing all control over /usr/local - it is now and forevermore for the user to hold their local bins/libs/etc., so that Unix users can continue to have a smooth-running system.

                                                    But what’s your problem with /usr/local? Perhaps you’re not on a multi-user Mac setup, so you don’t need to share a cmd-line toolchain, and/or have other methods of maintaining isolation between your Unix toolchain, the OSX bundles, and the various other all-and-sundry packaging systems around (port/homebrew, etc.)?

                                                    Well, here’s why I prefer /usr/local: our development Mac hosts multiple users. There is one homebrew administrator, and a perfectly smooth, functioning development environment where everyone can still, nevertheless, maintain a stable set of homebrew tools for all local users, who are after all developers. If it’s really important for a local dev to have isolation of forked common tools, well, there’s not many in the gang who don’t already salt their builds of such things with a little --prefix=~/inst and path_prepend(~/inst/) and path_prepend(pkg_config_path, ~/inst/) and so on. But the toolchains for the build-server are common and maintained - everyone is using the same tools.

                                                    I, for one, welcome our new /usr/local privileges, and don’t have any problems with homebrew’s usage of it. To me it’s a welcome relief from the bundle warfare that’s going on in all other quarters.

                                                    1. 1

                                                      I don’t agree. This is, finally, precisely what /usr/local/ is for…

                                                      I guess this is where we agree to disagree then. /usr/local has always been to me, machine specific locally installed software, owned by root. That is the main problem I have with homebrew’s default. You have to set the whole tree as owned by the user who runs brew.

                                                      The alternative, I suppose, is running homebrew as root, which seems even worse to me. In addition, I generally prefer to run the configure and make steps of classic building as non-root. Unless this changed recently, I don’t believe homebrew drops privileges for the ‘make’ portion of its operations when run as root.

                                                      So… to avoid those issues, and because my /usr/local is not shared with other accounts anyway, I moved homebrew to a directory inside $HOME. This has actually paid off many times too, as I have had occasion to completely nuke $HOME/.brew/ and start over. I was able to do this without also removing any libraries or software I have installed manually in /usr/local with the classic ./configure;make;sudo make install invocation.

                                                  1. 4

                                                    Why specifically “better analytics for freelancers and agencies”?

                                                    1. 2

                                                      I hope it’s a system where people actually enjoy exploring their analytics and don’t dread having to give access to and explain an analytics system to clients/non-power-users. It sometimes feels like half of client demo time is spent explaining Google Analytics, plus there are the inevitable emails/phone calls down the road when people forget how to get into an analytics system. Rangefinder aims to solve this; it even gives you a 6-line snippet to throw into a WordPress theme that puts a link into the WordPress panel to automatically sign users in (as guests) to view their site’s analytics.

                                                    1. 1

                                                      Shooting for rolling out a new feature every day on my new analytics app Rangefinder. Managed to easily do that last week and I’ve got tons of ideas/plans, so it shouldn’t be too hard to keep the streak going.