1. 17

    Reprising from HN:

    In cryptography, we have a concept of “misuse resistance”. Misuse-resistant cryptography is designed to make implementation failures harder, in recognition of the fact that almost all cryptographic attacks, even the most sophisticated of them, are caused by implementation flaws and not fundamental breaks in crypto primitives. A good example of misuse-resistant cryptography is NMR, nonce-misuse resistance, such as SIV or AEZ. Misuse-resistant crypto is superior to crypto that isn’t. For instance, a measure of misuse-resistance is a large part of why cryptographers generally prefer Curve25519 over NIST P-256.

    So, as someone who does some work in crypto engineering, arguments about JWT being problematic only if implementations are “bungled” or developers are “incompetent” are sort of an obvious “tell” that the people behind those arguments aren’t really crypto people. In crypto, this debate is over.

    I know a lot of crypto people who do not like JWT. I don’t know one who does. Here are some general JWT concerns:

    • It’s kitchen-sink complicated and designed without a single clear use case. The track record of cryptosystems with this property is very poor. Resilient cryptosystems tend to be simple and optimized for a specific use case.

    • It’s designed by a committee and, as far as anyone I know can tell, that committee doesn’t include any serious cryptographers. I joked about this on Twitter after the last JWT disaster, saying that JWT’s support for static-ephemeral P-curve ECDH was the cryptographic engineering equivalent of a “kick me” sign on the standard. You could look at JWT, see that it supported both RSA and P-curve ECDH, and immediately conclude that crypto experts hadn’t had a guiding hand in the standard.

    • Flaws in crypto protocols aren’t exclusive to, but tend to occur mostly in, the joinery of the protocol. So crypto protocol designers are moving away from algorithm and “cipher suite” negotiation towards other mechanisms. Trevor Perrin’s Noise framework is a great example: rather than negotiating, it defines a family of protocols and applications can adopt one or the other without committing themselves to supporting different ones dynamically. Not only does JWT do a form of negotiation, but it actually allows implementations to negotiate NO cryptography. That’s a disqualifying own-goal.

    • JWT’s defaults are incoherent. For instance: non-replayability, one of the most basic questions to answer about a cryptographic token, is optional. Someone downthread made a weird comparison between JWT and NaCl (weird because NaCl is a library of primitives, not a protocol) based on forward secrecy. But for a token, replayability is a much more urgent concern.

    • The protocol mixes metadata and application data in two different bag-of-attributes structures and generally does its best to maximize all the concerns you’d have doing cryptography with a format as malleable as JSON. Seemingly the only reason it does that is because it’s “layered” on JOSE, leaving the impression that making a pretty lego diagram is more important to its designers than coming up with a simple, secure standard.

    • It’s 2017 and the standard still includes X.509, via JWK, which also includes indirected key lookups.

    • The standard supports, and some implementations even default to, compressed plaintext. It feels like 2012 never happened for this project.

    For almost every use I’ve seen in the real world, JWT is drastic overkill; often it’s just a gussied-up means of expressing a trivial bearer token, the kind that could be expressed securely with virtually no risk of implementation flaws simply by hexifying 20 bytes of urandom. For the rare instances that actually benefit from public key cryptography, JWT makes a hard task even harder. I don’t believe anyone is ever better off using JWT. Avoid it.
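
    (To make the “trivial bearer token” point concrete, here’s roughly what that looks like; a quick Python sketch of my own, not part of any standard:)

    import hashlib, secrets

    token = secrets.token_hex(20)  # 20 bytes of randomness, hex-encoded; this is the whole “protocol”
    stored = hashlib.sha256(token.encode()).hexdigest()  # server keeps only a hash and looks it up per request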

    1. 7

      This is a pretty good post.

      The author should spend a little more time on the distinctions between IVs and nonces (this is a problem in the literature as well) because the constraints on both are subtly different. An IV is an implied first ciphertext block, and in CBC it needs to be unpredictable. A nonce is a number used just once; it is less important that a nonce be unpredictable, and in fact in some constructions (GCM being a good example) a random nonce can be problematic.
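
      To make the distinction concrete, here’s a rough sketch of the two policies using the Python cryptography package (my example, not the author’s):

      import os
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      key = os.urandom(32)

      # CBC: the IV acts like an implied first ciphertext block, so it must be unpredictable;
      # draw a fresh random block for every message.
      iv = os.urandom(16)
      cbc = Cipher(algorithms.AES(key), modes.CBC(iv))

      # GCM: the nonce only has to be unique per key; a counter is fine, while random 96-bit
      # nonces bring collision worries (NIST caps random-nonce use at 2^32 messages per key).
      message_number = 1
      nonce = message_number.to_bytes(12, "big")
      ct = AESGCM(key).encrypt(nonce, b"hello", None)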

      I’d also nitpick that there are probably much more important common developer crypto mistakes that should push some of these out, such as not using password hashes or having incoherent crypto designs. For instance:

      • Directly using RSA to encrypt plaintext (and, relatedly, using RSA without secure padding).

      • Failing to authenticate associated data (such as the IV of a CBC ciphertext).

      • Compressing before encrypting.

      I might also, instead of recommending RSA-2048 and discussing key sizes, just push people towards Curve25519.
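
      In practice, “push people towards Curve25519” usually means X25519 for key agreement (and Ed25519 for signatures); a minimal sketch with the Python cryptography package, purely illustrative:

      from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

      alice = X25519PrivateKey.generate()
      bob = X25519PrivateKey.generate()

      # Both sides derive the same 32-byte shared secret; run it through a KDF (e.g. HKDF) before use.
      assert alice.exchange(bob.public_key()) == bob.exchange(alice.public_key())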

      1. 3

        I wish I could just tell everyone to use Curve25519, but unfortunately, as long as FIPS is still barring it, I don’t think it will get adopted at the rate we all want.

        1. 3

          Directly using RSA to encrypt plaintext

          I’ll be the dumdum here and ask why you should not do this. I see that encrypting a symmetric key for the message using RSA is recommended instead. Why? :)

          1. 3

            A few reasons. One: you can only encrypt things with RSA up to the size of the key, so if you want to encrypt a large message you just can’t do it in a single shot with RSA. You might design some sort of multi-block RSA encryption scheme, but then the problem you face is that RSA encryption is significantly slower than a symmetric cipher like AES.

            Finally, I’d like to note that in general I think people should be skeptical of designs that involve encrypting anything with long-term RSA keys: https://alexgaynor.net/2017/apr/26/forward-secrecy-is-the-most-important-thing/
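
            Concretely, the “encrypt a symmetric key with RSA” pattern looks roughly like this; a sketch with the Python cryptography package, with illustrative names and sizes:

            import os
            from cryptography.hazmat.primitives import hashes
            from cryptography.hazmat.primitives.asymmetric import padding, rsa
            from cryptography.hazmat.primitives.ciphers.aead import AESGCM

            recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

            data_key = AESGCM.generate_key(bit_length=256)  # fresh symmetric key per message
            nonce = os.urandom(12)
            ciphertext = AESGCM(data_key).encrypt(nonce, b"a message of arbitrary length", None)

            oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                                algorithm=hashes.SHA256(), label=None)
            wrapped_key = recipient.public_key().encrypt(data_key, oaep)  # only 32 bytes ever touch RSA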

            1. 1

              Also: the amount you can encrypt per “block” is deceptive, because there’s an amount of padding necessary for security, and encrypting correlated bits under RSA makes error oracle attacks more feasible. There is in practice virtually never a reason to encrypt directly with RSA.

        1. 5

          I mentioned this on HN (and of course pinboard made fun of me for it on twitter) but it’s not necessary for your phone to have all your email on it. I have a pretty extensive email archive (no idea why, I’ve never looked at it) but it’s in the form of tar files stored offline. I can’t access it in an airport.

          Even if you use gmail for everything, you can make a second account that all your travel email, hotel reservations and airline tickets etc., get sent to. Hook up U2F on the main account, and leave the key at home when you travel. This is a reasonable precaution to take even if you love border patrol. There’s no reason to have access to the email that subsequently grants access to your retirement account while hitchhiking with strangers across Europe. I don’t think such precautions are burdensome even for casual users, and 90% of what a privacy advocate would do to protect themselves from the jackboots is applicable to protecting oneself from pickpockets.

          1. 2

            I don’t see him making fun of you on Twitter but do think it’s a bad idea to encourage most users to get email off their phones (their most secure device) and onto their computers (their least secure device).

            1. 2

              Is it common for people to have email only on their phones? All the people I know who have the Gmail app on their phone use it in addition to the Gmail webapp on their computers, not instead of it. I could be out of the loop on usage trends though.

              1. 1

                The device used to access email is somewhat independent of how frequently you rotate your archives. Or, like I said, whether it’s logged into two accounts or just one while traveling.

              2. 1

                I don’t think such precautions are burdensome even for casual users

                I do, to be honest, every user test I do leaves me with the conclusion that we should offer less configuration for people and make things easier to get started with.

                1. 1

                  I bought my parents a chromebook and set them up with a gmail account for banking, separate from the one used to email friends. They’re not super savvy and they make it work. I could have just told them to do it too, and I think they could have done so.

                2. 1

                  I’m very happy with the two-account solution, though in my case it was motivated by not wanting work email on my phone more than by privacy (that’s just an added bonus). I initially resisted putting email on my phone at all, but it eventually became too much of a hassle not to have access to electronic boarding passes, hotel reservations, etc., so now I have an account only for those things.

                1. 12

                    There is something mind-numbing about HN that makes it difficult to be a part of that community. You almost know exactly how everyone is going to react to any given post, and you have a pretty good idea of what’s going to get to the top of HN every day. It’s still useful as something similar to TechMeme, but the community just isn’t fun to be a part of.

                  1. 18

                    There is something mind-numbing about HN that makes it difficult to be a part of that community.

                    It’s the size.

                      Lobste.rs feels better because it’s still small, but if we grow, this feeling will fade. There’s nothing wrong with that; it’s just how communities evolve.

                    1. 6

                        But it’s nevertheless paradoxical: the larger communities become, the less diverse they are, not more.

                        The solution would be to join many small communities, which still gives access to many people, rather than a few large ones.

                      1. 20

                        I found that Parable of the Polygons, a short essay with fun interactive visualizations, made that intuitive to me. Larger groups will default to be less diverse than smaller ones without constant work to keep that from happening.

                        1. 4

                          I love that visualization.

                            I am not sure how you arrived at that conclusion though. If I remember it correctly, the article arrives at the conclusion that demanding diversity lowers segregation. But this doesn’t necessarily mean larger groups are less diverse.

                          Do larger groups actually tend to be less diverse? I really don’t know, but looking at tropical rain forests, I wouldn’t bet on that. I guess it depends on the initial condition.

                          1. 3

                            Yes, your summary is accurate. I guess what I’m bringing in addition to what the article says is my knowledge that demanding diversity is easier in smaller groups because there are few enough people hostile to it that it’s practical to engage each one directly and talk through things. The article seems to mostly be thinking about where people live, which is certainly a very important topic, but it’s doing so at an abstract level that can be applied in other ways also.

                            There’s further discussion needed to adapt the article’s thesis - that diversity will not happen unless people actively prefer it, even if weakly, over homogeneity - to online communities. Everywhere you look at polygons “moving” in the visualizations, imagine that what they’re specifically doing is focusing more of their attention on Lobste.rs instead of on Hacker News… Then think of it from the narrow perspective of us being Lobste.rs; we perceive these movements as our community growing.

                            People who actively prefer diversity are a tiny fraction of the general population. It’s not something that there’s consensus on on Lobste.rs, but certainly the fraction is larger here. So most times when the community grows, the growth brings us towards the larger world’s status quo.

                            In retrospect, I am still glad I cited the article because it’s very important background information, but I appreciate your questioning my reasoning, and I hope I’ve elaborated a bit.

                        2. 2

                            I was thinking it would be awesome to have a reddit-esque platform that randomly creates the perfect size communities: large enough that there’s always discussion, but small enough that you begin to recognize a lot of the posters.

                          1. 3

                            That sounds exactly like how subreddits work! There are many small, active communities there if you are able to find them.

                        3. 3

                          I agree with this. I’ve never seen a large community be surprising, change direction, or try to learn from its failures.

                          There’s still interesting stuff to be said about what makes it “large”, and how a large IRC channel is far fewer people than a large web forum. But I don’t have much in the way of insight there, so…

                          1. 1

                            Until lobster.s2, shhh, don’t tell anyone!

                          2. 5

                            There are some topics on lobste.rs that are completely predictable, but not many.

                            1. 7

                              INT. LOBSTE.RS CONFERENCE ROOM

                              PERSON 1: As an OpenBSD developer, I—

                              PERSON 2: You’re an OpenBSD developer? That’s funny, so am I!

                              PERSON 3 (from under the table): Hey, I’m also an OpenBSD developer!

                              PERSON 4 (emerging from the ductwork): Me too!

                              POTTED PLANT IN THE CORNER: I, too, am an OpenBSD developer.

                              1. 3

                                Anything about VPNs :P

                            1. 13

                              I was hoping for a section on “do use these things: ENV variables that hold filenames that contain secrets” or something like that. I don’t use docker, but I would like to keep secrets out of environment variables. What are good ways to do that?

                              1. 7

                                 Generally you bake an encrypted file into your image; that file is read and decrypted on app start, and you can fetch the decryption key from Vault or similar.

                                If you use kubernetes, it supports exposing secrets as files as prescribed by the post.
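
                                 On the application side, the “env var points at a file” pattern the post asks for ends up looking something like this (DB_PASSWORD_FILE and the path are made-up names):

                                 import os
                                 from pathlib import Path

                                 # Only the *location* is in the environment; the secret itself lives in a
                                 # tmpfs-mounted file (e.g. a Kubernetes Secret volume or a docker secret).
                                 secret_path = os.environ.get("DB_PASSWORD_FILE", "/run/secrets/db_password")
                                 db_password = Path(secret_path).read_text().strip()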

                                1. 3

                                  A great tool for this is SOPS which supports both PGP and AWS’s KMS:

                                  https://github.com/mozilla/sops

                              1. 16

                                Speed isn’t why I prefer Go to Python; static typing is. I too used to think I was more productive in dynamically typed languages. Wow, was I wrong.

                                1. 3

                                   Yeah–projects like TypeScript and Flow (adding types to JS, with no perf boost) show that a decent number of folks value this. My experience is that it’s initially faster to write small bits of code without writing out static types. But eventually your work involves just as much time tweaking and debugging and trying to make changes (sometimes in code used from many places) with high confidence you haven’t broken anything, which is where the navigation and other kinds of checking that tend to be associated with static typing can really help.

                                1. 7

                                  Dude makes HackForums accounts, writes a RAT (Trojan) claimed to be for “budget conscious school administrators” and the like, sells it for $25 a pop, gets arrested and charged with creating and selling a hacking tool.

                                  Seems reasonable enough. I’m not a fan of the FBI but I don’t see this guy’s defense standing up. He’s using the tired excuse of “it’s for Windows administrators, I had no idea hackers would use this for bad stuff!” while he exclusively sold it on HackForums. Get real.

                                  1. 11

                                    This seems like a really dangerous argument. The vast majority of security research tools are created with the certainty that they will be used for illegal purposes as well as (and possibly more than) for legal ones.

                                    Should the creators of metasploit or the aircrack suite or Kali Linux be sent to prison?

                                    1. 11

                                      I think there’s an educational aspect to metasploit. I’m having a harder time figuring what I’d learn from a tool that remote enables webcams without activating the recording light.

                                      1. 5

                                        Selling it, especially on a forum called “HackForums”, feels qualitatively different from producing it in the first place.

                                        1. 6

                                          If this were advertised on Hacker News, most people would feel the same way. Do you know anything about HackForums? It’s important to go beyond how things feel, especially when life-destroying consequences are at hand.

                                          1. 3

                                            I’m not sure that’s germane here. The “hack” in “HackForums” does in fact refer to the kind of “hacking” “most people” think of when they hear the word.

                                            1. 2

                                               It’s obvious to us what “hack” in “HackForums” stands for, but even if it were “cannibalrecipes.com”, it doesn’t mean it’s a malicious or harmful act to sell cookbooks there. This is not the same as giving a gun to a criminal. While the tool he sold was preferred by some malicious users, the tool’s existence didn’t impact the availability of interchangeable ones.

                                        2. 2

                                          Of course not, but nobody is saying they would be.

                                          Metasploit wasn’t sold at all, let alone on “HackForums”.

                                        3. 6

                                          The article seems to indicate that he’d been active on that forum as a young kid.

                                          If you wrote a new tool, wouldn’t you tell your online friends about it?

                                          It looks bad, but his proactive efforts to prohibit illegal use should be sufficient to demonstrate that criminal use was not the only use and not his sole intent.

                                          1. 2

                                             He was more than simply active on HackForums. He was part of a secret Skype group of people who met on HackForums that included Zachary Shames, who was selling keyloggers to people specifically for them to use to own machines, and he sold services to Shames.

                                          2. 5

                                            Is HackForums suspicious? I was under the impression it was mostly a place where curious security enthusiasts traded tutorials, rather than a carder market or something like that.

                                            1. 1

                                              People can visit the site and come to their own conclusions, but that’s not the impression I got browsing around the hackforums site for a few minutes.

                                              If it’s not suspicious, sketchy, and borderline illegal, then I don’t know what would be. “We’re just learning and having fun,” is not at all the message I got.

                                          1. 27

                                            I pulled the indictment from PACER. The story is oversimplifying the case.

                                            The indictment is far more concerned with Huddleston’s affiliation with Zachary Shames, who was convicted (apparently dead-to-rights) for selling a keylogger called “Limitless”. The indictment mentions Limitless more than it mentions NanoCore. Shames wasn’t very smart: the DOJ has records of him providing tech support to users who were clearly using his keylogger to harm people.

                                            Huddleston has two big problems. The first is that he sold licensing software to Shames for the Limitless keylogger. The second is that the DOJ apparently has Huddleston and Shames in a Skype group together talking about this stuff.

                                            The Beast article snarks about the indictment mentioning HackForums repeatedly. But the Beast article doesn’t think it’s important for you to know about the HackForums Skype group Huddleston and Shames shared; in fact, Shames himself gets only a tiny sliver of the article, despite being the fulcrum of the indictment.

                                            RAT software theoretically has legitimate uses. But, obviously, we all know that most RAT software isn’t legitimate. NanoCore sure wasn’t. It has a DDoS botnet tab, for Christ’s sake. Huddleston’s attempts to position it as legitimate software are about as compelling as the “no copyright claimed” comments on a Youtube video.

                                            But having said that: it’s unlikely Huddleston would be in the amount of trouble he is in had he simply written a malicious RAT. His problems are his connections to a criminal conspiracy that got busted.

                                            1. 3

                                              we all know that most RAT software isn’t legitimate. NanoCore sure wasn’t. It has a DDoS botnet tab, for Christ’s sake.

                                              Well, maybe the tool was intended to be sold to the government.

                                              1. 8

                                                Maybe the sale of the tool was actually an interpretive dance performance art.

                                                1. 1

                                                  ‘a’ government. Don’t know about that. Just pointing out there’s more than one.

                                              1. 3

                                                good article. one nit-pick:

                                                Memory safety will not prevent an attacker who has obtained your HMAC key from forging a malicious credential that, when deserialized, can call arbitrary Ruby methods (yes, this was a real vulnerability in older versions of Rails)

                                                HMAC keys are secrets, as MACs are symmetric signatures where the key is used to both sign and verify.

                                                and “I got your secret” => “I can execute code on your machine” is true for most server setups.

                                                1. 5

                                                  Not all secrets are equivalent. Losing just the session HMAC key, it’s reasonable to expect an attacker would only be able to forge sessions, not be able to execute arbitrary code. (This was a bug in the deserializer, btw—the HMAC part is only a little bit relevant.)

                                                  1. 4

                                                    Care to make a list of the popular environments in which this is true, and why?

                                                    1. 2

                                                       I don’t actually have the spreadsheet handy, but just about every web framework has had similar bugs where a forged cookie results in deserialization hijinks or setting the admin flag and accessing some debug console or some other game-over result. No?

                                                      More generally, people like to downplay file traversal vulns, but I kind of assume file traversal -> RCE is an easy escalation. Is that true(ish)?

                                                      1. 6

                                                        It’s truish. Like, candidly: if there was an HN thread where someone said “that’s no big deal it’s just file traversal” I’d play the “file traversal is usually RCE gameover” card, and if there was a thread where someone said “file traversal is always the end of the world” I’d play the “there are platforms where it isn’t” card.

                                                        1. 1

                                                          Heh, so the first draft of my comment was more like “I seem to recall you saying that losing the secret meant RCE”, but it wasn’t meant to be an accusation. Glad I didn’t misremember at least.

                                                      2. 1

                                                        SSH, GMail, Vanguard. It’s meant as a general statement — if you don’t keep your secrets secret, that’s a problem right there, not that giving away session tagging keys leads to RCE.

                                                        If cookie data is tagged, I think it’s reasonable to assume it might be executable (e.g. a Python pickle string), because you assume secrets stay secret and storing tagged pickle strings in cookies is a lot easier than serializing / deserializing objects more manually, and if secrets don’t stay secret, session tagging keys are probably not as high on your list of concerns as some API tokens.
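
                                                         To sketch the distinction being argued about (stdlib only, the names are mine): an HMAC-tagged cookie whose payload is JSON degrades to “attacker can forge sessions” if the key leaks, while a pickle/Marshal payload turns the same key loss into code execution, because those deserializers run behaviour while decoding.

                                                         import base64, hashlib, hmac, json

                                                         KEY = b"server-side-secret"

                                                         def sign(session: dict) -> str:
                                                             body = base64.urlsafe_b64encode(json.dumps(session).encode()).decode()
                                                             tag = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
                                                             return body + "." + tag

                                                         def verify(cookie: str) -> dict:
                                                             body, tag = cookie.rsplit(".", 1)
                                                             expected = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
                                                             if not hmac.compare_digest(tag, expected):
                                                                 raise ValueError("bad signature")
                                                             # json.loads never executes code; pickle.loads / Marshal.load can.
                                                             return json.loads(base64.urlsafe_b64decode(body))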

                                                        1. 2

                                                          I think it’s worth observing that arbitrary-file-read is often worse than it looks, while also not accepting the legitimacy of designs where it is. We should work to avoid designs where file-read is RCE.

                                                    1. 5

                                                      Relevant:

                                                      https://twitter.com/dinodaizovi/status/845028625476976640

                                                      Bear in mind: a 2008 iPhone bears almost no resemblance to a 2017 iPhone. The platform security architecture of the phone has changed radically in the decade since it was released.

                                                      1. 5

                                                        I’m not seeing anything new that wasn’t already presented by researchers at @BlackHatEvents 2007-2012 or released in public jailbreaks.

                                                        It’s not just that (old) iPhones can be broken into, but that they made a strong effort to actually do so.

                                                        You could probably look at this extreme focus on compromising endpoints as a positive - it means that end-to-end encryption is working.

                                                        1. 8

                                                           I think the important thing to keep in mind is that we already know that the USG IC interdicts and tampers with computing equipment. That’s not newsworthy. What would be newsworthy would be if their efforts to do so with the iPhone had survived to the modern iPhone platform — which, because reasons, I believe has a design informed by knowledge of what USG is doing.

                                                      1. 36

                                                        From recent experience with 3 large teams that do pull-request-development, I reject the author’s premise that people don’t actually review code. I’m tasked with reviewing pull requests for security and I usually lose the race to point out flaws in code; the non-security reviewers beat me to the punch.

                                                        Most PRs I see have multiple stylebook and efficiency critiques and almost all get iterated at least once and usually multiple times before being merged.

                                                        I also reject the premise that the data being captured in Jira tickets is already present in the git log. No, it isn’t. The git log is “what”, but not “why”. Most Jira tickets have at least 2 people commenting in them, something no git commit comment has. Also: being able to demonstrate traceability of code back to tickets is super valuable if you have regulated security requirements.

                                                        From what I can tell, in real teams, this process works really well.

                                                        1. 9

                                                          I agree with your points overall. We do PR development and we do good, real reviews.

                                                          However, the git commit message should be “why”, not “what”. The “what” is in the git diff; the message should tell you why it was changed.

                                                          It’s a good point, though, that the commit message is only one person’s point of view, and something like Jira will often have multiple people commenting.

                                                          Edit: One thing I thought of: the git summary line is also typically “what” rather than “why”.

                                                          1. 3

                                                            Minor note: there is a case where people doing “code reviews” get bogged down in stylistic things or best practices and can gate and slow down deployment of new features.

                                                            That is a problem with the people, and not with the idea–it works pretty well once that’s addressed.

                                                            EDIT: Downvote incorrect? How, exactly, is this incorrect? I’ve seen this happen multiple times on teams that use code review. Fixing it is part of getting the team up to speed.

                                                            1. 5

                                                              A codebase should have a style guide. If the code is within the style guide, then there is no problem. If the code doesn’t fit the style guide, then fix it. It’s not the reviewer’s fault that the author didn’t follow the style guide.

                                                              Sometimes I have seen people try to dismiss review comments as stylistic when they are really more than that. Spaces around parameters is a stylistic issue. Validating input and checking error codes are not. Avoiding code duplication is not a stylistic issue. Documenting code is not a stylistic issue.

                                                              Oftentimes the style comments come first because they jump out at the reader when they are reading the code to try to understand it. If the comments never progress beyond style, then the reviewer hasn’t done their job.

                                                              1. 3

                                                                get bogged down in stylistic things or best practices

                                                                If it is being described as “bogged down” then it might be too much, but in general a large part of code reviews should be making sure everyone is following the team coding standards and best practices. Tests can ensure the thing actually does what it is supposed to so reviews aren’t really about verifying functionality.

                                                              2. 1

                                                                If your team isn’t actually providing meaningful feedback during code reviews, that’s usually a problem with the team, not the process. My team is hit or miss: we have a few people who don’t read anything and hit approve, and some who actually provide meaningful feedback. And that’s a people problem that we’re actively working on fixing.

                                                              1. 4

                                                                Better yet, use an editor that integrates gofmt, goimports, and go vet. It’s kind of magical: errors and warnings highlighted automatically, packages imported (or unimported) as needed, formatting is always correct.

                                                                  1. 1

                                                                     gometalinter delivers further magic, from third-party static checks. A couple of key ones are errcheck (yell about implicitly discarded errors) and ineffassign (catch unused assignments to existing vars, not only unused vars), but others (gosimple, unconvert, deadcode) find spots in your code where you did something redundant or just odd-looking, which sometimes point to editing or thinking errors. (You probably don’t want to use all of the linters all the time; you can pick your list with command-line options.)

                                                                    For Go beginners that aren’t already using vim/emacs (for which there’s vim-go/go-mode), I can recommend VS Code’s Go extension, which supports fmt, linting, and completion, and a bunch of other stuff. People vary in how much hand-holding they want once they’re familiar with Go, but an environment offering lots of info like this seems really helpful for getting started.

                                                                  1. 3

                                                                    Prepared statements resist SQL injection, but are not immune to injection, and I always break out in hives when people describe their complete SQL injection security strategy as “we use prepared statements”.

                                                                    1. 7

                                                                      There are roughly two kinds of SQL injection that are possible when prepared statements are used in a PHP application.

                                                                      1. Higher-order SQL injection (i.e. stored procedures).
                                                                      2. Weird bypasses if you’re using emulated prepared statements rather than actual prepared statements.

                                                                      Neither are relevant to our codebase.

                                                                      If there’s a third that I’m not aware of, I promise you most PHP developers aren’t either, and should be made aware.

                                                                      1. 3

                                                                        This depends on your database, doesn’t it? There are versions of MySQL where you can’t even parameterize a LIMIT. There are databases that can’t parameterize column names for search queries with selectable columns. It can be hard to build IN queries. There are lots of little exceptions.

                                                                        It’s the confidence that “parameterized queries means no SQL injection” that scares me. We still find SQLI in applications despite the fact that most teams now pretty reliably parameterize queries.

                                                                        1. 4

                                                                          Allowing end users to supply column names implicitly invites injection, and a strict whitelist is the sane solution here. The linked post in that section explains the caveats in detail.

                                                                          It sounds to me like “we aren’t actually using prepared statements for this query” is the culprit, however.
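
                                                                          For what it’s worth, the combination usually looks something like this (sqlite3 from the stdlib; table and column names are hypothetical): values go through placeholders, IN lists get generated placeholders, and identifiers such as a user-picked sort column go through a whitelist because they can’t be parameterized.

                                                                          import sqlite3

                                                                          ALLOWED_SORT_COLUMNS = {"created_at", "title"}

                                                                          def search(conn: sqlite3.Connection, term: str, author_ids: list, sort_by: str):
                                                                              if sort_by not in ALLOWED_SORT_COLUMNS:
                                                                                  raise ValueError("unsupported sort column")
                                                                              marks = ",".join("?" for _ in author_ids)  # IN (...) built from placeholders
                                                                              sql = (f"SELECT * FROM posts WHERE body LIKE ? "
                                                                                     f"AND author_id IN ({marks}) ORDER BY {sort_by}")
                                                                              return conn.execute(sql, ["%" + term + "%", *author_ids]).fetchall()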

                                                                    1. 12

                                                                      Reading Tarn’s interviews makes me always want to become a video game developer.

                                                                      “But if it were something harder, like, what if the price of teleportation is uncontrolled nausea for a week and you lose a quarter of your blood, or something like that? I don’t know how much blood people can live without. But you’re just completely out of it for a week or a month. There’s still cases where teleport is valuable. So then you need to teach them sort of a cost/benefit analysis type thing. Which, I don’t want to be too flippant, but it’s not much different than having a different movement value for a forest than a grassland. There’s a cost to this movement, and the cost is, ‘how much do I value my blood? And how much do I value not being sick all the time?’”
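
                                                                      That “movement value” framing really is just edge weights on a graph; a toy stdlib sketch, with made-up costs:

                                                                      import heapq

                                                                      def cheapest_cost(graph, start, goal):
                                                                          # graph: {node: [(neighbor, cost), ...]}; plain Dijkstra
                                                                          frontier, seen = [(0, start)], set()
                                                                          while frontier:
                                                                              cost, node = heapq.heappop(frontier)
                                                                              if node == goal:
                                                                                  return cost
                                                                              if node in seen:
                                                                                  continue
                                                                              seen.add(node)
                                                                              for nxt, step in graph.get(node, []):
                                                                                  heapq.heappush(frontier, (cost + step, nxt))

                                                                      terrain = {
                                                                          "camp": [("grassland", 1), ("forest", 3), ("teleport", 50)],  # 50 ≈ a week of nausea
                                                                          "grassland": [("fortress", 1)],
                                                                          "forest": [("fortress", 1)],
                                                                          "teleport": [("fortress", 0)],
                                                                      }
                                                                      print(cheapest_cost(terrain, "camp", "fortress"))  # 2: walking beats teleporting here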

                                                                      1. 9

                                                                        The flip side of this is that once you see Dwarf Fortress for graph traversal and topological sort, it loses a lot of its magic.

                                                                        1. 9

                                                                          Physics story time!

                                                                          In quantum mechanics, there’s this thing called the Schrödinger equation. As an extremely oversimplified description, it says that you can describe an entire quantum system in terms of its “Hamiltonian” operator. It’s a partial differential equation, so really messy to work with, but hypothetically you can reduce everything in quantum mechanics, classical mechanics, chemistry, biology, weather patterns, etc. to solving the Hamiltonian. That doesn’t mean, though, that it’s easy. Here’s roughly where we are in terms of complete solutions.

                                                                          • Proton: Trivial.
                                                                          • Proton + 1 electron: Tricky, but we solved this almost a century ago.
                                                                          • Proton + 2 electrons: Holy shit what the fuck is going on

                                                                          Even with a single unified equation, you very quickly hit systems where you’re pretty much stuck. And that’s just three particles! Once you give up analytic solutions, you’re now in a world of emergent phenomena, where small quantum rules avalanche through a system and lead to bizarre macro-level properties. For example, if you model a metal as a free sea of electrons and add a slight force coming from the ions in the lattice, you suddenly get “forbidden zones” of electron energy, aka band gaps. Then that cascades to make insulators and semiconductors possible, which cascades into transistors, which cascades into, well, computers. So a very slight change in the electron model gives you a universe where I can ramble about my undergrad classes to a complete stranger who may or may not be on the other side of the world.

                                                                          Dwarf Fortress might just be graph traversal and topological sort. Glass is just a bunch of harmonic springs. Weather is just Newton’s equations spread over a lot of particles. Doesn’t mean that we understand it, can predict it, or don’t find it mysterious and full of wonder.

                                                                          1. 1

                                                                             Funny that the same 1-2-3 pattern holds just as well for Newtonian gravity and orbits.

                                                                            Single object in empty space: trivial.
                                                                            Two objects: Kepler’s laws hold precisely.
                                                                            Star-planet-moon: Well, up to some approximation…
                                                                            Three stars of comparable masses: oh no not this.

                                                                          2. 8

                                                                            But does having a simple structure underneath weaken or strengthen magic-ness (especially if the details in the next level are carefully thought out)? After all, a digital clock is less magic than a digital clock running on Conway’s Life.

                                                                            That’s probably a matter of perspective.

                                                                            1. 6

                                                                              Is that different from seeing human relations as applied decision theory?

                                                                              Which immediately suggests that Tarn should add in irrationality and biases to dwarf logic… assuming he hasn’t already.

                                                                              Losing my blood probably will hurt a bit immediately and may have serious long-term impacts, but those are quite a bit more difficult to measure so let’s assign that negative value at 1/10th its actual cost.

                                                                              1. 4

                                                                                To be fair, we don’t know that the Universe we’re currently in isn’t much more than graph traversal and topological sorts.

                                                                                1. 1

                                                                                  What is the source for that, if I may ask? Not that I doubt you, but I’d be interested in explanations of how DF works under the hood.

                                                                              1. 16

                                                                                Am I the only one around here who doesn’t mind whiteboard interviews? Ultimately you’re just discussing a technical topic with someone and drawing a few boxes and arrows is really useful.

                                                                                The last time I did a whiteboard interview I didn’t 100% nail the CS puzzler question and, given the offer I got, the interviewer really was mostly interested in my thought process and not my ability to hand indent python code on a whiteboard. I’ve had this experience more than once. Written communication is a skill and being able to communicate your thought process to someone else isn’t as artificial an environment as some make it out to be.

                                                                                1. 11

                                                                                  I’m a broken record about how bad whiteboard interviews are, but I think I generally do pretty well on them. After ~15 years of consulting and product management, I’m pretty comfortable in a neutral-to-hostile room. I think I can talk my way through most situations.

                                                                                  But that’s one of the things that scares me about whiteboard interviews. Not that they’re insurmountable hurdles for my own career, but that they’re too easy, and that I get undue positive evaluations just from the ability to remain confident-sounding during them, and, more importantly, by being able to redirect questions and reframe interviews.

                                                                                     I’ve worked with too many people who interviewed well but were almost total zeroes when it came to delivery to put any faith in ad-hoc interview processes.

                                                                                  1. 9

                                                                                    I came here to say this. I’d much rather go to a whiteboard interview. I don’t do riddles or competitive programming, but I’d appreciate being tested a little more than “tells us about your last project”.

                                                                                    Seeing DHH’s tweet makes me cringe. Sure it works for him, and it works for a lot of people, but for me it is important to know some CS basics. I wouldn’t mind if he said quicksort, or something a little more complicated like that … but bragging that you can’t write the simplest algorithm there is, how is that a good thing?

                                                                                     However, I do agree that take-home exercises are good, and I enjoy those too. But they’re not as representative if you don’t ask the candidate to do something easy live.

                                                                                    1. 3

                                                                                      I think it’s not necessarily a question about substance but more about style. Not everyone’s coding style is amenable to standing in front of people and hand-coding an algorithm while people are sitting there judging every character you write. You’re doing this while you’re expected to talk through what you’re writing on the board. That doesn’t come naturally to some people – when you’re sitting at a computer implementing an algorithm, you’re not talking about it out loud.

                                                                                      Another aspect of this is that a candidate can do better by studying for an interview. That’s a sign of a broken candidate vetting process. A candidate with years of experience and a top performer at their previous job could possibly be rejected based on questions that they may not have seen in years.

                                                                                      CS basics are important but some of that knowledge fades over time. I couldn’t give you a good definition of polymorphism without looking it up but I know what it means. I think the issue comes down to treating every candidate as though they were fresh out of school and the further they are from that, the more likely they are to fail those “basic” CS questions in this type of interview environment.

                                                                                      EDIT: better explanation than mine: https://medium.com/make-better-software/against-the-whiteboard-f1df0013954f#.hx2sgjnrl

                                                                                      1. 3

                                                                                         DHH has written some valuable software, but the world is bigger than Rails. So when I hear that in the early days of Rails things leaked memory so badly that app servers had to be restarted every few minutes, I think that’s bad. I realize that for the type of work his websites were used for, that wasn’t a critical defect. In most software I write, it would be a critical defect.

                                                                                        1. 1

                                                                                          So when I hear that in the early days of rails things leaked memory so bad that app servers had to be restarted every few minutes I think that’s bad.

                                                                                          more on this?

                                                                                          1. 3

                                                                                            more on this?

                                                                                             There was a popular (now depublished) blog post by Zed Shaw called “Rails is a Ghetto” where he was railing against a lot of things he perceived as wrong in the Rails community. One of them was that he wrote a critical piece of software, but no one would pay him for it. It was one of the first specialized HTTP servers in Ruby, called Mongrel. Along with that, he wrote a critical gem (fastthread) that fixed MRI’s heavy threading problem back then.

                                                                                            In that post, he mentioned in passing that DHH told him that their initial web stack was so bad that they had to restart it every ~100 requests because it leaked memory and Mongrel improved things a lot.

                                                                                             Now, having programmed Ruby since version 1.6, I think this is probably true. Ruby back then was a fringe language built by a few people who were good language designers, but not necessarily runtime implementors. Also, the runtime was built for scripting workloads, so threading, servers and the like were a not-so-well-tested case. With Ruby becoming popular, things improved massively, and from 1.8.7 on I’d call MRI a runtime on par with Python and others. 1.9 finally made it somewhat modern.

                                                                                             But as /u/shanemhansen says: that wasn’t too critical for these kinds of applications. For example, it was standard for PHP websites to follow a CGI-like “one process per connection” approach to make sure there were no memory problems due to leaks. Restarting a set of webservers every nth request is also acceptable in such an environment, as long as there is always another one around.

                                                                                             I think the core of DHH’s argument is rather that he’d like people who understand those subtleties and possibilities, and you can’t really test that on a whiteboard.

                                                                                            1. 2

                                                                                              The relevant bit from Rails is a Ghetto:

                                                                                              I believe, if I could point at one thing it’s the following statement on 2007-01-20 to me by David H. creator of Rails:

                                                                                              (15:11:12) DHH: before fastthread we had ~400 restarts/day
                                                                                              (15:11:22) DHH: now we have perhaps 10
                                                                                              (15:11:29) Zed S.: oh nice
                                                                                              (15:11:33) Zed S.: and that’s still fastcgi right?

                                                                                              Notice how it took me a few seconds to reply. This one single statement basically means that we all got duped. The main Rails application that DHH created required restarting ~400 times/day. That’s a production application that can’t stay up for more than 4 minutes on average.

                                                                                        2. 2

                                                                                          I think people are too concerned about the whiteboard itself. It isn’t about using a whiteboard or not, it’s what you do with it.

                                                                                          Quizzing candidates about CS problems that have nothing to do with the day-to-day work and penalizing them if they don’t get it perfect on the first try is lame, whether or not you use a whiteboard for it.

                                                                                          On the other hand, getting candidates to write some kind of code somewhere that does something useful and discussing their thought process on techniques, tradeoffs, etc is a good idea, and I’d be worried about working somewhere that didn’t do that. A whiteboard can be a nice way to do that, but there’s lots of other ways, including paper, shared text documents, take-home projects, etc.

                                                                                        1. 2

                                                                                          These kinds of marketing posts seem to do better on Lobsters than on that other site. This one is by an appsec monitoring company, and employs the time honored tradition of making a top-10 list that sneakily embeds their product into it. I’ll save you a read: their list, annotated:

                                                                                          1. Follow the OWASP Top Ten. If it’s 2006, you might have a decent shot at covering what a smart tester will find just by working from the OWASP list. But it’s 2017. “OWASP Top 10” is a sort of useful shorthand in the trade for “all the different web app bugs, not all of which are captured in the OWASP Top 10, but at least you know that I’m talking about SQL Injection type stuff rather than use-after-free vulnerabilities”.

                                                                                          2. Get An Appsec Audit. Sure? A good audit is going to cost between $15,000 and $25,000. If you pick the wrong vendor (and there are lots of wrong vendors to pick), you get to spend that money for basically nothing. How often are you doing audits and how much does your app change in those intervals?

                                                                                          3. Implement Proper Logging. Okay.

                                                                                          4. Use Real-Time Security Monitoring and Protection or Web Application Firewalls. If it’s me, writing this marketing piece, I probably don’t lump my product in with WAFs, which are not an especially well-regarded product category.

                                                                                          5. Encrypt Everything. Missing practice: how to effectively encrypt anything so that a single game-over bug on your server doesn’t moot all the “encryption”.

                                                                                          6. Harden Everything. This step is the “???” between “collect underpants” and “profit”.

                                                                                          7. Keep Something Up To Date.

                                                                                          8. Keep Something Else Up To Date.

                                                                                          9. Know When To Keep Things Up To Date.

                                                                                          10. Never Stop Believing In Yourself.

                                                                                          1. 4

                                                                                            Maybe the admins should take a look at the account that submitted this? It’s submitted 30 stories, of which 21 were from blog.sqreen.io, and 9 from blog.codacy.com, most of them of this listicle-ad style. No lobste.rs comments, just those submissions. Few of the posts actually got any upvotes, but it still seems like a source of spam/noise.

                                                                                            1. 2

                                                                                              These kinds of marketing posts seem to do better on Lobsters than on that other site.

                                                                                              The smaller community size means it takes much much less to reach the front page. That has its upsides (lower barrier to participating meaningfully) and downsides (vulnerability to spam like this).

                                                                                              People also tend to assume good faith. See this post in particular, which was one of the last from what turned out to be an auto-posting bot.

                                                                                            1. 40

                                                                                              Reprising and reformatting something I wrote on that other site about this:

                                                                                              The problem with JWT/JOSE is that it’s too complicated for what it does. It’s a meta-standard capturing basically all of cryptography which wasn’t written by or with cryptographers. Crypto vulnerabilities usually occur in the joinery of a protocol. JWT was written to maximize the amount of joinery.

                                                                                              Negotiation: Good modern crypto constructions don’t do complicated negotiation or algorithm selection. Look at Trevor Perrin’s Noise protocol, which is the transport for Signal. Noise is instantiated statically with specific algorithms. If you’re talking to a Chapoly Noise implementation, you cannot with a header convince it to switch to AES-GCM, let alone “alg:none”. The ability to negotiate different ciphers dynamically is an own-goal. The ability to negotiate to no crypto, or (almost worse) to inferior crypto, is disqualifying.
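
                                                                                                To make the “alg:none” point concrete, here’s a minimal sketch (plain Python standard library, no JWT library; the claim names are purely illustrative) of how trivially such a token is forged. Any verifier that lets the token’s own header pick the algorithm will treat this as structurally valid:

                                                                                                    import base64, json

                                                                                                    def b64url(data: bytes) -> str:
                                                                                                        # JWTs use unpadded base64url encoding for each segment
                                                                                                        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

                                                                                                    # Attacker-controlled header and claims; no key or signature required.
                                                                                                    header = {"alg": "none", "typ": "JWT"}
                                                                                                    claims = {"sub": "admin", "admin": True}

                                                                                                    forged = ".".join([
                                                                                                        b64url(json.dumps(header).encode()),
                                                                                                        b64url(json.dumps(claims).encode()),
                                                                                                        "",  # empty signature segment, as the "none" algorithm specifies
                                                                                                    ])
                                                                                                    print(forged)  # a well-formed token asserting whatever we want

                                                                                                A verifier that pins its algorithm up front (the Noise-style approach) rejects this outright; one that honors the header has to remember to reject “none”, and history shows implementations forget.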

                                                                                              Defaults: A good security protocol has good defaults. But JWT doesn’t even get non-replayability right; it’s implicit, and there’s more than one way to do it.

                                                                                              Inband Signaling: Application data is mixed with metadata (any attribute not in the JOSE header is in the same namespace as the application’s data). Anything that can possibly go wrong, JWT wants to make sure will go wrong.

                                                                                              Complexity: It’s 2017 and they still managed to drag all of X.509 into the thing, and they indirect through URLs. Some day some serverside library will implement JWK URL indirection, and we’ll have managed to reconstitute an old inexplicably bad XML attack.

                                                                                              Needless Public Key: For that matter, something crypto people understand that I don’t think the JWT people do: public key crypto isn’t better than symmetric key crypto. It’s certainly not a good default: if you don’t absolutely need public key constructions, you shouldn’t use them. They’re multiplicatively more complex and dangerous than symmetric key constructions. But just in this thread someone pointed out a library — auth0’s — that apparently defaults to public key JWT. That’s because JWT practically begs you to find an excuse to use public key crypto.

                                                                                              These words occur in a JWT tutorial (I think, but am not sure, it’s auth0’s):

                                                                                              “For this reason encrypted JWTs are sometimes nested: an encrypted JWT serves as the container for a signed JWT. This way you get the benefits of both.”

                                                                                              There are implementations that default to compressing plaintext before encrypting.

                                                                                              There’s a reason crypto people table flip instead of writing detailed critiques of this protocol. It’s a bad protocol. You look at this and think, for what? To avoid the effort of encrypting a JSON blob with libsodium and base64ing the output? Burn it with fire.
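
                                                                                                For concreteness, that alternative looks roughly like this. This is a minimal sketch using PyNaCl (a libsodium binding); the key handling and claim names are illustrative, and you’d still need to enforce expiry and anti-replay in your own claims:

                                                                                                    import base64, json
                                                                                                    import nacl.secret, nacl.utils

                                                                                                    # One static symmetric key, normally loaded from server-side secret
                                                                                                    # storage; generated fresh here purely for the sketch.
                                                                                                    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
                                                                                                    box = nacl.secret.SecretBox(key)

                                                                                                    def seal(claims: dict) -> str:
                                                                                                        # Authenticated encryption (XSalsa20-Poly1305); PyNaCl picks the nonce.
                                                                                                        ct = box.encrypt(json.dumps(claims).encode())
                                                                                                        return base64.urlsafe_b64encode(ct).decode()

                                                                                                    def open_token(token: str) -> dict:
                                                                                                        # Fails loudly on any tampering; no algorithm negotiation anywhere.
                                                                                                        return json.loads(box.decrypt(base64.urlsafe_b64decode(token)))

                                                                                                    tok = seal({"user_id": 42, "expires": 1500000000})
                                                                                                    print(open_token(tok))

                                                                                                No header, no negotiation, no choice of algorithm for an attacker to influence; the only moving parts left are the key and your own claim checks.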

                                                                                              1. 3

                                                                                                I have a related but somewhat OT question. In one of the articles linked to by the article [1], they say this:

                                                                                                32 bytes of entropy from /dev/urandom hashed with sha256 is sufficient for generating session identifiers.

                                                                                                  What purpose does the hash serve here besides transforming the original random number into a different random number? Surely the only reason to use hashing in session ID generation is if there’s no good RNG available, in which case one might do something like hash(IP, username, user_agent, server_secret) to generate a unique token? (And in the presence of server-side session storage, there’d be no point to including the secret in the hash, because its presence in the session table would prove its validity.)

                                                                                                [1] https://paragonie.com/blog/2015/04/fast-track-safe-and-secure-php-sessions

                                                                                                1. 2

                                                                                                  Yeah, if urandom is actually good, then hashing it serves no real purpose. (In fact if you want to get mathematical, it can only decrease the randomness, but luckily by an absolutely negligible amount). Certain kinds of less-than-great randomness can be improved by hashing (as a form of whitening), but no good urandom deserves to be treated that way.
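
                                                                                                    Illustrated (in Python rather than PHP, purely as a sketch): both of these produce a 256-bit session identifier, and the hash step in the second adds nothing if urandom is trustworthy:

                                                                                                        import hashlib, os, secrets

                                                                                                        token_plain  = os.urandom(32).hex()                        # 32 random bytes, hexified
                                                                                                        token_hashed = hashlib.sha256(os.urandom(32)).hexdigest()  # same entropy, pointlessly hashed

                                                                                                        # The stdlib helper does the first thing for you:
                                                                                                        token = secrets.token_hex(32)

                                                                                                    The hash only starts to earn its keep when the input isn’t uniformly random to begin with (the whitening case mentioned above).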

                                                                                                  1. 2

                                                                                                    The reason for that is PHP is weird. PHP hashes session entropy with MD5 by default. Setting it to SHA256 just minimizes the entropy reduction by this step. There is no “don’t hash, just use urandom” configuration directive possible (unless you’re rolling your own session management code, in which case, please just use random_bytes()).

                                                                                                    This is no longer the case in PHP 7.1.0, but that blog post is nearly two years old.

                                                                                                  2. 2

                                                                                                    Thanks for that very thorough dissection of JWT. Are there web app frameworks/stacks that do have helpfully secure and well-engineered defaults that you’d recommend?

                                                                                                    1. 1

                                                                                                      The post itself offers a suggestion (at the bottom): use libsodium.

                                                                                                      1. 1

                                                                                                        The author refers to Fernet as a JWT alternative. https://github.com/fernet/spec/blob/master/Spec.md

                                                                                                        However, Fernet is not nearly as comprehensive as JOSE and does not appear to be a suitable alternative.

                                                                                                        1. 2

                                                                                                          Hah, it seems the article changed a few times, and not just the title…

                                                                                                    1. 5

                                                                                                      This seems slightly silly.

                                                                                                      I mean, I certainly agree with the EFF in principle. But the CIA’s mandate is essentially to hack foreign powers. It goes directly against that if they dig up and then report zero-days. Telling the CIA they should stop doing their job is not going to be effective no matter how persuasively you frame it. The change the EFF is arguing for here has to happen at a higher level, and the US government has never shown any particular concern for the privacy or security of its citizens (and right now it’s certainly at a low ebb even by the usual mediocre standard).

                                                                                                      1. 3

                                                                                                        You make a good point about the role of the CIA, and I wonder if the globalization of software and hardware is going to make the jurisdictional roles of the FBI, CIA, DHS, and NSA much more confusing going forward. Say the CIA, in its efforts to gather intelligence on enemies of the state, discovers something that can affect both the homeland and the enemy: what responsibilities does the CIA have? Ethically, we should try to defend our citizens, but that’s not really the role of the CIA. Reporting it and getting the issue fixed could strengthen the defenses of the US and its citizens, but it would then reduce our ability to attack or learn. Should the NSA try to strengthen our defenses, and in the process let the CIA know of any vulnerabilities it can take advantage of to eavesdrop on enemies?

                                                                                                        1. 1

                                                                                                          Yeah,

                                                                                                          I think it would be better if the CIA didn’t exist; its very existence fundamentally undermines democracy.

                                                                                                          That said, if the CIA is going to exist, having it fight out a sort-of equal battle with the black hats in the realm of targeted surveillance seems far preferable to the various NSA programs that involuntarily enlist corporations and individuals in a program of mass surveillance.

                                                                                                          If the CIA suppressed a civilian agency’s or private company’s discovery of these things, that would be bad. But otherwise, this is doing the research that you’d expect happens in “black hat” labs and foreign agencies anyway.

                                                                                                          On the other hand, the EFF pretty much has to wag its finger at every misdeed. Their position prevents them from saying “oh, but this is OK, I guess,” since doing so would invite someone to argue that something much “worse” is OK too.

                                                                                                          1. 1

                                                                                                            Did you miss the reference to the Vulnerabilities Equities Process?

                                                                                                            1. 3

                                                                                                              What about it? The equities process is a joke; it’s actually a little insulting to everyone’s intelligence. You can’t use a bug for a few months to compromise high-profile targets and then disclose it; the act of disclosing it stands a very good chance of alerting those targets that you compromised them, and how you did it.

                                                                                                          1. 4

                                                                                                            I mean, this is pretty much all bad and written as if by a marketing communications manager, but this bit right here is impressive:

                                                                                                            And my personal favorite: you could even copy-and-paste part of your code from security handbooks! If for example, you realize you use weak cryptography, you could quickly solve the problem by taking free, plug-and-play cryptographic functions implementations that are available online: the well-known OpenSSL could be such an example. Begone, countless hours of development!

                                                                                                            1. 1

                                                                                                              That’s hilarious.