1. 6

    None of these tactics remove or prevent vulnerabilities, and would therefore be rejected by a “defense’s job is to make sure there are no vulnerabilities for the attackers to find” approach. However, these are all incredibly valuable activities for security teams, and they lower the expected value of attacking a system.

    I’m not convinced. “Perfect” is an overstatement for the sake of simplicity, but effective security measures need to be exponentially more costly to bypass than they are to implement, because attackers have much greater resources than defenders. IME, all of the examples this page gives are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation and detection are all very costly to implement while representing only modest barriers to skilled attackers. Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.

    1. 2

      You would end up with a system that is more secure against attackers with fewer resources. For example, you can make your system secure against all common methods used by script kiddies, but what happens when a state-level actor is attacking your system? As your threats get more advanced, I agree with the article: at higher threat levels it becomes a problem of economics.

      1. 2

        You would end up with a system that is more secure against attackers with fewer resources. For example, you can make your system secure against all common methods used by script kiddies, but what happens when a state-level actor is attacking your system?

        I think just the opposite actually. The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals. Whereas following the truism would lead you to make changes that would protect against all attackers.

        1. 4

          The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals.

          That’s not what I understand from this article. The attitude proposed by the article should, IMO, lead you to think of the threat model of the system you’re trying to protect.

          If it’s your friend’s blog, you (probably) shouldn’t have to consider state actors. If it’s a stock exchange, you should. If you’re Facebook or Amazon, your threat model is not the same as Lobsters’ or your sister’s bike repair shop’s. If you’re a politically exposed individual, exploiting your home automation Raspberry Pi might be worth more than exploiting the same system belonging to someone who is not a public figure at all.

          Besides that, I disagree that all examples are too costly to be worth it. Hashing passwords is always worth it, or at least I can’t think of a case where it wouldn’t be.

          To summarize with an analogy: I don’t take the exact same care of my bag when my laptop (or other valuables) is in it as when it only contains my water bottle, and Edward Snowden should care more about the software he uses than I need to care about mine.

          Overall I really like the way of thinking presented by the author!

          1. 2

            Whereas following the truism would lead you to make changes that would protect against all attackers.

            Or it could mess with your sense of priorities: if all vulnerabilities are equally important, “let’s just go for the easier mitigations” wins out over evaluating based on the cost of the attack itself.

            1. 1

              If you’re thinking about “mitigations” you’re already in the wrong mentality, the one the truism exists to protect you against.

              1. 1

                It’s important to acknowledge that it’s somewhat counterintuitive to think about the actual parties attempting to crack your defenses. It requires more mental work, in a world where people assume they can get all the info they need just by reading their codebase & judging it on its own merits. It requires methodical, needs-based analysis.

                The present mentality is not a pernicious truism; it’s an attractive fallacy.

        2. 2

          IME, all of the examples this page gives are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation and detection are all very costly to implement while representing only modest barriers to skilled attackers.

          How do you figure it’s too costly? If anything, all these things are getting much easier as they become primitives of deployment environments and frameworks. Additionally, there are services out there that will scan your dependencies for vulnerabilities if you give them a Gemfile or access to your repo.

          Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.

          Perfect it all you want. The weakest link is still an employee who is hung over and provides their credentials to a fake GMail login page. (Or the equivalent fuck up.) If anything, what’s costly is staying on your employees to not take shortcuts, to stay alert to missing access cards, rogue network devices in the office, and badge surfing, and to not leave their assets lying around.

          1. 1

            If anything, all these things are getting much easier as they become primitives of deployment environments and frameworks.

            I’d frame that as: deployment environments are increasingly set up so that everyone pays the costs.

            The weakest link is still an employee who is hung over and provides their credentials to a fake GMail login page. (Or the equivalent fuck up)

            So fix that, with a real security measure like hardware tokens. Thinking in terms of costs to attackers doesn’t make that any easier; indeed it would make you more likely to ignore this kind of attack on the grounds that fake domains are costly.

        1. 6

          Work: beginning my last week at $job. Got a new job starting May 1st. Vacation in-between. (ElixirConf Europe early next week!) (Will miss !!con this year though :( )

          Not work: I started a small project using Vue and Phoenix; the goal is to be able to compose simple stream-processing pipelines by drag-and-dropping blocks and drawing arrows between them. Source blocks could be things like a CSV upload, polling a URL, or receiving data over a websocket. Processing blocks would be simple maps, filters, reduces, etc. Not sure how far I’ll take this; having fun so far.
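          As a language-agnostic sketch of the idea (illustrative only, not the project’s actual code, which would live on the Phoenix side): each block can be modeled as a function over a stream, so drawing an arrow between blocks is just composition. In JavaScript generator terms:

```javascript
// Each processing block wraps an iterable and yields transformed
// items, so blocks can be chained in whatever order the user draws.
function* mapBlock(source, fn) {
  for (const item of source) yield fn(item);
}

function* filterBlock(source, predicate) {
  for (const item of source) if (predicate(item)) yield item;
}

function reduceBlock(source, fn, initial) {
  let acc = initial;
  for (const item of source) acc = fn(acc, item);
  return acc;
}

// A CSV upload, polled URL, or websocket would feed items in here;
// a plain array stands in as the source block for this sketch.
const source = [1, 2, 3, 4, 5];
const doubled = mapBlock(source, x => x * 2);        // 2, 4, 6, 8, 10
const big = filterBlock(doubled, x => x > 4);        // 6, 8, 10
const total = reduceBlock(big, (a, b) => a + b, 0);  // 24
```

Because generators are lazy, nothing runs until the terminal reduce block pulls items through the chain.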

          1. 5

            Wouldn’t this be solved by Netflix simply requiring email confirmation on sign-up (and on email changes)?

            1. 1

              Yes, this blog post is stupid.

            1. 3

              I would love to be able to use something similar (org-mode instead of Markdown), but so far I’ve never gotten satisfactory results. The ideal case for me would be being able to easily export the source document into multiple formats, so that I can distribute it as both a PDF and an HTML document.

              Where this fails is most often the interoperability of external tools, and to some extent my own laziness. A framework for citing references exists in org-mode (and I believe in Pandoc through citeproc), but it is cumbersome to set up. Another issue is visualization: since the fonts are most often different, I would need to generate multiple versions of the same plot, which becomes even more work if I want to use TikZ in LaTeX and, for example, matplotlib for web publishing.

              While the Tufte layout is great, what should I do with overlapping margin notes? They need to be manually corrected in LaTeX, at least as far as I know.

              Another thing: how can I generate a glossary? For LaTeX the glossaries package exists, but you cannot use it straightforwardly in Markdown.

              While this probably reads as very negative, I would love it if something like this existed! I just do not see that happening so far. And yet another point would be getting other people on board for collaboration.

              1. 3

                My experience using Pandoc to convert Markdown to LaTeX was not excellent. Pandoc is also a bit complicated to extend.

                What I did was use a Markdown parser(1) producing an AST(2), plug into the parsing to extend the Markdown syntax to support, for instance, the glossary use case(3), then write a LaTeX stringifier(4) for this AST. (In this Markdown AST -> LaTeX stringifier I only support standard Markdown syntax; the abbr plugin I wrote gets stringified through yet another plugins package(5) I wrote. It’s straightforward: 6.) Then I simply put the LaTeX doc into some sort of LaTeX template using a custom class; in the template you’d have your tableofcontents, glossaries, etc.
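                To give a feel for the stringifier step, here is a hand-rolled sketch over a simplified, invented node shape (a real parser’s AST and the actual plugin interfaces will differ):

```javascript
// Walk a tiny Markdown-like AST and emit LaTeX. Each node type maps
// to a LaTeX construct; the custom 'abbr' node shows how an extended
// syntax (e.g. for a glossary) can stringify to \gls{...}.
function toLatex(node) {
  switch (node.type) {
    case 'doc':
      return node.children.map(toLatex).join('\n\n');
    case 'heading':
      return `\\section{${node.children.map(toLatex).join('')}}`;
    case 'paragraph':
      return node.children.map(toLatex).join('');
    case 'emphasis':
      return `\\emph{${node.children.map(toLatex).join('')}}`;
    case 'abbr': // invented custom extension node
      return `\\gls{${node.key}}`;
    case 'text':
      return node.value;
    default:
      return ''; // unknown nodes are dropped in this sketch
  }
}

const ast = {
  type: 'doc',
  children: [
    { type: 'heading', children: [{ type: 'text', value: 'Intro' }] },
    { type: 'paragraph', children: [
      { type: 'text', value: 'Hello ' },
      { type: 'emphasis', children: [{ type: 'text', value: 'world' }] },
    ] },
  ],
};

console.log(toLatex(ast));
// \section{Intro}
//
// Hello \emph{world}
```

The resulting LaTeX fragment would then be dropped into a template that provides the document class, table of contents, glossary setup, and so on.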

                1. 1

                  I had a bunch of lecture slides written up in org that I was dumping to LaTeX, which was always a bit fragile, and then a change in org-mode broke everything. In the end I wasted more time fixing everything than if I’d just written LaTeX to start with.

                  I think the best thing is to leverage smart editing (e.g. AUCTeX) as much as possible, and not deal with trying to convert from something else into LaTeX.

                1. 31

                  The “identify language by location” issue is worse than anything account-related, honestly. I don’t use a Google account, but if someone links me to a Google Doc or Google Groups or something, I get Thai “in-page chrome”: all buttons, navigation, etc. are in a language I can’t read and barely speak.

                  Why? Because google has apparently decided that the Accept-Language header is a filthy communist plot and can’t be trusted.

                  This is more of an issue than the one described in the article, because there is no obvious way on any Google pages to change the language. The author at least understands the language shown to them; it’s just not the one they wanted.

                  1. 10

                    Why? Because google has apparently decided that the Accept-Language header is a filthy communist plot and can’t be trusted.

                    You really couldn’t trust it in the mid-’90s, when the internet started taking off (it was normally set to the language spoken by the people who made the browser).

                    Unlike other things from the ’90s (like web-safe colors), this one seems to stick around, possibly because relatively few people are affected by it and because it doesn’t impact how the site looks on the CEO’s machine (unlike web-safe colors).

                    By now Accept-Language is very reliable and should be treated as a strong signal. Much stronger than geolocation or other pieces of magic (which also regularly fail spectacularly in multilingual countries; damn French YouTube ads here in Switzerland).

                    1. 3

                      I mean, ok, but that’s 25 years ago. You couldn’t stream video in the mid-’90s, you couldn’t browse on a mobile device; shit, you couldn’t do most of what people do now on the web in the ’90s.

                      Even then it sounds like a bad idea: false positives, where the user’s language isn’t the one in Accept-Language, mean that they’re already managing to use a browser in a language other than their own.

                      Ultimately there are numerous better options that allow for a potential mismatch of the browser language, but Google uses none of them. They just base it on the IP country code and fuck anyone who’s disadvantaged by it.

                      1. 0

                        You could definitely stream video for most of the 90s… I certainly did.

                      2. 2

                        Funny that you should mention French ads on YouTube in Switzerland: just yesterday I was complaining on IRC that I only get Swiss German ads on YouTube and Spotify, although I’ve been living in the French-speaking part of Switzerland forever. It’s even more surprising because I don’t speak German or Swiss German, and Google absolutely certainly knows this about me; the locale sent by my browser is fr-CH, etc.

                      3. 10

                        And what a brain-dead decision not to make switching language the easiest possible UI interaction. At least some sites make it easy, e.g. by listing each available language in that language, because guess what: I don’t know what the word “English” is in Arabic.

                        Not to mention the fact that there are MULTIPLE valid languages for many locations.
                        Worse still, so many other sites have blindly copied the idea. God help you if you travel or spend significant time where you don’t understand the local language and don’t want to be forced to log in (or even have an account).

                        I wrote a Greasemonkey script to add an hl=en parameter across all Google domains, and I dread the day they decide not to respect it.
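                        The core of such a script is just forcing the parameter onto the current URL; a minimal sketch of that part (the userscript metadata and the exact set of Google domains to @match are omitted, and would need tuning):

```javascript
// Return the same URL with hl=en set, leaving it unchanged if the
// parameter is already English. Uses the standard WHATWG URL API.
function withEnglish(href) {
  const url = new URL(href);
  if (url.searchParams.get('hl') !== 'en') {
    url.searchParams.set('hl', 'en');
  }
  return url.toString();
}

// In the userscript body, you would redirect only when needed:
// if (location.href !== withEnglish(location.href)) {
//   location.replace(withEnglish(location.href));
// }
```

Guarding the redirect behind the comparison avoids a reload loop on pages that already carry hl=en.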

                      1. 3

                        I don’t know anything about graphics programming and this was an interesting read.

                        The web demo is cool. The mountains visibly “growing” at the horizon made me nostalgic, so many old games behaved that way. Now I know why!

                        1. 1

                          So this was interesting, and I liked the explanation, but the sample code in the readme doesn’t exist in the repo?

                          1. 1

                            Not in Python, but the web demo code is in there, see for instance the render function.

                          1. 2

                            This is very interesting for people like me who don’t want to learn everything about TensorFlow or Caffe but still want to see the current state of affairs and have fun with the tech.

                            For example, I spent my weekend running place/scene-recognition and object-recognition models on my Lightroom picture library, all on my laptop, with a tiny script adding the recognized labels to the pictures as IPTC keywords using exiftool. Quite a bit of manual wiring work, but I can now filter my pictures by content! (Results aren’t very impressive though. I’ll try other models!)

                            1. 4

                              What exactly is the takeaway from this rant? “Read warning messages”? “Keep your software updated”?

                              I get that the author’s reason for writing was to tell off the complainers, essentially saying “it’s your fault for trusting library authors; it’s not the library authors’ fault for breaking your application.” But isn’t that logic twisted? If a library maintainer introduces a bug, obviously it’s the library maintainer’s fault. The question is whether the application developer has a Plan B for this type of situation.

                              If “have a Plan B” is indeed the intended takeaway, I wish the author would say that clearly, rather than the blame-shifting approach of “you are incompetent because you trusted open source.”

                              1. 4

                                The takeaway is: always pin your dependencies (except possibly in your dev env). We used shrinkwrap for this; now npm provides a lockfile, as does Yarn.

                                1. 4

                                  The takeaway I read into it is that if your production build process has the ability to install versions of a million different dependencies other than the ones you previously tested (for instance because it uses npm install and semver ranges and doesn’t lock things down), then you will at some point get screwed by that fact.

                                  You should have the ability to deploy the exact same code (including deps) that you test. You should have the sense to only do this. You should have tests, and they should not suck. If you do this, you won’t be the guy who said “this caused me hours of fumbling while prod was on fire”, you will be one of the people saying “my build broke, I changed the version pin, it’s OK for now”, which is better.

                                  If you want to be even more careful, then you don’t use ranges at all; you choose your dependencies, you maintain your own copies (so that no one else’s outages or pwnage spill onto you), and you don’t upgrade anything without having someone competent review the changes for any possible effect they have on you. Because code is so malleable, we as programmers make it a habit to replace the support columns in the basement with the newest high-tech design every time we repaint the cabinets in the kitchen… and then every now and then the house falls down in the process.

                                  1. 1

                                    Yeah, companies of any meaningful size relying on flaky redistributors like npm has always felt… odd

                                    1. 1

                                      because it uses npm install and semver ranges and doesn’t lock things down

                                      FYI, npm 5 and later generates a lock file automatically, so you’d have to specifically go out of your way to do this now.

                                    2. 2

                                      I don’t think that logic is twisted at all.

                                      There is an implied contract in open source that the maintainer(s) of a project promise constancy of some amount of functionality: that regressions are to be avoided and fixed and even that extensions and intentional changes are broadcasted and controlled. The implied contract holds because there’s an expectation that the maintainer(s) want the project to be successful as measured by the number of people who trust them enough to download and use the project.

                                      But that contract is just implied. For all you know, any one of your dependencies actually has secret goals of being an art project to see how programmers respond during a crisis. Or, more to the point, a con.

                                      The only way a bug or regression or breaking change is the “fault” of the maintainer(s) is if this implied contract holds.

                                      So it’s not so much whether or not you have a Plan B as whether you realize that you don’t have actual counterparties in the maintainer(s) who have given you a large amount of functionality you rely on. Whether you, as the responsible party for a service upon which you probably do have a contract for continued existence, support, and provision of services to your customers, are managing systemic risk appropriately.

                                      The obvious solution is to lock down dependencies and reduce “counterparty” risk on open source projects you use. More systemically, however, the solution is to be cognizant of assumptions like these and realize that they’re squarely upon the shoulders of the application maintainer.

                                      If that’s not good enough, then consider making a contract with the open source maintainers. You’ll probably need to pay them.

                                      1. 2

                                        This “implied contract” thinking is one of the strangest things I’ve seen since joining BigCo. People will ask “is the project well maintained?” and don’t seem to think that the alternative is writing it from scratch; as a tech company, we write a lot from scratch and maintain all of it. I’d much rather be forced to maintain a project for our use if upstream goes away than have to write it from scratch and maintain it!

                                        1. 2

                                          To be fair, there are lots of cases in modern development where the alternative to taking on a dependency isn’t rewriting the whole thing from scratch; it’s being moderately inconvenienced and writing slightly more code. You use a big framework because one small corner of it makes something you’re doing easier. If it didn’t exist, you would reinvent that small corner. The biggest risk is that you reinvent it with more bugs or a worse interface.

                                    1. 5

                                      I’m working on a Markdown to PDF service, via latex.

                                      Still wondering if creating nicely typeset PDFs from Markdown is of interest to anyone, though!

                                      1. [Comment removed by author]

                                        1. 2

                                          Thanks for the feedback, your experience is very interesting!

                                          Is there a better layer of abstraction if I want to avoid generating PDF directly? What’s appealing to me in tex is that it already takes care of most of the typographic choices.

                                        2. 1

                                          Remarq does this. I regularly recommend it to consultants and small businesses for producing nice-looking reports quickly and cheaply.

                                        1. 3

                                          note: I’m using TypeScript to convert the async iterators and generators into something that node.js can run.

                                          Isn’t it a bit of an unfair comparison? The author seems to be benchmarking a JavaScript implementation of async iterators and generators while pretending to benchmark a language feature. What they were actually running on node is this: https://gist.github.com/vhf/2b01fe4f867964a27fe617443ddf786b , presented as this: https://github.com/danvk/async-iteration/blob/master/async-iter.ts (I used this: https://www.typescriptlang.org/play/ )

                                          I’d be more interested in an analysis running the actual language feature, I think it’s available on d8 (v8’s shell) via --harmony-async-iteration.
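                                            For reference, the native feature under discussion looks like this (recent V8/Node run it directly, without the downlevel transform; this is just the language feature, not the author’s benchmark code):

```javascript
// An async generator yields values that consumers await one at a time.
async function* countTo(n) {
  for (let i = 1; i <= n; i++) {
    yield i; // in real code this could be awaiting I/O instead
  }
}

async function sum(n) {
  let total = 0;
  // for-await-of drives the async iterator protocol natively.
  for await (const value of countTo(n)) {
    total += value;
  }
  return total;
}

sum(10).then(total => console.log(total)); // 55
```

A benchmark of the native feature would exercise this machinery directly, rather than the Promise/state-machine emulation TypeScript emits for older targets.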

                                          1. 1

                                            Switched from GMail to protonmail last year, pretty happy so far. Using my own domain so I’m a paying customer.

                                            1. 19

                                              I’ve seen this argument a lot that you CAN do one in the other; however, syntax matters and defaults matter.

                                              1. 3

                                                Case in point:

                                                   (assoc map :key value)
                                                

                                                In JavaScript:

                                                   Object.assign({}, map, {key: value})
                                                

                                                Which is both more complex, less obvious and less efficient.

                                                1. 2

                                                  Redux apps can use ES6 spreads for this: { ...map, key: value } - less efficient, but certainly not too onerous syntax-wise.
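                                                  A quick sketch of the non-mutating update both forms are after:

```javascript
const state = { count: 1, name: 'app' };

// Spread copies the old properties, then overrides/adds keys.
const next = { ...state, count: 2 };

// Object.assign into a fresh target object does the same thing.
const next2 = Object.assign({}, state, { count: 2 });

console.log(state.count); // 1 -- the original is untouched
console.log(next.count);  // 2
console.log(next2.count); // 2
```

Either way the original object is left alone, which is what Redux reducers rely on.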

                                                  1. 1

                                                    This is not ES2015 (ES6) but ES2017. ;)

                                                    1. 1

                                                      Actually, it is not standardized at all. It is still a stage 3 proposal.

                                                      https://github.com/tc39/proposal-object-rest-spread
                                                      https://github.com/tc39/proposals

                                                  2. 2

                                                    Would anyone do that in real JS? (I don’t write JS, but if the answer is “no”, then I don’t see the point)

                                                    As time goes on I have to admit that FP hype gets on my nerves. I enjoy the functional combinators as much as the next person, I love getting work done in Erlang, I am sometimes grouchy when writing Go because I can’t just map over a container, and I love testing pure functions in any language, but maybe it has been Python and Rust that have made me a little happier in general to fall back on the boring procedural control structures (they support both paradigms, and yet people tend to use explicit iteration instead of the combinators pretty frequently). Readability is nice.

                                                    1. 5

                                                      Would anyone do that in real JS? (I don’t write JS, but if the answer is “no”, then I don’t see the point)

                                                      Can’t speak for others, but we do that a lot in our React/Redux app, using spreads to avoid mutating objects.

                                                      I have been using Python in a functional style, with list comprehensions, map, filter, and just about everything from the itertools and operator modules, but it was just not idiomatic. So when I switched to Clojure, where this is the dominant way to do things, it felt like coming home. I think the procedural control structures, especially when used in places where a map or fold would suffice, make it more difficult to understand what is going on, because they could do anything.

                                                      1. 3

                                                        You might be interested in Coconut (http://coconut-lang.org).

                                                        1. 1

                                                          We should use the abstractions that minimize incidental complexity. I like using combinators when they fit on a single clean line. I tend to get lost when reading code that nests them, or that chains 4+ combinators without being documented more thoroughly than average. While I enjoy writing Clojure and Erlang (languages heavily reliant on combinators), they tend to be the languages in which I most despise reading other people’s code. I’m happy when people spend the time to make their stuff clean and fucrs-compliant, but it’s so damn painful for me to follow a nasty chain of nested combinators.

                                                        2. 2

                                                          Oh yeah, I have a React/Redux project that sometimes felt like half my code was calls to Object.assign. It’s the easiest way to do a shallow copy of an object so you’re not inadvertently passing references to the same object around and mutating them.

                                                          Python does cause you to use more of the “boring procedural” control structures, and it does nominally support some functional programming, but some of that is more because of inconvenience than anything else. For example, I write far too many nested for/if loops where filter would be more natural, if filter/map/lambda were less clunky and faster. I don’t think this makes it easier to read. I also think its reliance on list comprehensions is wrongheaded, because even now, after knowing Python for… ugh, nearly twenty years… I can never remember the order for iterating through nested lists in a list comprehension.
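                                                          The “shallow” part is the usual gotcha with that Object.assign pattern; a small sketch:

```javascript
const original = { user: { name: 'ada' }, count: 1 };
const copy = Object.assign({}, original);

copy.count = 2;             // top-level fields are independent copies
console.log(original.count); // 1

copy.user.name = 'grace';   // ...but nested objects are still shared
console.log(original.user.name); // 'grace'
```

Top-level reassignment is safe, but mutating anything nested still leaks back into the original; that is why deeply nested Redux state usually gets spread level by level.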

                                                    1. 1

                                                      This post has plenty of issues IMO.

                                                      For instance:

                                                      const add = (a, b) => a + b
                                                      The first line creates a function add() that takes one parameter and it returns another function that also takes one parameter.

                                                      Or the two code samples shown: https://gist.github.com/idealley/1066aca705b768e5f869674e489347c3/0540e51d02106a6e7904e762cc002ed1c2c4ccba and https://gist.github.com/idealley/3691634227195f6d26027e93b9485849/3471b723bbfd47af561e99cb03d2a66d7a986374 which are pretty bad:

                                                      • no consistency (x ;, x;, if (a), if (a ), if () {}, if ()\n{})
                                                      • loose equalities (!=)
                                                      • use of indexOf instead of includes
                                                      • a || b hack instead of default parameters

                                                      Overall, I think this blogpost gives a pretty bad example to the reader.
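                                                      For the record, the blog line quoted above describes currying but shows an ordinary two-argument function; a sketch of the actual difference:

```javascript
// Plain two-argument function: takes both parameters at once.
const add = (a, b) => a + b;

// Curried version: takes one parameter and returns a function
// that takes the next one.
const addCurried = a => b => a + b;

console.log(add(2, 3));        // 5
console.log(addCurried(2)(3)); // 5
```

Only the second form matches the post’s description of “returns another function that also takes one parameter”.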

                                                      1. 2

                                                        Thank you. I will improve it.

                                                      1. 2

                                                        I am considering using PostgreSQL for a project, and the only thing that concerns me about it is the upgrade story. As someone who comes from distributed DBs where zero-downtime upgrades are the norm, several months of effort to do an upgrade in PostgreSQL seems unacceptable.

                                                        Does anyone know if there are any plans to make this better?

                                                        1. 2

                                                          Random finding in my twitter feed after reading your comment: http://www.slideshare.net/dataloop/zero-downtime-postgres-upgrades

                                                          1. 1

                                                            Interesting. Unfortunately it still seems quite a bit more complicated.

                                                            1. 3

                                                              Author of the talk here, it is. I think Postgres has a long way to go on upgrades and clustering.

                                                              Since it’s not linked from that SlideShare page (and that page is controlled by the meetup hosts), here’s the video.

                                                        1. 5

                                                          I was cutting GitLab some slack when people were complaining about their seat-of-the-pants data center migration a few years ago, but at this point I think we should expect the engineering maturity to verify your backups.

                                                          https://about.gitlab.com/2015/03/09/moving-all-your-data/

                                                          EDIT: But providing us these details is something they deserve credit for. Good point, kb.

                                                          1. 2

                                                            It certainly doesn’t give me any confidence that their plan to switch from the cloud to bare metal is going to work out well.

                                                            1. 1

                                                              They gave up on that plan after the blogpost you link generated many insightful comments advising them not to switch to bare metal.

                                                              1. 4

                                                                Oh, I did not realize that; I thought it was just in progress. They even had a post about the hardware they were planning on buying… I didn’t see a post about abandoning the plan, I just thought it’s one of those things that takes time to play out. Do you have a source to point me to on that decision?

                                                                1. 1

                                                                  I wish I had a source. I might have misunderstood, but I’m pretty sure that’s what one of the GitLab engineers on the live YouTube feed answered during the Q&A, which lasted while they were waiting for the backup rsync to finish.

                                                                  I just went to their team webpage but couldn’t find the face of the engineer who said it. Hope I got it right, apologies otherwise.

                                                                  [EDIT] Oh but the stream is recorded! It might be around this time: https://youtu.be/nc0hPGerSd4?t=3782 Nope that’s not it. Not sure when it was.

                                                                  1. 4

                                                                    They are preparing a blog post explaining their decision to stay in the cloud. The original issue has some more details.

                                                                2. 1

                                                                  Wow, I missed that. I’ll have to go back and re-read the comments (I read some of them at the time, but obviously not all of them!).

                                                              2. 1

                                                                I always figured, and still do, that GitLab users are happiest when hosting their own setup. The CE feels like a very good solution for that, and the paid editions as well.

                                                              1. 9

                                                                Relevant to my current work, thanks!

                                                                I’d love to see a deeper cut: Is there type inference? Destructuring? Product and sum types? Can we see examples of the same thing using both tools? How about examples of things one can do that the other can’t?

                                                                I’d also love to see stronger opinions: Should I decide just on the basis of existing Angular/React? If I don’t use either, which tool should I pick? Does one have a noticeably larger, friendlier community?

                                                                1. 7

                                                                  +1, this article left me eager for more.

                                                                  1. 5

                                                                    I’ve been working a lot with TypeScript for the past year, so I can answer some of those.

                                                                    Is there type inference?

                                                                    Yes. Both Flow and TypeScript do type inference.
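                                                                    A minimal sketch of what that inference looks like in TypeScript (names are my own, purely illustrative):

                                                                    ```typescript
                                                                    // No annotations needed: the compiler infers types from the values.
                                                                    const count = 41;       // inferred as number
                                                                    const label = "answer"; // inferred as string

                                                                    // The return type is inferred as number from the body.
                                                                    function increment(n: number) {
                                                                      return n + 1;
                                                                    }

                                                                    const result = increment(count); // inferred as number
                                                                    // increment(label);             // compile-time error: string is not a number
                                                                    ```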

                                                                    Destructuring?

                                                                    TypeScript compiles destructuring. Flow doesn’t, since it’s just a type checker, not a compiler, but you’d usually pair Flow with Babel, which handles it.
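                                                                    A quick sketch of typed destructuring in TypeScript (types and names are illustrative):

                                                                    ```typescript
                                                                    interface Point { x: number; y: number; }

                                                                    // Object destructuring: the compiler checks the property names
                                                                    // and infers x and y as number.
                                                                    const p: Point = { x: 1, y: 2 };
                                                                    const { x, y } = p;

                                                                    // Array destructuring with default values.
                                                                    const [first = 0, second = 0] = [10];
                                                                    ```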

                                                                    Product and sum types?

                                                                    TypeScript has sum types. I’m not familiar with product types.

                                                                    Should I decide just on the basis of existing Angular/React?

                                                                    Maybe in the case of Angular, since it’s really a TypeScript-first framework. React is less opinionated, and TypeScript has really good support for React, including compiling JSX if desired.

                                                                    If I don’t use either, which tool should I pick?

                                                                    TypeScript is a fair bit older and more mature as far as I can tell.

                                                                    Does one have a noticeably larger, friendlier community?

                                                                    TypeScript is much larger.

                                                                    The really big advantage that TypeScript has is the existing ecosystem of third party type definitions which you can install via npm.

                                                                    Say you want to use LoDash, which isn’t written in TypeScript, but you want to have your usage type checked. Just npm install @types/lodash and you’ll get a community-maintained type definition which the TypeScript compiler will recognize, and automatically match up with the existing lodash JS package in your node_modules.
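                                                                    The declaration files behind those @types packages look roughly like this; a hypothetical untyped package, not lodash’s actual (much larger) definitions:

                                                                    ```typescript
                                                                    // Hypothetical contents of a .d.ts file for an untyped npm package.
                                                                    // It describes types only; no implementation, nothing runs.
                                                                    declare module "some-untyped-lib" {
                                                                      export function chunk<T>(array: T[], size: number): T[][];
                                                                      export function uniq<T>(array: T[]): T[];
                                                                    }
                                                                    ```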

                                                                    1. 4

                                                                      If I understand correctly, what TypeScript has is union types (as in set-theoretic unions), not sum types. The difference shows up when you try to union/sum a type with itself:

                                                                      -- Haskell
                                                                      data Sum = A Foo | B Foo
                                                                      

                                                                      Ignoring bottom, Sum is the disjoint union of two copies of Foo.

                                                                      // TypeScript
                                                                      type Union = Foo | Foo
                                                                      

                                                                      The union of Foo with itself is just Foo again.
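                                                                      For what it’s worth, the usual TypeScript workaround is a discriminated union: attaching a literal tag keeps the two copies of Foo disjoint, recovering the behaviour of the Haskell Sum above (names here are illustrative):

                                                                      ```typescript
                                                                      interface Foo { value: number; }

                                                                      // Tagging each side makes the union disjoint.
                                                                      type Sum =
                                                                        | { tag: "A"; foo: Foo }
                                                                        | { tag: "B"; foo: Foo };

                                                                      function describe(s: Sum): string {
                                                                        // The tag lets the compiler narrow which side we are on.
                                                                        switch (s.tag) {
                                                                          case "A": return "A holding " + s.foo.value;
                                                                          case "B": return "B holding " + s.foo.value;
                                                                        }
                                                                      }
                                                                      ```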

                                                                      1. 1

                                                                        Product types as covered by Wikipedia. tl;dr: Haskell/ML/Rust shit.

                                                                        I suspect we probably recognize them by another name and concrete example, and I also suspect somebody here might be able to bridge the theory with the practice. (nudge nudge @pushcx)

                                                                        1. 5

                                                                          To give a simple version, sum types are OR while product types are AND. Tuples count as product types, and TypeScript and Flow support them.
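                                                                          A minimal TypeScript sketch of that AND/OR distinction (names are illustrative):

                                                                          ```typescript
                                                                          // Product type: a tuple holds a string AND a number, in that order.
                                                                          type Pair = [string, number];
                                                                          const pair: Pair = ["answer", 42];

                                                                          // Sum-ish type: a union holds a string OR a number.
                                                                          type Either = string | number;
                                                                          const either: Either = 42;

                                                                          // A product of a type with itself still has two distinct slots.
                                                                          type Twice = [number, number];
                                                                          const twice: Twice = [1, 2];
                                                                          ```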

                                                                          1. 1

                                                                            So are product types the same as intersection types? They’re not like tuples, but do express an “and” relationship. https://www.typescriptlang.org/docs/handbook/advanced-types.html

                                                                            1. 3

                                                                              As I understand it, product types are different from intersection types in that product types vary depending on the order of the operands of the product, i.e. “A + B” (where + is the ‘product type operator’) differs from “B + A”.

                                                                              A struct in C is a classic example of a product type; if you include two sub-structs of the same type (A + A), the elements of each differ, whereas in intersection types A & A would be the same as just A itself, and A & B = B & A.

                                                                              (I think. Correct me if I’m wrong, someone more knowledgeable; this is based on my interactions with product/sum types in ML-y things, and only a little bit of interaction with intersection types while helping a friend debug some TypeScript, but I have no real-world experience with the latter.)

                                                                              1. 4

                                                                                There is so much confusion in this subthread that I feel compelled to correct some of it:

                                                                                • Sums (⊕), tensor products (⊗), direct products (×), unions (|) and intersections (&) are all different from each other.
                                                                                • Sums, tensor products and direct products are associative and commutative up to natural isomorphism. That is, (A ⊕ B) ⊕ C and A ⊕ (B ⊕ C) are naturally isomorphic, A ⊕ B and B ⊕ A are naturally isomorphic, etc. None of them is idempotent.
                                                                                • Unions and intersections are associative, commutative and idempotent up to strict equality. That is, (A | B) | C and A | (B | C) are the same type, A | B and B | A are the same type, A | A and A are the same type, etc.
                                                                                • Tensor products distribute over sums, up to natural isomorphism. That is A ⊗ (B ⊕ C) and (A ⊗ B) ⊕ (A ⊗ C) are naturally isomorphic.
                                                                                • Unions and intersections distribute over each other, strictly. That is, A & (B | C) and (A | B) & (A | C) are the same type, etc.
                                                                                • Sums, tensor products and direct products play nicely with data abstraction, because they respect isomorphisms of types. That is, if A and B are isomorphic, and so are C and D, then so are A ⊕ C and B ⊕ D, etc. Category theorists call this “not being evil”.
                                                                                • Unions and intersections are “evil” in the above sense, and thus don’t play nicely with data abstraction. Conceptually, unions and intersections require the existence of a single universe of all values, of which every type is a subset. Sounds familiar? Yes, this makes unions and intersections natural candidates for bolting on top of dynamically typed object systems. And, voilà, Typed Racket, Ceylon (yes, the JVM’s object system is dynamic), TypeScript and Flow all have union and intersection types.
                                                                                1. 1

                                                                                  Crap. I really meant A & (B | C) and (A & B) | (A & C) are isomorphic.

                                                                                  1. 1

                                                                                    Thank you!

                                                                            2. 2

                                                                              I think a plain old C struct is a product type, and tagged unions are sum types.

                                                                              Edit: nevermind, I’m very late to the party. :)

                                                                          2. 1

                                                                            I suggest this link for a deeper and more exemplified comparison: https://djcordhose.github.io/flow-vs-typescript/flow-typescript-2.html#/

                                                                          1. 3

                                                                            I really like the idea!

                                                            I’m curious to know how you would implement it, as I have the feeling that lobsters comments wouldn’t be a very good medium for commenting on code.

                                                                            1. 4

                                                              Some kind of gist-like system? With the links being in the comments here.

                                                                              1. 5

                                                                                Yeah, I was envisioning something like this. Anyone who had a piece of code they’d like reviewed on would make a comment with a link to the code (on github, bitbucket, wherever) and some context (why they’re working on it, specifically what aspects of the code they’d like feedback on), and then others could reply indicating that they’d like to review.

                                                                            1. 7

                                                                              This seems to be a variant of an IDN homograph attack.

                                                                              1. 8

                                                                                It is indeed.

                                                                It’s becoming a common “attack” on many websites and forums, particularly in usernames to impersonate users, or really in any user input (like “your website”, …). There’s this small python lib I made to validate user inputs against homograph attacks: https://pypi.python.org/pypi/confusable_homoglyphs/

                                                                (edit: How fun. I wanted to help prevent these issues, but after posting this I checked whether ɢoogle was properly detected as dangerous by my lib and… it isn’t.

                                                                The lib builds a small data file containing all characters advertised as “confusable” by the Unicode Consortium. This weird small ‘G’ is ɢ, which according to the Unicode Consortium is only confusable with ԍ, not with G. Too bad.)
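                                                                You can see the distinction at the codepoint level; a small sketch (not using the library above):

                                                                ```typescript
                                                                // "ɢ" is U+0262 (LATIN LETTER SMALL CAPITAL G), not ASCII "g" (U+0067),
                                                                // so the two strings compare unequal despite looking nearly identical.
                                                                const spoofed = "\u0262oogle"; // renders as "ɢoogle"
                                                                const real = "google";

                                                                console.log(spoofed === real);                     // false
                                                                console.log(spoofed.codePointAt(0)!.toString(16)); // "262"
                                                                console.log(real.codePointAt(0)!.toString(16));    // "67"
                                                                ```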

                                                                              1. 2

                                                                                I finally released monomorphist, a kind of hosted devtool abstracting some of the work required to trace JavaScript code on node/V8.

                                                                This week will be my last week before starting a new full-time job, so I’ll spend most of it putting the most time-consuming open source projects I maintain into a state of low-maintenance hibernation.