1. 3

    Wow, this article takes sexist/racist technology blog posts to a new level. (I guess this comment will get downvoted here, but I’m really offended with this post.)

    1. 6

      In addition to talking about Haskell types and kinds, this article is an elaborate reference to the 2004 movie Mean Girls, which is about a clique of teenage girls at an American high school. There are several political perspectives from which you could critique the movie as racist or sexist against several genders and/or ethnic groups, but I don’t personally adhere to any of those perspectives.

      1. 4

        The superfluous content is meant in fun, not meant to be taken seriously. Try to be less offended.

      1. 0

        I hate this use of ids & fragments for magical behavior - it messes up my browsing history and it’s annoying. I would expect a JS solution where JS is available, with an optional fallback to plain ids when no JS is executed.

        1. 22

          This is literally plain HTML. If something is magical here, it is using JavaScript to emulate a behavior that has been standard on the web since the nineties.

          1. 5

            I gave up on the back button roughly a decade ago.

            1. 3

              I wanted to ask you what kind of browser would do such a silly thing, but apparently that’s (also?) what Firefox does: fragments do get added to history, and all the “back” button does is drop the fragment.

              I still find it peculiar that there’s even a need for such a button (on PC I have a Home key, and on mobile providing one should be the browser’s job imo), but it seems there is a good reason why people use JS for this after all.

              1. 24

                I like that it gets added to the history. You can press a link to a reference or footnote, and then press back to go to where you were.

                1. 4

                  There has been a craze for “hash-bang URLs” that abused URL fragments for keeping state and performing navigation. This let JS take over the whole site and make it painfully slow, all in the name of being a RESTful web application.

                  That was when HTML5 pushState was still too poorly supported and too buggy to be usable. But we’re now stuck with some sites still relying on hash-bang URLs, so removing them from history would be a breaking change.
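
                  The hash-bang trick stored the whole client-side “route” in the URL fragment, which the browser never sends to the server. A minimal Python sketch (the URL is made up) of how such a fragment carries state:

```python
from urllib.parse import urlsplit

# Hypothetical hash-bang URL: everything after "#" is routing state.
url = "https://example.com/#!/photos/42?sort=newest"
parts = urlsplit(url)

# The server only ever sees the path "/"; the route lives in the fragment.
print(parts.path)      # "/"
print(parts.fragment)  # "!/photos/42?sort=newest"

# Client-side code parses the fragment itself to decide what to render.
route = parts.fragment.lstrip("!").split("?")[0]
print(route)           # "/photos/42"
```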

                  1. 2

                    It’s always crazy to see how people abuse the anchor tag. My favourite personal abuse: I kept finding that sysadmins and IT staff were just emailing cleartext credentials for password resets, and during pentests I’d often use this to my advantage (mass-watching for password-reset emails, for example). So I jokingly ended up writing a stupid “password” share system that embeds the crypto key in the URL fragment and deletes the note on the backend after it has been viewed once: https://blacknote.aerstone.com/

                    Again, this is stupid for so many reasons, but I did enjoy abusing it for the “doesn’t send server side” use case. EDIT: I originally had much more aggressive “don’t use this” messages, but the corporate gods don’t like that.
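
                    As an illustration of that “doesn’t send server side” property, here is a toy burn-after-reading sketch, not the linked service’s actual implementation: the server stores only ciphertext, deletes it on first read, and the key travels in the URL fragment, which browsers never transmit. The XOR keystream here is a stand-in for real authenticated encryption (use AES-GCM or similar in practice).

```python
import hashlib
import secrets

NOTES = {}  # toy in-memory store: note_id -> ciphertext

def _keystream(key: bytes, n: int) -> bytes:
    # Illustrative keystream only -- not real cryptography.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

def create_note(plaintext: bytes) -> str:
    key = secrets.token_bytes(32)
    note_id = secrets.token_urlsafe(8)
    NOTES[note_id] = encrypt(key, plaintext)  # server never sees the key
    # The key rides in the fragment, so it never appears in server logs.
    return f"https://example.com/note/{note_id}#{key.hex()}"

def read_note(url: str) -> bytes:
    note_id = url.rsplit("/", 1)[1].split("#")[0]
    key = bytes.fromhex(url.split("#")[1])
    ciphertext = NOTES.pop(note_id)  # burn after reading: first view deletes it
    return decrypt(key, ciphertext)

url = create_note(b"temporary password: hunter2")
print(read_note(url))  # recovers the plaintext once; a second read raises KeyError
```

                    The hostname and key-sharing scheme are made up for the sketch; the point is only that the secret after “#” stays client-side while the server holds ciphertext it cannot read.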

                    1. 1

                      One useful trait of hash-bang URLs is that the fragment is not sent to the server. This is useful for things like encryption keys. MEGA and others definitely use this as legal deniability that they cannot reveal the contents of previously requested content. Though, if given a court order, I suppose they could be forced to reveal future requests by placing a backdoor in the decryption JS.
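
                      Concretely, the HTTP request line a browser sends contains only the path and query; the fragment stays on the client, visible only to local JS. A small sketch with a made-up URL:

```python
from urllib.parse import urlsplit

# Hypothetical share URL: the part after "#" is key material.
url = "https://mega.example/file/abc#decryption-key-material"
parts = urlsplit(url)

# What actually goes in the HTTP request line: path + query only.
request_target = parts.path + (f"?{parts.query}" if parts.query else "")
print(request_target)  # "/file/abc" -- the fragment never leaves the client
print(parts.fragment)  # "decryption-key-material", readable only by local JS
```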

                  2. 2

                    Hmmm that’s a good point, and not something I had considered. Thanks for the feedback.

                  1. 6

                    This post… probably doesn’t really mean anything. Firstly, judging a web browser’s complexity by the size of the spec catalogue is already unfair. The catalogue is basically a dump of everything related to the web, for example the JSON-LD specs (which probably have almost no relationship to implementing web browsers, since JSON-LD is just a data format built on JSON).

                    Also, word count doesn’t correlate with complexity. That’s like saying that one movie will be more entertaining than another because its runtime is longer. Web-related specs are much, much more detailed than the POSIX specs because of their cross-platform nature: we’ve already seen what happens when web specs are as loose as POSIX. Does anybody remember trying to make web pages that worked in IE, Safari, and Firefox in the early 2000s? (Or, try to make a shell script that works on macOS, FreeBSD, Ubuntu, and Fedora without testing it on all four OSes. Can you do it with confidence?)

                    Really, it’s just tiring to hear the complaints about web browsers, especially ones like ‘It wasn’t like this in the 90s; why is every site bloated and complex? Do we really need SPAs?’ Things are there for a reason, and while I agree that not every web API is useful (and some are harmful), one should not dismiss everything as ‘bloat’ or ‘useless complexity’.

                    1. 8

                      Man, there’s a lot of copying and pasting comments from HN going on in this thread. Just to save the effort of copying and pasting the rest of this thread…

                      https://news.ycombinator.com/item?id=22617536

                      1. 4

                        Please do not copy and paste your comments from HN to here (or vice versa). HN and Lobsters have different cultures and different purposes, but there is still significant overlap in users. It annoys everyone, and I imagine especially the author, to have to read the same comment twice.

                        > Also, word count doesn’t correlate with complexity. That’s like saying that one movie will be more entertaining than another because its runtime is longer.

                        No, it really isn’t. It’s like saying that a movie will be longer than another one because its script is longer. Maybe some movies have such detailed screen directions that their scripts are longer than that of a longer, less-detailed movie, but it’s still a really strong correlation.

                        I think the same is true here. Word count in a specification really is highly correlated with complexity. You need more words to describe more complex behaviour.

                        > Web-related specs are much, much more detailed than the POSIX specs because of their cross-platform nature: we’ve already seen what happens when web specs are as loose as POSIX. Does anybody remember trying to make web pages that worked in IE, Safari, and Firefox in the early 2000s?

                        The problem was not the specifications but the implementations of those specifications. Today, people aim to implement things in a compatible way; back then, they aimed to implement them in an incompatible way.

                        > Really, it’s just tiring to hear the complaints about web browsers, especially ones like ‘It wasn’t like this in the 90s; why is every site bloated and complex? Do we really need SPAs?’

                        It’s way more tiring to have to use bloated, overly complex websites that waste my mobile data limit.

                        > Things are there for a reason, and while I agree that not every web API is useful (and some are harmful), one should not dismiss everything as ‘bloat’ or ‘useless complexity’.

                        “Things are there for a reason” is the most non-answer answer I’ve ever seen, holy crap. Yeah, they’re there for a reason: a fucking bad one.

                      1. 1

                        These overloaded terms are harder to explain, because the person I’m explaining them to has misconceptions about what the term means. Like, when I explain what a ‘server’ is, I always have to explain that the physical computer called a server is different from what I’m describing. :-( When I forget and don’t, everything gets mixed up and both of us lose track of the conversation…

                        1. 8

                          Ah. This is the article that argues there are no low-level languages you can write code in.

                          1. 5

                            I read this article as saying that computers aren’t improving as much as they could, because the hardware & software people think in the “C” way. People optimize CPUs for the C model, optimize compilers for the CPU, optimize software for the compilers… and as a result the entire computing model, despite having multiple processors, fast caches, and accurate branch prediction, still revolves around C. There are some computing models for concurrency (thanks to some geniuses like Alan Kay and great timing). Maybe there could be models for caches or branch prediction too, but nobody really thinks about these models…

                            1. 4

                              And this is true. CPUs are high-level machines now. On x86, even the assembly/machine code is an emulated language that is dynamically translated to μops and executed out of order on multiple complex execution units over which you have no direct control.

                              1. 1

                                Totally agree, but if you put aside that one rather questionable assertion, there’s a whole lot of interesting material here :)