1. 21

    The Amiga languished because Commodore was spectacularly bad at management and made some really stupid engineering decisions.

    The Amiga (later retroactively named the Amiga 1000) was a desktop-style case with an expansion slot on the side (the “sidecar” slot). A bunch of peripherals were made for this slot that sat on the desk next to the Amiga.

    Then the Amiga 500 came out. It had the same slot – great! – but, inexplicably, on the other side and upside down. So all of the existing peripherals that worked on the Amiga still worked…if you flipped them upside down. Given how they had to be designed to reach the slot, and the fact that the keyboard of the 500 was integrated into the main housing, none of them would really work.

    The Amiga 600 (which was originally the 300 and supposed to be cheaper than the 500 but somehow came out costing more) had a PCMCIA slot…except Commodore refused to wait for the final PCMCIA spec and produced the 600 with a PCMCIA slot that wasn’t fully compliant with the specification. This meant that a lot of PCMCIA cards wouldn’t work.

    The Video Toaster came out for the Amiga 2000 and was the killer app for the Amiga. It was, without question, the defining peripheral for the Amiga. Then Commodore made the Amiga 3000, which was compatible with the Video Toaster…except that the Amiga 3000 case was a half-inch too short for the Toaster card, so it wouldn’t fit.

    The Amiga had what was often considered the best of the SVR4 Unix ports (Amiga UNIX or Amix). Sun came and offered to produce the Amiga 3000 as a Unix workstation that could also run Amiga software. Commodore, because they sucked at management, declined.

    When the CD-ROM revolution hit, Commodore designed the A570 CD-ROM drive for the Amiga 500, then immediately discontinued the Amiga 500 in favor of the Amiga 600…which couldn’t use the A570 drive.

    The Amiga 1200 had a unique-to-the-model expansion port. Commodore released the specs of that port to various peripheral manufacturers and then proceeded to change the specs on the port for no good reason, meaning that a bunch of peripherals already produced would work, except that they wouldn’t physically fit. (I remember having a 68030 expansion card with a whopping 8MB of RAM on it; I couldn’t actually close the case once it was installed…)

    Then there were just…stupid boondoggles like the Commodore 64 Games System, released in 1990 in Europe. This was a Commodore 64 without a keyboard. It could play Commodore 64 games. This was five years after the NES had come out and right around when the SNES was being introduced. Here was a 1982 computer that could only play games designed for a 1982 computer, and then only if those games came on a cartridge and didn’t need the keyboard. A few games were even released for it where you couldn’t get past the title screen, because the game asked you to press any key to continue.

    The Amiga (ahem Commodore) CDTV was launched because…nobody knows why. It was a “multimedia appliance” that could play CDs and videos on CD-ROM in a proprietary video format that nobody used. You could theoretically play Amiga 500 games, but very few were ever released in CD-ROM format, so there wasn’t really any reason to ever buy a CDTV, and no one did.

    And that’s ignoring the simply egregious management blunders. Alienating the core Amiga team so badly that they all quit. Treating dealers like trash. Paying managers and corporate officers huge bonuses while the company was deep in the red and not selling much of anything.

    …can you tell I’m a bitter Amiga guy? I fervently loved the Amiga (to the point that it was probably really annoying to be around me) and Commodore did their best to destroy it.

    1.  

      Another bitter Amiga guy here. Wish I could give your comment more than one upvote.

      I’ve been using computers since ’83 and have felt (computer-related) heartbreak twice: when I had to switch from my fried Amiga to a PC, and when Google cancelled Reader.

      1.  

        The developers tried to tell us. Remember the key sequence easter egg that told us what they really thought of Commodore?

        1.  

          Was the CDTV the https://en.wikipedia.org/wiki/Amiga_CD32 ?

          From what I’ve read (wiki) that was literally the death of the company (thanks to a stupid software patent).

          1.  

            No, it was an earlier product: https://en.m.wikipedia.org/wiki/Commodore_CDTV

            The CD32 was clearly positioned: it was a games machine and marketed as such. The CDTV was…something else. It could play CDs, and with the addition of a keyboard and mouse could run a good amount of Amiga software at the time (though it shipped with the four-year-old AmigaOS 1.3 and not the then-current 2.0, so not everything worked).

            (It should be noted that AmigaOS 2 was already a year old when the CDTV shipped…)

            Without a keyboard and mouse, you could run CDTV software. At the end of the CDTV’s life, there were around 100 titles available for it, but the vast majority were simply the normal Amiga version of the software burned to CD-ROM with perhaps minimal changes to work with only the controls available on the CDTV; very little software took advantage of the CD format.

        1. 19

          The best thing about Electron is that Linux is finally becoming a first class platform for desktop apps. Slack, Git Kraken, Atom, Mailspring and so on likely would’ve never seen the light of day on Linux if not for Electron. Electron drastically lowers the barrier for writing and maintaining cross-platform applications, and I think that far outweighs its disadvantages. I don’t really see any insurmountable problems with Electron that can’t be addressed in the long run as the adoption grows.

          The reality is that maintaining multiple UIs for different platforms is incredibly expensive, and only a few companies have the resources to dedicate separate development teams for that. The value of having a common runtime that works reasonably well on all platforms can’t be overstated in my opinion. This is especially important for niche platforms like Linux that were traditionally overlooked by many companies.

          1. 24

            I think a more accurate description would be that Electron makes every platform second class.

            It is certainly more egalitarian and even an improvement for platforms previously overlooked, but better than before is not necessarily good.

            1. 2

              On the other hand, if the web stack becomes the standard then all the platforms improve together in the long run.

            2. 15

              In a better universe there would be no reason to maintain cross-platform apps. Ideally we would use independently maintained platform tailored apps talking to common protocols. Like, a hypothetical Ubuntu-native VoIP app that could talk to Skype on Windows. Protocols as the point of commonality is far more desirable than a UI toolkit, because a common UI toolkit means that every app works in its own peculiar way on every platform, which sucks.

              Unfortunately, we’re living in this universe…

              1. 4

                Most people prefer applications that are unconstrained by stagnant standards. For example, consider Slack or Discord versus IRC, or web forums versus newsgroups or mailing lists.

                At least when the applications are open-source and API-driven, there’s hope for alternative clients for those who want or need them.

                1. 2

                  Most people prefer applications that are unconstrained by stagnant standards.

                  That’s an interesting thought, thanks.

                  Although I still think that a single entity evolving a standard is better than every app inventing their own UI conventions.

            1. 5

              I once liked python’s style, but then I met code formatters. Save your file, and your code gets formatted to the only correct form. This saves so much time and energy. Not only do you not have to think about code formatting, but nobody else has to either – and suddenly we don’t have to argue about standards or waste time in code review complaining about indentation.

              This saving is impossible on python and other languages with whitespace significant blocks.

              1. 3

                I came from Python to Go (and C, C++, Java, Pascal, etc. before), and my conclusion is the same.

                1. 2

                  Why is it impossible?

                  1. 1

                    This is impossible to indent automatically:

                    if True:
                        foobar
                    barfoo
                    

                    because the formatter cannot know if barfoo should be indented or not.

                    1. 1

                      Sorry, still lost. Why would it be impossible to know how to indent barfoo? It is at the same level as the if, so it should be indented like the if statement.

                      1. 1

                        Ok. What about this?

                        if True:
                            foobar
                          barfoo
                        
                        1. 2

                          This is not valid Python code, so I’d say do nothing, or indent it as it is (second line two levels, third line one level).

                          I have difficulty coming up with Python code that would be valid and also difficult to figure out how to indent when copied. The only exception I can think of is to mix tabs and spaces in the same file which is bat shit crazy and you’ll find nobody reasonable defending this (you already have an error even if you don’t see it).

                  2. 1

                    impossible on python

                    https://github.com/google/yapf

                    1. 1

                      Also, https://github.com/ambv/black, which has a philosophy similar to the go auto formatting tool

                      1. 1

                        Well, this looks like I’m moving the goalposts, but really I just forgot to specify well enough before.

                        I’m talking about a formatter that can format the whole program code to a single correct form without changing the meaning. In whitespace-significant languages this is obviously impossible, since formatting and semantics are intertwined.
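
                         To make this concrete, here is a minimal sketch (the function names are just illustrative): the two versions below differ only in how one line is indented, yet they are different programs, so no formatter can pick a single canonical layout for them without risking a change in behavior.

                         def running_total(items):
                             total = 0
                             for x in items:
                                 total += x
                                 print(total)  # inside the loop: prints after every item
                             return total

                         def final_total(items):
                             total = 0
                             for x in items:
                                 total += x
                             print(total)      # outside the loop: prints once at the end
                             return total

                         In a braces-based language like Go or C the block structure lives in the braces, so a tool like gofmt can re-indent everything freely without changing what the code means.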

                    1. 9

                      Contrary to the comments at Reddit, I’m pretty sure Apple cannot do this unless you have installed a MDM profile…

                      Locking, remote wipe, etc are limited to your iCloud account. There is no equivalent to “Google Play Services”. APNS has no control; it only handles push notifications.

                      1. 15

                        Contrary to the comments at Reddit, I’m pretty sure Apple cannot do this unless you have installed a MDM profile…

                        When the OS is closed source how would you know?

                        1. 12

                          If you think Apple has a gaping backdoor in all of their phones which violates the mission of their product line, then please prove me wrong. In fact, take this opportunity to short their stock and prove it to the world. You could make yourself really rich really fast.

                          Nobody else has done it, and everything Apple has done with their product line has been to constantly increase user security, not install backdoors for remote control and spying.

                          I do not think they are perfect, but this would be a huge blow to their public perception and would certainly tarnish their brand for years to come.

                          1. 7

                            Objectively, I think that u/user545 has a valid point. When proprietary software is in place there is no way to verify that such software does what the user expects it to do, and nothing more. Just because Apple says it doesn’t spy on its users doesn’t mean such a statement is true; and we cannot trust them, because we don’t know what the program does on the inside.

                            1. 9

                              Perhaps it’s not as severe as user545 says.

                              I think the argument can be transposed to anything done by anyone else:

                              • I didn’t see how cars were built. So I have to assume the worst.
                              • I didn’t see how roads were built. So I have to assume the worst.
                              • I didn’t audit this open source project’s source code myself. So I have to assume the worst.
                                • Or I only heard from someone that this source code checks out. But I don’t know that person, so I have to assume the worst (that they’re lying to me).
                              • I didn’t audit the crypto algorithms. So I have to assume the worst.
                              • I didn’t compile it myself. So I have to assume the worst.
                              • I didn’t compile my compiler myself. So I have to assume the worst.
                              • I didn’t compile my operating system myself with my own compiler. So I have to assume the worst.
                              • I didn’t mine and process the raw resources to create my computer. So I have to assume the worst.

                              Sure I can assume the worst, but then I probably wouldn’t live in a society.

                              “Assume the worst” feels like an impractical rule to follow. Instead, it’s a practical tradeoff between the efficiency (of my time) and the likelihood that I actually need to “assume the worst”. I’m not discounting the valuable effort that security researchers put in to audit and break into these systems; if they take this approach, that’s great. But they’re way more qualified and have more resources (e.g. time, money) than me to do it. I’m not going to blindly assume the worst, i.e. that these security researchers are out to trick me.

                              I agree with feld. Apple isn’t perfect. They may change in the future. But Apple seem less likely than Google to implement a backdoor like this based on the way they position themselves in the market right now.

                              1. 5

                                You’re missing two things:

                                1. “They’re usually defective since suppliers don’t care or have liability.”

                                2. “Intelligence agencies and law enforcement are threatening fines or jail for not putting secret backdoors in. The coercive groups also have legal immunity. Their targets can do 15 years if they talk.”

                                No. 1 also applies to FOSS. With those premises, I definitely can’t trust closed-source software to not have incidental or intentional vulnerabilities. Now we’re back to thorough design and review by parties we trust: multiple, skilled, mutually suspicious groups.

                                1. 2

                                  Thanks,

                                  I agree with you on #1, including that it applies to FOSS. I may argue that a supplier has more incentive to fix an issue if you’re a potentially influential customer than a FOSS project with a disinterested maintainer does (leaving you to fall back on building it yourself or auditing it yourself; and to be clear, FOSS is definitely a better option than a non-cooperative supplier that is a monopoly). But I’d admit I’d only be able to back that up anecdotally, which isn’t a strong case.

                                  For #2, couldn’t that also apply to key maintainers in FOSS if they are contributing to the same project? I’d take a random guess that governments may not find it impossible to coerce a small set of individuals. 15 years would equally scare FOSS maintainers as well. Sure, a geographical barrier may make that more difficult, but I’d guess that human-based intelligence agencies like the CIA probably have some related experience in this. I agree that FOSS makes it harder to sneak one by reviewers, but maybe there aren’t many people you’d need to coerce to get a backdoor into a release.

                                  I only tangentially review security topics, so I’m not sure if that’s a realistic threat or just a tinfoil-hat thought <:-).

                                  I guess I’m putting more emphasis on getting the typical (non-technical) user of software to:

                                  1. care more about security / privacy
                                  2. pressure companies they support to have better security/privacy practices

                                  Over distrusting all companies and having a significantly worse experience of using software in general. Non-technical users generally like the fallback of technical support over just “figure it out yourself” or “you lost all your data because you couldn’t manage your secrets”.

                                  I’m curious, if a company allowed you to audit their source code before you approved/used it, would that significantly minimize the advantages FOSS software have over proprietary software for you?

                                  1. 2

                                    I may argue that a supplier has more incentive to fix an issue if you’re a potentially influential customer than a FOSS project with a disinterested maintainer does

                                    This hasn’t been the case at all in the mobile space. The supplier has an incentive to not fix things so you buy a new device, whereas FOSS maintainers want your device to last as long as possible.

                                    1. 2

                                      I’d agree that some suppliers are motivated to upsell newer devices, although I don’t really understand the motivation for FOSS maintainers to want you to use your device as long as possible. As one who has maintained iOS libraries, there’s strong motivation to deprecate older devices/platforms, since they’re a maintenance burden that sometimes hinders new feature work (and typically the most active contributors use the latest stuff). And when pitted against supporting the latest devices vs the older devices, chances are the newer stuff will win those debates.

                                      Thinking through the supplier stuff a bit more, it doesn’t make that much difference though. Sure, it doesn’t feel like a great business practice for a company to upsell, but it’s also how those companies stay in business. It could be viewed similarly to a maintenance support fee for existing devices. If suppliers offered a retainer fee, it would effectively be the same thing, then?

                                      1. 2

                                        The LineageOS team does amazing work keeping old Android devices on the latest release. It also means app devs don’t have to worry, because these old devices support all the new APIs and features.

                                    2. 2

                                      “For #2, couldn’t that also apply to key maintainers in FOSS if they are contributing to the same project?”

                                      That’s a great observation. I held off mentioning it since people often say, “That’s speculation or conspiracy. Prove it with examples.” And the examples would have secrecy orders, so… I just dropped the examples where they can find proof it happened. There very well could be coercive action against FOSS maintainers. Both the Truecrypt developers and someone doing crypto on Linux filesystems kind of disappeared out of nowhere, no longer talking about their projects. Now we’re into hearsay and guesswork, though. Also, they might be able to SIGINT FOSS with a secrecy order. We might be able to counter that by having people in foreign countries looking for the problem and submitting a fix, with the rule being to always take a fix. They’d have to spot a problem that might be outside their domain expertise, though.

                                      Plenty of possibilities. I just don’t have anything concrete on mandated, FOSS subversion. I will say one of the reasons I’d never publish crypto under my own name or take money for it is this threat. I think it’s very realistic. I think we haven’t seen it play out since the popular libraries for crypto were so buggy that they didn’t need such a setup. If they did, they’d use it sparingly. Those also ran on systems that were themselves ridden with preventable 0-days.

                                      Far as open vs closed with review, I wrote an essay on that here.

                                      1. 2

                                        Thanks for that essay, that was insightful.

                                        I roughly remember the Truecrypt incident, and that was suspect, although I never came across the Linux filesystem crypto circumstance. Was it similar to Truecrypt? Was that developer already known? My googling didn’t seem to turn up any mention of it at all.

                                    3. 1

                                      There is one thing I am wondering about. Government agencies require backdoors, but I would think they also require that those backdoors be kept secret. How does that work with FOSS software? Alright, yes, they could sneak it into the compiled version maybe, but distros are all moving to reproducible builds, so that would be detected.

                                      1. 2

                                        Ignore the Karger/Thompson attack: it’s only happened twice that I know of. The nation-state attackers will go for low-hanging fruit like other black hats. They also need deniability. So, they’re most likely to either (a) use all the bug-hunting tools to find what’s already there or (b) introduce the kinds of defects people already make by accident. With (b), discoveries might not even burn the source if they otherwise do good work.

                                        For FOSS, they’ll slip the vulnerability into a worthwhile contribution. It can be either in that component or be an interaction between it and others. Error-handling code of a complex component is a particularly-good spot since they often have errors.
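
                                        As a purely illustrative sketch (all names here are hypothetical), this is the kind of error-handling “mistake” being described: a check that fails open instead of closed, which reads like an ordinary bug but works as a backdoor for anyone who can trigger the error path.

                                        def check_password(user, password):
                                            # Stand-in for a call to a flaky auth backend.
                                            raise TimeoutError("auth backend unreachable")

                                        def authenticate(user, password):
                                            try:
                                                return check_password(user, password)
                                            except TimeoutError:
                                                return True  # should fail closed (False); failing open is the planted hole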

                                2. 11

                                  They are able to push updates over the internet and the whole thing is proprietary. I am unable to tell you what the system does because I can’t see it. And at any time Apple can push arbitrary code which could add a backdoor without anyone knowing.

                                  When you can’t see what is going on you have to assume the worst.

                                  1. 5

                                    I can’t tell whether this is 1. a defense of open source in general and Android in particular, or 2. a critique of Apple.

                                    Neither works.

                                    1. See the example of what just happened, or the Firefox/Mr. Robot partnership recently. Open source does not automatically confer transparent privacy.

                                    2. Apple has, in fact, emerged as a staunch defender of user privacy. There are many many examples of apple defending users against law enforcement.

                                    You can’t wish Apple to be terrible about privacy and use that as the argument.

                                    1. 3

                                      Sure you can. They could take money to secretly backdoor the phone for the NSA and use lawyers to tell the FBI to get lost for image reasons. The better image on privacy leads to more sales. The deal with the NSA puts an upper bound on what the FBI will do to them, since they might just get the data from the NSA.

                                      If that sounds far fetched, remember two things:

                                      1. The telecoms were taking around $100 million each from NSA to give them data that they sometimes passed onto feds to use with parallel construction. Publicly they said they gave it out only with warrants. RSA went further to say they encrypted the data but weakened the crypto for $30 mil. The Core Secrets leak also said FBI could “compel” this.

                                      2. In the Lavabit trial, the Feds argued he wouldn’t have losses if customers didn’t know he gave the Feds the master key. He was supposed to do it under court order and then lie about it.

                                      Given those two, I don’t trust any profit-motivated company in the US to not hand over data. Except maybe Lavabit in the past. Any of them could be doing it in secret, either for money they take or to avoid fines/jail.

                                      1. 3

                                        I would say Apple is more comparable to Lavabit than the others – they’re actively and publicly taking steps to protect their users’ privacy.

                                        I wouldn’t argue that they will never do it, but to paint Apple and Google with the same brush on user privacy is silly and irresponsible.

                                        1. 2

                                          Well, we know that the secret court meeting was going to put him in contempt or else. He had to shut the business down to avoid it. Apple may have been able to do more due to both their size and making the case a public debate. Then again, that may have been a one-time victory followed by a secret loss. You can’t know when there are two legal systems in operation side by side, one public and one secret. I assume the worst if the secret system is aggressively after something.

                                          “I wouldn’t argue that they will never do it, but to paint Apple and Google with the same brush on user privacy is silly and irresponsible.”

                                          I agree with this. Apple is a product company. Google is a full-on surveillance company. Google is both riskier for their users now and increasingly risky over time as they collect more data, which more parties get in various ways.

                                      2. 3

                                        I am not defending Android at all. As you can see in the OP post, Android is absolutely horrible for privacy and control. I also agree that open source is not flawless, of course, but open source gives us the opportunity to inspect the programs we use (usually while contributing features). From what I understand, the Firefox event was pushed through a beta/testing channel and not through the FF source. I would hope all Linux distros have this feature turned off when packaging FF.

                                        The OP comment was asking me to prove that Apple is able to change user settings over the network, and I think that is an unreasonable thing to ask when the software is closed source. I also mentioned that it is possible, as Apple is able to push new updates at any time with arbitrary code. So they have the capability of doing anything that is possible hardware-wise.

                                        1. 2

                                          Fair on your 2nd point of responding to the OP and I don’t know whether they have the capability. However, they seem, at least at the moment, disinterested in taking random liberties with their users’ privacy.

                                          1. 3

                                            disinterested in taking random liberties with their users’ privacy.

                                            I think that’s probably true, but no one in this thread actually knows, and one day it’s quite likely that the US government will force them to backdoor devices if they haven’t already.

                                        2. 1

                                          Apple has, in fact, emerged as a staunch defender of user privacy.

                                          this has to be a joke

                                        3. 1

                                          How do you know they are able to do that then?

                                          Because all system updates that got installed on my phone came only after I manually approved them. Unless I am not aware of some previously demonstrated capability this sounds like exactly the same kind of unsubstantiated argument you are arguing against.

                                          1. 1

                                            What criteria do you use for approving or denying updates and how would that be able to stop a backdoor being installed?

                                            1. 2

                                              It doesn’t matter since the original argument was that Apple can do the same thing (automatically install/change software on your device) which they cannot. You have to assent to the installation (of updates, backdoor or whatever). May not be a difference you care about, but I do.

                                               I agree that black box software makes it impossible to know if software can be trusted, but a binary package of open source software is also just a black box if I am not able to generate the same hash when compiling it myself, which in my admittedly not-recent experience happened a lot.
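
                                               For what it’s worth, the check itself is trivial; the hard part is having a reproducible build to compare against. A minimal sketch (the file paths are hypothetical):

                                               import hashlib

                                               def sha256_of(path):
                                                   h = hashlib.sha256()
                                                   with open(path, "rb") as f:
                                                       for chunk in iter(lambda: f.read(1 << 20), b""):
                                                           h.update(chunk)
                                                   return h.hexdigest()

                                               # Only meaningful if the build is bit-for-bit reproducible from the same sources.
                                               print(sha256_of("downloaded/app"))  # the distributed binary
                                               print(sha256_of("my-build/app"))    # the one you compiled yourself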

                                              1. 1

                                                “You have to assent to the installation “

                                                 You would need a copy of the source for all privileged hardware and software on their platform to even begin to prove that. You don’t have that. So, you don’t know. You’re acting on faith in a profit-motivated company’s promises.

                                                 I’ll also add that this is a company with enough money to do a secure rewrite or mod of their OS, but it intentionally doesn’t. They don’t care that much. They’re barely even investing in Mac OS X, from what its users say. Whereas Sun invested almost $300 million into redoing Solaris for version 10. That brought us things like ZFS.

                                                 A company with around $100 billion that cares less about QA than smaller businesses shouldn’t be trusted at all. They’ve already signalled that wealth accumulation was more important.

                                                 Meanwhile, tiny OK Labs cranked out mobile sandboxing good enough that General Dynamics bet piles of money on them for Defense use. Several other companies cranked out security-enhanced CPUs, network stacks, DNS, end-to-end messaging, and so on. Quite a few were for sale, esp. those nearing bankruptcy. Shows Apple had plenty of opportunities to do the same or buy them. Didn’t care. They’ll make billions anyway.

                                                1. 2

                                                  I agree with pretty much everything you say and while interesting, I am not sure how it is relevant to what I said.

                                                  I did not argue that one should trust Apple (even though I do think iPhone has a better track record than Android). My point was simply that all other things being equal I prefer platforms that don’t suddenly change on some company’s whim and let me decide when or if I want to perform an update and that AFAICT Apple does not push those updates without user’s consent.

                                                  I assume your argument is that consenting is meaningless as I cannot perform any reasonable security analysis of what I will receive. True that I can’t, but I also value predictability and speaking from a personal experience I feel I lose some of it with auto-updates.

                                                  1. 1

                                                    I assume your argument is that consenting is meaningless as I cannot perform any reasonable security analysis of what I will receive. True that I can’t, but I also value predictability and speaking from a personal experience I feel I lose some of it with auto-updates.

                                                    I think you are missing the point. Your iPhone has convinced you that it would only ever install an update if you approved it, but you have no way of knowing that there isn’t already a way for Apple to push software without your consent, in a way that you wouldn’t detect.

                                                    I’m sure if you looked at the EULA that you agree to when you use an iPhone, Apple has every legal right to do this even if they try to create an image of a company that wouldn’t.

                                      3. 4

                                        objdump -d

                                        1. 3

                                          When the OS is open source how would you know? Have you personally audited all of Linux? How do you know you can trust third-party audits? I don’t think “it’s open source” provides much in terms of security, all things considered.

                                        2. 3

                                          How do you know what APNS does?

                                        1. 13

                                          Ah they tricked me with this one, it’s a Medium article hidden behind another domain.

                                          (Whenever I see “medium.com” next to lobsters articles I know not to click, since the result will be a weak thinkpiece by a frontend developer, wrapped in obtrusive markup.)

                                          1. 3

                                            i had literally the exact same response. “Ah, a medium article….about frontend dev……(tab closed)”.

                                            1. 3

                                              Interesting ‘hot take’!

                                              You judge people based on the ‘medium’ that they use.

                                              1. 8

                                                “The medium is the message” ;)

                                                I have to admit though that seeing a medium link is generally a negative signal for me. Still click on many of them.

                                                1. 7

                                                  I think Medium’s original USP was “only quality content”.

                                                  Predictably, that didn’t scale.

                                                  1. 1

                                                    Many confuse Marshall McLuhan’s original meaning of that phrase. It didn’t really mean that the way a message was delivered was part of the message itself. It actually meant that the vast majority of messages were medium or average.

                                                    It would have been better said, “meh, the message is average.”

                                                    1. 5

                                                      This didn’t really make sense to me, so I looked it up, and I don’t think that’s right. The original meaning is exactly what we’ve come to understand it as:

                                                      The medium is the message because it is the medium that shapes and controls the scale and form of human association and action. The content or uses of such media are as diverse as they are ineffectual in shaping the form of human association. Indeed, it is only too typical that the “content” of any medium blinds us to the character of the medium. (Understanding Media: The Extensions of Man, 1964, p.9)

                                                      I wonder where you’ve heard your interpretation?

                                                      1. 5

                                                        This comment is obviously a troll. Fitting, given that McLuhan himself was a troll.

                                                        1. 4

                                                          Interesting interpretation. I am not sure how he originally came to that phrase, but his book certainly spent a lot of time and effort arguing for the now prevalent meaning.

                                                  1. 2

                                                    Cool visualisations, although I wonder how well they’ll work without Javascript or on mobile. Kudos to them for adding ‘Heads up, you’re about to experience some scroll-driven animations. If you’d like to skip that, you can jump ahead to the final state.’

                                                    The issue itself is pretty funny. There are some pretty obvious solutions, like buying jeans with bigger pockets. I suspect the reason is relatively simple: pockets are needed less when most women carry a bag with them everywhere they go, while most men don’t.

                                                    Probably better not to have too many gender politics posts here tho.

                                                    1. 11

                                                      My wife carries bags mostly because pockets on women’s clothes are ridiculous and because your solution while theoretically sound, fails miserably in practice if you cannot find such clothes.

                                                      This issue might be funny to you, but at this point is just frustration for her and to be honest for me too.

                                                      1. 5

                                                        it works wonderfully on mobile

                                                        1. 5

                                                           Do you have good tips for women’s jeans with big pockets?

                                                        1. 1

                                                          Do you really carry anything in your pockets? I find it very uncomfortable.

                                                          1. 5

                                                            Yes and my wife would like to too.

                                                            1. 3

                                                               Of course I do. It may not be very comfortable, but unlike an external bag, it doesn’t restrict your movement, and that’s a big advantage.

                                                               The article is a nice data collection and visualization effort.

                                                              1. 2

                                                                 A “mobile” phone in a pocket surely restricts my movements, especially when sitting. Personally I sometimes use a briefcase just for my phone and keys. It’s heavier, but you can put it on your knees. Also it looks better than stuffed pockets. The article and presentations are very nice indeed.

                                                                1. 3

                                                                   For the briefcase you need one hand, or you need to be sitting in order to put it on your lap. I intentionally choose phones that fit in a pocket comfortably, and I’m not happy with that stupid trend of phone sizes increasing to the point where even men’s pockets are not enough.

                                                              2. 2

                                                                   I carry my phone, house keys, work keycard and tissues; I wouldn’t survive with women’s pockets.

                                                                1. 3

                                                                  I usually add a wallet and a small bottle of alcohol-based hand sanitizer which is really great if you are eating something on the go.

                                                                  I’d like to add that roughly one in 15 people worldwide has a form of diabetes and that a large portion of them also carries medication and a sugary and a salty snack as treatment.

                                                                2. 1

                                                                     Not if the pocket is deep enough. I have pants whose pockets fit my phone, and it’s no issue because the phone sits lower on my leg.

                                                                1. 8

                                                                  To be fair, they should also mark as “Not Secure” any page running JavaScript.

                                                                  Also, pointless HTTPS adoption might reduce content accessibility without blocking censorship.
                                                                       (Disclaimer: this does not mean that you shouldn’t adopt HTTPS for sensitive content! It just means that using HTTPS should not be a matter of fashion: there are serious trade-offs to consider.)

                                                                  1. 11

                                                                    By adopting HTTPS you basically ensure that nasty ISPs and CDNs can’t insert garbage into your webpages.

                                                                    1. 2

                                                                      No.

                                                                           It protects against cheap man-in-the-middle attacks (like the one an ISP could do), but it can do nothing against CDNs that can identify you, as CDNs serve you JavaScript over HTTPS.

                                                                      1. 11

                                                                        With Subresource Integrity (SRI) page authors can protect against CDNed resources changing out from beneath them.
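
                                                                             Roughly how it works: the page embeds a digest of the exact file it expects, and the browser refuses to run the resource if the bytes the CDN actually serves hash to anything else. A minimal sketch of computing the expected value (the file name is just a placeholder):

                                                                             # Produces the value for an integrity="sha384-..." attribute on a <script> or <link> tag.
                                                                             import base64, hashlib

                                                                             data = open("library.min.js", "rb").read()  # local copy of the file the CDN should serve
                                                                             digest = hashlib.sha384(data).digest()
                                                                             print("sha384-" + base64.b64encode(digest).decode())

                                                                             If the CDN later serves different bytes, the hash no longer matches and the browser blocks the resource instead of executing it.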

                                                                        1. 1

                                                                               Yes, SRI mitigates some of the JavaScript attacks that I describe in the article, in particular the nasty ones from CDNs exploiting your trust in a harmless-looking website. Unfortunately several others remain possible (just think of JSONP, or, even simpler, if the website itself colludes in the attack). Also, it needs widespread adoption to become a security feature: it should probably be mandatory, but for sure browsers should mark as “Not Secure” any page downloading programs from CDNs without it.

                                                                               Where SRI could really help is with the accessibility issues described by Meyer: you can serve most page resources as cacheable HTTP resources if the content hash is declared in an HTTPS page!

                                                                        2. 3

                                                                               With SRI you can block the CDNs you use to load external JS scripts from manipulating the webpage.

                                                                               I also don’t buy the claim that it reduces content accessibility; the link you provided above explains a problem that would be solved by simply using an HTTPS caching proxy (something a lot of corporate networks seem to have no problem operating, considering TLS 1.3 explicitly tries not to break those middleboxes).

                                                                          1. 4

                                                                            CDNs are man-in-the-middle attacks.

                                                                        3. 1

                                                                               As much as I respect Meyer, his point is moot. MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. Some companies even made out-of-the-box HTTPS URL filtering their selling point. If people are ready or forced to trade security for accessibility, but don’t know how to set up an HTTPS MitM proxy, it’s their problem, not the webmasters’. We should be ready to teach those in need how to set it up, of course, but that’s about it.

                                                                          1. 0

                                                                                 MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. […] If people are ready or forced to trade security for accessibility, but don’t know how to set up an HTTPS MitM proxy, it’s their problem, not the webmasters’.

                                                                            Well… how can I say that… I don’t think so.

                                                                                 Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                                                                                 Beyond the obvious risk that the proxy is compromised (you should never assume that it won’t be), which is pretty high in some places (not only in Africa… don’t be naive, a chain is only as strong as its weakest link), a transparent HTTPS proxy has an obvious UI issue: people do not realise that it’s unsafe.

                                                                                 If browsers don’t mark them as “Not Secure” (how could they?), the user will overlook the MitM risks, turning a security feature against the users’ real security and safety.

                                                                                 Is this something webmasters should care about? I think so.

                                                                            1. 4

                                                                                   Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                                                                                   Not sure how to tell you this, but companies have been doing this on their internal networks for a very long time, and this is basically standard operating procedure at every enterprise-level network I’ve seen. They create their own CA, generate an intermediate CA key and cert, and then put that on an HTTPS MITM transparent proxy that inspects all traffic going in and out of the network. The intermediate cert is added to the certificate store on all devices issued to employees so that it is trusted. By inspecting all of the traffic, they can monitor for external and internal threats, scan for exfiltration of trade secrets and proprietary data, and keep employees from watching porn at work. There is an entire industry around products that do this; BlueCoat and Barracuda are two popular examples.

                                                                              1. 5

                                                                                There is an entire industry around products that do this

                                                                                     There is an entire industry around ransomware. But that does not mean it’s a security solution.

                                                                                1. 1

                                                                                       It is, it’s just that the word security is better understood in terms of “who” is getting secured (or not) from “whom”.

                                                                                       What you keep saying is that a MitM proxy does not protect the security of end users (that is, employees). What it does, however, in certain contexts like the one described above, is help protect the organisation in which the end users operate. Arguably it does, because it certainly makes it more difficult to protect yourself from something you cannot see. If employees are seen as a potential threat (they are), then reducing their security can help you (the organisation) with yours.

                                                                                  1. 1

                                                                                    I wonder if you did read the articles I linked…

                                                                                         The point is that, in a context of unreliable connectivity, HTTPS dramatically reduces accessibility, but it doesn’t help against censorship.

                                                                                         In this context, we need to grant people both accessibility and security.

                                                                                         An obvious solution is to give them cacheable HTTP access to contents. We can fool the clients into trusting a MitM caching proxy, but since all we want is caching, this is not the best solution: it adds no security, just a false sense of security. Thus, in that context, you can improve users’ security by removing HTTPS.

                                                                                    1. 1

                                                                                           I have read it, but more importantly, I worked in and built services for places like that for about 5 years (Uganda, Bolivia, Tajikistan, rural India…).

                                                                                           I am with you that an HTTPS proxy is generally best avoided, if for no other reason than that it grows the attack surface area. I disagree that removing HTTPS increases security. It adds a lot more places and actors that can now negatively impact the user, in exchange for the user knowing this without being able to do much about it.

                                                                                      And that is even without going into which content is safe to be cached in a given environment.

                                                                                      1. 1

                                                                                        And that is even without going into which content is safe to be cached in a given environment.

                                                                                        Yes, this is the best objection I’ve read so far.

                                                                                        As always it’s a matter of tradeoff. In a previous related thread I described how I would try to fix the issue in a way that people can easily opt-out and opt-in.

                                                                                             But while I think it would be weird to remove HTTPS for an ecommerce cart or for a political forum, I think that most of Wikipedia should be served through both HTTP and HTTPS. People should be aware that HTTP pages are not secure (even though it all depends on your threat model…), but they should not be misled into thinking that pages going through a MitM proxy are secure.

                                                                              2. 2

                                                                                   An HTTPS proxy isn’t incompetence; it’s industry standard.

                                                                                   They solve a number of problems and are basically standard in almost all corporate networks with a minimum security level. They aren’t a weak link in the chain, since traffic in front of the proxy is HTTPS and traffic behind it stays in the local network, encrypted under a network-level CA (you can restrict CA capabilities via TLS cert extensions; there are a fair number of useful ones that prevent compromise).

                                                                                   Browsers don’t mark these insecure because installing and using an HTTPS proxy requires full admin access to a device, at which level there is no reason to consider what the user is doing insecure.

                                                                                1. 2

                                                                                     Browsers don’t mark these insecure because installing and using an HTTPS proxy requires full admin access to a device, at which level there is no reason to consider what the user is doing insecure.

                                                                                  Browsers bypass the network configuration to protect the users’ privacy.
                                                                                  (I agree this is stupid, but they are trying to push this anyway)

                                                                                     The point is: the user’s security is at risk whenever she sees something that is not secure presented as HTTPS (which stands for “HTTP Secure”). It’s a rather simple and verifiable fact.

                                                                                  It’s true that posing a threat to employees’ security is an industry standard. But it’s not a security solution. At least, not for the employees.

                                                                                  And, doing that in a school or a public library is dangerous and plain stupid.

                                                                                  1. 0

                                                                                       Nobody is posing a threat to employees’ security here; a corporation can in this case be regarded as a single entity, so terminating SSL at the borders of the entity, similar to how a browser terminates SSL by showing the website on a screen, is fairly valid.

                                                                                       Schools and public libraries usually have the internet filtered, yes, but that is usually made clear to the user before using it (at least when I wanted access to either, I was in both cases instructed that the network is supervised and filtered), which IMO negates the potential security compromise.

                                                                                    Browsers bypass the network configuration to protect the users’ privacy.

                                                                                    Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.

                                                                                    1. 1

                                                                                         Schools and public libraries usually have the internet filtered, yes, but that is usually made clear to the user before using it [..] which IMO negates the potential security compromise.

                                                                                      Yes this is true.

                                                                                      If people are kept constantly aware of the presence of a transparent HTTPS proxy/MitM, I have no objection to its use instead of an HTTP proxy for caching purposes. Marking all pages as “Not Secure” is a good way to gain such awareness.

                                                                                      Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.

                                                                                      Did you know about Firefox’s DoH/CloudFlare affair?

                                                                                      1. 2

                                                                                        Yes I’m aware of the “affair”. To my knowledge the initial DoH experiment was localized and run on users who had enabled studies (opt-in). In both the experiment and now Mozilla has a contract with CloudFlare to protect the user privacy during queries when DoH is enabled (which to my knowledge it isn’t by default). In fact, the problem ungleich is blogging about isn’t even slated for standard release yet, to my knowledge.

                                                                                        It’s plain old wrong in the bad kind of way; it conflates security maximalism with Mozilla’s mission to bring the maximum number of users privacy and security.

                                                                                        1. 1

                                                                                          TBH, I don’t know what you mean by “security maximalism”.

                                                                                          I think ungleich raise serious concerns that should be taken into account before shipping DoH to the masses.

                                                                                          Mozilla has a contract with CloudFlare to protect user privacy

                                                                                          It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                                                                          AFAIK, even Facebook had a contract with its users.

                                                                                          Yeah.. I know… they will “do no evil”…

                                                                                          1. 1

                                                                                            Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.

                                                                                            It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                                                                            Cloudflare hasn’t done much that makes me believe they will violate my privacy. They’re not in the business of selling data to advertisers.

                                                                                            AFAIK, even Facebook had a contract with its users

                                                                                            Facebook used Dark Patterns to get users to willingly agree to terms they would otherwise never agree to; I don’t think this is comparable. Facebook likely never violated the contract terms with their users that way.

                                                                                            1. 1

                                                                                              Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.

                                                                                              You should define “common user”.
                                                                                              If you mean the politically inept who are happy to be easily manipulated as long as they are given something to say and retweet… yes, they have nothing to fear.
                                                                                              The problem is for those people who are actually useful to society.

                                                                                              Cloudflare hasn’t done much that makes me believe they will violate my privacy.

                                                                                              The problem with Cloudflare is not what they did, it’s what they could do.
                                                                                              There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.

                                                                                              But my concerns are with Mozilla.
                                                                                              They are trusted by millions of people worldwide. Me included. But actually, I’m starting to think they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                                              1. 1

                                                                                                So in your opinion, the average user does not deserve the protection of being able to browse the net as safely as we can make it for them?

                                                                                                Just because you think they aren’t useful to society (and they are, these people have all the important jobs, someone isn’t useless because they can’t use a computer) doesn’t mean we, as software engineers, should abandon them.

                                                                                                There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.

                                                                                                Then don’t use it? DoH isn’t going to be enabled by default in the near future and any UI plans for now make it opt-in and configurable. The “Cloudflare is default” is strictly for tests and users that opt into this.

                                                                                                they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                                                You mean safe because everyone involved knows what’s happening?

                                                                                                1. 1

                                                                                                  I don’t believe the concerns are really concerns for the common user.

                                                                                                  You should define “common user”.
                                                                                                  If you mean the politically inept who are happy to be easily manipulated…

                                                                                                  So in your opinion, the average user does not deserve the protection of being able to browse the net as safely as we can make it for them?

                                                                                                  I’m not sure whether you are serious or you are pretending not to understand to cope with your lack of arguments.
                                                                                                  Let’s assume the first… for now.

                                                                                                  I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept. That’s simply because anyone politically inept is unlikely to be affected by surveillance.
                                                                                                  That’s it.

                                                                                                  they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                                                  You mean safe because everyone involved knows what’s happening?

                                                                                                  Really?
                                                                                                  Are you sure everyone understands what a MitM attack is? Are you sure every employee understands that their system administrators can see the mail they read from GMail? I think you don’t have much experience with users, and I hope you don’t design user interfaces.

                                                                                                  A MitM caching HTTPS proxy is not safe. It can be useful for corporate surveillance, but it’s not safe for users. And it extends the attack surface, both for the users and the company.

                                                                                                  As for Mozilla: as I said, I’m just not sure whether they deserve trust or not.
                                                                                                  I hope they do! Really! But it’s really too naive to think that a contract is enough to bind a company more than a subpoena. And they ship WebAssembly. And you have to edit about:config to disable JavaScript
                                                                                                  All this is very suspect for a company that claims to care about users’ privacy!

                                                                                                  1. 0

                                                                                                    I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept.

                                                                                                    I’m saying the concerns raised by ungleich are too extreme and should be dismissed on the grounds that they are not practical in the real world.

                                                                                                    Are you sure everyone understands what a MitM attack is?

                                                                                                    An attack requires an adversary, the evil one. An HTTPS caching proxy isn’t evil or the enemy; you have to opt into this behaviour. It is not an attack, and I think it’s not fair to characterise it as such.

                                                                                                    Are you sure every employee understands that their system administrators can see the mail they read from GMail?

                                                                                                    Yes. When I signed my work contract this was specifically pointed out and made clear in writing. I see no problem with that.

                                                                                                    And it extends the attack surface, both for the users and the company.

                                                                                                    And it also enables caching for users with less than stellar bandwidth (think third world countries where satellite internet is common, 500ms ping, 80% packet loss, 1mbps… you want caching for the entire network, even with HTTPS)

                                                                                                    And they ship WebAssembly.

                                                                                                    And? I have no concerns about WebAssembly. It’s not worse than obfuscated javascript. It doesn’t enable anything that wasn’t possible before via asm.js. The post you linked is another security-maximalist opinion piece with few factual arguments.

                                                                                                    And you have to edit about:config to disable JavaScript…

                                                                                                    Or install a half-way competent script blocker like uMatrix.

                                                                                                    All this is very suspect for a company that claims to care about users’ privacy!

                                                                                                    I think it’s understandable for a company that both cares about users’ privacy and doesn’t want a market share of “only security maximalists”, also known as 0%.

                                                                                                    1. 1

                                                                                                      An attack requires an adversary, the evil one.

                                                                                                      According to this argument, you don’t need HTTPS as long as you don’t have an enemy.
                                                                                                      It shows very well your understanding of security.

                                                                                                      The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                                                                                      I have no concerns about WebAssembly.

                                                                                                      Not a surprise.

                                                                                                      Evidently you have never had to debug either obfuscated javascript or an optimized binary (without sources or debug symbols).

                                                                                                      Trust one who has done both: obfuscated javascript is annoying; understanding what an optimized binary is doing is hard.

                                                                                                      As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.

                                                                                                      1. 1

                                                                                                        According to this argument, you don’t need HTTPS as long as you don’t have an enemy.

                                                                                                        If there is no adversary, no Mallory in the connection, there is no reason to encrypt it either, correct.

                                                                                                        It shows very well your understanding of security.

                                                                                                        My understanding of security is based on threat models. A threat model includes who you trust, who you want to talk to and who you don’t trust. It includes how much money you want to spend, how much your attacker can spend and the methods available to both of you.

                                                                                                        Security is not binary: a threat model is the entry point, and your protection mechanisms should match your threat model as closely as possible or exceed it, but there is no reason to exert effort beyond your threat model.

                                                                                                        The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                                                                                        Mallory is a potential enemy. An HTTPS caching proxy operated by a corporation is not an enemy. It’s not Mallory; it’s Bob, Alice and Eve, where Bob wants to send Alice a message, Alice works for Eve, and Eve wants to avoid having duplicate messages on the network, so Eve and Alice agree that caching the encrypted connection is worthwhile.

                                                                                                        Mallory sits between Eve and Bob, not between Bob and Alice.

                                                                                                        Evidently you have never had to debug either obfuscated javascript or an optimized binary (without sources or debug symbols).

                                                                                                        I have, in which case I either filed a GitHub issue if the project was open source or notified the company that offered the javascript or optimized binary. Usually the bug is then fixed.

                                                                                                        It’s not my duty or problem to debug web applications that I don’t develop.

                                                                                                        Trust one who has done both: obfuscated javascript is annoying; understanding what an optimized binary is doing is hard.

                                                                                                        Then don’t do it? Nobody is forcing you.

                                                                                                        As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.

                                                                                                        I don’t think you consider that a practical problem such as bad connections can outweigh a lot of potential security issues: you don’t have the time or user patience to do it properly, and in most cases it’ll be good enough for the average user.

                                                                                2. 2

                                                                                  My point is that the problems of unencrypted HTTP and MitM’ed HTTPS are exactly the same. If one used to prefer the former because it can be easily cached, I can’t see how setting up the latter makes their security issues worse.

                                                                                  1. 3

                                                                                    With HTTP you know it’s not secure. OTOH you might not be aware that your HTTPS connection to the server is not secure at all.

                                                                                    The lack of awareness makes MitM caching worse.

                                                                            1. 7

                                                                              Bad idea, it should error or give NaN.

                                                                              1/0 = 0 is mathematically sound

                                                                              It’s not mathematically sound.

                                                                              a/b = c should be equivalent to a = c*b

                                                                              this fails with 1/0 = 0 because 1 is not equal to 0*0.

                                                                               Edit: I was wrong, it is mathematically sound. You can define x/0 = f(x) for any function f of x at all. All the field axioms still hold, because they all have preconditions that ensure you never look at the result of division by zero.

                                                                              There is a subtlety because some people say (X) and others say (Y)

                                                                              • (X) a/b = c should be equivalent to a = c*b when the LHS is well defined

                                                                              • (Y) a/b = c should be equivalent to a = c*b when b is nonzero

                                                                               If you have definition (X) in mind, it becomes unsound; if you are more formal and use definition (Y), then it stays sound.

                                                                               It seems like a very bad idea to make division well defined when the expected algebra rules don’t apply to it. This is the whole reason we leave it undefined or make it an error. There isn’t any value you can give it that makes algebra work with it.

                                                                              It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.
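
                                                                               A concrete way to see the (X)/(Y) distinction is to make division total (returning 0 on a zero divisor, as Pony does) and then check the law a = c*b only where definition (Y) requires it. A minimal sketch in Python, purely as an illustration and not Pony code:

                                                                                   from fractions import Fraction
                                                                                   from itertools import product

                                                                                   def div0(a, b):
                                                                                       # Total division: defined as 0 when the divisor is 0, exact otherwise.
                                                                                       return Fraction(0) if b == 0 else a / b

                                                                                   values = [Fraction(n) for n in range(-3, 4)]

                                                                                   for a, b in product(values, repeat=2):
                                                                                       c = div0(a, b)
                                                                                       if b != 0:
                                                                                           # Definition (Y): the law a = c*b only has to hold for nonzero b.
                                                                                           assert a == c * b
                                                                                       # Definition (X) would also demand a == c*b when b == 0,
                                                                                       # which fails whenever a != 0 (e.g. 1 != 0 * 0).

                                                                                   print("definition (Y) holds for every sampled pair")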

                                                                              1. 14

                                                                                I really appreciate your follow-up about you being wrong. It is rare to see, and I commend you for it. Thank you.

                                                                                1. 8

                                                                                  This is explicitly addressed in the post. Do you have any objections to the definition given in the post?

                                                                                  1. 13

                                                                                    I cover that exact objection in the post.

                                                                                    1. 4

                                                                                      It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values

                                                                                      That was my initial reaction too. But I don’t think Pony’s intended use case is numerical analysis; it’s for highly parallel low-latency systems, where there are other (bigger?) concerns to address. They wanted to have no runtime exceptions, so this is part of that design tradeoff. Anyway, nothing prevents the programmer from checking for zero denominators and handling them as needed. If you squint a little, it’s perhaps not that different from the various conventions on truthy/falsey values that exist in most languages, and we’ve managed to accommodate to those.

                                                                                      1. 4

                                                                                         Those truthy/falsey values are often a source of errors.

                                                                                         I may be biased in my dislike of this “feature”, because I cannot recall a time when 1/0 = 0 would have been useful in my work, but I have no difficulty whatsoever thinking of cases where truthy/falsey caused problems.

                                                                                      2. 4

                                                                                        1/0 is integer math. NaN is available for floating point math not integer math.

                                                                                        1. 2

                                                                                          It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.

                                                                                          I wonder if someone making a linear math library for Pony already faced this. There are many operations that might divide by zero, and you will want to let the user know if they divided by zero.

                                                                                          1. 7

                                                                                             It’s easy for a Pony user to create their own integer division operation that will be partial. Additionally, a “partial division for integers” operator has been in the works for a while and will land soon. It’s part of a set of operators that will also error if you have integer overflow or underflow. Those will be +?, /?, *?, -?.

                                                                                            https://playground.ponylang.org/?gist=834f46a58244e981473c0677643c52ff
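
                                                                                             For readers unfamiliar with Pony, here is a rough analogy of the total vs. partial behaviour, sketched in Python rather than Pony (the function names and the 64-bit width are made up for illustration; the real Pony operators are the ones listed above):

                                                                                                 U64_MAX = 2**64 - 1

                                                                                                 def div_total(a, b):
                                                                                                     # Total integer division, like Pony's `/`: never errors, n / 0 == 0.
                                                                                                     return 0 if b == 0 else a // b

                                                                                                 def div_partial(a, b):
                                                                                                     # Partial division, analogous to the proposed `/?`: errors on a zero divisor.
                                                                                                     if b == 0:
                                                                                                         raise ZeroDivisionError("partial division by zero")
                                                                                                     return a // b

                                                                                                 def add_partial_u64(a, b):
                                                                                                     # Partial unsigned 64-bit addition: errors on overflow instead of wrapping.
                                                                                                     result = a + b
                                                                                                     if result > U64_MAX:
                                                                                                         raise OverflowError("u64 addition overflowed")
                                                                                                     return result

                                                                                                 print(div_total(1, 0))    # 0
                                                                                                 print(div_partial(6, 3))  # 2; div_partial(1, 0) would raise instead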

                                                                                        1. 65

                                                                                          This blogpost is a good example of fragmented, hobbyist security maximalism (sprinkled with some personal grudges based on the tone).

                                                                                          Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

                                                                                          Talking about threat models: it’s important to start from them, and not doing so explains most of the misconceptions in the post.

                                                                                          • Usable security for the most people possible. The vast majority of people on the planet use iOS and Android phones, so while it is theoretically true that Google or Apple could be forced to subvert their OSs, it’s outside the threat model and something like that would be highly visible, a nuclear option so to speak.
                                                                                          • Alternative distribution mechanisms are not used by 99%+ of the existing phone userbases; providing an APK is indeed correctly viewed as harm reduction.
                                                                                          • Centralization is a feature. Moxie created a protocol and a service, used by billions and millions of people respectively, that provide real, measurable security for a lot of people. The fact is that doing all this in a decentralized way is something we don’t yet know how to do, or doing so invites tradeoffs that we shouldn’t make. Federation atm either leads to insecurity or leads to the ossification of the ecosystem, which in turn leads to a useless system for real users. We’ve had IRC from the 1990s, ever wonder why Slack ever became a thing? Ossification of a decentralized protocol. Ever wonder why openpgp isn’t more widespread? No one cares about security in a system where usability is low and design is fragile. Ever tried to do key rotation in gpg? Even cryptographers gave up on that. Signal has that built into the protocol.

                                                                                          Were tradeoffs made? Yes. Have they been carefully considered? Yes. Signal isn’t perfect, but it’s usable, high-level security for a lot of people. I don’t say I fully trust Signal, but I trust everything else less. Turns out things are complicated when it’s about real systems and not fantasy escapism and wishes.

                                                                                          1. 34

                                                                                            Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

                                                                                            In this article, resistance to governments constantly comes up as a theme of his work. He also pushed for his tech to be used to help resist police states, as with the Arab Spring example. Although he mainly raised the baseline, the tool has been promoted for resisting governments, and articles like that could increase the perception that it was secure against governments.

                                                                                            This nation-state angle didn’t come out of thin air from paranoid security people: it’s the kind of thing Moxie talks about. In one talk, he even started with a picture of two activist friends jailed in Iran, in part to show the evils that motivate him. Stuff like that only made the things Drew complains about regarding centralization, control, and dependence on cooperating with a surveillance organization stand out even more due to the inconsistency. I’d have thought he’d make signed packages for things like F-Droid sooner if he’s so worried about that stuff.

                                                                                            1. 5

                                                                                              A problem with the “nation-state” rhetoric that might be useful to dispel is the idea that it is somehow a God-tier where suddenly all other rules become defunct. The five-eyes are indeed a “nation state” and have capabilities that are profound; like the DJB talk speculating about how many RSA-1024 keys they’d likely be able to factor in a year given such-and-such developments, and what you can do with that capability. That’s scary stuff. On the other hand, this is not the “nation state” that is Iceland or Syria. Just looking at the leaks from the “Hacking Team” thing, there are a lot of “nation states” forced to rely on some really low-quality stuff.

                                                                                              I think Greg Conti in his “On Cyber” setup depicts it rather well (sorry, I don’t have a copy of the section in question), and that a more reasonable threat model of capable actors you do need to care about is that of Organized Crime Syndicates - which seems more approachable. Nation State is something you are afraid of if you are a political actor or in conflict with your government, where “we can also waterboard you into compliance” factors into your threat model; Organized Crime hits much more broadly. That’s Ivan with his botnet of internet-facing XBMC^H Kodi installations.

                                                                                              I’d say the “Hobbyist, Fragmented Maximalist” line is pretty spot on - with a dash of “Confused”. Compared to the ‘threats’ of the Google Play Store (test it: write some malware and see how long it survives - they are doing things there…), the odds of any other app store - Fdroid, the ones from Samsung, HTC, Sony et al. - being completely owned by much less capable actors are way, way higher. Signal (perhaps a signal-to-threat ratio?) performs a good enough job of making reasonable threat actors much less potent. Perhaps not worthy of “trust”, but worthy of day-to-day business.

                                                                                            2. 18

                                                                                              Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

                                                                                              And yet, Signal is advertising with the faces of Snowden and Laura Poitras, and quotes from them recommending it.

                                                                                              What kind of impression of the threat models involved do you think this creates?

                                                                                              1. 5

                                                                                                Who should be the faces recommending signal that people will recognize and listen to?

                                                                                                1. 7

                                                                                                  Whichever ones are normally on the media for information security saying the least amount of bullshit. We can start with Schneier given he already does a lot of interviews and writes books laypeople buy.

                                                                                                  1. 3

                                                                                                    What does Schneier say about signal?

                                                                                                    1. 10

                                                                                                      He encourages use of stuff like that to raise the baseline, but not for stopping nation states. He has also constantly blogged about the attacks and legal methods they used to bypass technical measures. So his reporting was mostly accurate.

                                                                                                      We counterpoint him here or there, but his incentives and rep are tied to delivering accurate info. Moxie’s incentives would, if he’s selfish, lead to lock-in to questionable platforms.

                                                                                              2. 18

                                                                                                We’ve had IRC from the 1990s, ever wonder why Slack ever became a thing? Ossification of a decentralized protocol.

                                                                                                I’m sorry, but this is plain incorrect. There have been many expansions of IRC, including the most recent effort, IRCv3: a collection of extensions to IRC to add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”

                                                                                                If you actually look at the protocols? Slack is a clear case of Not Invented Here syndrome. Slack’s interface is not only slower, but does some downright crazy things (Such as transliterating a subset of emojis to plain-text – which results in batshit crazy edge-cases).

                                                                                                If you have a free month, try writing a slack client. Enlightenment will follow :P

                                                                                                1. 9

                                                                                                  I’m sorry, but this is plain incorrect. There have been many expansions of IRC, including the most recent effort, IRCv3: a collection of extensions to IRC to add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”

                                                                                                  Per IRCv3 people I’ve talked to, IRCv3 blew up massively on the runway, and will never take off due to infighting.

                                                                                                  1. 12

                                                                                                    And yet everyone is using Slack.

                                                                                                    1. 14

                                                                                                      There are swathes of people still using Windows XP.

                                                                                                      The primary complaint of people who use Electron-based programs is that they take up half a gigabyte of RAM to idle, and yet they are in common usage.

                                                                                                      The fact that people are using something tells you nothing about how Good that thing is.

                                                                                                      At the end of the day, if you slap a pretty interface on something, of course it’s going to sell. Then you add in that sweet, sweet Enterprise Support, and the Hip and Cool factors of using Something New, and most people will be fooled into using it.

                                                                                                      At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on: https://ircv3.net/specs/extensions/batch/chathistory-3.3.html)

                                                                                                      1. 9

                                                                                                        At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on […])

                                                                                                        The time for the IRC group to be working on a solution to persistent history was a decade ago. It strikes me as willful ignorance to disregard the success of Slack et al over open alternatives as mere fashion in the face of many meaningful functionality differences. For business use-cases, Slack is a better product than IRC full-stop. That’s not to say it’s perfect or that I think it’s better than IRC on all axes.

                                                                                                        To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool? But imagine being a UX designer and wanting to help make some native open-source IRC client fun and easy to use for a novice. “Sisyphean” is the word that comes to mind.

                                                                                                        If we want open solutions to succeed we have to start thinking of them as products for non-savvy end users and start being honest about the cases where closed products have superior usability.

                                                                                                        1. 5

                                                                                                          IRC isn’t hip and cool because people can’t make money off of it. Technologies don’t get investment because they are good, they get good because of investment. The reason that Slack is hip/cool and popular and not IRC is because the investment class decided that.

                                                                                                          It also shows that our industry is just a pop culture and couldn’t give a shit about good tech.

                                                                                                          1. 4

                                                                                                            There were companies making money off chat and IRC. They just didn’t create something like Slack. We can’t just blame the investors when they were backing companies making chat solutions whose management stuck with what didn’t work long-term or for a huge audience.

                                                                                                            1. 2

                                                                                                              IRC happened before the privatization of the internet, so the standard didn’t lend itself well to companies making good money off of it. Things like Slack are designed for investor optimization, whereas things like IRC were designed for use and openness.

                                                                                                              1. 2

                                                                                                                My point was that there were companies selling chat software, including IRC clients. None pulled off what Slack did. Even those doing IRC with money, or making money off it, didn’t accomplish what Slack did, for some reason. It would help to understand why that happened. Then the IRC-based alternative can try to address that, from features to business model. I don’t see anything like that when most people who like FOSS talk about Slack alternatives. And they’re not Slack alternatives if they lack what Slack customers demand.

                                                                                                                1. 1

                                                                                                                  Thanks for clarifying. My point can be restated as: there is no business model for federated and decentralized software (until recently; see cryptocurrencies). Note that most open and decentralized tech of the past was government funded and therefore didn’t face business pressures. This freed designers to optimise for other concerns instead of the business ones that Slack optimises for.

                                                                                                          2. 4

                                                                                                            To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool?

                                                                                                            The argument being made is that the vast majority of Slack’s appeal is the “hip-and-cool” factor, not any meaningful additions to functionality.

                                                                                                            1. 6

                                                                                                              Right, as I said I think it’s important for proponents of open tech to look at successful products like Slack and try to understand why they succeeded. If you really think there is no meaningful difference then I think you’re totally disconnected from the needs/context of the average organization or computer user.

                                                                                                              1. 3

                                                                                                                That’s all well and good, I just don’t see why we can’t build those systems on top of existing open protocols like IRC. I mean: of course I understand, it’s about the money. My opinion is that it doesn’t make much sense to insist that opaque, closed ecosystems are the way to go. We can have the “hip-and-cool” factor, and all the amenities provided by services like Slack, without abandoning the important precedent we’ve set for ourselves with protocols like IRC and XMPP. I’m just disappointed that everyone’s seeing this as an “either-or” situation.

                                                                                                                1. 2

                                                                                                                  I definitely don’t see it as an either-or situation, I just think that the open source community typically has the wrong mindset for competing with closed products and that most projects are unapproachable by UX or design-minded people.

                                                                                                          3. 3

                                                                                                            Open, standard chat tech has had persistent history and much more for decades in the form of XMPP. Comparing to the older IRC on features isn’t really fair.

                                                                                                            1. 2

                                                                                                              The fact that people are using something tells you nothing about how Good that thing is.

                                                                                                              I have to disagree here. It shows that it is good enough to solve a problem for them.

                                                                                                              1. 1

                                                                                                                I don’t see how Good and “good enough to solve a problem” are related here. The first is a metric of quality, the second is the literal bare minimum of that metric.

                                                                                                        2. 1

                                                                                                          Alternative distribution mechanisms are not used by 99%+ of the existing phone userbases; providing an APK is indeed correctly viewed as harm reduction.

                                                                                                          I’d dispute that. People who become interested in Signal seem much more likely to be using F-Droid than, say, WhatsApp users. Signal tries to be an app accessible to the common person, but few people really use it or see the need… and often those who do are free software enthusiasts or people who are fed up with Google and surveillance.

                                                                                                          1. 1

                                                                                                            More likely sure, but that doesn’t mean that many of them reach the threshold of effort that they do.

                                                                                                          2. 0

                                                                                                            Ossification of a decentralized protocol.

                                                                                                            IRC isn’t decentralised… it’s not even federated

                                                                                                            1. 3

                                                                                                              Sure it is, it’s just that there are multiple federations.

                                                                                                          1. 28

                                                                                                            That is a very reductionist view of what people use the web for. And I am saying this as someone whose personal site pretty much matches everything prescribed except comments (which I still have).

                                                                                                            Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.

                                                                                                            1. 19

                                                                                                              Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.

                                                                                                              Chickenshit minimalism: https://medium.com/@mceglowski/chickenshit-minimalism-846fc1412524

                                                                                                              1. 13

                                                                                                                I wouldn’t say Medium even gives the illusion of simplicity (for example, on the page you linked, try counting the visual elements that aren’t the blog post). Medium seems to take a rather contrary approach to blogs, including all the random cruft you never even imagined existed, while leaving out simple essentials like RSS feeds. I honestly have no idea how the author of the article came to suggest Medium as an example of minimalism.

                                                                                                                1. 8

                                                                                                                  Medium started with an illusion of simplicity and gradually got more and more complex.

                                                                                                                  1. 3

                                                                                                                    I agree with your overall point, but Medium does provide RSS feeds. They are linked in the <head> and always have the same URL structure. Any medium.com/@user has an RSS feed at medium.com/feed/@user. For Medium blogs hosted at custom URLs, the feed is available at /feed.

                                                                                                                    I’m not affiliated with Medium. I have a lot of experience bugging webmasters of minimal websites to add feeds: https://github.com/issues?q=is:issue+author:tfausak+feed.
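
                                                                                                                    A tiny sketch of that URL rule in Python (the profile names below are placeholders, not real blogs):

                                                                                                                        def medium_feed_url(profile_url: str) -> str:
                                                                                                                            # medium.com/@user -> medium.com/feed/@user; custom domains use /feed.
                                                                                                                            if "medium.com/@" in profile_url:
                                                                                                                                return profile_url.replace("medium.com/@", "medium.com/feed/@", 1)
                                                                                                                            return profile_url.rstrip("/") + "/feed"

                                                                                                                        print(medium_feed_url("https://medium.com/@example"))  # https://medium.com/feed/@example
                                                                                                                        print(medium_feed_url("https://blog.example.com"))     # https://blog.example.com/feed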

                                                                                                                2. 3

                                                                                                                  That is a very reductionist view of what people use the web for.

                                                                                                                  I wonder what Youtube, Google docs, Slack, and stuff would be in a minimal web.

                                                                                                                  1. 19

                                                                                                                    Useful.

                                                                                                                    algernon hides

                                                                                                                    1. 5

                                                                                                                      YouTube, while not as good as it could be, is pretty minimalist if you disable all the advertising.

                                                                                                                      I find google apps to be amazingly minimal, especially compared to Microsoft Office and LibreOffice.

                                                                                                                      Minimalist Slack has been around for decades, it’s called IRC.

                                                                                                                      1. 2

                                                                                                                        It is still super slow even then! At some point I was able to disable JS, install the Firefox “html5-video-everywhere” extension and watch videos that way. That was awesomely fast and minimal. I tried it again a few days ago, but it didn’t seem to work anymore.

                                                                                                                        Edit: now I just “youtube-dl -f43 ” directly without going to YouTube and start watching immediately with VLC.

                                                                                                                        1. 2

                                                                                                                          The youtube interface might look minimalist, but under the hood, it is everything but. Besides, I shouldn’t have to go to great lengths to disable all the useless stuff on it. It shouldn’t be the consumer’s job to strip away all the crap.

                                                                                                                        2. 2

                                                                                                                          That seems to be in extremely bad faith, though.

                                                                                                                          1. 11

                                                                                                                            In a minimal web, locally-running applications in browser sandboxes would be locally-running applications in non-browser sandboxes. There’s no particular reason any of these applications is in a browser at all, other than myopia.

                                                                                                                            1. 2

                                                                                                                              Distribution is dead-easy for websites. In theory, you could have non-browser-sandboxed apps with such easy distribution, but then what’s the point?

                                                                                                                              1. 3

                                                                                                                                Non-web-based locally-running client applications are also usually made downloadable via HTTP these days.

                                                                                                                                The point is that when an application is made with the appropriate tools for the job it’s doing, there’s less of a cognitive load on developers and less of a resource load on users. When you use a UI toolkit instead of creating a self-modifying rich text document, you have a lighter-weight, more reliable, more maintainable application.

                                                                                                                                1. 3

                                                                                                                                  The power of “here’s a URL, you now have an app running without going through installation or whatnot” cannot be overstated. I can give someone a copy of pseudo-Excel to edit a document we’re working together on, all through the magic of Google Sheets’ share links. Instantly.

                                                                                                                                  Granted, this is less of an advantage if you’re using something all the time, but without the web it would be harder to allow for multiple tools to co-exist in the same space. And am I supposed to have people download the Doodle application just to figure out when our group of 15 can go bowling?

                                                                                                                                  1. 4

                                                                                                                                    They are, in fact, downloading an application and running it locally.

                                                                                                                                    That application can still be javascript; I just don’t see the point in making it perform DOM manipulation.

                                                                                                                                    1. 3

As someone who knows JavaScript pretty well, however, I don’t see the point of writing it in JavaScript.

                                                                                                                                      1. 1

A lot of newer devs have a (probably unfounded) fear of picking up a new language, and a lot of those devs have only been trained in a handful (JS included). Even so, moving away from JS isn’t actually a big deal, and JS (as distinct from the browser ecosystem, to which it isn’t really tied) is not fundamentally that much worse than any other scripting language – you can do whatever you do in JS in python or lua or perl or ruby and it’ll come out looking almost the same, unless you go out of your way to use language-specific facilities.

                                                                                                                                        The thing that makes JS code look weird is all the markup manipulation, which looks strange in any language.

                                                                                                                                        1. 3

                                                                                                                                          JS (as distinct from the browser ecosystem, to which it isn’t really totally tied) is not fundamentally that much worse than any other scripting language

                                                                                                                                          (a == b) !== (a === b)

but only sometimes…
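
To make the “only sometimes” concrete, here is a rough illustration of where loose and strict equality agree and where they diverge (plain JavaScript semantics, runnable in any console):

    // Where the operands have the same type, == and === agree:
    console.log(1 == 1, 1 === 1);                        // true true

    // Where types differ, == coerces first and the two diverge:
    console.log(0 == '', 0 === '');                      // true false
    console.log(null == undefined, null === undefined);  // true false
    console.log('1' == 1, '1' === 1);                    // true false

    // And one gotcha where they agree again:
    console.log(NaN == NaN, NaN === NaN);                // false false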

                                                                                                                                          1. 3

Javascript has gotchas, just like any other organically grown scripting language. It’s less consistent than python and lua, but probably has fewer of these than perl or php.

                                                                                                                                            (And, just take a look at c++ if you want a faceful of gotchas & inconsistencies!)

Not to say that, from a language design perspective, we shouldn’t prize consistency. Just to say that javascript is well within the normal range of goofiness for popular languages, and probably better than average if you weight by popularity and include C, C++, FORTRAN, and COBOL (all of which see a lot of underreported development).

                                                                                                                                    2. 1

Web applications are expected to load progressively. And, because they are sandboxed, they are allowed to start instantly without asking you for permissions.

                                                                                                                                      The same could be true of sandboxed desktop applications that you could stream from a website straight into some sort of sandboxed local VM that isn’t the web. Click a link, and the application immediately starts running on your desktop.

                                                                                                                                    3. 1

I can’t argue with using the right tool for the job. People use Electron because there isn’t a flexible, good-looking, easy-to-use cross-platform UI kit. Damn the 500 MB of RAM usage for a chat app.

                                                                                                                                      1. 4

There are several good-looking, flexible, easy-to-use cross-platform UI kits: GTK, WX, and QT come to mind.

                                                                                                                                        If you remove the ‘good-looking’ constraint, then you also get TK, which is substantially easier to use for certain problem sets, substantially smaller, and substantially more cross-platform (in that it will run on fringe or legacy platforms that are no longer or were never supported by GTK or QT).

                                                                                                                                        All of these have well-maintained bindings to all popular scripting languages.

                                                                                                                                        1. 1

                                                                                                                                          QT apps can look reasonably good. I think webapps can look better, but I haven’t done extensive QT customization.

The bigger issues are (1) hiring – it’s easier to get JS devs than QT devs – and (2) there’s little financial incentive to reduce memory usage. Using other people’s RAM is “free” for a company, so they do it. If their customers are in the US/EU/Japan, they can expect reasonably new machines, so they don’t see it as an issue. They aren’t chasing the market in Nigeria, however large its population.

                                                                                                                                          1. 5

                                                                                                                                            Webapps are sort of the equivalent of doing something in QT but using nothing but the canvas widget (except a little more awkward because you also don’t have pixel positioning). Whatever can be done in a webapp can be done in a UI toolkit, but the most extreme experimental stuff involves not using actual widgets (just like doing it as a webapp would).

                                                                                                                                            Using QT doesn’t prevent you from writing in javascript. Just use NPM QT bindings. It means not using the DOM, but that’s a net win: it is faster to learn how to do something with a UI toolkit than to figure out how to do it through DOM manipulation, unless the thing that you’re doing is (at a fundamental level) literally displaying HTML.
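
The comment doesn’t name a specific binding; as one hedged example, NodeGui is an npm package that wraps Qt, and a minimal “hello world” along the lines of its documentation might look roughly like this (a sketch under that assumption, not a drop-in recommendation):

    // Minimal NodeGui sketch (`npm i @nodegui/nodegui`): widgets, no DOM.
    const { QMainWindow, QWidget, QLabel, FlexLayout } = require('@nodegui/nodegui');

    const win = new QMainWindow();
    win.setWindowTitle('Hello from Qt, driven by JavaScript');

    const root = new QWidget();
    const layout = new FlexLayout();
    root.setLayout(layout);

    const label = new QLabel();
    label.setText('Widgets instead of markup');
    layout.addWidget(label);

    win.setCentralWidget(root);
    win.show();

    // Keep a global reference so the window isn't garbage collected.
    global.win = win;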

                                                                                                                                            I don’t think memory use is really going to be the main factor in convincing corporations to leave Electron. It’s not something that’s limited to the third world: most people in the first world (even folks who are in the top half of income) don’t have computers that can run Electron apps very well – but for a lot of folks, there’s the sense that computers just run slow & there’s nothing that can be done about it.

                                                                                                                                            Instead, I think the main thing that’ll drive corporations toward more sustainable solutions is maintenance costs. It’s one thing to hire cheap web developers & have them build something, but over time keeping a hairball running is simply more difficult than keeping something that’s more modular running – particularly as the behavior of browsers with respect to the corner cases that web apps depend upon to continue acting like apps is prone to sudden (and difficult to model) change. Building on the back of HTML rendering means a red queen’s race against 3 major browsers, all of whom are changing their behaviors ahead of standards bodies; on the other hand, building on a UI library means you can specify a particular version as a dependency & also expect reasonable backwards-compatibility and gradual deprecation.

                                                                                                                                            (But, I don’t actually have a lot of confidence that corporations will be convinced to do the thing that, in the long run, will save them money. They need to be seen to have saved money in the much shorter term, & saying that you need to rearchitect something so that it costs less in maintenance over the course of the next six years isn’t very convincing to non-technical folks – or to technical folks who haven’t had the experience of trying to change the behavior of a hairball written and designed by somebody who left the company years ago.)

                                                                                                                                          2. 1

                                                                                                                                            I understand that these tools are maintained in a certain sense. But from an outsider’s perspective, they are absolutely not appealing compared to what you see in their competitors.

I want to be extremely nice, because I think that the work done on these teams and projects is very laudable. But compare the wxPython docs with the Bootstrap documentation. I also spent a lot of time trying to figure out how to use Tk, and almost all of the resources felt outdated and incompatible with whatever toolset I had available.

                                                                                                                                            I think Qt is really good at this stuff, though you do have to marry its toolset for a lot of it (perhaps this has gotten better).

The elephant in the room is that no native UI toolset (save maybe Apple’s stack?) comes anywhere near the diversity of options and breadth of tooling available in DOM-based solutions. Chrome dev tools is amazing, and even simple stuff like CSS animations gives a lot of options that would be a pain in most UI toolkits. Out of the box it has so much functionality, even if you’re working purely vanilla/“no library”. Though on this point things might have changed, jQuery is basically the optimal low-level UI library, and I haven’t encountered native stuff that gives me the same sort of productivity.

                                                                                                                                            1. 3

                                                                                                                                              I dunno. How much of that is just familiarity? I find the bootstrap documentation so incomprehensible that I roll my own DOM manipulations rather than using it.

TK is easy to use, but the documentation is tcl-centric and pretty unclear. QT is a bad example because it’s quite heavy-weight and slow (and you generally have to use QT’s versions of built-in types and do all sorts of similar stuff). I’m not trying to claim that existing cross-platform UI toolkits are great: I actually have a lot of complaints about all of them; it’s just that, in terms of ease of use, performance, and consistency of behavior, they’re all far ahead of web tech.

                                                                                                                                              When it comes down to it, web tech means simulating a UI toolkit inside a complicated document rendering system inside a UI toolkit, with no pass-throughs, and even web tech toolkits intended for making UIs are really about manipulating markup and not actually oriented around placing widgets or orienting shapes in 2d space. Because determining how a piece of markup will look when rendered is complex and subject to a lot of variables not under the programmer’s control, any markup-manipulation-oriented system will make creating UIs intractably awkward and fragile – and while Google & others have thrown a great deal of code and effort at this problem (by exhaustively checking for corner cases, performing polyfills, and so on) and hidden most of that code from developers (who would have had to do all of that themselves ten years ago), it’s a battle that can’t be won.

                                                                                                                                              1. 5

It annoys me greatly because it feels like nobody really cares about the conceptual damage incurred by simulating a UI toolkit inside a document renderer inside a UI toolkit, instead preferring to chant “open web!” And then this broken conceptual basis propagates to other mediums (VR) simply because it’s familiar. I’d also argue the web as a medium is primarily intended for commerce and consumption, rather than creation.

                                                                                                                                                It feels like people care less about the intrinsic quality of what they’re doing and more about following whatever fad is around, especially if it involves tools pushed by megacorporations.

                                                                                                                                                1. 2

                                                                                                                                                  Everything (down to the transistor level) is layers of crap hiding other layers of different crap, but web tech is up there with autotools in terms of having abstraction layers that are full of important holes that developers must be mindful of – to the point that, in my mind, rolling your own thing is almost always less work than learning and using the ‘correct’ tool.

If consumer-grade CPUs were still doubling their clock speeds and cache sizes every 18 months at a stable price point, and these toolkits properly hid the markup, then it’d be a matter of whether you consider waste wrong on principle or are balancing it against other concerns – but neither of those things is true, & so choosing web tech means you lose across the board in the short term and lose big across the board in the long term.

                                                                                                                              2. 1

YouTube would be a website where you click on a video and it plays. But it wouldn’t have ads and comments and thumbs-up and share buttons and view counts and subscription buttons and notification buttons and autoplay and add-to-playlist.

                                                                                                                                Google docs would be a desktop program.

                                                                                                                                Slack would be IRC.

                                                                                                                                1. 1

What you’re describing is the HTML5 video tag, not a video-sharing platform. Minimalism is good, I do agree, but don’t confuse it with having no features at all.

                                                                                                                                  Google docs would be a desktop program.

That’s a different debate – about whether to use the web for these kinds of tasks at all – not about whether the result is minimalist.

                                                                                                                            1. 4

                                                                                                                              Why re-create code editors, simulators, spreadsheets, and more in the browser when we already have native programs much better suited to these tasks?

                                                                                                                              Because the Web is the non-proprietary application platform that actually has traction.

                                                                                                                              1. 1
                                                                                                                                1. 1

                                                                                                                                  That’s true for all useful platforms.

                                                                                                                              1. 23

                                                                                                                                “It is difficult to get a [web developer] to understand something, when [their] salary depends on [them] not understanding it.”

                                                                                                                                ― Upton Sinclair

                                                                                                                                1. 4

My back looks like a pincushion from all the arrows I’ve taken over the years fighting for a web that would be more ethical and devoid of mostly useless crap. Some battles won, too many lost. I lost one just yesterday, but it didn’t occur to me that it was because of my money-induced blindness.

I actually like this quote and have used it myself before, but while I’ve met many web developers over the years who didn’t care about the bullshit described in the article, for almost all of them it was simply because they were either ignorant of the available technologies or didn’t care much about the quality of anything they did – and most often both.

                                                                                                                                  1. 1

                                                                                                                                    Some battles won, too many lost.

                                                                                                                                    What were some of the wins?

                                                                                                                                    1. 4

An example of a small recent one would be the Klevio website (as it currently exists, less so after today). I am not linking to it because I don’t want referrals from Lobsters to show up in the website’s logs, but it is trivial to find.

Almost everything on this website works with Javascript turned off. It uses Javascript to augment the experience, but does not needlessly rely on external libraries. It should work reasonably well even on poor connections. It does not track you, and it still has privacy policy handling that tries to be closer to the spirit of the GDPR than to what you may get away with.
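
(Not this site’s actual code, which isn’t shown here – just a generic sketch of the “JS augments, nothing requires it” pattern being described: a plain HTML form that still submits normally without scripting, plus a few lines that upgrade it when JS is available. The form id and endpoint are made up.)

    // Progressive-enhancement sketch (hypothetical form id and endpoint).
    // Without JS the <form> submits normally; with JS we intercept it
    // and submit in the background instead.
    const form = document.querySelector('#contact-form');
    if (form && 'fetch' in window) {
      form.addEventListener('submit', (event) => {
        event.preventDefault();
        fetch(form.action, { method: 'POST', body: new FormData(form) })
          .then(() => form.insertAdjacentHTML('beforeend', '<p>Thanks!</p>'))
          .catch(() => form.submit());   // fall back to a normal submission
      });
    }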

It would certainly have been easier for me and faster to develop (cheaper for the company) if I had just leaned on existing tools, built yet another SPA, and not spent more than a week arguing with lawyers about what is required.

Alas, because (unsurprisingly) most people do not opt in to analytics, I am now working on a different confirmation dialog, more in line with what others are doing. It will still be better than most, but certainly more coercive than the current one.

                                                                                                                                      And this is in a company that is, based on my experience, far more conscientious about people’s privacy than others I worked for.

                                                                                                                                      1. 1

It would certainly have been easier for me and faster to develop (cheaper for the company) if I had just leaned on existing tools, built yet another SPA, and not spent more than a week arguing with lawyers about what is required.

                                                                                                                                        Is this really true? Not to downplay your craft but I always thought tinkering with HTML/CSS until things look right would be way easier than learning a separate library.

I checked out that website and it’s pretty refreshing that stuff actually works. If you want a little constructive feedback: the information density is very low, especially on a desktop computer with a widescreen monitor. I have to scroll down 7 screens to get all the information, which could have fit on a single screen. Same with the “about us” page. I notice the site is responsive, giving a hamburger menu when you narrow your window, so maybe the “non-mobile” interface could be better optimized for desktop use.

                                                                                                                                        1. 1

I don’t think it is in every case, but in this one I think it would be, since everything was handwritten without picking up existing solutions for things like galleries. If you mean the SPA part, then I guess the point becomes more moot. It would probably be about the same for the first implementation, but this one, which is basically a bunch of static files, certainly has a higher cost of maintenance because we (I) didn’t get around to finishing it, so page “components” still have to be manually copied to new files and updated everywhere when their content changes. The plan was to automate most of this, but we haven’t spent the time on it yet.
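
(For what it’s worth, the automation in question can be tiny. A hypothetical sketch – the directory names and the <!-- include: … --> marker convention are made up – might look like this:)

    // build.js – hypothetical sketch: replace <!-- include: name.html --> markers
    // in each page with the contents of partials/name.html, writing results to dist/.
    const fs = require('fs');
    const path = require('path');

    const SRC = 'pages';
    const OUT = 'dist';
    fs.mkdirSync(OUT, { recursive: true });

    for (const file of fs.readdirSync(SRC)) {
      const page = fs.readFileSync(path.join(SRC, file), 'utf8');
      const built = page.replace(/<!--\s*include:\s*(\S+)\s*-->/g, (_, name) =>
        fs.readFileSync(path.join('partials', name), 'utf8'));
      fs.writeFileSync(path.join(OUT, file), built);
    }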

                                                                                                                                          I agree with everything in the second paragraph. Regretfully that is one of those battles lost.

                                                                                                                                          1. 1

So what do your managers feel is the benefit of having such low information density? How do these decisions get made?

                                                                                                                                            1. 1

If I remember correctly, it was because it supposedly looks modern, clean, and in line with the company’s brand. It has been a while, so my memory is fuzzy on this.

                                                                                                                                  2. 2

I’ve heard this a few times already, but I’ve never quite understood what the implication is. What precisely are web developers not understanding? I get the default examples (e.g. oil companies funding environmental research), but I just can’t see the analogy in this case.

                                                                                                                                    1. 22

                                                                                                                                      You’re on week three of your new job at a big city ad and design firm. Getting that first paycheck was nice, but the credit card bill from the moving expenses is coming up, that first month of big city rent wiped out your savings, and you don’t really have a local personal network to find new jobs. The customer wants a fourth “tag” for analytics tracking. Do you:

                                                                                                                                      1. Put it in
                                                                                                                                      2. Engage in a debate about engineering ethics with your boss and his boss (who drives a white Range Rover and always seems to have the sniffles after lunch) culminating with someone screaming and you storming out, never to return?
                                                                                                                                      1. 8

Web devs know that autoplay videos and newsletter pop-ups are annoying, but annoying people is profitable.

                                                                                                                                    1. 3

If the weather is bad enough to prevent me from hiking, I plan to finish my Instapaper alternative (email myself a nicely formatted version of the article).

This is my first step in exploring the possibility of using email clients as a feed-reader interface.
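
(A rough sketch of that kind of “mail myself the article” tool, assuming Node 18+ for the global fetch, jsdom plus Mozilla’s Readability for extraction, and nodemailer for delivery; the addresses and SMTP host are placeholders, and this is not the tool described above:)

    // mail-article.js – hypothetical sketch: fetch a page, extract readable
    // content, and email it to yourself. Run as: node mail-article.js <url>
    const { JSDOM } = require('jsdom');
    const { Readability } = require('@mozilla/readability');
    const nodemailer = require('nodemailer');

    async function main(url) {
      const html = await (await fetch(url)).text();
      const dom = new JSDOM(html, { url });
      const article = new Readability(dom.window.document).parse();

      const transport = nodemailer.createTransport({
        host: 'smtp.example.com',   // placeholder SMTP server
        port: 587,
      });

      await transport.sendMail({
        from: 'reader@example.com',   // placeholder addresses
        to: 'me@example.com',
        subject: article.title,
        html: article.content,
      });
    }

    main(process.argv[2]).catch(console.error);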

                                                                                                                                      1. 37

                                                                                                                                        I think practically all “Why You Should…” articles would be improved if they became “When You Should…” articles with corresponding change of perspective.

                                                                                                                                        1. 23

                                                                                                                                          An even better formulation would be “Here is the source code for an app where I didn’t use a framework. It has users, and here are my observations on building and deploying it”.

                                                                                                                                          In other words, “skin in the game” (see Taleb). I basically ignore everyone’s “advice” and instead look at what they do, not what they say. I didn’t see this author relate his or her own experience.

                                                                                                                                          The problem with “when you should” is that the author is not in the same situation as his audience. There are so many different programming situations you can be in, with different constraints, and path dependence. Just tell people what you did and they can decide whether it applies to them. I think I basically follow that with http://www.oilshell.org/ – I am telling people what I did and not attempting to give advice.

                                                                                                                                          (BTW I am sympathetic to no framework – I use my own little XHR wrapper and raw JS, and my own minimal wrapper over WSGI and Python. But yes it takes forever to get things done!)

                                                                                                                                          1. 2

                                                                                                                                            Thanks for the Taleb reference. I didn’t know it existed, and so far it is a good read.

                                                                                                                                            1. 1

His earlier books are also good. It is a lot of explaining the same ideas in many different ways, but I find that the ideas need a while to sink in, so that’s useful.

He talks about people thinking/saying one thing, but then acting like they believe its opposite. I find that to be painfully true, and it also applies to his books. You could agree with him in theory, but unless you change your behavior, you might not have gotten the point :-)


                                                                                                                                              Less abstractly, the worst manager I ever had violated the “skin in the game” rule. He tried to dictate the technology used in a small project I was doing, based on conversations with his peers. That technology was unstable and inappropriate for the task.

He didn’t have to write the code, so he didn’t care. I was the one who had to write the code, so I’m the one with skin in the game, so I should make the technology choices. I did what he asked and left the team, but what he asked for is not what the person taking over wanted, I’m sure.

                                                                                                                                              In software, I think you can explain a lot of things by “who has to maintain the code” (who has skin in the game). I think it explains why the best companies maintain long term software engineering staff, instead of farming it out. If you try to contract out your work, those people may do a shitty job because they might only be there for a short period. (Maybe think of the healthcare.gov debacle – none of the engineers really had skin in the game.)

                                                                                                                                              It also explains why open source code can often be higher quality, and why it lasts 30+ years in many cases. If the original designer plans on maintaining his or her code for many years, then that code will probably be maintainable by others too.

                                                                                                                                              It also explains why “software architect” is a bad idea and never worked. (That is, a person who designs software but doesn’t implement it.)

                                                                                                                                              I’m sure these principles existed under different names before, and are somewhat common sense. But they do seem to be violated over and over, so I like to have a phrase to call people on their BS. :-)

                                                                                                                                              1. 2

Yeah, the phrase works as a good lens and reminder. Interestingly, as most parents will attest, “do as I say, not as I do” is generally unsuccessful with kids. They are more likely to emulate than listen.

                                                                                                                                          2. 2

                                                                                                                                            I definitely agree with this change. It’d get more people thinking architecturally, something that’s sorely needed.

                                                                                                                                          1. 11

                                                                                                                                            One culture note I find really interesting: I remember 3-4 years ago a lot of people were griping about how “full stack” wasn’t real. Almost all of them argued that backend was so complicated you needed a specialist to do it well.

Now we’re seeing the exact same articles, except now it’s the frontend that’s too complicated.

                                                                                                                                            When it comes to specialisation, generalists underestimate the benefits but specialists overestimate the necessity.

                                                                                                                                            1. 5

I think it’s related to how complicated the front end has become.

                                                                                                                                              I think the problem is that most people who call themselves “full stack” are, like most of us, quite highly experienced in one area, and have enough working knowledge to get by in the other areas.

Every “full stack vs. not” argument I’ve seen has boiled down to “full stack” people claiming that more specialised people are “single skill”.

                                                                                                                                              I’ve never met or worked with anyone that did just one thing. In most teams/orgs I’d expect people to have some experience across most of the tech stack - but that doesn’t make them “full stack” any more than it makes me a mechanic because I can change a tire or replace a car battery, or a builder because I can put up a shelf.

That doesn’t mean there isn’t a place for people who are (or claim to be, à la “full stack”) more evenly experienced across the stack than those who specialise, but in my experience these people tend to be the ones who just brush off anything that’s beyond them as “we don’t need to worry about it”.

                                                                                                                                              1. 1

                                                                                                                                                I acknowledge that you used “most” and “tend” to allow exceptions, but your argument still rubs me the wrong way.

                                                                                                                                                I am one of those people who has described himself as a full-stack web developer. I feel comfortable doing this because I have designed and implemented back-ends and front-ends of services that scaled to hundreds of thousands of users. Obviously there exist much larger scales, but I think ~million users will cover the needs of most web services out there and in some countries, like Slovenia where I live, it will cover all of them. It does not seem unreasonable to me to have a term for noting that you can build any part of it if necessary.

                                                                                                                                                I do not claim to know everything I need to know at all times, but I do know enough about everything relevant that I can tell where the gaps are and fill them in a reasonable time. I find this perfectly reasonable in the same way as needing to learn a new language for a project does not disqualify a developer from still being a developer.

                                                                                                                                                I am not alone and have colleagues who can do the same or better. None of us argue that we are all anyone needs and even on smaller projects it is generally better if people focus on fewer things. Most of my work lately is on front-end and I certainly am not stupid enough to not notice that specialists can do many things better than me. If your project can benefit from that and can afford hiring such person, it would be stupid not to.

As you say yourself, full-stack is really just a description for a different distribution of skills and experience over the stack, and you can be a competent developer over a huge part of it if you pay attention to what and why you are learning and avoid switching tools and frameworks for the currently fashionable one every half a year.

                                                                                                                                                I don’t doubt most full-stack developers are bad at their job in the same way as most of any group of developers are (X specialists, Python developers…). Likewise no group of practitioners of noticeable size lacks individuals disparaging other groups.

                                                                                                                                                1. 1

                                                                                                                                                  I did specifically qualify it as anecdotal:

                                                                                                                                                  in my experience these people tend to be

The rest of your comment just seems to reaffirm what I said, though – full-stack is generally just a broader, shallower set of experience, rather than the narrower, deeper experience of someone more specialised.

                                                                                                                                            1. 2

                                                                                                                                              “If you were to go back in time to 1987, this is probably similar to what would have replaced the Amiga if Jack Tramiel had never left Commodore.”

Cool project, but I don’t think this is true. The Amiga 500 had 512 KB of RAM because RAM was bloody expensive. So did the majority of its competitors. Nobody would have put 1.5 MB in a computer at that time, because it would severely reduce the number of units you could shift, for little benefit. Pretty much all software written at that point needed far less than that (even on the multitasking Amiga).

Also, I believe the 65C816 did not run at 14 MHz back then. Not many chips did, and both the Amiga and the Atari were running at 7–8 MHz.

                                                                                                                                              1. 3

                                                                                                                                                The A500 could be expanded up to 7 MB though, so I don’t think it’s completely out of line.

                                                                                                                                                I wonder if the CPU is actually the W65C816S, which is readily available at 14 MHz. I sent an email to Stefany and asked about it.

                                                                                                                                                Edit: it is indeed the W65C816S from Western Design Center.

                                                                                                                                              1. 12

                                                                                                                                                Commodore was spectacular in how well it could snatch defeat from the jaws of victory. The Amiga was the most amazing machine the world had yet seen in 1985, they had possibly the best team of hardware and software engineers in the world, but management just…couldn’t leave it well enough alone.

                                                                                                                                                Bizarre decisions like:

                                                                                                                                                • The Amiga (later retroactively named the Amiga 1000) had a sidecar expansion port. The Amiga 500 had the same port, but upside down…so that all of the existing peripherals had to be upside down to work. Given how they were designed, it meant that none of them would.
                                                                                                                                                • The Amiga 2000 was the first machine that could use the Video Toaster, and the Video Toaster was the killer app for the Amiga. Then they made the Amiga 3000, which could also use the Video Toaster, except that the case was a quarter-inch too short for the Toaster card.
                                                                                                                                                • The Amiga 600 had a PCMCIA slot. Except that they rushed to manufacturing using a draft of the PCMCIA spec, rather than waiting for the final specification. The end result was that regular PCMCIA cards often wouldn’t work on the Amiga.
                                                                                                                                                • Amiga Unix on the Amiga 3000UX was considered one of the highest-quality SVR4 ports ever. Sun offered to produce the Amiga 3000UX for Commodore as a Sun-branded Unix workstation that could run Amiga software…and Commodore declined.

                                                                                                                                                We’d all be using Amigas now if Commodore’s management had literally been anything other than hilariously incompetent, I swear.

                                                                                                                                                1. 4

                                                                                                                                                  Jimmy Maher’s book about the Amiga explores a number of these bizarre decisions and reaches a similar conclusion. The title says it all: The Future Was Here! http://amiga.filfre.net/

                                                                                                                                                  1. 2

Agree with everything except the conclusion, as even less incompetent companies failed, including Sun. Only Apple survived, and even they are now basically producing PCs with their own distro.

However, we might have been living in a different future if the Amiga had had an opportunity for a bigger impact. Mine certainly is different, as I went to study mathematics instead of CS because I could not imagine developing software for PCs in the DOS era.

                                                                                                                                                    1. 1

                                                                                                                                                      Are you certain that the first 2 issues (upside-down sidecar port & case too short for toaster card) were the fault of management & not engineering?

                                                                                                                                                    1. 2

That’s rich, coming from a guy who did his best to advance the client-server cloud model in his time.

OK, I’m not really happy about the acquisition either, but overall GitHub has been a massive boon to the community in general. It lowered the threshold for collaborating and publishing your projects, and it facilitated a bunch of dependency-fetching ecosystems with much higher availability than was possible before.

                                                                                                                                                      1. 5

                                                                                                                                                        How did he do that?

I thought he was involved in writing the Netscape Navigator browser and its mail component, neither of which promotes the cloud model.

                                                                                                                                                        1. 2

                                                                                                                                                          You posted that comment using a web browser which identifies itself as “Mozilla” and a cloud-hosted application called “lobste.rs”. IMO it’s fair to say that someone who was both a primary author of Mozilla-the-browser and a founder of mozilla.org was involved in enabling, even promoting the model lobste.rs uses.

                                                                                                                                                          1. 2

This is basically an argument that the web itself, or really any client-server approach, promotes the cloud model, which I find absurd. Cloud hosting wasn’t a technologically inevitable outcome, as you could have built something similar to email. You still can, since you can use those same technologies JWZ helped build to run your stuff on your own hardware.

I don’t remember either JWZ or Mozilla in his time promoting running stuff in the cloud (other people’s computers).

                                                                                                                                                            1. 1

                                                                                                                                                              He wrote software that made it feasible to put even user interface code on a server running in a colo somewhere. The UI on such software was primitive and laggy compared to using alternatives like MFC or Qt, but on the other hand a webapp didn’t have to be purchased, downloaded or installed.

                                                                                                                                                              I don’t recall him saying that anyone should write webapps. But he wrote software that made it feasible, and did his best to get that software installed everywhere.

                                                                                                                                                              1. -1

Other people’s computers? You make it sound like a P2P network. I know of zero cloud services hosted on other people’s computers, as opposed to other corporations’ computers.

Oh, and funny how email was decentralized right up until its consolidation into browser-based client-server (sorry, cloud) platforms.

                                                                                                                                                                1. 3

                                                                                                                                                                  “Other people’s computers” is a popular description of where cloud-hosted apps run. I don’t think anyone, certainly not me, means P2P by that.

Email is still decentralized. You can run your own server, as I and many others do. It can also have a webmail interface, like mine does, and that has been true for two decades. The fact that users are consolidating on a few providers does not make the underlying technology more “cloudy”, and the fact that this did not happen for the first decade also strongly suggests that the change did not happen because of the underlying (web) technology.

                                                                                                                                                                  1. 1

                                                                                                                                                                    What share of the world’s email has to be stored in a single database before you consider it centralised? 50% perhaps?

                                                                                                                                                                    Google alone hosts a two-digit percentage of email users. I’ve heard the number 25% mentioned. Assuming one From address per message, an average of 1.4 To/Cc addresses and a 25% market share for Google, Google stores 50% of the email that was sent yesterday on behalf of the sender or any recipient. I self-host, so Google stores about 33% of my email.

                                                                                                                                                                    (I made up the number 1.4. I don’t really care about the precise details. And I don’t care about whether you want to consider just Google or the also the next ten big hosters.)
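
For concreteness, here is that back-of-the-envelope arithmetic spelled out (a sketch using the 25% share and the admittedly made-up 1.4 To/Cc average from above, and assuming addresses land on providers independently):

    // Probability that at least one participant of a message is hosted by Google.
    const share = 0.25;   // assumed Google market share (from the comment above)
    const toCc = 1.4;     // assumed average To/Cc addresses per message (made up above)

    // Sender + recipients: 1 + 1.4 = 2.4 addresses per message.
    const anyMessage = 1 - Math.pow(1 - share, 1 + toCc);   // ≈ 0.50

    // Self-hosted sender: only the 1.4 recipient addresses can be on Google.
    const selfHosted = 1 - Math.pow(1 - share, toCc);        // ≈ 0.33

    console.log(anyMessage.toFixed(2), selfHosted.toFixed(2));  // "0.50" "0.33"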

                                                                                                                                                                    1. 1

This debate has moved far away from JWZ and the cloud, to what feels off-topic for the main theme (GitHub + MS).

Since you asked: I have no idea what percentage of contained data, if any, should be the threshold at which something counts as centralized. I think your question reveals an underlying dilemma: are we talking about effectively centralized, in the sense that for all intents and purposes everything happens in one place, or actually centralized, in the sense that it can’t happen anywhere else?

                                                                                                                                                                      Clearly, in the second sense email is not centralized: you can demonstrably run your own server, as many still do, without penalty as long as it is properly configured. It might not make economic or other sense, but at least for now you are not technologically locked out.

                                                                                                                                                                      I don’t think it is centralized in the first sense either, and I am not sure your metric is valid. By that measure the whole web is already centralized, or was, since Google scraped everything public and so, in a way, stored close to all of it. Imagine we were left with only two email providers of roughly equal size and usage patterns. By your approach each of them would contain more or less all email, yet neither would actually be in a position where everyone had to be on it.

                                                                                                                                                                      And to bring this closer to the thread’s original topic, I don’t think any of this has much to do with the web as such. It happened because the cost of running your own server did not fall the way the cost of hosted accounts did, and those accounts also offered a degree of freedom compared to an ISP’s or an employer’s email. What the web did do, as it improved, was change the client used to access email, since there was less need for native OS clients. And even that is not completely true, since Gmail has native clients for both Android and iOS.

                                                                                                                                                                      I think we would have moved to “cloud” services over time even if the web did not exist or had remained limited to HTML 2.0. We would just be using Windows apps to do it.

                                                                                                                                                        1. 38

                                                                                                                                                          Appreciate the honesty here. My take: GitHub stars aren’t real. Twitter followers aren’t real. Likes aren’t real. It’s all a video game. If you want to assess the quality of the code, you have to read it. You can’t rely on metrics except as a weak indicator. I predict there will be services that let you buy GitHub stars if the current trend of overvaluing them continues.

                                                                                                                                                          The endless self-promotion and programmers-masquerading-as-brands on Twitter and Medium generate a huge amount of noise for an even larger amount of BS. The only winning move is not to engage.

                                                                                                                                                          1. 9

                                                                                                                                                            This is more true than one might think. There are a couple of projects on GitHub with thousands of stars, some with more stars than all the BSDs’ source code repositories combined, that promise something amazing while not even having a working proof of concept and being completely abandoned.

                                                                                                                                                            However, since it is true (to some degree) that a larger user base historically means you won’t end up having to maintain a project on your own, it’s easy to be fooled by anything that appears to indicate a large user base, like GitHub stars.

                                                                                                                                                            Many people use GitHub stars more as a “might be interesting, let’s bookmark it” or a “wow, so many buzzwords” reaction.

                                                                                                                                                            On the other hand, there are quite a few projects that do one thing and do it well, written to solve a problem, with 0–10 stars.

                                                                                                                                                            One might think those are extreme cases; they are only in the sense that 0 stars is the extreme of not being able to have fewer. They are not rare cases.

                                                                                                                                                            Another thing to consider is that GitHub is built a lot like a social network, so you get network effects: people follow other people, one person liking something shows up in timelines, others star it to remember to look at it later or “in case I need this some day”, and so you end up with these explosions. Hacker News, Lobsters, Reddit, etc., and in general anyone mentioning a project to a bigger audience, can help a lot too, even if it’s just “I’ve heard about this, but not looked at it yet”. It’s similar to the same story getting zero upvotes one day and hundreds or thousands on another.

                                                                                                                                                            The rest is probably rooted in human psychology.

                                                                                                                                                            1. 3

                                                                                                                                                              This is what I do. I use stars on GitHub pretty much only as a bookmarking tool.

                                                                                                                                                            2. 4

                                                                                                                                                              Spot on. On top of the detrimental “programmers-masquerading-as-brands”, many GH repos are heavily marketed by the companies behind the projects. Covert marketing might be more common than people think.

                                                                                                                                                              1. 7

                                                                                                                                                                Corporate OSS is winning the mindshare war. Plenty of devs would rather use a massive framework from $MEGACORP than something simple that doesn’t box them in. Pragmatism, they say.

                                                                                                                                                                (Of course, they don’t think twice about pulling in a community-sourced standard library (JS).)

                                                                                                                                                                My favorite example of this was a CTO talking about how they used Sinatra instead of Rails for their API endpoint, and the flood of surprised replies (“but what if you need to change feature X?”), to which he said, “Well, we understand all of the code, so it’s no big deal. Can you say the same about Rails?”