1. 6

    I think the faulty assumption is that the happiness of users and developers is more important to the corporate bottom line than full control over the ecosystem.

    Linux distributions have shown for a decade that providing a system for reliable software distribution while retaining full user control works very well.

    Both Microsoft and Apple kept the first part, but dropped the second part. Allowing users to install software not sanctioned by them is a legacy feature that is removed – slowly to not cause too much uproar from users.

    Compare it to the time when Windows started “phoning home” with XP … today it’s completely accepted that it happens. The same thing will happen with software distributed outside of Microsoft’s/Apple’s sanctioned channels. (It indeed has already happened on their mobile OSes.)

    1. 8

      As a long-time Linux user and believer in the four freedoms, I find it hard to accept that Linux distributions demonstrate “providing a system for reliable software distribution while retaining full user control works very well”. Linux distros seem to work well for enthusiasts and places with dedicated support staff, but we are still at least a century away from the year of Linux on the desktop. Even many developers (who probably have some overlap with the enthusiast community) have chosen Macs, with unreliable software distribution like Homebrew and incomplete user control.

      1. 2

        I agree with you that Linux is still far away from the year of Linux on the desktop, but I think it is not related to the way Linux deals with software distribution.

        There are other, bigger issues with Linux that need to be addressed.

        In the end, the biggest impact on adoption would be some game studio releasing its AAA title as a Linux exclusive. That’s highly unlikely, but I think it illustrates well that Linux’s success on the desktop hinges largely on external factors outside the control of users and contributors.

        1. 2

          All the devs I know who use Macs run Linux in some virtualisation option instead of Homebrew for work. Obviously that’s not a scientific study by any means.

          1. 8

            I’ll be your counterexample. Homebrew is a great system; it’s not unreliable at all. I run everything on my Mac when I can, which is pretty much everything except commercial Linux-only vendor software. It all works just as well, and sometimes better, so why bother with the overhead and inconvenience of a VM? Seriously, why would you do that? It’s nonsense.

            1. 4

              Maybe a VM makes sense if you have very specific wishes. But really, macOS is an excellent UNIX and for most development you won’t notice much difference. Think Go, Java, Python, Ruby work. Millions of developers probably write on macOS and deploy on Linux. I’ve been doing this for a long time and ‘oh this needs a Linux specific exception’ is a rarity.

              1. 4

                you won’t notice much difference.

                Some time ago I was very surprised to find that HFS+ is not case-sensitive (by default). Due to a bad letter case in an import, my script would fail on Linux (production) but worked on the Mac. Took me about 30 minutes to figure this out :)
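
                If you’re curious which behaviour a given filesystem has, a quick check (made up for illustration) is to create a file in one casing and test for the other:

                ```shell
                # Create "Foo", then ask for "foo" -- prints "case-insensitive" on a
                # default HFS+/APFS volume and "case-sensitive" on typical Linux filesystems.
                d="$(mktemp -d)"
                touch "$d/Foo"
                if [ -e "$d/foo" ]; then echo case-insensitive; else echo case-sensitive; fi
                rm -rf "$d"
                ```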

                1. 3

                  You can make a case-sensitive partition for your code. And now with APFS, volumes share the container’s free space, so you won’t have to decide up front how much goes to code vs. system.

                  1. 1

                    A case-sensitive HFS+ slice on a disk image file is a good solution too.
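
                    For reference, such an image can be made from the command line; the size and paths here are arbitrary, and this is macOS-only:

                    ```shell
                    # Create and mount a case-sensitive HFS+ volume inside a disk image.
                    hdiutil create -size 10g -fs "Case-sensitive Journaled HFS+" -volname Code ~/code.dmg
                    hdiutil attach ~/code.dmg
                    ```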

                  2. 2

                    Have fun checking out a git repo that has Foo and foo in it :)

                    1. 2

                      It was bad when Microsoft did it in VB, and it’s bad when Apple does it in their filesystem lol.

                  3. 2

                    Yeah, definitely. And I’ve found that accommodating two platforms where necessary makes my projects more robust and forces me to hard-code less stuff, e.g. using pkg-config instead of yolocoding path literals into the build. When we switched Linux distros at work, all the packages that worked on macOS and Linux worked great, and the Linux-only ones all had to be fixed for the new distro. 🙄
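
                    The pkg-config idea in concrete terms (zlib is just an arbitrary example of a library that ships a .pc file):

                    ```shell
                    # Ask the system where the library lives instead of hard-coding paths;
                    # the output feeds straight into the compiler invocation.
                    pkg-config --cflags --libs zlib
                    # e.g. in a build:  cc example.c $(pkg-config --cflags --libs zlib)
                    ```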

                  4. 2

                    I did it for a while because I dislike the Mac UI a lot but needed to run it for some work things. Running in a full-screen VM wasn’t that bad. Running native is better, but virtualization is pretty first-class at this point. It was actually convenient in a few ways too: I had to hand my Mac in for repair at one point, so I just copied the VM to a new machine and was ready to run in minutes.

                    1. 3

                      I use an Apple computer as my home machine, and the native Mac app I use is Terminal. That’s it. All other apps are non-Apple and cross-platform.

                      That said, macOS does a lot of nice things. For example, if you try to unmount a drive, it will tell you which application is still using it, so you can quit that app and unmount cleanly. Windows (10) still can’t do that; you have to look in the Event Viewer(!) to find the error message.

                      1. 3

                        In case it’s unclear, non-Native means webapps, not software that doesn’t come preinstalled on your Mac.

                        1. 3

                          It is actually pretty unclear what non-Native here really means. The original HN post is about sandboxed apps (distributed through the App Store) vs non-sandboxed apps distributed via a developer’s own website.

                          Even Gruber doesn’t mention actual non-Native apps until the very last sentence. He just talks/quotes about sandboxing.

                          1. 3

                            The second sentence of the quoted paragraph says:

                            Cocoa-based Mac apps are rapidly being eaten by web apps and Electron pseudo-desktop apps.

                      2. 1

                        full-screen VM high-five

                      3. 1

                        To have an environment closer to production, I guess (or maybe ease of installation – dunno, I never used Homebrew). I don’t have to use a Mac anymore, so I run a pure distro, but everyone else I know uses virtualisation or containers on their Macs.

                        1. 3

                          Homebrew is really, really, really easy. I actually like it over a lot of Linux package managers because it has first-class support for building software with different flags, and it has binaries for the default flag set for fast installs. Installing a package on Linux with alternate build flags sucks hard in anything except portage (Gentoo), and portage is way less usable than brew. It also supports having multiple versions of packages installed – kind of halfway to what nix does. And unlike Debian/CentOS it doesn’t have opinions about what should be “in the distro”; it just has up-to-date packages for everything and lets you pick your own philosophy.

                          The only thing that sucks is OpenSSL, ever since Apple removed it from macOS. Brew packages handle it just fine, but the Python packaging system is blatantly garbage and doesn’t handle it well at all. You sometimes have to pip install with CFLAGS set, or with a package-specific env var, because Python is trash and doesn’t standardize any of this.

                          But even on Linux using python sucks ass, so it’s not a huge disadvantage.
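
                          The workaround alluded to looks roughly like this; the exact variables vary by package, and cryptography is just one example of a C-extension package that links OpenSSL:

                          ```shell
                          # Point the compiler and linker at Homebrew's OpenSSL before building
                          # Python packages with C extensions (paths come from brew itself).
                          export CFLAGS="-I$(brew --prefix openssl)/include"
                          export LDFLAGS="-L$(brew --prefix openssl)/lib"
                          pip install cryptography
                          ```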

                          1. 1

                            Installing a package on Linux with alternate build flags sucks hard in anything except portage

                            You mention nix in the following sentence, but installing packages with different flags is also something nix does well!

                            1. 1

                              Yes true, but I don’t want to use NixOS even a little bit. I’m thinking more vs mainstream distro package managers.

                            2. 1

                              For all its ease, Homebrew only works properly if used by a single user who is also an administrator and who only ever installs software through Homebrew. And then “works properly” means “installs software in a global location as the current user”.

                              1. 1

                                by a single user who is also an administrator

                                So like a laptop owner?

                                1. 1

                                  A laptop owner who hasn’t heard that it’s good practice to not have admin privileges on their regular account, maybe.

                              2. 1

                                But even on Linux using python sucks ass, so it’s not a huge disadvantage.

                                Can you elaborate more on this? You create a virtualenv and go from there, everything works.

                                1. 2

                                  It used to be worse, when mainstream distros would have either 2.4 or 2.6/2.7 and there wasn’t a lot you could do about it. Now if you’re on python 2, pretty much everyone is 2.6/2.7. Because python 2 isn’t being updated. Joy. Ruby has rvm and other tools to install different ruby versions. Java has a tarball distribution that’s easy to run in place. But with python you’re stuck with whatever your distro has pretty much.

                                  And virtualenvs suck ass. Bundler, maven/gradle, etc. all install packages globally and let you exec against arbitrary environments directly (bundle exec, mvn exec, gradle run), without messing with activating and deactivating virtualenvs. Node installs all its modules locally to a directory by default, but at least it automatically picks those up. I know there are janky shell hacks to make virtualenvs automatically activate and deactivate with your current working directory, but come on. Janky shell hacks.

                                  That and pip just sucks. Whenever I have Python dependency issues, I just blow away my venv and rebuild it from scratch. The virtualenv melting pot of files that pip dumps into one directory just blatantly breaks a lot of the time. They’re basically write-once. Meanwhile, every gem version has its own directory, so you can cleanly add, update, and remove gems.

                                  Basically, the Ruby, Java, Node, etc. ecosystems all have tooling actually designed to author and deploy real applications. Python never got there for some reason and still has a ton of second-rate trash. The scientific community doesn’t even bother; they use distributions like Anaconda. And Linux distros that depend on Python packages handle the dependencies independently in their native package formats. Ruby gets that too, but the native packages are just… gems. And again, since gems are version-binned, you can still install different versions of a gem for your own use without breaking anything. With Python there is no way to avoid fucking up the system packages without using virtualenvs exclusively.

                                  1. 1

                                    But with python you’re stuck with whatever your distro has pretty much.

                                    I’m afraid you are mistaken: not only do distros ship with 2.7 and 3.5 at the same time (they have for years now), it is usually trivial to install a newer version.

                                    let you exec against arbitrary environments directly (bundle exec, mvn exec, gradle run), without messing with activating and deactivating virtualenvs

                                    You can also execute from virtualenvs directly.
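
                                    Concretely, something like this (paths arbitrary; --without-pip just keeps the sketch fast):

                                    ```shell
                                    # Create a virtualenv and run its interpreter directly --
                                    # no 'source bin/activate' dance required.
                                    python3 -m venv --without-pip /tmp/demo-venv
                                    /tmp/demo-venv/bin/python -c 'import sys; print(sys.prefix)'
                                    ```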

                                    Whenever I have python dependency issues, I just blow away my venv and rebuild it from scratch.

                                    I’m not sure how to comment on that :-)

                                    1. 1

                                      it is usually trivial to install newer version

                                      Not my experience? How?

                                      1. 1

                                        Usually you have packages for all python versions available in some repository.

                        2. 2

                          Have they chosen Macs or have they been issued Macs? If I were setting up my development environment today I’d love to go back to Linux, but my employers keep giving me Macs.

                          1. 3

                            Ask for a Linux laptop. We provide both.

                            I personally keep going Mac because I want things like wifi, decent power management, and not having to carefully construct a house-of-cards special-snowflake desktop environment to get a usable workspace.

                            If I used a desktop computer with statically affixed monitors and an Ethernet connection, I’d consider Linux. But Macs are still the premier Linux laptop.

                            1. 1

                              At my workplace every employee is given a Linux desktop, and they have to make a special request to get a Mac or Windows laptop (which would be in addition to their Linux desktop).

                          2. 3

                            Let’s be clear, though: what this author is advocating is much, much worse from an individual-liberty perspective than what Microsoft does today.

                            1. 4

                              Do you remember when we all thought Microsoft were evil for bundling their browser and media player? Those were good times.

                          1. 4

                              At my undergrad CS program (NYU, 2002-2006) they taught Java for the intro programming courses, but then expected you to know C for the next-level CS courses (especially computer architecture and operating systems). Originally, they taught C in the intro courses, but found that too many beginning programmers dropped out – and, to be honest, I don’t blame them. C isn’t the gentlest introduction to programming. But this created a terrible situation where professors just expected you to know C at the next level, while they were teaching other concepts from computing.

                            But, as others have stated, knowing C is an invaluable (and durable) skill – especially for understanding low-level code like operating systems, compilers, and so on. I do think a good programming education involves “peeling back the layers of the onion”, from highest level to lowest level. So, start programming with something like Python or JavaScript. Then, learn how e.g. the Python interpreter is implemented in C. And then learn how C relates to operating systems and hardware and assembler. And, finally, understand computer architecture. As Norvig says, it takes 10 years :-)

                            The way I learned C:

                            • K&R;
                            • followed by some self-instruction on GTK+ and GObject to edit/recompile open source programs I used on the Linux desktop;
                            • read the source code of the Python interpreter;
                              • finally, I ended up writing C code for an advanced operating systems course (still archived/accessible here), which solidified it all for me.

                            Then I didn’t really write C programs for a decade (writing Python, mostly, instead) until I had to crack C back open to write a production nginx module just last year, which was really fun. I still remembered how to do it!

                            1. 3

                              One of the things I loved about my WSU CS undergrad program 20 years ago is that in addition to teaching C for the intro class, it was run out of the EE department so basic electronics courses were also required. Digital logic and simple circuit simulations went a long way towards understanding things like “this is how RAM works, this is why CPUs have so much gate count, this is why you can’t simply make up pointer addresses”

                              1. 2

                                they taught Java for intro programming courses, but then expected you to know C for the next level CS courses (especially computer architecture and operating systems).

                                It’s exactly like this at my university today. I don’t think there’s any good replacement for C for this purpose. You can’t teach Unix system calls with Java where everything is abstracted into classes. Although most “C replacement” languages allow easier OS interfacing, they similarly abstract away the system calls for standard tasks. I also don’t think it’s unreasonable to expect students to learn about C as course preparation in their spare time. It’s a pretty simple language with few new concepts to learn about if you already know Java. Writing good C in a complex project obviously requires a lot more learning, but that’s not required for the programming exercises you usually see in OS and computer architecture courses.
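
                                  For illustration, the sort of exercise meant here might look like the sketch below (hedged: /etc/passwd is just a file that exists on most Unix systems; a course would pick its own):

                                  ```c
                                  /* Copy a file to stdout with raw Unix system calls -- no stdio. */
                                  #include <fcntl.h>
                                  #include <unistd.h>

                                  int main(void) {
                                      int fd = open("/etc/passwd", O_RDONLY);   /* system call, not a libc convenience */
                                      if (fd < 0)
                                          return 1;
                                      char buf[512];
                                      ssize_t n;
                                      while ((n = read(fd, buf, sizeof buf)) > 0)
                                          write(STDOUT_FILENO, buf, (size_t)n);  /* straight to stdout */
                                      close(fd);
                                      return 0;
                                  }
                                  ```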

                                1. 1

                                  I think starting from the bottom and going up the layers is better. Rather than being frustrated as things get harder, you will be grateful for and know the limitations of the abstractions as they are added.

                                1. 4

                                  Why do people make the byte order mistake so often? I think it’s because they’ve seen a lot of bad code that has convinced them byte order matters.

                                  I think it’s also because it’s often just convenient to write byte-order dependent code. You need to serialize something and only develop for x86 anyways, so just write out a packed struct!

                                  At some point, you add support for a big endian architecture. You’re busy adding #ifdefs for that target anyways, so it appears easier to keep the original code as-is and byte-swap everything.
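
                                    The byte-order-agnostic alternative is to address bytes by position, which behaves identically everywhere and needs no #ifdef (a minimal sketch; a little-endian wire format is assumed):

                                    ```c
                                    /* Decode a 32-bit little-endian value from a byte stream.
                                     * Works the same on little- and big-endian hosts. */
                                    #include <stdint.h>
                                    #include <stdio.h>

                                    static uint32_t le32_decode(const unsigned char *b) {
                                        return (uint32_t)b[0]
                                             | (uint32_t)b[1] << 8
                                             | (uint32_t)b[2] << 16
                                             | (uint32_t)b[3] << 24;
                                    }

                                    int main(void) {
                                        unsigned char wire[4] = {0x78, 0x56, 0x34, 0x12};  /* 0x12345678 on the wire */
                                        printf("0x%08x\n", le32_decode(wire));             /* prints 0x12345678 on any host */
                                        return 0;
                                    }
                                    ```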

                                  1. 5

                                    No example pictures? :(

                                    1. 5

                                      Hey, sorry I didn’t put any since I’ve done this a while ago. I don’t have too many on hand. Here’s a couple I found on my computer.

                                      https://imgur.com/a/Ewwe7
                                      https://imgur.com/a/fokYq

                                    1. 1

                                      VTune is pretty cool, but unfortunately needs a kernel module on Linux. When I used it last year, you either had to compile some old kernel or fix the module’s code for the current kernel. The officially supported distributions were all out of date at that point, I think. The Linux code evolves too fast for external kernel modules.

                                      1. 6

                                        Patreon also adds VAT to your pledge, so if you want to pledge $10, you’ll actually pay ~$12. Apparently they only do that in the EU, but they still add the VAT “US-style” as an additional fee instead of having it included in the price.

                                        1. 5

                                          Why are you installing to /usr/local? Packages are supposed to go to /usr directly.

                                          1. 1

                                              It’s the filesystem location specified in the GNU Coding Standards:

                                            Executable programs are installed in one of the following directories.

                                            bindir: The directory for installing executable programs that users can run. This should normally be /usr/local/bin, but write it as $(exec_prefix)/bin.

                                            1. 5

                                              Packages should never be installed to /usr/local

                                              https://wiki.archlinux.org/index.php/arch_packaging_standards

                                              Arch users expect packages to install in /usr, so it makes more sense to follow the Arch packaging standards here.

                                              1. 2

                                                Fair enough, I can make that adjustment. Thanks for sharing that link.

                                              2. 3

                                                GNU expects downstream packagers (“installers”) to change the install location, which is why the prefix variable exists. /usr/local/ is an appropriate default for “from-source” installs, to avoid conflicts with packages.

                                            1. 6

                                              Happy to see people writing screensavers. In many ways they’ve outlived their namesake purpose, but there is still something so charming about them! I also recently wrote a macOS screen saver (for my first time) and unfortunately found that the fragment shader I wrote really heats up my machine.

                                              1. 3

                                                They’re possibly still useful on display types that burn in even today, like OLED – and for people still on plasma and, god forbid, CRTs, they still have a use.

                                                1. 6

                                                    Is a screen saver better than just turning the monitor off (i.e. turning monitor output off, which makes the screen go into standby mode)? Are/were people using screen savers just to avoid the few seconds the monitor needs to turn back on, or is there another reason?

                                                  1. 2

                                                    Certain screensavers can help with burn in on OLED displays, turning the display off does not help. I don’t know the actual science behind it, I just know it worked on an OLED display I had that had burn in. ;)

                                                    1. 2

                                                      Note that a screensaver is unlikely to have that property unless it was designed to. Those screen-healing screensavers usually use colored geometric patterns.

                                                      I remember one of the patterns in such a screensaver was a series of black and white vertical stripes that slowly scrolled sideways. I once had the idea of making a free clone of that screensaver, so I replicated that pattern in Quartz Composer, Apple’s visual programming tool for generating graphics. I never remade any of the other patterns though.

                                              1. 3

                                                In practice, a currently representative x86 cache hierarchy consists of: […] Often a unified L3 cache of 2 to 16 MiB shared between all cores.

                                                  Note that this isn’t true for AMD’s current Ryzen processors. On those, there are two 8 MiB L3 caches, each shared by half the cores. If an application has threads on both caches that try to share data, it will run a lot slower than with everything on the same cache.
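
                                                  On Linux you can see the sharing topology directly in sysfs (index3 is usually the L3; the path and its availability vary by system):

                                                  ```shell
                                                  # Which CPUs share cpu0's last-level cache:
                                                  cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list
                                                  ```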

                                                1. 1

                                                  Another example of “Unix-style” programming is generating plots with gnuplot. There are lots of great plotting libraries for languages like R and Python, but I’ve found that if you have some log files and quickly want to visualise some data, a single line of awk piped into gnuplot is often all you need.
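
                                                    Something in this spirit (the log file and its two-column layout are made up; `set terminal dumb` keeps the plot in the terminal so the one-liner stands alone):

                                                    ```shell
                                                    # Fake a (timestamp, latency) log, pull out column 2 with awk,
                                                    # and pipe it straight into gnuplot.
                                                    printf '1 10\n2 5\n3 12\n4 8\n' > /tmp/response-times.log
                                                    awk '{ print $2 }' /tmp/response-times.log |
                                                      gnuplot -e "set terminal dumb; plot '-' with lines title 'latency'"
                                                    ```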

                                                  1. 1

                                                    the theoretical maximum space of a comprehensively NATted IPv4 environment is 48 bits, fully accounting for the 32-bit address space and the 16-bit port address space. This is certainly far less than IPv6’s 128 bits of address space, but the current division of IPv6 into a 64-bit network prefix and a 64-bit interface identifier drops the available IPv6 address space to 64 bits

                                                    This isn’t true. The IPv6 network prefix has the same function as the 32 bit IPv4 address in the NAT scenario. The interface identifier corresponds to the NAT’ed port address. So if you do that (somewhat nonsensical) bit addition like he does, you get the full 2^128 for IPv6! Only in the IPv6 case, individual computers can actually have more than one connection at once.

                                                    1. 4

                                                      Author is generous. Originally from UK, moving from NL to US in 2006 I ended up remarking to colleagues that the US banking system was like moving back to the 1970s. 11 years later I’m still using passwords for bank website authentication, with knowledge of a bank account number being a closely held secret.

                                                      IBAN fee-free international transfers to friends or for paying bills (same day in-country, instant if same bank chain); fee-free cross-bank ATM withdrawals; sane security for web sign-in or initiating transfers; banking websites which don’t require you to lower the browser security settings to work; PIN-less on-card small-balance cash so you’re not typing your PIN into everything (paying for parking or using vending machines), all stuff I am still waiting for. Well, aside from the browser security settings: American banks have mostly caught up there.

                                                      1. 2

                                                        IBAN fee-free international transfers to friends or for paying bills (same day in-country, instant if same bank chain)

                                                        SEPA Instant Credit Transfer is launching in November and will hopefully see support from banks sometime next year. It will allow instant (less than 10 s) transfers across banks.

                                                      1. 12

                                                        Why is IPv6 such a complicated mess compared to IPv4? Wouldn’t it be better if it had just been IPv4 with more address bits?

                                                        Do people really feel that way? To me, IPv6 seems to be a lot better designed for today’s use, even when there’s still a separate Layer 2 protocol below. To increase address lengths, you’d have to re-specify IPv4 and all its supporting protocols anyways. Going for an integrated solution where address autoconfiguration “just works” and where multicast isn’t just an afterthought looks like the best course of action to me.

                                                        I feel like most people think of IPv6 as “complicated” mostly because they already know IPv4 well and IPv6 actually is different. I wish e.g. university courses would teach IPv6 as the main protocol and not in terms of changes from IPv4.

                                                        1. 6

                                                          The argument I see more often (e.g. from djb, whose opinions I generally respect) is that IPv6 was designed without sufficient consideration for how the transition would happen in practice. The better choice, according to djb, would have been to embed the entire IPv4 address space in the IPv6 space, thereby allowing interoperability.

                                                          1. 2

                                                            I agree, but for a lot of people day one is confusing, as it suddenly becomes natural to have more than a single IP address – it is quite normal to have five or more publicly routable ones in addition to the link-local address. This actually solves a pile of real technical problems (e.g. HA for LAN routing without VRRP).

                                                            I am unaware of any documentation that describes what to expect and cool new topologies you can call upon. This documentation also does not exist for IPv4, but that’s the devil we know, right?

                                                            Now throw in that, until recently – when some governments started mandating IPv6 support during procurement – vendor support was straight-up awful; arguably some would say it still is… Cisco, I am looking at you.

                                                            Add to that, you often pay a premium in time, money, or both to obtain native IPv6 connectivity.

                                                            PI space and local LAN addressing (à la RFC 1918) for IPv6 are really recent things too.

                                                            So to a lot of people nothing really works yet, and it takes some mental work to realise that you often end up doing more work to light up an IPv4 service. That work, though, is latent in our minds because we are all used to it: NAT, renumbering, deep packet inspection for connection tracking, service design with rendezvous points (STUN/TURN) and central servers, etc.

                                                            What vexes me though is that IPv6 multicast over the Internet is next to non-existent; anyone know who does xDSL to the home with this?

                                                            I would have really liked to see the collaboration (and gaming benefits) of such support. Plus, legal issues aside, you could run a radio/TV stream from your home as a hobby.

                                                              1. 2

                                                                I am unaware of any documentation that describes what to expect and cool new topologies you can call upon. This documentation also does not exist for IPv4, but that’s the devil we know, right?

                                                                Now throw in that, until recently – when some governments started mandating IPv6 support during procurement – vendor support was straight-up awful; arguably some would say it still is… Cisco, I am looking at you.

                                                                Add to that, you often pay a premium in time, money, or both to obtain native IPv6 connectivity.

                                                                Would any of this really have been any less true with an IPv4v2 though? You’d have had the awful initial vendor support to accommodate government mandates, the unfamiliarity with the new thing, the software that needed rewriting to accommodate the longer addresses, etc

                                                                It just seems like almost every complaint about IPv6 boils down to “it’s not a mature protocol” and short of literal magic making more than 32 bits worth of address fit into IPv4 that would have been true no matter what.

                                                                1. 1

                                                                  Would any of this really have been any less true with an IPv4v2 though?

                                                                  I am quite certain it would have been.

                                                            1. 2

                                                              Not initializing mode seems dangerous, although it’s probably fine in main(), I guess?

                                                              1. 2

                                                                I initialized a char *mode at around 3:09, is that what you mean?

                                                                You’re right that it would be dangerous! But clang would have assumed a type of int and complained when I tried to assign mode = optarg, so the compiler is looking out for us at least a little :)

                                                                1. 4

                                                                  No, you did not initialize it.

                                                                  And yes it is dangerous, main() or not.

                                                                  1. 2

                                                                    oh, you’re right! s/declare/initialize in my brain.

                                                                    Yeah, a dangerous mistake.
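
                                                                    For anyone following along, a minimal sketch of why the initializer matters (the option letter and default string here are made up, not taken from the video):

                                                                    ```c
                                                                    #include <stdio.h>
                                                                    #include <unistd.h>

                                                                    int main(int argc, char **argv) {
                                                                        /* Initialized to a known value. A bare `char *mode;` would leave
                                                                         * it pointing at garbage if no -m option is ever passed. */
                                                                        char *mode = "default";
                                                                        int opt;

                                                                        while ((opt = getopt(argc, argv, "m:")) != -1) {
                                                                            if (opt == 'm')
                                                                                mode = optarg;
                                                                        }

                                                                        /* Without the initializer, reaching this printf with no -m flag
                                                                         * would read an indeterminate pointer: undefined behavior. */
                                                                        printf("mode: %s\n", mode);
                                                                        return 0;
                                                                    }
                                                                    ```

                                                                    Compilers only sometimes warn about this (-Wuninitialized catches the easy cases), so it’s worth initializing pointers at the point of declaration.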

                                                              1. 1

                                                                It’s a sad state of affairs when a malfunctioning doorbell can prevent cars from starting. The button won’t open the door, whatever, just put the key in the lock and try not to strain yourself, but when the car has to be physically towed out of proximity?

                                                                Technology will always fail, there has to be redundancy.

                                                                1. 2

                                                                  There is redundancy. In the “key proximity” systems I’ve seen (Audi and BMW), there is a spot where you can put the key in case the usual key detection doesn’t work. The car will even tell you to put it there if it can’t detect the key. It’s supposed to work with empty batteries in the key, so it’s probably using something like NFC rather than 434 MHz radio.

                                                                1. 2

                                                                  Why did Mozilla choose to prevent installation of Firefox Focus via Google Play in Germany?

                                                                  Seems like they uploaded the same software under a different identifier, “org.mozilla.klar”, and made this version available here: https://play.google.com/store/apps/details?id=org.mozilla.klar

                                                                  1. 2

                                                                    It’s due to a naming conflict with Focus Magazine.

                                                                    It’s mentioned at the bottom here: https://blog.mozilla.org/press-de/2015/12/08/app-focus-by-firefox-ein-content-blocker-fur-apple-ios/

                                                                    1. 1

                                                                      If they named it “Firefox Focus” instead of “Focus by Firefox” from the beginning, I don’t think the rename would have been necessary…

                                                                  1. 2

                                                                    Fun fact: log-structured file systems were originally designed to improve the write performance of hard drives. They later turned out to be great for flash-based media as well, since their garbage collection always frees and re-uses large blocks at once.

                                                                    1. 1

                                                                      I think Let’s Encrypt is an awesome service, and providing certs for free is really great for admins, but… I can imagine a scenario where, after LE grows substantially and renews, say, 100,000+ certs per day, serious havoc spreads through the web when their renewal service goes offline for a good portion of a day and those 100K+ websites serve expired certs. What will their millions of visitors do? Add exceptions? Browse elsewhere? I have no good solution in mind. Maybe it’s just the hidden cost of LE. Not that using other cert authorities is any better (it’s worse).

                                                                      1. 9

                                                                        I don’t believe that’s an issue. LE certificates are valid for 90 days, but most clients are set up to renew them after only 60 days. Consequently, the LE servers would have to be down for a full month before certificates start expiring.

                                                                        1. 1

                                                                          Ah, forgot about that. You are correct, 30 days would be enough to sort things out.
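
                                                                          The renewal window is easy to check mechanically; a sketch with openssl (the file paths are throwaway, and a real LE client like certbot does an equivalent check before deciding to renew):

                                                                          ```shell
                                                                          # Create a throwaway self-signed cert valid for 90 days, mirroring LE's lifetime.
                                                                          openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/key.pem \
                                                                            -out /tmp/cert.pem -days 90 -subj "/CN=example.test" 2>/dev/null

                                                                          # -checkend N exits 0 if the cert is still valid N seconds from now.
                                                                          # 60 days = 5184000 s: clients typically renew once fewer than 30 days remain.
                                                                          if openssl x509 -checkend 5184000 -noout -in /tmp/cert.pem >/dev/null; then
                                                                            echo "more than 30 days left, no rush to renew"
                                                                          else
                                                                            echo "renew now"
                                                                          fi
                                                                          ```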

                                                                      1. 2

                                                                        Counting mnemonics is an odd way to do this. For a proper count, you’d rather look at all the instruction formats and then count all valid combinations of “opcode” fields. You may also want to take instruction prefixes (e.g. lock) into account.

                                                                        1. 6

                                                                          The fact this is plausibly useful is a sad comment on the state of software engineering.

                                                                          1. 2

                                                                            How would you describe the alternative desired state? That insecure protocols don’t exist? That engineers would have deeper knowledge of cryptography?

                                                                            1. 8

                                                                              Distributions of major server software would come with good configurations out of the box, alleviating every developer from being responsible for configuring things.

                                                                              https://caddyserver.com/ is a great example of this; you configure it to do what your app needs, and all the TLS defaults are well curated.

                                                                              1. 4

                                                                                While I agree that a “reasonably secure default” should be standard, mostly you have to find a trade-off between security and compatibility. If you need to support IE8, there’s no way around SHA-1. If you want to support Windows XP or Android 2, there’s no hope at all. If you want it more secure (as of today), you fence out most Androids (except 4.x), Javas, IEs, mobile phones, and non-up-to-date browsers. Unfortunately, there is no one-size-fits-all.
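
                                                                                To make the trade-off concrete, a hedged nginx sketch (the cipher lists are illustrative placeholders, not a vetted recommendation):

                                                                                ```nginx
                                                                                # "Modern" profile: locks out XP, Android 2.x, old Java, IE < 11.
                                                                                ssl_protocols TLSv1.2;
                                                                                ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;

                                                                                # "Compat" profile: reaches IE8-era clients at the cost of weaker primitives.
                                                                                # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
                                                                                # ssl_ciphers HIGH:!aNULL:!MD5;
                                                                                ```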

                                                                                1. 3

                                                                                  On the other hand, compatibility with older software is very easy to figure out (people see an error message), whereas insecure configuration appears to work perfectly fine. I also believe developers are more likely to know that they need to support some obsolete software (modern web development doesn’t “just work” on IE8 or Android 2) than about the newest TLS configuration options.

                                                                                  1. 2

                                                                                    I think if you want that, we ought to have APIs that express things in terms of goals instead of implementation details: ssl_ciphers +modern +ie8, maybe. Then it’s clear what needs to be changed to drop a platform, instead of it being a guessing game.

                                                                                    1. 2

                                                                                      This would be great. This is exactly what I’m trying to provide the user with the snippets in nginx.vim: Choose between ciphers-paranoid, ciphers-modern, ciphers-compat, ciphers-low cipher suites.