1. 3

    As former user of XMPP, let me try a different list:

    1. XMPP is a morass of partly interoperable servers and clients, each supporting different long lists of extensions.
    2. Stuff doesn’t/didn’t work. I have a fine camera and a microphone, you do too, so does XMPP mean we can talk? The answer involves extensions in the plural and is too complex for my brain as user. My phone is always on, can it get notifications without burning through its battery? The answer is again too complex for my brain.
    3. Google stopped talking to other XMPP servers after a (rumours say debilitating) spam attack. AFAICT XMPP still doesn’t have effective defense against this particular attack, which doesn’t make me feel good about the readoption by Google or others.
    4. Most users used a few servers (during the time when I still used it actively), so XMPP suffered the sluggishness of decentralised protocols (see 1, 2) but without getting their advantages.

    I have business cards with an XMPP address. I stopped handing those out long ago.

    1. 2

      Eh, I’m going to disagree with some of these.

      1 and 2 are easy to answer. For the client, you should use Dino on the desktop, and Conversations or Monal on mobile depending on your platform. If you want to use another client, you are now presumed to be an expert and able to solve any problems you have with it. For the server, you should use a server set up by someone who is up to date with the current state of XMPP; when you connect to it from Conversations, the server info should show that all the requested features are fully supported.

      3 is fair enough, but honestly, SMTP doesn’t have an effective general defense against spam, either, and that doesn’t stop people from using it. My understanding was that Google stopped talking to other XMPP servers mainly because they had enough marketshare that they didn’t need to anymore, and preferred to lock their users in.

      I don’t know about 4, or whether it’s still true.

      1. 2

        Are you saying XMPP is a federated protocol with zero to one recommendable clients per platform? If that’s an accurate assessment, then I think it can be added as a fifth problem on my list.

        In re Google, you can explain anything with “because they’re evil”, “because they want lock-in” etc, and a lot of lazy people do. You should at least consider the possibility that they dropped XMPP because it wasn’t used enough to be worth the hassle of contact-request spam. Spammers used XMPP contact requests to get added to people’s contact lists and then sent spam via SMTP. There was a bad wave of that; Google had to choose between either decreasing the spam-signal value of people’s contact lists or getting rid of those XMPP contact requests, and used an axe to do the latter. SMTP was important, XMPP was just nice to have.

        Now, that’s hearsay. (Almost everything I’ve heard about Google’s antispam mechanisms is hearsay.) You get to judge: Is it more or less plausible than just “preferred to lock their users in”?

      2. 1

        This is a reply to myself because it’s a digression:

        I noticed that the developers of a couple of XMPP tools didn’t use XMPP addresses. They suggested that one might talk to them via XMPP, but by sending a private message to a nickname in a chatroom rather than by a plain XMPP message. IRCish behaviour rather than XMPPish. I didn’t understand why, but whatever the reason, it suggests to me that there’s an impedance mismatch somewhere. I’d love to understand better.

        1. 1

          I agree with your list, but have to note that 4 (and to a lesser extent 3) seem to be true for pretty much every federated/decentralized system in existence, which suggests a more fundamental problem with the concept of federated services. Every once in a while there’s a post on here philosophizing about that, e.g.

          1. 1

            Sure, I know…

            Federated services need to be designed so that there’ll be few interop-relevant feature differences, because getting new changes widely deployed is so difficult. XMPP suffered because it had that problem, and suffered particularly badly because it was heavily oriented towards extensions, and therefore really needed ease of deployment and interop.

        1. 3

          I do have such codebases, but not open source… but I think that doesn’t matter, because you’d say “but that’s small!” and you’d be right, and that’s my point.

          Code has a sharp size cliff, at a small size. Well-written code that fits on one screen is readable without comments; equally well-written code in the same language, in the same style, but just ten times bigger… isn’t.

          I’ve seen ruby/rails models and controllers that fit in 30 lines and are clearly, strongly walled off from the rest of the universe. Those are clear. I’ve seen other code written in other DSLs with the same property.

          I’ve seen perl scripts that are a few tens of lines, completely free of comments, that are clear and work well.

          Someone’s going to jump on me and say “but what is the size cliff and code tends to grow and blah”. My answer is that if you have

          • the wisdom to recognise the size cliff in a growing codebase
          • and the wisdom to recognise which tasks will fit on one screen if written in that style

          then you can write comment-free code for those tasks and get those things done quickly and maintainably. If you can’t, or you don’t trust your team to remain able to next year, then you can’t.

          Large code is different from small code in many ways. The ability to grok it without helpful comments is one of the differences.

          1. 3

            Late to the party here and this is probably write-only. But I’ll write.

            I’m a bit of an IETF person. I’ve written some RFCs, been to some meetings over the past several decades, and I’m a designated expert for a couple of niche subjects. My take is that:

            1. We have about five richtext formats, three of which were implemented and widely adopted and four of which were standardised. The five are HTML, Markdown, text/richtext, text/enriched and text/plain;format=flowed. Markdown is not a MIME type, but rather a formalisation of existing email conventions: “the single biggest source of inspiration for Markdown’s syntax is the format of plain text email”, as the announcement said, and the story of later RFC attempts is a tempting digression.

            2. The threads on the IETF lists over the decades show a rather clear divide between the people who want to use richtext and the people who think other people shouldn’t. So you’d hear people who wanted to use HTML in email discuss with people who didn’t want to use any richtext at all, but rather voiced opinions about whether and how other people ought to use richtext. The latter group made unsubstantiated assertions along the lines of “foo is not used due to hype deficiency”, which tended to sidetrack discussion and block progress.

            3. The IETF process works well when the participants have running code and know the pain points of use. In this area non-implementers talked much too much, and so today, people want to use richtext and do use it, but the best richtext specifications we have today are things like this one. Sigh.

            1. 1

              I came across this issue comment this morning, which isn’t about richtext but is a perfect sample of the genre: It contains “I think”, “should be” and an anecdote, but no attempt to engage with any relevant facts or with another participant’s reasoning.

              1. 1

                Very insightful. Thank you so much for your contribution! It would definitely satisfy my curiosity to see some of the RFCs you’ve written :D

                1. 1


                  https://rant.gulbrandsen.priv.no/good-bad-rfc is a somewhat relevant blog posting on what I’ve found makes a good RFC… the short version is that the RFCs I wrote that have been widely implemented didn’t grow much during the editing phase, and the one that grew the most in length is definitely the one that’s been implemented least.

              1. 8

                For instance, HTML mail allows JavaScripts to be embedded in mail messages. This means that when you view a mail message, you might be executing an arbitrary program written by the sender of the message.

                Has this ever been true?

                1. 3

                  For some clients configured in some ways, yes. I don’t know how prevalent it was but I know I’ve used clients that would execute javascript in an html email if you didn’t turn it off.

                  1. 2

                    I’m not aware of any. I see zaphar mentions an unnamed ‘several’, so I’ll name names for a reader that let the sender execute arbitrary code for text/plain: GNU Emacs, using any of the mail add-on packages that were available at the time.

                    The sender had to embed something like

                      -*- tab-width: (progn arbitrary-code-here) -*-

                    on the last line of the email, and emacs would run the code and then set tab-width to the presumably returned int. I don’t remember the exact syntax. It was fixed in… 1992? Mail add-ons were given the ability to block -*- handling.

                  1. 4

                    Comparing PGP to something like Signal is an apples-to-oranges comparison. If you’re using Signal then you’re basically trusting Moxie’s server. No trust of any particular server is required for PGP, nor do you need to give out your phone number. It’s not that PGP is without numerous problems (hence its lack of adoption), but it’s designed for a different sort of use.

                    1. 2

                      How does one use PGP without trusting the keyservers?

                      (And don’t they permit anyone to upload a key for anyone else’s address, BTW?)

                      1. 3

                        You exchange keys personally, or over the phone, or via keybase.io.

                        1. 1

                          So do people actually do this? Do you know any PGP users who don’t use the keyservers? (And what’s keybase.io? Why doesn’t it involve trust?)

                          1. 2

                            I have used PGP without keyservers in the past, though I don’t have an active PGP use case right now. In most of my usage, all the potential recipients were local and members of the same organization, so we just exchanged keys in person.

                            PGP was mostly used for signatures (“I attest that I produced and/or verified this file”) and not for encryption.

                            1. 1

                              I don’t know any PGP users.

                              The keyservers have had major issues for a decade, owing to the fact that they’re an open append-only database, and the latest version of GPG won’t read signatures on keys from keyservers.

                              Here’s my keybase.io entry: https://keybase.io/gerikson - you can see I have cryptographically signed multiple social media accounts, so you can be fairly sure I am who I say I am. If you want to exchange keys, you can either use the public key on that page, or DM me via a social media account and offer yours. Keybase provides identity in this case. You don’t really have to trust the PGP key on that page if you feel nervous.

                      1. 2

                        Just don’t do this on any computer which ever runs untrusted code.

                        Especially don’t do it if you browse the web on such a computer with Javascript turned on.

                        1. 2

                          Yes! Forgot to add the satire tag. Added. :-)

                          1. 1

                            I have computers that don’t ever run javascript. I’m bookmarking this. I haven’t heard of most of these arguments… Did the author miss any?

                            The idea could be expanded to include gcc flags, right?

                            1. 1

                              I would suggest not, and just trusting the kernel maintainers. There may be very specific reasons why the kernel is optimized the way it is. Sometimes over-optimization actually introduces security vulns, because compilers get too smart for their own good.

                              1. 1

                                Oh, let me try again.

                                The kernel command line arguments shown in this post appear to deliberately disable important security features in favor of performance. (Are none of these ‘free’ to enable? I don’t know how much work the author put into selecting this specific list.)

                                Just for fun, using a computer disconnected from the internet… Can we push this idea further?

                                Are the binaries in popular contemporary linux distros compiled with compiler options that favor security over performance? That is, are there compile-time choices we can make to favor performance over security?

                                I’d like to imagine this line of thought could actually be meaningful in some hypothetical situation. Like using old hardware to play HD video (on a machine not connected to the internet) or something.

                                1. 1

                                  Sure, link everything into the kernel and avoid syscall and context switch overhead.

                                  1. 1

                                    That sounds good. As I understand it, context switches are very expensive.

                                    But, that’s a lot more work than changing some parameters, right? What would a utility like grep even look like after ‘linking everything into the kernel’?

                                    tedu, are you talking about putting a bunch of kernel into a grep binary, or putting a bunch of grep application into the kernel?

                                    (This line of conversation would rightly be classified as a thought experiment, right?)

                                    1. 1

                                      (This line of conversation would rightly be classified as a thought experiment, right?)

                                      I suspect it would be classified as trolling.

                                      But having a unibinary system would be fascinating, offer very little by way of runtime customization (let alone programming any compiled language), and have miserable separation between user accounts (if you maintain a mostly POSIX compatible interface).

                                      1. 2

                                        Unibinary. Ok, so, we’re talking about putting the functionality of applications into the kernel. Yes, the downsides you describe make sense. Though, customization is still possible through self-modification.

                                        …I’m certainly not trying to have any kind of negative impact on anyone. I am, I’ll admit, trying to get something out of folks. I want to know how computers work. Actually, I guess I want to know how compilers work–in practice. I seem to know how compilers work in theory.

                                        Moreover, I have believed for some time that my various CPUs spend a lot of time and heat on tasks that are somehow adjacent to whatever task I ask of them. More and more I am biased towards leaner systems that do less.

                            2. 1

                              That’s Intel-specific, right? How much of that command line even applies to my intel-free devices?

                              1. 2

                                The spectre mitigations apply to every CPU that does branch prediction.

                            1. 2

                              I don’t understand the “Wasting time in fuzzy front end” paragraph; can someone try to explain it to me? I get that it’s completely unrelated to the modern usage of the term “frontend” (as in UI), but I’m not sure how it isn’t contrary to “Abandoning planning under pressure”.

                              1. 4

                                In some projects, the ship date is known before the first project proposal is, even if it isn’t known as a concrete date. This can be “in time for Christmas”, but also something more complex. If you’ve committed to be a launch customer for Apple’s next iphone feature, “the release date Apple chooses” is your fixed date. Apple can change it, you cannot.

                                If time is a little short, do you block all development on the completion of all planning materials? Or do you try to compress the initial phase and let the development phase start, perhaps partly in parallel?

                                Dropping the planning phase completely would be bad. But that’s no excuse to have a leisurely planning phase for a project where you know time is short.

                                And people do act with some leisure. They’ll set a meeting date several weeks into the future because one participant is on vacation, and not really think about whether dropping that attendee would be a smaller problem than losing those weeks.

                                1. 1

                                  I took it to mean the false economy of spending extra weeks planning, so that days of development time can be saved. Such extra planning does pay off (it saved days of effort), but it wasn’t worth the cost (weeks of effort).

                                  Here’s a classic example

                                1. 20

                                  My advice, which is worth every penny you pay for it:

                                  Don’t maintain a test environment. Rather, write, maintain and use code to build an all-new copy of production, making specific changes. If you use puppet, chef, ansible or such to set up production, use the same, and if you have a database, restore the latest backup and perhaps delete records.

                                  The specific changes may include deleting 99% of the users or other records, using the smallest possible VM instances if you’re on a public cloud, and should include removing the ability to send mail, but it ought to be a faithful copy by default. Including all the data has drawbacks, including only 1% has drawbacks, I’ve suffered both, pick your poison.

                                  Don’t let them diverge. Recreate the copy every week, or maybe even every night, automatically.
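                                  As a rough illustration of what that recreation might look like (everything here is hypothetical: the playbook name, hostnames, backup paths and table names are all made up, and a real setup would match your own provisioning tools):

                                  ```shell
                                  #!/bin/sh
                                  # Hypothetical nightly rebuild of the test copy from production.
                                  set -e

                                  # Re-provision the test hosts with the exact same automation as prod
                                  ansible-playbook provision.yml --limit test

                                  # Restore the most recent production backup into the test database
                                  latest=$(ls -t /backups/prod-*.dump | head -n 1)
                                  pg_restore --clean --no-owner -d app_test "$latest"

                                  # Deliberate divergences, applied as explicit steps:
                                  # never let the copy send real mail, and optionally shrink the data
                                  psql -d app_test -c "UPDATE settings SET value='false' WHERE key='mail_enabled';"
                                  psql -d app_test -c "DELETE FROM users WHERE id % 100 <> 0;"  # keep roughly 1% of users
                                  ```

                                  The point of the sketch is that every difference from production is a line in the script, not a piece of forgotten manual state.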

                                  1. 9

                                    Seconding this.

                                    One of the nice things about having “staging” being basically a hot-standby of production is that, in a pinch, you can cut over to serve from it if you need to. Additionally, the act of getting things organized to provision that system will usually help you spot issues with your existing production deployment–and if you can’t rebuild prod from a script automatically, you have a ticking timebomb on your hands.

                                    As far as database stuff goes, use the database backups from prod (hopefully taken every night) and perhaps run them through an anonymizing ETL to do things like scramble sensitive customer data and names. You can’t beat the shape (and issues) of real data for testing purposes.
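                                    A toy sketch of the anonymizing step, in Python. The row shape and field names are invented for illustration; a real ETL would run over the restored database, but the idea is the same: replace each sensitive value with a stable meaningless token, so relationships between rows survive while the real data does not.

                                    ```python
                                    import hashlib

                                    def pseudonymize(value: str, salt: str = "staging") -> str:
                                        """Map a sensitive value to a stable, meaningless token.

                                        The same input always yields the same token, so joins and
                                        duplicates in the data keep their shape after scrubbing.
                                        """
                                        digest = hashlib.sha256((salt + value).encode()).hexdigest()
                                        return "user_" + digest[:12]

                                    def scrub_row(row: dict) -> dict:
                                        """Scramble the sensitive columns of one (hypothetical) customer row."""
                                        out = dict(row)
                                        out["name"] = pseudonymize(row["name"])
                                        # .invalid is a reserved TLD, so scrubbed addresses can never deliver
                                        out["email"] = pseudonymize(row["email"]) + "@example.invalid"
                                        return out

                                    rows = [{"id": 1, "name": "Ada Lovelace", "email": "ada@example.com"}]
                                    scrubbed = [scrub_row(r) for r in rows]
                                    ```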

                                    1. 2

                                      Pardon a soapbox digression: Friendlysock is big improvement over your previous persona. Merci.

                                      1. 1

                                         It’s not a bad idea to make use of a secondary by having it be available to tests. Though I would argue instead for multiple availability zones and auto-scaling groups if you want production to be highly available. Having staging as a secondary makes it difficult for certain databases like Couchbase to have automatic failover, since the data is not in sync, and in both cases you’re going to have to spin up new servers anyway.

                                      2. 8

                                        We basically do this. Our production DB (and other production datastores) are restored every hour, so when a developer/tester runs our code they can specify --db=hourly and it will talk to the hourly copy (actually we do this through ENV variables, but you can override that with a cli option). We do the same for daily. We don’t have a weekly.

                                        Most of our development happens in daily. Our development rarely needs to live past a day, as our changes tend to be pretty small these days. If we have some long-lived branch that needs its own DB to play in (like a huge long-lasting DB change or something) we spin out a copy of daily just for that purpose; we limit it to one, and it’s called dev.

                                        All of our debugging and user issue fixing happens in hourly. It’s very rare that a user bug gets to us in < 1hr that can’t be reproduced easily. When that happens we usually just wait for the next hour tick to happen, to make sure it’s still not reproducible before closing.

                                        It makes life very nice to do this. We get to debug and troubleshoot in what is essentially a live environment, with real data, without caring if we break it badly (since it’s just an at most 1 hour old copy of production, and will automatically get rebuilt every hour of every day).

                                        Plus this means all of our dev and test systems have the same security and access controls as production, if we are re-building them EVERY HOUR, it needs to be identical to production.

                                        Also this is all automated, and is restored from our near-term backup(s). So we know our backups work every single hour of every day. This does mean keeping your near-term backups very close to production, since it’s tied so tightly to our development workflow. We do of course also do longer-term backups that are just bit-for-bit copies of the near-term stuck at a particular time(i.e. daily, weekly, monthly).

                                        Overall, definitely do this and make your development life lazy.

                                        1. 1

                                          I’m sorry, what is the distinction you’re making that makes this not a test environment? The syncing databases?

                                          1. 2

                                            If I understand correctly, the point is that this entire environment, infrastructure included, is effectively ephemeral. It is not a persistent set of servers with a managed set of data; instead, it’s a standby copy of production recreated every week, or day. Thus, it’s less of a classic environment and more like a temporary copy. (That is always available.)

                                            1. 4

                                              Yes, precisely.

                                              OP wants the test environment to be usable for testing, etc., all of which implies that for the unknown case that comes up next week, the test and production environments should be equivalent.

                                              One could say “well, we could just maintain both environments, and when we change one we’ll do the same change on the other”. I say that’s rubbish, doesn’t happen, sooner or later the test environment has unrealistic data and significant but unknown divergences. The way to get equivalence is to force the two to be the same, so that

                                              • quick hacks done during testing get wiped and replaced by a faithful copy of production every night or Sunday
                                              • mistakes don’t live forever and slowly increase divergence
                                              • data is realistic by default and every difference is a conscious decision
                                              • people trust that the test environment is usable for testing

                                              Put differently, the distinction is not the noun (“environment”) but the verb (“maintain” vs “regenerate”).

                                              1. 2

                                                Ah, okay. That’s an interesting distinction you make – I take it for granted that the entire infrastructure is generated with automation and hence can be created / destroyed at will.

                                                1. 2

                                                  LOLWTFsomething. Even clueful teams fail a little now and then.

                                                  Getting the big important database right seems particularly difficult. Nowhere I’ve worked and nowhere I’ve heard details about was really able to tear down and set up the database without significant downtime.

                                        1. 5

                                          Uh, what?

                                          Literally every responsive web site I’ve ever written or even seen (other than flak.tedunangst.com, which not-coincidentally has a smaller than probably intended font size in my Android device) uses viewport width=device-width. That’s what Lobsters is using. That’s what GitHub’s mobile site uses. That’s what I use. And they employ stylesheets that look great on Safari, Chrome, and Firefox’s mobile versions. And it’s the same stylesheet used on desktop. And, in case you’re going to complain about sites that disable zoom, that’s a separate non-default directive and you don’t have to enable it on your site.

                                          I have no idea what the OP is doing, but it’s weird.

                                          1. 3

                                            I looked at his stylesheet; he has one style for ≤1280 pixels and one for wider. Phone browsers are MUCH narrower than 1280px, and the stylesheet prioritises the margins over the text when it allocates width, so he uses the viewport meta tag to make the phones pretend their screens are 720px wide and scales down the fonts to make that happen. That way the margins look okay, but the text is much smaller than intended.

                                            I agree, weird.

                                            @tedu if you read this, the “px” unit isn’t physical pixels in CSS, it’s 1/96 inch. There are reasons for that too, mostly historical, but in the end it comes down to device oddities: Some devices’ resolutions aren’t well described in terms of addressable squares, including many printers and some phone screens. Your CSS is based on the idea that all devices are roughly 33cm wide.

                                            1. 2

                                              This is helpful. Though I’ll note that I’ve used the same variant of stylesheet forever. If I delete the meta viewport tag entirely, it renders pretty much exactly the same. I only added the viewport so that it would stop bouncing to the top after navigating back.

                                              I guess you could say the font is too small? But it’s the same size as I see on lobsters, or ars technica, or many other sites.

                                              1. 5

                                                If I delete the meta viewport tag entirely, it renders pretty much exactly the same.

                                                Running without a viewport tag is essentially a “legacy mode” for Mobile Safari and its clones. It’s designed to make pages that assume everyone’s using a 1024x768 computer screen render without producing completely broken layouts, at the cost of requiring the user to zoom and pan. By running Safari in that mode while trying to make a site that’s mobile-friendly, you are using the browser in a way that is contrary to Apple’s design intent. And it don’t work too well, do it?

                                                In contrast, look at https://notriddle-more-interesting.herokuapp.com/ and https://notriddle-more-interesting.herokuapp.com/assets/style.css. Notice how the style sheet contains no media queries at all, and how it does not perform user agent sniffing. Most of the stuff in that stylesheet is done in multiples of em, relying entirely on the browser’s defaults to be sensible, and when you run Mobile Safari with viewport width=device-width, they are. All I have to do beyond that is implement my margins using margin:auto and max-width.

                                                If you want a simpler and cleaner example, and one that I didn’t write, look at http://bettermotherfuckingwebsite.com/. Notice, once again, that it contains no iphone-specific stylesheet (no media queries, no user agent sniffing) and it still looks great on an iphone. The magic incantations are:

                                                • viewport with width=device-width
                                                • the content margins are implemented using max-width, rather than setting specific margins, so the margins grow and shrink as the browser grows and shrinks and the content is never forced into a tiny sliver
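                                                The whole recipe fits in a few lines. Something like the following (the specific widths are illustrative, not taken from either site) is essentially all the “responsive” styling needed, combined with the width=device-width viewport tag:

                                                ```css
                                                /* Illustrative sketch: no media queries, no UA sniffing. */
                                                body {
                                                  max-width: 40em;   /* content never grows wider than this */
                                                  margin: 0 auto;    /* leftover space becomes equal side margins */
                                                  padding: 0 1em;    /* keep text off the edges on narrow screens */
                                                }
                                                ```

                                                On a wide desktop window the margins absorb the extra space; on a phone they shrink to the padding and the text takes the full width.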
                                                1. 1

                                                  I ran into the same issue today, and I remembered this thread. Seems to be related. Can reproduce with a simple stylesheet:

                                                  html { font-size: 16px; background-color: yellow; }
                                                  @media (max-width: 26rem) {
                                                    html { font-size: 12px; background-color: red; }
                                                  }

                                                  During scrolling and clicking links on my iPhone SE it switches between the two colours and font sizes. I had a similar issue a while ago where a { transition: all } caused the layout to continuously “stutter” between the two sizes without any interaction.

                                                  I have <meta name="viewport" content="width=device-width, initial-scale=1">

                                                  I can’t find anything related to this issue right now. For now I just set it to 27rem as I got bored of this, and it seems to work 🤷 I think it’s safe to say that changing things such as text size in media queries is broken on iOS, at least at certain widths, since iOS doesn’t apply them properly and doing too much with them confuses it.

                                                  CC/FYI: @tedu @arnt

                                                  1. 1

                                                    Using http://stephen.io/mediaqueries/#iPhone as a reference:

                                                    • The iPhone has a browser viewport 375 CSS pixels wide in portrait mode.
                                                    • When computed with a 16px root font size, your media query matches viewports of 416px or less (26rem × 16px), so it applies.
                                                    • This causes it to switch to font-size 12px, which changes the media query to 312px, which no longer applies.
                                                      • rem is relative to the font-size of the root tag, which is html in your web site.
                                                    • Once the media query no longer applies, it switches it off, going back to a 16px font size, which means that the media query now applies again.

                                                    This doesn’t seem to have anything to do with the viewport meta tag at all. Can someone try this in mobile safari? I can’t reproduce this in mobile Chrome.

                                                    /* given this font-size, the media query will check for a window that's 12000 CSS pixels wide */
                                                    html { font-size: 120px; background-color: yellow; }
                                                    @media (max-width: 100rem) {
                                                      /* given this font-size, the media query will check for a window that's 100 CSS pixels wide */
                                                      html { font-size: 1px; background-color: red; }
                                                    }

                                                    I think the right way to do this is to either not use rem or em in media queries, or to perform your font-size changes on a child of html, not on html itself.
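A sketch of those two options combined (the 416px figure is just 26rem at the 16px default, fixed in px so the breakpoint no longer moves when the font-size changes):

```css
/* Option 1: state the breakpoint in px instead of rem. */
@media (max-width: 416px) {
  /* Option 2: change the font-size on body rather than html, so
     nothing the query does can feed back into how rem is resolved. */
  body { font-size: 12px; }
}
```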

                                              2. 2

                                                Well, I tried again and I guess it works. As you said, I was trying to render to a smallish desktop sized canvas, then scale to phone screen. It’s easier for me to see what that looks like with my own desktop browser without resizing it down to actual phone size. I’m generally dissatisfied with special mobile styling and wish it did just work more like a tiny desktop. But no point fighting the whole world. Thanks.

                                                1. 1

                                                  You don’t need to use a mobile.

                                                  If you use chrome or firefox, just drag the tab for your site out of the browser window. You’ll get a new, separate window that you can resize. When you make that narrow, both chrome and firefox will reapply all style elements, and you can see how your site will work on mobile.

                                                  It may be a configuration option (I use KDE on linux), but when I resize the browser window, the browser updates the layout as I resize. I can get a quick look at the full range of widths just by moving the mouse slowly right and left for a few seconds.

                                              3. 2

                                                I’m mostly annoyed that the viewport tag is necessary at all. Browsers should just work without custom extensions.

                                              1. 12

                                                Many things about coverage of this annoy me.

                                                “Boeing just wanted to cut corners and avoid retraining” arguments act like this is some sort of completely unprecedented thing.

                                                But the Boeing 757/767 (narrowbody versus widebody) were designed to achieve a common type rating and minimize the cost of having pilots certified to fly either one. The Airbus A330/A340 (twinjet versus quadjet) were designed to achieve a common type rating. The Airbus A380 was designed in a way that probably makes it impossible ever to do a useful cargo conversion, just so it could have the flight deck at the same height as the A330/A340 and reduce the amount of new training required. The Airbus A320neo uses the same “keep the basic airframe, put bigger engines on it” approach as the last two generations of 737s. The Airbus A330neo uses that approach. In fact, of re-engined new-generation aircraft in service today, the Embraer “E2” E-Jets are the only ones I know of that didn’t do this. “Fungible” re-enginings of existing aircraft have been very successful across the aviation industry.

                                                (yes, the A320neo and A330neo do make some modifications other than new engines, such as new wingtip devices, but so did the 737NG and 737 MAX; the point of the Airbus “neo” aircraft, like the new 737s, is larger, more efficient engines on essentially the same airframe; Embraer actually redesigned the wing for the E2s, which is a non-trivial modification)

                                                So to properly go into this, you can’t just hand-wave “Boeing did this to avoid retraining and recertification”, because that’s an incredibly common thing in aviation. You have to go into more detail about what makes the 737 case unique (and there are some unique things, and the linked article touches on them sort of tangentially, but still mostly just scaremongers the “greedy company wanted to avoid costly training and certification” narrative).

                                                Similarly, blaming “software” is disingenuous. Airbus’ product line is fully fly-by-wire. Boeing’s more recent new types (the 777 and 787) are fly-by-wire. This is the direction the industry is going and in fact it’s where the industry already largely is and has been for multiple decades. And the evidence is overwhelming that it’s the right choice: while faulty equipment or software are involved in plenty of incidents and accidents, the eventual report basically always ends up being “compounded by human error”, and “human error” is far and away the most common cause of serious issues. And this isn’t something that can be fixed by emphasizing training and “manual flying” skills, either, as the AA587 crash demonstrated (the airline’s “manual flying” training program was implicated as a cause of the pilot error there).

                                                And problems with MCAS – if indeed MCAS is implicated in the final reports – would not be unprecedented. In 2008 Qantas had two uncommanded pitch-down incidents, on two different Airbus A330s. The root cause was determined to be a fault where the aircraft’s inertial data unit (ADIRU) would get into a corrupted state and treat the altitude value as the angle of attack. That produced over 100 casualties.

                                                And “the crew doesn’t know what the software is doing” isn’t unprecedented either. The AF447 crash (faulty sensor equipment resulting in loss of airspeed data on an Airbus A330) had exactly that problem: the crew never seemed to figure out what the aircraft was doing, and as a result mishandled it right into the ocean, leaving 228 people dead. The root problem – iced-up pitot tubes – was discovered to be more common on A330s and A340s than anyone had expected, and eventually the component was ordered to be replaced. Incidentally, Airbus didn’t, as far as I’m aware, offer even a factory option for the kind of feedback and notification that would have helped the crew of AF447.

                                                I could go on and on and on about this, and have on a few forums, but the basic takeaway is that this situation is not unprecedented. What is unprecedented is the way it’s been handled. There absolutely is something wrong here, and there absolutely should be investigations into the two crashes, and there should be updates and changes in response to the findings of those investigations. But even compared to just ten years ago when the A330 had its troubles, the scale of the narrative-driven fear-mongering coverage is something else. If this had happened in 2008-2009, I don’t think we’d have seen a worldwide grounding. And I worry because the correct resolution here is almost certainly going to be a combination of minor updates to the 737 MAX and increased training for pilots, and the coverage is setting up the public to violently reject that resolution, which would just compound the already-existing tragedy.

                                                1. 3

                                                  All this, and of course, if the software had worked we’d be reading case studies about the marvels of software engineering and how it enables us to overcome pesky laws of physics.

                                                  1. 2

                                                    With respect to the 737 specifically, I have some sympathy for Boeing because they’ve been crunched by customer airlines who make contradictory and at times impossible demands. It’s basically not viable and hasn’t been viable for some time to do the kind of significant redesign or even clean-sheet new type that should have replaced the 737, especially as that market niche is getting crowded now with China doing its own homegrown plane.

                                                    And 737NG pilots had already complained that sometimes that aircraft handled differently than previous generations (the engine-forward move started on the NG). Adding software flight-envelope protection to return the handling to what pilots were used to is not an unreasonable solution. There are certainly some questionable things in the specific implementation Boeing went with, and I expect that they’ll be changed as a result of the two crashes, but all the attempts to spin this as some unprecedented completely awful thing just really really bother me.

                                                    1. 4

                                                      attempts to spin this as some unprecedented completely awful thing just really really bother me.

                                                      How many other cases are there which meet the following conditions:

                                                      1. Technical fault in the plane.
                                                      2. Which happens under fairly common circumstances.
                                                      3. Which is hard to correct by pilots, even when acting according to procedure.
                                                      4. The exact same fault caused two planes to crash, with the loss of everyone on board, within the span of 6 months.

                                                      I don’t know if this scenario is unprecedented, but it is very uncommon, hence the response to this is different from “normal” crashes.

                                                      1. 1

                                                        If you put enough bullet points on there to narrow things, sure, you can make this unique. But all of the basic issues, and even combinations of several of them, alleged in the 737 MAX crashes are precedented. What is unprecedented is the coverage and reactions, as I’ve attempted to demonstrate at some length.

                                                        But for sake of simplicity, can we keep it to one comment chain? I’ve given you a lengthy reply to your other comment.

                                                        1. 2

                                                          If you put enough bullet points on there to narrow things, sure, you can make this unique.

                                                          That’s just handwaving concerns away; but we can keep it simpler by saying “two crashes in a short span of time caused by the same technical fault”.

                                                          1. 1

                                                            two crashes in a short span of time caused by the same technical fault

                                                            Is there actually proof of this yet? the Lion Air Final Report hasn’t even been released.

                                                            1. 1

                                                              It won’t be clear even when the report is released, if it’s like the reports I have read.

                                                              Which didn’t talk about the fault in the singular, but rather went into detail: lucid, insightful analysis of the chain that led to each crash (from when a part was designed or the aircraft was built, until just before the crash), then analysis of the points at which someone might’ve acted differently and prevented the accident, and finally recommendations. Software developers ought to read one; they’re marvellous postmortems, with a fine balance between clarity and blamelessness.

                                                          2. 1

                                                            What is unprecedented is the coverage and reactions, as I’ve attempted to demonstrate at some length.

                                                            That’s basically a public relations issue, not an engineering one.

                                                            I don’t know how Boeing dropped the ball on communications when Airbus (seemingly) didn’t. Maybe it’s because (at least in the Air France case) there was only one airline and one other flight-safety counterparty to have the dialogue with. The optics of a rich American company seemingly knowingly selling faulty goods to poorer nations’ airlines probably have something to do with it.

                                                            For what it’s worth, having read about previous air safety disasters, I think it will take a long time for the “definitive” truth to come out, at which time the outrage machine will have moved on.

                                                            1. 1

                                                              I don’t know how Boeing dropped the ball on communications when Airbus (seemingly) didn’t.

                                                              There’s a longer, darker rant about the politics of the aviation industry that I’m not going to uncork today, including how much an aircraft manufacturer can benefit from a “home turf” investigation of an accident.

                                                              But I do think there’s a qualitative difference between coverage today and coverage a decade ago, and vastly increased audience for outrage-bait, which feasts upon incidents like these. And that’s the biggest thing that annoys me about the 737 MAX case, because so many of the things people are getting outraged about are not just precedented but normal practice (like trying to design the new generation of a plane to minimize retraining and recertification from the old generation).

                                                    2. 2

                                                      In 2008 Qantas had two uncommanded pitch-down incidents, on two different Airbus A330s. The root cause was determined to be a fault where the aircraft’s inertial data unit (ADIRU) would get into a corrupted state and treat the altitude value as the angle of attack. That produced over 100 casualties.

                                                      This incident didn’t produce any casualties, merely injuries due to the occupants not being restrained when the aircraft violently pitched down.

                                                      This also seems like an entirely different class of problem. As I understand the report, this was an intermittent error from combining data from different sensors which only occurred in a set of specific circumstances: two AOA sensor spikes 1.2 seconds apart. A condition that was impossible when the flight computer was first designed, but became possible after an upgrade to the sensors and replacing some hardware components of the flight computer.

                                                      The report says that the flight computer is fundamentally sound, and can adequately deal with many different scenarios; just not with this specific one. This is fundamentally different from the MCAS failure, where it is claimed that the entire approach is wrong.

                                                      Most air crashes are the result of a combination of factors (e.g. technical failure which happens under a rare and specific set of circumstances combined with pilot misjudgement). The Qantas 72 flight suffered from exactly this, but the pilots were able to correct. The MCAS crashes seem to be the result from a technical failure combined with a fairly common set of circumstances (rather than a rare and specific one), and at least two pilot crews have been unable to correct for this.

                                                      Your comparison to Air France 447 also seemed inapt. The flight computer disconnected after it got inconsistent sensor input, and the crew bungled the response to that. While there are certainly improvements that could (and have) been made, it again seems like a different class of problem.

                                                      Both incidents were less severe than two planes fatally crashing in a short span of time due to exactly the same technical root cause. I’m fairly confident that this is not “narrative-driven fear-mongering”, and that exactly the same response to ground all planes would be made in 2008-2009, or if it happened to an Airbus plane.

                                                      the correct resolution here is almost certainly going to be a combination of minor updates to the 737 MAX and increased training for pilots

                                                      Perhaps, but two planes crashed due to what appears to be the exact same root cause in the span of 6 months. Do you really want to wait for the next accident to happen before those updates and training has been done?

                                                      1. 1

                                                        This incident didn’t produce any casualties, merely injuries

                                                        The term “casualties” refers to both fatalities and injuries.

                                                        The report says that the flight computer is fundamentally sound, and can adequately deal with many different scenarios; just not with this specific one. This is fundamentally different from the MCAS failure, where it is claimed that the entire approach is wrong.

                                                        Without litigating this too much, MCAS is “fundamentally sound” in the same way – it handles many different scenarios adequately, just not a particular one that turns out to have happened multiple times. Remember: the A330 ADIRU corruption bug happened multiple times to different aircraft in the space of just a few months, too. The second incident wasn’t as bad because the crew were aware of the first incident and acted more quickly.

                                                        The other relevant point here is that the A330 has multiple redundant ADIRUs and the ability to reject obviously-bad data coming from one of them, but this situation still came up, in the real world, multiple times. While redundancy is better than a single point of failure, and Boeing probably should re-work MCAS to draw from the multiple AoA sensors and add better handling of AoA disagreement, redundancy has still been treated as too much of a panacea in discussions of the 737 MAX. There are no silver-bullet solutions.

                                                        Your comparison to Air France 447 also seemed inapt. The flight computer disconnected after it got inconsistent sensor input, and the crew bungled the response to that. While there are certainly improvements that could (and have) been made, it again seems like a different class of problem.

                                                        While the AF447 crash started with bad airspeed indications and the aircraft switching out of normal-law operation, that was a perfectly survivable situation – as later investigation showed, a bad airspeed indication was a common problem on the A330 because of the pitot tube icing issue. What put AF447 in the water was a combination of automation dependency and lack of any type of feedback to tell the crew what the automation was doing. The same situation – lack of any notice to pilots that MCAS exists or is applying trim – appears in many people’s speculation and complaints about the 737 MAX.

                                                        (the AF447 situation is actually a lot scarier when you dig into it, by the way, because the aircraft was behaving in a manner exactly opposite to what a pilot is trained to expect: it was sounding stall warnings when the pilots put the nose down, and clearing the stall warning when they yanked back on the stick to put the nose up)

                                                        Perhaps, but two planes crashed due to what appears to be the exact same root cause in the span of 6 months. Do you really want to wait for the next accident to happen before those updates and training has been done?

                                                        Two A330s had uncommanded pitch-down incidents, and another went into the water, in the space of less than a year. 200+ fatalities and 100+ injured. Nobody ever called for a worldwide grounding of the type.

                                                        But if you really want something you can sink your teeth into as an analogue, look at the MD-11, and how aspects of its design (including a deliberate choice to shift its center of gravity aft) caused uncommanded-pitch issues as well as handling difficulties during takeoff and landing. Resolutions of those issues involved a combination of software, pilot training, and redesign of the flap/slat handle. The type was never grounded, despite double-digit numbers of incidents and a fatal accident (MU583, the accident on which Michael Crichton’s Airframe was loosely based).

                                                    1. 2

                                                      I’m not making much progress on (big stuff with difficult problems, details elided) so I’ve tried to unblock myself by doing some small things. A one-line PR on a nice opensource thing yesterday, for example.

                                                      One thing I did on… Friday? involved changing an algorithm that decides where an object is put on-screen. Mostly the new object is put near the top of a list, where earlier it would be put further down. Truly a detail, but the result gave me just the sort of deep satisfaction Christopher Alexander has in mind.

                                                      The new location was precisely where expected, and the insertion was timed such that the scrolling doesn’t conflict with user input. It was a small change in code, but it made the list 100% free of small factors that irritate or annoy.

                                                      And that’s what I see with many of the satisfying changes — the result is remarkably free of annoying details. It may be an animation that’s so well-designed that it looks like no change at all, or may be a search that produces the right result so often that one comes to regard it as self-evident, or a compile that doesn’t last too long.

                                                      1. 2

                                                        Replying to myself — I want to post more. The urge to speechify has come upon me.

                                                        Alexander uses the word egoless in a passage I liked very much. Can’t find the passage now, but it was near the beginning of the “Timeless way” book, and it’s central. Software can be egoless when it gets out of the way: when it serves the user and otherwise stays out of the way. And that’s damned well exactly what software should do.

                                                        Most of the software’s ego is visible in the form of irritants, but that’s not all. The software does things; if it’s properly egoless, that applies to what it does, not just to the irritating errors. Much of what the software does can feel facilitating, empowering. That’s egoless, in a way: you do things, and the software serves you by implementing the action you initiate (hopefully without irritating you with petty details).

                                                        A digression: Alexander wrote about cities more than single buildings, IIRC. I live in Munich, quite close to a street called Grafinger Straße, which ended at a railway station for 50-100 years; now it’s being more or less cut off two blocks earlier, so to get to the railway station you turn a little towards the northwest into what used to be Haager Straße. What’s the point? I didn’t see the sense until I saw that they’re also building a big ferris wheel, and it lines up with the street. It’s a dominant structure, difficult to call that egoless, but it lines up with the street and… fits. The street is an axis now and the ferris wheel anchors it, in as egoless a manner as a look-at-me ferris wheel can. The wheel is one with the street. Software can interface well, too. Even a big system doesn’t have to grate against its surroundings.

                                                        1. 1

                                                          Nice to hear about the ferris wheel. It sounds like it makes for a nice “center” in Alexander’s terminology, if it links up with the whole of the street and contributes to the coherence of the neighborhood. Of course such attractions can be delightful for dates, kids, and tourists who get a vantage point and elevate themselves for a while; maybe we ought to build more of them! Sometimes I recommend visitors to my city to take the elevator up to a tall hotel’s sky bar, because taking in a view like that is exhilarating and gives a good perspective to understand the city’s structure. I read in a book about a Tibetan Buddhist tradition that holds that depression can be related to a kind of mental constriction which can be treated with the help of vast views like from a peak or tower. During a period of anxiety a while ago I used to go to that hotel for a cup of tea in the afternoon.

                                                      1. 5

                                                        I just have a python two-liner that picks four random words from /usr/share/dict/words and downcases them. This is sufficiently entropic for my purposes:

                                                        import random
                                                        print(' '.join(w.strip().lower() for w in random.sample(list(open('/usr/share/dict/words')), 4)))
                                                        1. 3

                                                          You can also do this with Bash directly from the terminal:

                                                          for i in `seq 1 10`; do nice_dogs=`shuf -n 5 /usr/share/dict/words | tr '\n' '-'` && echo $nice_dogs$RANDOM; done

                                                          Example output:

                                                          • aggregations-Tahitian-Biden’s-laundries-lagniappe-32369
                                                          • aridity’s-fortification’s-Teri’s-surfboard’s-stinted-12072
                                                          • wick-homophone’s-Leander-triteness’s-Hamlin-7182
                                                          • Seneca’s-flags-ideogram’s-Yosemite’s-meter’s-28483
                                                          • beryllium-rubdowns-showdown-replaceable-Siamese-22326
                                                          • inaugural’s-fan-echelon’s-Devi’s-nightie-3720
                                                          • extortion’s-coolies-highfalutin-reconcilable-spotlight’s-24242
                                                          • Hatsheput-secrete-angioplasty-snacks-ruggedness’s-9776
                                                          • bordering-turds-binomial’s-conclusively-glimpse’s-25920
                                                          • odder-buzzes-hypotenuses-theocracy’s-sportier-16552
                                                          1. 1

                                                            For those copy-pasting:

                                                            for i in `seq 1 10`; do
                                                                nice_dogs=`shuf -n 5 /usr/share/dict/words | tr '\n' '-'` && echo $nice_dogs$RANDOM
                                                            done
                                                          2. 1
                                                            1. 3

                                                              Someone I can’t recall coined the quip that while password policies are stuck in 70s mainframe land, attackers bring the latest machine-learning and statistical-analysis tools.

                                                              The XKCD panel is no longer secure advice, since attackers have learned how to aggressively mutate and statistically analyze large password datasets. On massive data breaches they achieve above 95-98% recovery rates from hashed passwords.

                                                              The state of the art is that if a password is a combination of human-meaningful words plus some additional mutation and a few random characters on top, it’s already within reach of the people reversing hashes. (That is the threat model – credential stuffing – not someone trying to brute-force a login.)

                                                              This is why security researchers are pushing for password managers that generate non-human-meaningful 18+ character random passwords per service.
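A minimal sketch of such a generator, using Python’s standard secrets module (the 20-character length and the alphabet here are arbitrary choices for illustration, not anything the researchers prescribe):

```python
import math
import secrets
import string

# Printable, easy-to-handle alphabet: letters, digits, punctuation (94 symbols)
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=20):
    # Each character is drawn independently from a CSPRNG, so the result
    # carries length * log2(len(ALPHABET)) bits of entropy.
    return ''.join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())
print(round(20 * math.log2(len(ALPHABET)), 1))  # ≈ 131.1 bits
```

Because the output is not derived from any human-meaningful source, the mutation-rule attacks described above get no purchase on it; only raw entropy matters.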

                                                              1. 2

                                                                Can you explain this more? How does statistical analysis help with the entropy of words more than with the entropy of characters? Remember (I’m sure you do, but for onlookers) that in the XKCD panel we’re not mistakenly counting each letter as entropically meaningful, but only the words versus the size of a reasonable dictionary. So a dictionary attack is the assumed vector; I’m curious what new statistical tools improve on this attack.

                                                                1. 1

                                                                  I also would like to see this explained. Regarding password managers and credential stuffing: using a 6-word passphrase (the recommended length for current required entropy levels) doesn’t mean you can keep using the same passphrase for every service. You still need a unique correct-horse-battery-staple-defection-epilogue passphrase for each service. But when you need to type a password from your password manager and can’t use autotype, it’s easier to correctly type a 6-word passphrase than PIXROU8i+00((AJM4s$$.
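For comparison, the entropy of the two styles can be computed directly (assuming a standard 7776-word diceware list and the 94 printable ASCII characters excluding space):

```python
import math

DICEWARE_WORDS = 7776   # standard diceware list size
ALPHABET_SIZE = 94      # printable ASCII, space excluded

passphrase_bits = 6 * math.log2(DICEWARE_WORDS)   # six random words
password_bits = 20 * math.log2(ALPHABET_SIZE)     # twenty random characters

print(round(passphrase_bits, 1))  # 77.5
print(round(password_bits, 1))    # 131.1
```

So the 6-word passphrase trades some entropy for typeability, but both are far beyond what mutation-rule cracking can reach, provided the words really were chosen at random.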

                                                                  1. 5

                                                                    A couple of things worth mentioning here:

                                                                    • if you’re generating passwords based on a strongly random source, of course it doesn’t matter whether you’re using the randomness to generate random characters or to select random words out of a large dictionary; what matters is the overall entropy.
                                                                    • Contrary to the small-print text in the xkcd comic, offline cracking attacks are the ones to worry about, not online guesses against a service.
                                                                    • How much entropy is enough? That depends on the way a particular service is storing the passwords. If you’re using a password manager then it’s a moot point; you might as well use a strong 20+ character autogenerated password per service. If you’re using a memorable passphrase without a password manager, though, the lowest-common-denominator hashing/salting method across all services the password is shared with is the one you want to protect against. It’s unlikely that you’ll ever get hard confirmation from any service about their password storage procedures, especially from the services most at risk from data breaches (the ones not too much on top of things). Recent GPU-based password-cracking benchmarks range from 100 Gigahashes/s per GPU down to a couple of thousand hashes/s for more hardened password-storage methods. If you can attack at 100 Gh/s instead of a thousand guesses per second, the correct-horse-battery-staple example would take 175 seconds to find, assuming you know the dictionary it was generated from.
                                                                    • This was all assuming that people use strong randomness as a source of their passwords. Most people still don’t do that, and password cracking tools learned long ago to account for all the clever tricks anyone could ever think of to generate passwords from various dictionary or other methods (and therefore vastly narrow the search space).
                                                                    • The biggest problem with the xkcd comic, therefore, is that it suggests you can ever remember passwords. I guess it’s more important that you use a different, non-related password per service than the individual entropy levels of the passwords, but to do the former you already need a password manager, so there’s no point in using weak passwords. I agree with the narrow point, though, that using a long passphrase composed from common words might be easier to type from a phone.
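
                                                                    The arithmetic in the bullets above is easy to check. This sketch uses xkcd’s assumption of an ~11-bits-per-word (2048-word) dictionary for the 44-bit figure, and the standard 7776-word Diceware list and 94 printable ASCII characters for the comparison:

                                                                    ```python
                                                                    import math

                                                                    def passphrase_bits(words: int, dictionary_size: int) -> float:
                                                                        """Entropy in bits of `words` words drawn uniformly from a dictionary."""
                                                                        return words * math.log2(dictionary_size)

                                                                    def password_bits(length: int, alphabet_size: int) -> float:
                                                                        """Entropy in bits of `length` characters drawn uniformly from an alphabet."""
                                                                        return length * math.log2(alphabet_size)

                                                                    # xkcd assumed ~11 bits/word, so 4 words = 44 bits.
                                                                    xkcd = passphrase_bits(4, 2048)        # 44.0 bits

                                                                    # Time to exhaust the full 44-bit keyspace at 100 GH/s: ~176 seconds,
                                                                    # matching the figure quoted above.
                                                                    seconds = 2 ** xkcd / 100e9

                                                                    # A 6-word Diceware passphrase vs. 20 random printable ASCII characters.
                                                                    diceware6 = passphrase_bits(6, 7776)   # ~77.5 bits
                                                                    random20 = password_bits(20, 94)       # ~131 bits
                                                                    ```

                                                                    The 20-character random password wins on raw entropy, which is why it doesn’t matter which style your manager generates; the passphrase only needs to win on typeability.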
                                                                    1. 3

                                                                      I should have added – you need to use a sufficiently long, memorable passphrase for your password manager, so there is always one passphrase that has to be memorable. You will probably have to memorize your workstation/laptop passphrase, too, unless you want to keep looking it up on another device (your phone), since the local password manager is not available while you are logged out/screen locked. And it’s likely to be worthwhile to have a memorable passphrase for some of your most used services, such as email. So that’s maybe three or four passphrases that you need/want to memorize, and which should therefore be diceware-or-similar.

                                                                      For everything else, I don’t really care whether I’m using a diceware passphrase or a random-characters passphrase, since it’s in my password manager. But even then, if the password manager offers the option to generate diceware passphrases, I will use those, because they are easier to type and visually verify.

                                                                2. 1

                                                                  the threat model, credential stuffing

                                                                  I am not familiar with this concept; what is the definition?

                                                                  1. 4

                                                                    If you somehow learn that github user ken has password p3ssw4rd, try that username and password on every site you’ve heard of, stupidly. Most people reuse passwords, see? So you stuff the github credentials you learned into facebook’s login form, linkedin’s, every service’s login form.

                                                                    1. 4

                                                                      Data breaches are now so common and wide-ranging (just check the billions of records in https://haveibeenpwned.com) that if you’re not using a password manager with individualized passwords for each service, then the likelihood is very high that the few passwords people inevitably reuse across many services have already been part of a data breach.

                                                                      So nefarious people just take the data dumps with the cracked passwords (or email/password combinations) and try to log in to other services with the same username/password combination. With a quite high success rate.

                                                                      To combat this there are two actions people can take:

                                                                      • use a different password per service (only really feasible with a password manager)
                                                                      • use a strongly random, long password that resists offline cracking (only really feasible with a password manager)
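
                                                                      A third, complementary step is to check whether a password has already appeared in a breach. haveibeenpwned’s Pwned Passwords range API is designed so you never send the password itself: only the first five hex characters of its SHA-1 hash leave your machine, and the suffix comparison happens locally. A minimal sketch (error handling and rate-limit courtesy omitted):

                                                                      ```python
                                                                      import hashlib
                                                                      import urllib.request

                                                                      def sha1_prefix_suffix(password: str) -> tuple:
                                                                          """Split the uppercase SHA-1 hex digest into the 5-char prefix
                                                                          sent to the API and the 35-char suffix compared locally."""
                                                                          digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
                                                                          return digest[:5], digest[5:]

                                                                      def breach_count(password: str) -> int:
                                                                          """Return how many times `password` appears in known breach
                                                                          dumps (0 if never seen). Only the hash prefix is transmitted."""
                                                                          prefix, suffix = sha1_prefix_suffix(password)
                                                                          url = f"https://api.pwnedpasswords.com/range/{prefix}"
                                                                          with urllib.request.urlopen(url) as resp:
                                                                              # Each response line is "SUFFIX:COUNT".
                                                                              for line in resp.read().decode().splitlines():
                                                                                  candidate, _, count = line.partition(":")
                                                                                  if candidate == suffix:
                                                                                      return int(count)
                                                                          return 0
                                                                      ```

                                                                      For example, `breach_count("password")` returns a number in the millions, which is exactly why credential stuffing works so well.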
                                                                      1. 1

                                                                        Thanks! I am familiar with the concept, but the specific term was unknown to me.

                                                                        I personally use a password manager, and I think one should probably be integrated in services like Google or Apple IDs. Perhaps banks can include a subscription for one as part of the fee for having an account - it would probably help a lot with fraud, so could be a net positive for them.

                                                                    2. 1

                                                                      This is true, and I use the above to generate memorable nonsense for the answers to security questions, and use my password manager’s maximally entropic random generator for everything else.

                                                                    3. 2

                                                                      Yep! And it has the added advantage of being much easier to type on e.g. a phone virtual keyboard than a shorter but symbol-heavy password.

                                                                      1. 1

                                                                        Maybe even shorter: jot -rcs '' 20 33 126


                                                                        1. 2

                                                                          Not really comparable, though. I’d do something like this from the shell (fish syntax):

                                                                          % echo (shuf -n4 < /usr/share/dict/words | tr '[A-Z]' '[a-z]' | tr '\n' ' ')
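
                                                                          One caveat with the `shuf` pipeline: GNU `shuf` doesn’t promise a cryptographically strong source unless you point it at one (e.g. `--random-source=/dev/urandom`). A Python equivalent using the `secrets` module avoids that question; the dictionary path below is the usual Linux location and may differ per system:

                                                                          ```python
                                                                          import secrets

                                                                          def diceware(path: str = "/usr/share/dict/words", n: int = 4) -> str:
                                                                              """Pick `n` lowercase words with a CSPRNG, mirroring the shuf pipeline."""
                                                                              with open(path) as f:
                                                                                  # dict/words is noisy: drop possessives and very short entries.
                                                                                  words = [w.strip().lower() for w in f
                                                                                           if w.strip().isalpha() and len(w.strip()) > 2]
                                                                              return " ".join(secrets.choice(words) for _ in range(n))
                                                                          ```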
                                                                          1. 1

                                                                            Out of the current top 10 distros, Debian is the only one that has “jot” available, and there it’s called “athena-jot”, so this suggestion is not helpful.

                                                                            1. 3

                                                                              Didn’t see the Linux tag, my bad.

                                                                    1. 3

                                                                      I ordered from a linux/bsd shop, they do exist, told them I wanted silence and lots of screens, and got a tiny Shuttle thing that Just Worked. I have three screens connected and the PC itself sits on a shelf, quite far from my ears.

                                                                      The combination “linux/bsd shop” and “shuttle” has been great.

                                                                      1. 2

                                                                        Your link is broken btw

                                                                        1. 2

                                                                          I guessed a single-letter typo. Probably meant ixsoft.

                                                                          1. 1

                                                                            Right. Sorry.

                                                                      1. 12

                                                                        Author here, happy to be thoroughly corrected on German or linguistics in general.

                                                                        1. 2

                                                                          Not a correction, but you may want to learn the reason why prepositions are so difficult: they aren’t Indo-European. Most of the nouns and verbs we use have some root in Indo-European, but the prepositions were mostly (entirely?) created after the great divisions, so there’s less reason for them to pair nicely with prepositions in other Indo-European languages.

                                                                          1. 2

                                                                            The claim that “prepositions aren’t indoeuropean” is poorly-defined and incorrect in most reasonable more-specific senses. Many prepositions in English, German, and in other modern Indo-European languages are straightforwardly traceable to Proto-Indo-European roots. The English prepositions off and of and their German cognate ab, for instance, are reflexes of the reconstructed PIE root *apo, which also yields Greek απο and Latin ab (and then Spanish a). The common English preposition in, which is cognate with similarly-pronounced German in and Latin in (and then Spanish en) are reflexes of a PIE root *en meaning, more or less, “in”.

                                                                            It’s true that not every single preposition in English or any other modern Indo-European language is traceable to a PIE root, and that some roots that yield prepositions in modern IE languages were not necessarily prepositions in PIE (if PIE even had a distinct syntactic category of prepositions), and that some prepositions in English or German are cognate with morphemes in other Indo-European languages that are not necessarily prepositions (German um for instance is cognate with the Latinate prefix ambi-, which is not a preposition in Latin). But I don’t think any of these facts are inconsistent with the claim that prepositions in modern Indo-European languages by and large are shared Indo-European vocabulary, traceable to the proto-language.

                                                                            1. 1

                                                                              I took a random set of prepositions just now (the German accusative prepositions durch für gegen ohne um, for no particular reason other than having the Kluge dictionary on a shelf in front of me) and looked them up. They are all traceable a little over a thousand years back; one has much older roots and another may have, but those much older roots don’t seem to be Indo-European prepositions.

                                                                              When you write “not every… IE root”, are you suggesting that most prepositions are traceable to an IE preposition?

                                                                            2. 1

                                                                              Wow! Super interesting, thank you for the info.

                                                                              1. 2

                                                                                I saw this really cool diagram once somewhere with prepositions in different languages, including Finnish which doesn’t have prepositions. The idea was, iirc, to demonstrate conceptualization.

                                                                                Couldn’t find it now, but this one on dativ/akkusativ is pretty neat too ;)

                                                                          1. 8

                                                                            One possible avenue is to hire someone to do that.

                                                                            Perhaps you know someone who can’t or won’t accept the job, but would be available for a bit of highly-paid consulting — just enough hours to help you hire the right person?

                                                                            1. 1

                                                                              Thanks for the reply. Yes, we have considered this, and most likely will be going forward with it. It makes sense: as someone with ~4 years of experience, it will be hard for me to gauge someone with twice my experience.

                                                                            1. 13

                                                                              I hate telling people about this.

                                                                              But. I’ve been compiling Java to native code, a little differently. I think I’ll get perhaps a 30% speed advantage over C when targeting today’s usual superscalar processors, a GIANT size advantage over OpenJDK, and a certain predictability/reliability. No GC pauses, no JIT pauses, no warmup, and a variety of nice things you can tell the compiler to do for you. “Make sure that this method and whatever it calls obeys …”

                                                                              A couple of months ago I managed to run a simple program with a complex cyclic data structure. Automatic memory management without runtime GC or rust-like ownership restrictions. That felt good.

                                                                              “Don’t hesitate to champion”. Well, I’m quite some way from a release still, so there’s not much to champion. But I’d like to hear “oh, cool” and things like that. It’s a bit of a slog and today I’d appreciate some encouragement from random strangers ;)

                                                                              1. 2

                                                                                That’s pretty damn cool man.

                                                                                I’ve played around a bit with the same Java-to-native-code idea, although I was targeting small MCUs, which brings a whole new set of constraints. But I quickly ran into the issue of how you talk to hardware from Java… and with the concept I used, “Hardware Near Objects”, it ended up being silly.

                                                                                But it was a good learning experience - are you targeting anything other than a learning experience from your project?

                                                                                1. 2

                                                                                  Well, thanks.

                                                                                  Improving the world and earning money in the process ;)

                                                                                  I don’t care about small size. Once upon a time it was possible to fill one typical unit of storage with the output of a large team of programmers, but today RAM, SSD and spinning rust have outgrown software development output. So I generate comparatively large executables and don’t worry about it. Most of the executables aren’t even paged into RAM, and that’s okay. I keep an eye on L1/2/3 cache friendliness and ignore the other target sizes.

                                                                                  1. 1

                                                                                    Improving the world AND earning money - I don’t think you can wish for much more ;-)

                                                                                    1. 10

                                                                                      Well, it can happen ;)

                                                                                      Many years ago, one day after having visited the dentist and needing another appointment, I noticed that the assistant used some undocumented keyboard shortcuts to make an appointment, and moved around the GUI’s dialogs and forms very fast. I also recognised the GUI library (Qt). So I said, “oh, you use the arrow keys with that program instead of the mouse?” “yes, that’s great” she said. I wrote those shortcuts. It may not be world peace but it was a nice feeling.

                                                                                  2. 1

                                                                                    Most of what would help you has been commercial, under the banner of Real-Time Java. Might be something in those links talking about hardware interfaces. I don’t remember anything open source other than JOP, which was a hardware/software mix.

                                                                                    1. 2

                                                                                      Hey Nick. Thanks for the reply.

                                                                                      I know about Real Time Java, quite a bit of the implementation I did followed the JSR. I’m long done with that project though, it was a project I did back when I was still in university ;-)

                                                                                      1. 1

                                                                                        Oh that’s cool. Anything you really liked or hated about that? Anything worth using today?

                                                                                        1. 2

                                                                                          The main issue I had with implementing it was a mismatch of models. When you’re doing real-time Java on small MCUs you need to think about allocations, and the fact that you need to box primitives requires that you essentially never use the built-in classes, and instead either make your own that conform to the interfaces or find a third-party library.

                                                                                          Another issue was interacting with hardware registers, I couldn’t find a Java way of interacting with them, so I ended up using a deeply unsatisfactory way instead.

                                                                                          I found that it was deeply unsatisfactory because you were essentially poking the bits by yourself, which Java isn’t good at (too high level in general) and it doesn’t look like Java either.

                                                                                          It was a very useful learning experience though :-)

                                                                                          1. 1

                                                                                            Thanks for the insightful reply. It sounds like most of this could be done by using a suitable abstraction for Java which had a native implementation at the JVM level that handled the messy details. I say at the JVM level since it’s doing things like managing memory and threads, so it might need to know when low-level, risky things are happening that might affect thread or memory state. The combo would let you stay close to the Java model.

                                                                                            Might even be what the RT JVMs were already doing. Regardless, what do you think about that?

                                                                                            1. 2

                                                                                              Without sounding too pessimistic about the general skillsets of developers, I think it’s fair to say that doing all these things to Java effectively results in having to write a specific subset of Java, which the general population of Java developers is not trained in. That seems to be one of the issues that, at least commercially, has led to the time spent on RTJVMs: can we take commodity Java developers and turn them into real-time developers?

                                                                                              There are some problems with the way the JVM works internally (generics are a compile-time construct and not actually built into the JVM) that keep this from being effective, and they are not that easy to solve without changing even more characteristics of how Java is written, again effectively invalidating quite a bit of existing Java knowledge.

                                                                                              1. 1

                                                                                                Interesting. Well, the market tends to support what you’re saying since they were all custom JVM’s with their own way of doing things. People did need to learn all that. They were usually subsets, too. I don’t think subsets would be hard to learn. They would be forced to solve old problems in different ways using those subsets. Maybe loss in productivity and code reuse since their favorite libraries using non-real-time features wouldn’t work. So many fewer frameworks, too. The pointy-haired bosses would freak out about that.

                                                                                  3. 1

                                                                                    That’s pretty damn exciting. Java is one of those languages that the hipsters love to hate, ignoring the fact that a giant chunk of the modern computing world would simply fall over if it ever stopped working :)

                                                                                    1. 2

                                                                                      Yeah, but that’s true of COBOL, too. I still say ditch it if you can. Decimal support is one thing modern apps, especially those handling money, can learn from it. Likewise, new stuff can learn lessons from enterprise and embedded apps in Java. I’d just ignore the desktop side, though. ;)

                                                                                      1. 1

                                                                                        OK. What other type safe object oriented performant language with first class tooling and one of the most incredibly rich library ecosystems in existence would you suggest?

                                                                                        (If you say something silly like C I’mma gonna laugh :) )

                                                                                        1. 1

                                                                                          Sounds like you’re kind of forcing it to be an OOP, JVM language with heavy tooling. If on JVM, I’d say try modern languages developed after Java like Scala or Clojure. If it can call Java libraries, you get library ecosystem. If on .NET, any modern language with tool support that runs on its VM that can call its libraries.

                                                                                          If native, Pascal with Delphi and Ada with numerous IDEs were alternatives, both safer and lighter than .NET/JVM, with Delphi having fast iterations and Ada probably more safety and runtime speed. Both had a C FFI for C libraries, which should be compiled with all the error-finding options. Currently, I’d say Ada/SPARK with GNAT Pro or Rust with IntelliJ, with an eye on Nim when its ecosystem improves. They’ll give you safe, maintainable code, with Ada supported for decades so far.

                                                                                          If ease of learning and libraries are priority, Go language is easy to learn, tons of libraries, fast iterations, and performs well enough that few complain. Again, with IntelliJ IDE. I don’t use IntelliJ: just mentioning it since you wanted first-class tooling, supports both languages, and people online write good things about it.

                                                                                          “(If you say something silly like C I’mma gonna laugh :) )”

                                                                                          Making C as safe as Rust for arbitrary code would require running it through SoftBound+CETS plus maybe Data-Flow Integrity plus a safe concurrency scheme. Hard to be sure, since I can’t remember what risks each tool counters at the moment. I am sure the combined slowdown of (wild guess) 300+% from all the checks would, if running Electron apps, lead to a frame-by-frame, mouse-shadowing experience that would definitely leave you laughing. Then I’d nudge you, saying maybe it would be easier to just further optimize safe Rust instead of securing unsafe, legacy C. ;)

                                                                                          1. 3

                                                                                            I don’t see golang as being particularly easy to learn. I find the level of abstraction it presents to be frustratingly low. As a programmer I prefer more abstraction, not less. But this is why making firm statements about programming languages gets tricky :)

                                                                                            And are you really honestly suggesting that large businesses with huge teams creating projects that are many many KLOC should try to use Clojure? I mean, I love the language, but I think you’re asking too much of the average engineer.

                                                                                            And, again, Tooling. You may not like Java, but I challenge you to find a similar language with IDE support as rich or mature. I know the hipster engineer maxim is that IDEs are for chumps, and maybe they are, but the “cold dark matter programmers” in their thousands out there grinding away writing code day to day would disagree.

                                                                                          2. 1

                                                                                            You’re just listing most of the reasons why java won ;) Of course java has to be the answer when you load the dice like that.

                                                                                            1. 1

                                                                                              I don’t see it as loading the dice. Different programming languages are suited to solving different problems. Sure, you can implement a stock trading system in Bash (It’s Turing complete, right? :) but why would you WANT to?

                                                                                              1. 1

                                                                                                That’s an extreme example. Java was already inferior to languages like Common Lisp with their IDEs when it was introduced into the market. Its success had nothing to do with it being the right tool for the job. It was actually overly complicated, used more memory, ran up to 15x slower than native competition, and so on. Its success was almost entirely due to a massive amount of money spent by Sun Microsystems on marketing it to enterprises and colleges. This put it everywhere. The ecosystem followed. The tooling and libraries improved over time, making it more useful.

                                                                                                As a language and IDE, though, it’s still slower and less powerful than a CL implementation. Actually, a number of things people were trying to build whole toolchains for in Java had already been done with a macro library in an existing CL toolchain. Maybe throw in Smalltalk, too, given that what few studies were done showed their productivity ran circles around all the non-Lisp languages: easier to learn, maintain, change, and so on. I do wonder what the productivity rate would be with Java’s refactoring tools vs whatever Smalltalk IDEs have. I know Smalltalk could catch up quickly, though, since it looks to be a simpler language.

                                                                                                1. 1

                                                                                                  Again I am a HUGE fan of Lisp and Lisp-like languages. And I even have a tremendous amount of respect for Lisp-IDE like environments (which are usually built around/in emacs.)

                                                                                                  But I don’t think anyone would seriously argue that such environments had the depth and richness of features that something like IntelliJ has.

                                                                                                  I don’t LIKE Java, but there are reasons it’s used in so many places that people ignore but that would cause the world to stop spinning on its axis if they failed.

                                                                                                  1. 1

                                                                                                    “argue that such environments”

                                                                                                    Those aren’t the LISP environments I compare a commercial offering to. You’d have to look at the commercial offerings, like LispWorks and Allegro, to make a comparison. AllegroCache looked like it was, by itself, a justifiable reason to use that product, given all the object-to-relational crap people were buying to work around Java and RDBMS weaknesses. They were doing that kind of stuff when folks were fighting to get Java to handle server workloads well. Unlike Java, you can live-update these kinds of systems, too.

                                                                                                    “but there are reasons it’s used in so many places that people ignore but that would cause the world to stop spinning on its axis if they failed.”

                                                                                                    I agree it’s important given it’s entrenched. I also agree there can be good reasons to use it. I think that overstates the case, though, where you’ve jumped from there’s good reasons for some to use it to they had something to do with widespread, legacy adoption. Sun’s marketing work plus enterprise buyers and college admins caused almost all of the latter. Now, we’re stuck with it like we’re stuck with COBOL in business, Fortran in HPC, VB in some Microsoft shops, 4GL’s in some data-driven shops, and so on. Companies are stuck with it since the transition costs are too high and they prefer maximizing ROI on existing investments. Has nothing to do with the language features.

                                                                                                    Heck, the people making the decision to force all developers to use Java usually don’t even know how to program. They’re managers and CIOs. How much could language design have factored in if they didn’t know the language? They just read in CIO magazine that .NET and Java were the enterprise languages with all the commoditized talent and tooling. Some truth to that. Then, they went with it. Or they inherited it from others who made a similar decision, with no intention to switch given they have to keep costs low to look good to their superiors. Again, that works against considering any language design but the existing one [which might be crap].

                                                                                                    1. 2

                                                                                                      I’ve seen LispWorks and it’s definitely a nice IDE. Point taken.

                                                                                                    2. 1

                                                                                                      And hey, I’m not saying Java + an IDE like IntelliJ + its libraries aren’t an amazing combination. If you can get that, Java might be a great language to use for a lot of use cases. That’s all working despite its weaknesses due to massive investment over time by companies. The IntelliJ people cranked out IDE’s for Rust and Go in no time. I think they internally use metaprogramming that Java doesn’t support for some of what they do. I’m just arguing the language itself should be avoided wherever possible if you don’t have that IDE money or can do the job without it.

                                                                                                      Also, Oracle owns it now. They sue all kinds of people. They also say anything using their API’s is a derivative work. I won’t touch anything they own for legal liability and to avoid giving them more opportunities to sue. Same with Microsoft. Not everyone is worried about multi-billion dollar lawsuits, though. :)

                                                                                                      1. 3

                                                                                                        You’re allowed to not like Java, but you need to come up with better arguments :)

                                                                                                        Oracle? Solution? OpenJDK. It’s totally usable for pretty much everything now, and in fact so much so that my employer has created an enterprise-supported version of it, Corretto, for companies that want to build things in Java with LTS - e.g. someone to yell at :)

                                                                                                        The reason I keep battling back is because a LOT of people (Not you in this case.) hate Java for really, REALLY stupid reasons. Note that I’m not talking about not preferring to use it, because that’s another story, but saying things like you just did about “Avoid it whenever possible.”

                                                                                                        Because yeah, Common Lisp is amazing, no doubt. My first gig in the industry was with a small company called ICAD, which was built upon Symbolics Genera but then later ported to Franz Allegro CL. I was watching convex wing surfaces of the Boeing 7X7 (I can’t remember whether it was 6, 7, or 8 :) that was in development, because our Lisp-based CAD system could model curves in ways traditional CAD systems would burst a blood vessel trying to accomplish.

                                                                                                        But many, MANY engineers find Lisp very hard to wrap their minds around. And much as Java is full of boilerplate (the IDE can handle that) and has design flaws (anonymous inner classes, anyone? Fixed in Java 7 or 8, I think :), there is a HUGE programmer pool for Java and a positively freaking GINORMOUS third-party ecosystem that’s all basically plug and play. As amazing as Common Lisp is, you can’t say that across CL implementations, because the Common Lisp standard was at least kinda fuzzy around the edges WRT implementation detail.

                                                                                                        It’s gross, sure, but it’s the right tool for the job for a lot of very complex business cases, and some technical use cases as well.

                                                                                                        1. 2

                                                                                                          What I find strangest is how people focus on languages’ weak aspects.

                                                                                                          The weak parts matter. I remember a bug where I needed to read two giant blobs of IDE-generated boilerplate line by line. That wasn’t a fun bug, and I dare say I ranted a bit about autogenerated builders afterwards.

                                                                                                          But focusing on that strikes me as wrong. We want to do things, so the focus should be on the positive traits — the traits that make something suitable. I chose Java for this thing because (omitting nontechnical stuff) it’s sort of low-level (it’s the kind of language where you can look at a method’s source code and estimate its number of register spills), it has a rich ecosystem, and it has a very hard border downwards (no sizeof, for example, and no insight into memory layout). These are positive traits in my context; they are what enable the work.

                                                                                                          The negative traits are what we should complain about over beer. Dragging them into the spotlight during the working day is… may I say morally wrong?

                                                                                        2. 1

                                                                                          compiling java to native code, a little differently. I think I’ll get perhaps 30% speed performance over C when targeting today’s usual superscalar processors, a GIANT size advantage over the openjvm, and a certain predictability/reliability. No GC pauses, no JIT pauses, no warmup.

                                                                                          So, you made a Java-to-Rust transpiler? You’re brilliant!

                                                                                          1. 1

                                                                                            Is this a joke or is it fanboyism? I can’t make up my mind.

                                                                                            FWIW, making a language suitable for writing by humans and editing by humans is strongly at odds with making it suitable for writing by compilers and simple to reason about using algorithms. Fanboys might think that Rust’s general qualities make it suitable for every role, but fanboys are wrong now and then, even Rust fanboys.

                                                                                            You may now classify me as either “unable to recognise a joke” or “not a Rust fan”. I’m guilty of the first, mixed on the second.

                                                                                            1. 1

                                                                                              I was one of the main dudes countering the Rust Evangelism Strike Force here and on HN. One of them made a Twitter account with an official response. You can bet I was joking. There’s some truth built in, like most of my jokes, in that at least one person advocated Rust as a compiler target (vs C or LLVM) to take advantage of its type system (among other things). Mostly joking, though.

                                                                                              The serious version of that idea was when I thought about mocking dynamic languages like Python in a high-performance CL or Scheme to leverage all the work put into optimizing their compilers. Also, get their powerful macros as a language extension. Generating such a mapping might be easier than me figuring out all the optimizations for a new language. How’s that sound?

                                                                                          2. 1
                                                                                            Automatic memory management without runtime GC or rust-like ownership restrictions.

                                                                                            That sound interesting – what technique are you using?

                                                                                            1. 2

                                                                                              Several known techniques and some new insight. This part is the longest, most complicated story in the entire software.

                                                                                              1. 2

                                                                                                Why so mysterious? Can you not share the details with us?

                                                                                                1. 3

                                                                                                  I didn’t mean to be mysterious. But I don’t think I can explain, either. (And perhaps I shouldn’t, this might be patent stuff and for once a patent I wouldn’t scoff at.) I’ve tried to explain this before, and failed quite frustratingly. The people I talked to ended up with either nothing or misunderstanding. Perhaps future-me will be able to give a useful four-paragraph overview, but I’m not able to do that now.

                                                                                                  It really is complicated. Sorry.

                                                                                                  1. 1

                                                                                                    I lost all interest when you brought up patents. Good luck with your stuff.

                                                                                                    1. 1

                                                                                                      Doing anything to do with java means that Oracle’s opinions and behaviour matter. You don’t have to care whether I paint myself into any corner, but I care.

                                                                                                      1. 1

                                                                                                        Are you planning to patent it to license for money? If so, then you need to remain secret. If not, then anyone else can patent it or sue you for it once it’s in public as words or an app. They can also sue you right now for any number of things. The good news is they tend to just sue people making piles of money, since leeching off them is the goal. You’re nowhere near the six figures and up on this project where Oracle would consider suing you.

                                                                                                        So, you can probably describe it somewhere in any scenario where you aren’t patenting it. By the way, if you’re trying to prevent others patenting it, I think you can make it prior art by describing it in a journal like those of the ACM and IEEE. They usually let you keep a personal copy on your website. Otherwise, you sneak a draft out that’s Creative Commons with a second, final copy going to them. Lots of folks are doing that. Ask a lawyer, though.

                                                                                            2. 1

                                                                                              Oh, cool :)

                                                                                              Are you following along the recent JDK developments, “syntax”/feature wise?

                                                                                              1. 2

                                                                                                Yes. With variable lag.

                                                                                            1. 2

                                                                                              So…, uhhh, write your own crypto with unique prime generation? :)

                                                                                              Also, it’s a real good thing we put our public RSA keys in key servers to make this so easy for malicious actors.

                                                                                              1. 2

                                                                                                it’s a real good thing we put our public RSA keys in key servers

                                                                                                Uh, it was, at the time? The ideal was that I could look up your PK in a keyserver and send you an encrypted message without having to go through the rigmarole of first contacting you in plaintext and arranging a key exchange.

                                                                                                And public keyservers could let you periodically update your keys with newer versions (maybe with different RNGs!) and still let people communicate with you with encryption.

                                                                                                I’m betting that the majority of the keys that were broken had been used to encrypt one or two test messages, at most. At least that’s my experience from the heady days of PGP and creating keys.

                                                                                                1. 1

                                                                                                  I should have worn my sarcasm hat.

                                                                                                  1. 1

                                                                                                    Yep, sorry I missed that!

                                                                                                2. 1

                                                                                                  Right. And the way to generate unique primes is to have lots of entropy, it’s as simple as that. You’re trying to choose a number at random from a very, very large set of candidates, of course you need a large pool of entropy.
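
The point above can be sketched in a few lines. This is a minimal illustration (not anyone’s production code): draw RSA prime candidates from the OS CSPRNG via Python’s `secrets` module, then filter with a Miller–Rabin probable-prime test. The candidate space for a 1024-bit prime is so vast that duplicate primes across users are only plausible when the entropy source is broken.

```python
import secrets

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2   # random base in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                    # a is a witness: n is composite
    return True

def random_prime(bits=1024):
    """Draw odd candidates from the OS CSPRNG until one passes Miller-Rabin."""
    while True:
        c = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # force top bit and oddness
        if is_probable_prime(c):
            return c
```

The `secrets` module is the load-bearing part here: swap it for a poorly seeded PRNG and two machines can emit the same “random” prime, which is exactly the shared-factor attack the keyserver scans found.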

                                                                                                  I bought some entropykeys while they were still being made. I don’t know anyone who makes a similar product now.

                                                                                                  1. 2

                                                                                                    I bought some entropykeys while they were still being made. I don’t know anyone who makes a similar product now.

                                                                                                    There’s a couple of hardware RNGs on crowdsupply, like: https://www.crowdsupply.com/13-37/infinite-noise-trng

                                                                                                    I might get one now that I understand why there’s such concern about the RNG…

                                                                                                    1. 2

                                                                                                      Note that some modern CPUs have real RNGs. I don’t trust those completely, but they do mean that if your CPU is modern and you run linux, then the worst case isn’t even nearly as bad as it was a decade ago.
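
To make the “worst case isn’t as bad” point concrete: on a modern Linux system the kernel mixes hardware RNG output (e.g. RDRAND, where present) into its entropy pool along with interrupt timing and other sources, so an application never has to trust the CPU RNG alone. A minimal sketch of what application code should do, assuming only the standard library:

```python
import os

# os.urandom() reads from the kernel CSPRNG (getrandom() / /dev/urandom),
# which mixes hardware RNG output with other entropy sources. Applications
# should use this rather than talking to RDRAND or a USB dongle directly.
key = os.urandom(32)   # 256 bits of key material
```

Even if one input (say, the CPU RNG) were backdoored, the pool’s other sources still have to be predicted for the output to be, which is a much better failure mode than the single-source embedded devices behind the weak-key scans.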

                                                                                                1. 17

                                                                                                  Somebody needs to solve the mismatch between the value generated by free software and the inability to get paid. Programmers shouldn’t have to take a huge pay cut to work on libre software.

                                                                                                  Having to ask for ‘donations’ is an insult to the dignity of a competent programmer who can otherwise get a very lucrative offer for his/her skills.

                                                                                                  1. 11

                                                                                                    I honestly think to a large degree this has been solved if we follow the example of SQLite. Rather than trying to reach out to all possible users of SQLite, trying to get a monthly donation of like $1/$2/$5 from each user, they focus on the corporate users of the software and ask for significant investment for things that corporate users specifically care about and aren’t “donations”:

                                                                                                    • Email Support: $1500 a year.
                                                                                                    • Phone Support: $8000 a year base; with guaranteed SLA response times, up to $50,000.
                                                                                                    • Consortium Membership: $85,000+
                                                                                                    • Title/License: $6000
                                                                                                    • AES Custom Version: $2000
                                                                                                    • CEROD Custom Version: $2000
                                                                                                    • ZIPVFS Custom Version: $4000
                                                                                                    1. 2

                                                                                                      Note that this doesn’t work if the software isn’t going to be used by companies. For instance, I have a hard time picturing a company paying for sway or aerc.

                                                                                                      1. 1

                                                                                                        Absolutely, stuff that is of no use to a corporation is harder to deal with this way. I would argue that at certain levels of corporate dependency, even niche products like text editors and diff tools can get widespread financial backing. I have seen both (text editors and diff tools) get major contributions in terms of cash from corporations.

                                                                                                    2. 9

                                                                                                      Donations are difficult to justify by companies both legally and in terms of business. They also cannot guarantee any continuity to the recipient. Moreover, donations are inherently unfair to donors VS non-donors.

                                                                                                      Public funding has been invented exactly for this.

                                                                                                      1. 2

                                                                                                        Moreover, donations are inherently unfair to donors VS non-donors.

                                                                                                        Could you elaborate “fair” a little?

                                                                                                        I cannot settle on a definition of fairness around donations (esp. to develop open source software) that I, myself, would use in this situation, and so I would surely fail at assuming the definition intended in your comment.

                                                                                                        1. 4

                                                                                                          Forgive me for the platitude: If a company donates a lot and a competitor does not (while still benefiting from the same shared public good), the latter has an advantage. This little prisoner’s dilemma around donations encourages greed over cooperation. That’s why taxes are mandatory.

                                                                                                          1. 2

                                                                                                            If a company donates a lot and a competitor does not (while still benefiting from the same shared public good), the latter has an advantage.

                                                                                                            That sounds right but might not be. IBM backed Linux when Microsoft and SCO were going to patent-sue everyone trying to use it. IBM was both donating labor to Linux and willing to counter-sue to keep it open. The result over time was that IBM got massive benefit from Linux while proprietary offerings folded or went into legacy mode.

                                                                                                            I mean, IBM and Linux are both kind of unique. That success might not extrapolate. It might for companies who can stand to gain a lot from a community around their product. The community might be bigger if the company significantly invests in open source and community activities.

                                                                                                          2. 3

                                                                                                            I assume the rationale is that open source code is a public good in the same way that clean water or science is. If you spend a lot of money making sure that your local river has clean water, or a lot of money to study physics, then the benefits are shared by everybody but the costs were incurred by just you.

                                                                                                            “Fairness” in the context of shared resources generally means that the costs of providing the resource are shared by the users of the resource in proportion to the benefit those users each receive.

                                                                                                          3. 2

                                                                                                            I agree that public funding was meant to solve problems much like this, but that doesn’t make it an easy solution.

                                                                                                            There are thousands of new libraries created every day, which ones will you fund? How much money should you give Pixie Lang?

                                                                                                            The NSF gives grants to researchers who are motivated to publish papers, which journals will only accept if the papers reveal something new and important. If you give money to open source developers do they have any external motivation to produce useful things? What is preventing them from adding a million new features to OpenSSL rather than carefully auditing the code and fixing tricky bugs?

                                                                                                            If ruby is given enough public funding to hire 10 developers, won’t that make the developers who weren’t chosen feel like they’re not as important? Would they continue contributing as much as they have when they know somebody else is getting paid?

                                                                                                            Many open source projects have contributors from many different nations. Is the agency doing public funding okay with giving money to non-nationals?

                                                                                                            1. 2

                                                                                                              public funding was meant to solve problems much like this, but that doesn’t make it an easy solution

                                                                                                              It worked better than other alternatives during the last 100 years to develop phones, early computers, semiconductors, satellites, a lot of medicine, aeronautics, chemistry… Anything that does not have a short or medium-term return.

                                                                                                              Is the agency doing public funding okay with giving money to non-nationals?

                                                                                                              A lot of long-term scientific research is funded through global nongovernmental organizations.

                                                                                                          4. 6

                                                                                                            Not a great comfort to a libertarian, I’m sure - but for those who believe in government intervention, taxpayer-funded work on core infrastructure is an obvious way to share the load (since broadly speaking, society at large benefits from access to improved technology).

                                                                                                            IIRC at least one of the OpenSSL core team was funded - for years - off the German social security pension. RMS’s work at MIT was largely funded through the US government. DARPA paid for a ton of computing infrastructure.

                                                                                                            1. 4

                                                                                                              Who is this somebody who needs that?

                                                                                                              Describing your own desires as someone else’s needs is a cop-out.

                                                                                                              1. 1

                                                                                                                I discuss this a lot. It usually breaks the moment I bring in the notion that this somebody should probably be paid, at around, say, 10-20% of what the developers get.

                                                                                                              2. 1

                                                                                                                If the software is valuable, you can license it such that you can sell it for money.

                                                                                                                1. 6

                                                                                                                  This is mentioned pretty often, but not every piece of FOSS software has a straightforward business model attached. For example, programming languages are far too remote from an actual product for people to invest in them at large scale. Yet they certainly have huge value! Just look at the struggle to get a project as widely used as MRI funded…

                                                                                                                  Sure, I could get my money by consulting in that programming language and being an expert in it, but there, the incentive is actually again to have other people developing it and just run around using their stuff.

                                                                                                                  Also, not every programmer wants to become a salesperson or build end-user products.

                                                                                                                  1. 3

                                                                                                                    You can also license it freely and sell it for money. There’s no inherent contradiction in “commercial free software”. Indeed, sr.ht seems like it fits this category.

                                                                                                                    1. 1

                                                                                                                      Great example (and congrats again) :)

                                                                                                                      In my experience, most such software is very hard to deploy for yourself (since the primary maintainer has no real reason to work on that aspect and nobody else tends to step up).

                                                                                                                      This is in no way a jab at your fantastic work - merely an observation of how this, like every funding structure, exerts a pull on the project funded.

                                                                                                                      1. 1

                                                                                                                        Congrats? For what? I’m not Drew.

                                                                                                                        1. 1

                                                                                                                          Huh, somehow I got my wires crossed, sorry.

                                                                                                                    2. 1

                                                                                                                      I wonder if that’s true, and if not, why.

                                                                                                                      You’ve done it. And perhaps I have too (although one might tell my own story in different ways). But the people who manage to create functioning open source software from scratch have failed to earn real money from it with such frequency that I wonder whether there’s some blocker - whether some personality trait that’s necessary to create that software also blocks generating income from it.

                                                                                                                      1. 1

                                                                                                                        I absolutely believe this is the case; personality traits that draw people to open source software tend to push them away from the obvious avenues of income. I think they also fear angering their communities if they start to value corporate users over regular users. I think this fear is misguided: if regular users get a much better product because of that corporate support, I believe they will be very understanding / supportive (a la SQLite).

                                                                                                                        1. 1

                                                                                                                          That some personality trait that’s necessary to create that software also blocks generating income from the software.

                                                                                                                          I don’t believe this is the case. FOSS comes out of a culture where many people could already make ends meet, either by being employed by MIT or by having a paid day job.

                                                                                                                          It’s something our community - as a whole - could easily ignore and never build structures around. That’s catching up with us now. Those structures will take years to build.

                                                                                                                    1. 2

                                                                                                                      This rant seems to be a response to something:

                                                                                                                      Another week comes along and with it, another assault on CSS

                                                                                                                      Unfortunately I didn’t see this [yet another assault on CSS], so I’m just left scratching my head over what arguments he’s defending against.

                                                                                                                      1. 2

                                                                                                                        You can have some of the ones I’ve seen; I’ve seen more than I need. The general format is “CSS has [disadvantage]; […] would be better because [advantage]”. The suggested/outlined/sketched alternative would have disadvantages or problems too, but those aren’t mentioned.

                                                                                                                        It’s largely a matter of comparing existing software against vapourware, and vapourware never segfaults.

                                                                                                                        1. 2

                                                                                                                          No, we cannot. Or yes, we can.

                                                                                                                          Caching doesn’t optimise the function that solves the problem, it optimises the use of the function. It optimises repeated calls to f(16), assuming 16 comes up often enough to make the extra lookup worthwhile. But the combinatorial problem is f, and that 16 is outside the combinatorial problem.

                                                                                                                          Sometimes (often) problems are solved by repeated or recursive application, so f might call itself. That doesn’t really change the argument in the preceding paragraph; it only complicates the question of whether a given argument like 16 ultimately occurs often enough.
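A minimal sketch of that distinction in Python (the function f and its body are hypothetical stand-ins): caching speeds up repeated calls to f(16), but the first computation of f(16) - the combinatorial problem itself - still pays full price.

```python
from functools import lru_cache

calls = 0  # counts how often f's body actually runs

@lru_cache(maxsize=None)
def f(n):
    """Stand-in for an expensive combinatorial function (hypothetical)."""
    global calls
    calls += 1
    return sum(i * i for i in range(n))  # placeholder for the real work

f(16)  # first call pays the full cost of computing f(16)
f(16)  # repeated call is answered from the cache
assert calls == 1  # the cache optimised the *use* of f, not f itself
```

Whether the cache is worthwhile depends entirely on how often the calling code repeats an argument, not on anything inside f.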

                                                                                                                          1. 1

                                                                                                                            To clarify: Is it in the nature of combinatorial problems that they are built upon overlapping sub-problems?

                                                                                                                            1. 1

                                                                                                                              Minimum spanning tree seems like a combinatorial problem that isn’t easily solved as overlapping sub-problems.
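For contrast, a quick sketch of what overlapping sub-problems look like (naive Fibonacci is the textbook example, used here purely for illustration): the same small instances recur many times, which is exactly when caching a sub-result pays off. The usual minimum spanning tree algorithms (Kruskal’s, Prim’s) are greedy and have no such recurring sub-instances to cache.

```python
calls = {}  # how many times each sub-problem n is solved

def fib(n):
    """Naive recursion: the same small instances recur many times."""
    calls[n] = calls.get(n, 0) + 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

assert fib(10) == 55
assert calls[2] > 1  # fib(2) was recomputed repeatedly: overlap, so caching helps
```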

                                                                                                                              1. 1

                                                                                                                                Assume that they are. In that case, estimating how many times a particular cacheable case is called (even recursively) is difficult. But the estimate remains a function of how the rest of your code uses the function.