1. 1

    I miss clean protocols and truly native clients, but it’s hard to imagine that we’ll ever see their like again. It’s just too seductive to implement atop the browser.

    1. 3

      I’m installing Linux from Scratch (LFS) for the first time (in a VM, not as a daily driver). I have a vague idea to build an experimental system atop the Linux kernel, and LFS seems like a decent way to get a small core of functionality.

      Part of my vague idea involves replacing init and other low-level tools, so we’ll see how it goes!

      1. 5

        Nokia 101 (phone booth replacement if I’m outside; holds my private phone number), Motorola One Vision (pretty much my “landline”; stays in the office and holds my public phone number).

        I do not like home banking or even email on my phone: with the first holding my money and the second many, many online accounts, it would be a single point of failure, should someone pick it from my pockets or should I lose it.

        Mobile websites are not for me: I’m almost 50 and my eyes clearly show their age, I hate dumbed-down mobile versions of websites, and I have given up on making all the adblocking and privacy efforts twice.

        1. 3

          I do not like home banking or even email on my phone: with the first holding my money and the second many, many online accounts, it would be a single point of failure, should someone pick it from my pockets or should I lose it.

          I feel the same way, and it’s interesting to me that more tech-savvy folk do not also. I don’t even have an email client on my laptop; I only do email from a desktop. I guess in this modern age, people have grown so attached to their digital lives that they’ve accepted that having silicon parasites on you 24/7/365 is just considered normal.

          1. 2

            What do you do with your laptop? Staying connected is my main use case for a laptop these days.

            If I am away from my desk and can get someone who’s working for me unstuck by firing up my laptop and helping them over email or chat in less time than it would take me to get back to my desktop once or twice a year, I’ve paid for my laptop and justified keeping it nearby most of the time. I could make a slightly less direct argument for using it to help someone I’m working for.

            1. 1

              My big use case is maps, public transit directions and/or Uber, looking up locations (e.g. restaurants or museums) and light web browsing to kill time in transit.

          1. 49

            Kind of annoying that you have to read thru a third of the article to get to the important part:

            Is Flow open source?

            No. […]

            1. 19

              It is this part of the answer that I find more interesting: “There’s no current plan for that as we don’t have a large corporation backing our development.”

              It just makes me sad. Open source was supposed to destroy the corporations, not empower them! It was to bring freedom to the development world, not leave it at the mercy of big money operators.

              Nothing new, no big comment. Just lamenting :( (though the khtml legacy may be interesting - and it is LGPL… perhaps we have that to thank for the openness we do still have at least)

              BTW I also hate the name “Flow”. Gah I can’t wait for this era of names to come to an end.

              1. 14

                Open source was supposed to destroy the corporations, not empower them!

                Was it? I always thought that free software was about empowering the users — raising them up, not dragging anyone down.

                1. 8

                  Open source has always been about empowering the corporations from the beginning, and free software has always been about preventing corporations from exploiting users, which under the current capitalist system amounts to destroying or crippling them.

                  1. 1

                    yeah i was being kinda loose to fit the star wars meme.

                    But open source is basically corporations taking over the free software idea and twisting it for their own benefit. So I should have said “free software” of course but eh the article said “open source”.

                  2. 10

                    This is a weird attitude. I’m all for open source and have been working on open source full time for several years.

                    But just because someone starts an important/interesting project doesn’t mean anyone should demand it be open source.

                    The obvious response is: Start your own open source browser project, and recruit or pay the 100+ developers it will take over decades! If it were easy or cheap, we’d see a lot more of these types of projects.

                    1. 5

                      I don’t demand it, I just would prefer not to run some person’s code nobody can read.

                    2. 8
                      1. 4

                        It is this part of the answer that I find more interesting: “There’s no current plan for that as we don’t have a large corporation backing our development.”

                        Well, imagine they release it today: people will report issues, create PRs, ask for features, etc.

                        Responding to that in a vaguely timely fashion takes up a lot of time. If you’re a small company, you may not want to spend the time/money.

                        1. 15

                          You don’t have to have an issue tracker, or forums, or accept contributions, or even have source control.

                          It’s open source if you dump a tarball once per release.

                          1. 7

                            I’ve worked on open source without a public bugtracker. We were flamed for that. “Not really open source” etc.

                          2. 5

                            More than that, they want to sell it.

                            1. 2

                              I would like Flow to be open source, but I don’t care enough to do anything about it. If I really wanted to make it happen, here’s how I’d go about it:

                              1. Find enough developers who will commit to maintaining it properly
                              2. Approach Ekioh and ask them to make a deal
                                • They would benefit from additional contributors without paying maintenance costs
                                • They will probably want some cash too
                              3. Crowdfund to raise the cash

                              That’s pretty simplistic, I realize. But my point is just that license problems are business problems, and can sometimes be solved.

                            2. 4

                              I think it’s honestly a misdirection. There have been plenty of good open source projects with small businesses behind them. It’s like saying “Oh I can’t do the dishes tonight because I don’t have large corporation backing”.

                              1. 12

                                I’ve worked at one of those (one of the first to do it), and when I read that sentence, I just nodded: “Yeah, I can understand that.” What they mean is probably that they need income, every month, and they’re worried that by opening the source their existing business model is at risk and they don’t have an obvious replacement.

                                The worst case is roughly: zero outside contributions, a wide user base that pays nothing and expects much, the user base does not contain prospective customers, and too many of the existing customers decide to stop paying and just use the free offering. With skill and luck it’s possible to devise a new business model and sales funnel that uses the width of the user base, but doing that takes time, and without a corporation backing it, how does one keep the lights on meanwhile?

                                1. 4

                                  What they’re really saying is they don’t have the skill or finesse to pull it off. That’s fine, however plenty of small businesses have made great profits while open sourcing their products. You don’t “need” corporate backing, and I’d argue if anything it’s an obstacle rather than a benefit.

                                  1. 10

                                    The skill and finesse to pull it off is considerable; IMO it can be regarded as infinite unless you have all three of these:

                                    • skill and finesse
                                    • luck
                                    • funds to last you through a period without income.

                                    Skill alone isn’t enough.

                                    A “large corporation” in this context is simply one that’s large enough to have one or more sources of income unaffected by the product being developed, and whose other income is large enough to carry a team through the product development phase.

                                    (I’ve worked at three small opensource companies and spoken to my counterparts at others.)

                                    1. 1

                                      Not saying your concern is entirely invalid; I think those things DO matter. I just also think the reality is probably somewhere between “it can’t be done” and “it’s trivial to do”. The idea that you can’t run an OSS business without backing by a major corporation is probably untrue. The idea that you can run an OSS business without capital, luck, or skill is probably also untrue. I personally found it upsetting that he put it all on a lack of corporate backing instead of just saying it was a strategic decision to keep an edge on the competition. I often find that when people deflect blame onto things they can’t control, they are trying to sidestep the extent to which they do have responsibility or control over the situation.

                                    2. 5

                                      What fields were they operating in? Are they still prominent or even around? Were they ever prominent?

                                      Where did they get money? Corporate customers, side gigs or a big inheritance? Did they detour from their core paying business to do open source?

                                      How long did it take for them to become sustainable? Did they?

                                      What’s the proportion of “plenty” in comparison to the competition that didn’t make it? To the corporate-backed competition? To the competition that’s still around with the same premises?

                                      Not to come off as too much of a duck here, but all these questions are very important when claiming that some have generally made money. Surely the response might warrant more of a study than a reply, but seeing how under-staffed and under-paid open source is, I’m a bit triggered by negating legitimate concerns with “others have done it”.

                                      1. 2

                                        I’m a bit triggered by negating legit courses of action with “it can’t be done”, so… I doubt we’ll have a tremendously productive discussion. I think your questions around it are fair and reasonable but I think our stances and positions are too far apart to find the center in the comment thread. I’m not really interested in debating this out however I do appreciate that you took the time to come up with good challenges to my point.

                                2. 1

                                  I’m curious about the name. What would have been your choice?

                                  1. 2

                                    Seeing as the company is named Ekioh, perhaps “Ekioh Browser Engine”, EkEng or EBE for short, or maybe a four letter word that isn’t already used by multiple software projects

                                    1. 2

                                      I probably would go Ekioh Browser - descriptive yet unique by including the existing company name. There’s just a trend right now to use fairly short, generic names. I imagine the marketers are like “we want to evoke a feeling” but I just want some decent idea of what it is and how it is distinct.

                                  2. 17

                                    Does the fact that the browser is not open-source mean that it is not bringing diversity to the market? I’d argue that browser diversity was in a healthier state when Opera had a proprietary engine than it is now that Opera uses Chromium and Blink.

                                    Don’t get me wrong, I’d much rather see this be open-source, but I don’t think the fact it’s closed source means it’s irrelevant.

                                    1. 40

                                      one thing to keep in mind is that privately controlled web engines can disappear without leaving a base for a community to develop, as with presto.

                                      1. 8

                                        That’s a fair argument.

                                        1. 3

                                          Open source software can disappear, too, when the entire development team goes away.

                                          I’m not aware of any open source that was

                                          • developed by a smallish company
                                          • opened
                                          • received substantial contributions from outside

                                          AFAICT, if something comes from a company and isn’t an obvious non-product like e.g. lepton, then outsiders regard it as that company’s product, and don’t spend their time developing that company’s product for free. A community does not develop.

                                          I’d be thrilled to learn otherwise. Particularly how small companies might get others to develop their product for them.

                                          1. 2

                                            IIRC even the Mozilla codebase languished for quite a while, long enough for the company to go under before it got really picked up by a community. It was a last-ditch desperate effort, but still…

                                            1. 2

                                              Doesn’t Netscape/Mozilla/Firefox fit your criteria? Plan 9 also comes to mind.

                                              1. 2

                                                Wasn’t Plan 9 a Bell labs thing? That is to say, unless I misunderstand what you mean by “Plan 9” it was produced by one of the largest, most famous monopolies in US history. Or pretty much the opposite of a smallish company.

                                                I would not call Netscape or AOL (depending on who you want to attribute the open source release to) smallish either… if memory serves they were worth $10 Billion or so at their peak. But that pales in comparison to Bell.

                                                1. 2

                                                  Right. (The $10B is irrelevant IMO, the relevant number is about $2B according to Wikipedia.)

                                                  So from the point of view of the Flow people who might be considering going an open source route, there’s a distinct shortage of examples to learn from. A $2B company whose CEO regards it as an “amalgamation of products and services” is hardly relevant.

                                                  Mozilla was founded with a ten-digit endowment from AOL. Fine for the users, but it makes Mozilla irrelevant as a case to learn from for teams without such fortune.

                                                  1. 1

                                                    (I was assuming that the poster I replied to was sincerely arguing that Netscape or Plan 9 would count as something from a small-ish company. If my sarcasm detector was miscalibrated, mea culpa.)

                                                    This is perhaps the only case in the world where I’d call a difference of $8B “splitting hairs” :)… I’m no more prepared to argue that a $2B company is small than I am to argue that a $10B company is.

                                                    1. 1

                                                      No, your rant detector was miscalibrated.

                                                      Some of these pseudo-arguments annoy me so very much. I wish opensource advocates would use real arguments, not shams that look good at first glance, but make open source look bad in the eyes of developers/teams that are considering going open source. 39 upvotes for something that silently implies that open source can’t/won’t disappear means 39 people who aren’t thinking as carefully as I wish opensource people would. It gets to me and I start posting rants instead of staying properly on-topic. Sorry about that.

                                                      1. 1

                                                        sorry; i missed that you asked about “smallish” companies and i misunderstood the thrust of your argument. i guess you were arguing that it would be a risk for flow to open source their browser? i don’t disagree, but that’s different from the question of how much we should care about or support this effort, as people who care about browser diversity.

                                                        are you trying to argue that free software can disappear without leaving a base for a community to develop? what line of careful thinking would lead you to that conclusion?

                                                        1. 2

                                                          The careful thinking is based on two things.

                                                          First, an observation: the number of outside committers to a company’s product is extremely small. People don’t choose to use their own time to work on someone’s product — they find something else to work on. Because of that, the development team for any opensource product is overwhelmingly in-company.

                                                          Second, source access is necessary but not sufficient for good software development. Much of what makes development practical is in the team. It’s drastically easier to develop software (both fixing bugs and developing new features) if you can speak to the people who’ve worked on it so far, ask questions, get answers.

                                                          Both of those are rules of thumb, not laws of physics. If you however assume both to be absolutely true, then there’s no difference between a single-product closed-source company doing an opensource dump when it’s acquihired and an opensource company with a single opensource product. If you (more realistically) assume both things to be true with exceptions, then the difference is as large as the exceptions permit.

                                                          You may compare three scenarios for product/team/company closure, whether it’s an acquihire, bankruptcy, pivot or even things like the whole team going on a teambuilding exercise on a boat, and the boat sinking:

                                                          • Open source company closes (any reason): New team may form from volunteers, continuity is lost.

                                                          • Closed source company closes, dumps source on github: New team may form from volunteers, continuity is lost.

                                                          • Closed source company closes, does not dump source on github: End of story.

                                                          I.e. open source has advantages and some of them are IMO significant, but safety or continuity in the event of the team going away isn’t one of them. “Safety” and “continuity” are big words. A new team may spontaneously form, but that’s far from automatic, so there’s no safety, and if it does form it hardly provides continuity.

                                                          1. 1

                                                            that all makes sense, and does not contradict the fact that open source products provide a base for community development, even if the base is just a source code dump. there may be a continuity barrier, but it can be overcome.

                                                            for a browser engine, it makes a difference whether it is released like gecko, allowing forks and community development, or released like presto, where a pivot by a private company ends the possibility of further development.

                                                            hopefully you see now that my argument was real and not a sham, and your wish for open source advocates to think carefully is fulfilled.

                                                            1. 1

                                                              Well, it provides a base in almost exactly the same way as, say, Mitro’s code dump did when it was acquired. Mitro could have opened the source earlier (it actually did so on the day as part of its acquihiring process), and I don’t see any reason why an earlier open source process would have provided more of a base.

                                                              1. 1

                                                                sure, but before a company does a code dump there is no assurance that they will if the company pivots or goes bust.

                                                                1. 2

                                                                  True. However, do you think that’s a major aspect of uncertainty? I think the users you have in mind aren’t paying customers, right? Someone who isn’t a paying customer (who has no contractual relationship with the maintainers) can hope for continued development, support, years of unpaid service, but only hope, no more. There’s no assurance of bugfixes, of new features, of a port to the next OS version, of compliance with next year’s laws or the ability to read next year’s Microsoft Word files, or that the next version will be open source.

                                                                  It’s just one more item on the list of hopes.

                                                                  You’ve probably heard stories about companies who implement major new features and then leave them out of the open source tree? I heard about someone who did that with Catalina support recently. It was a tool often used by system integrators, can’t remember the name, but it’s said to be the only open alternative in its niche. For these system integrators, open source was basically a free trial. Once they had invested in that tool, deployed it widely, their customers upgraded to Catalina and they needed to react in a hurry.

                                                                  1. 1

                                                                    True. However, do you think that’s a major aspect of uncertainty? I think the users you have in mind aren’t paying customers, right? Someone who isn’t a paying customer (who has no contractual relationship with the maintainers) can hope for continued development, support, years of unpaid service, but only hope, no more. There’s no assurance of bugfixes, of new features, of a port to the next OS version, of compliance with next year’s laws or the ability to read next year’s Microsoft Word files, or that the next version will be open source.

                                                                    the same applies to proprietary projects so i’m not sure what you’re getting at.

                                                                    are you saying even corporate-led open source projects don’t provide a guarantee that the project will continue to be open source? that’s fine but again doesn’t contradict anything i’ve said. it’s still better than proprietary from the perspective of browser diversity because the latest open source release would still provide a base for community development.

                                                  2. 1

                                                    i must have missed the word “smallish,” whoops

                                            2. 2

                                              Even Internet Explorer, shitty as it was, using its own engine made the web more diverse and forced developers to at least keep some semblance of portability. With the arrival of Edge, they also went the Blink/Webkit path.

                                              There are basically only two (or three, if you count Blink and Webkit as distinct) rendering engines left which matter. That’s truly sad.

                                              So yes, seeing a new browser emerge is actually something that I find hopeful.

                                              1. 2

                                                With the arrival of Edge, they also went the Blink/Webkit path.

                                                They did not do that with the arrival of Edge. They started Edge on its own engine and only just recently released a blink-based version.

                                                IE may have initially encouraged some portability, but its net effect was quite the opposite. There were a lot of IE-only products by the time we saw version 6 or so.

                                                1. 2

                                                  IE may have initially encouraged some portability, but its net effect was quite the opposite. There were a lot of IE-only products by the time we saw version 6 or so.

                                                  That was when IE had “won” the browser wars and had added nonstandard features which other browsers didn’t support. Once they’d killed off Netscape people didn’t have any incentive to run other browsers, and those extra features got used by developers, entrenching it further because of these IE-only products you mention.

                                            3. 6

                                              This is the only thing I was looking for too. Not sure how Flow is supposed to solve any of the problems posed by a lack of browser diversity if it isn’t open source.

                                              1. 11

                                                Any alternative implementation of web technologies that isn’t WebKit gaining a non-trivial market share is a positive for those of us concerned about browser diversity, regardless of whether that implementation is open-source or not.

                                                1. 1

                                                  Android might be a point, but without Windows it will not get a non-trivial market share.

                                              2. 3

                                                Thank you, that’s one of the first items I check.

                                                1. 4

                                                  Can you come up with a better way to sustain its development than “people paying for it”? Unfortunately, free software isn’t free to develop.

                                                  1. 1

                                                    I’m not complaining that they’re charging for it; I just wish the article was up-front about the licensing at the outset so I would know not to waste my time on it.

                                                1. 3

                                                  Mickens is always funny, but in this case I don’t find him as persuasive as I imagine he would like. E.g. while cryptocurrency was definitely overhyped, maybe it is simpler/cheaper/easier/more ethical to fix the problems with cryptocurrency than with fiat monetary systems (I’m not arguing that’s the case, just that it’s possible). Or as another example, maybe developers do have a moral imagination, and maybe they disagree with James Mickens.

                                                  Mickens’s writing and delivery are hilarious (and I heartily recommend his early work), but I think that he’s not sufficiently challenging himself here.

                                                  1. 2

                                                    Agreed. There are a lot of comparisons in this video, but not necessarily good points. I don’t see how anyone can find this convincing, but maybe it could cause some to reflect on their views.

                                                  1. 5

                                                    Apparently we’re not supposed to TL;DR in story text, so here’s a comment:

                                                        Signal is using Intel SGX to allow for secure backups in case you lose your device. Password-based encryption works, but offline dictionary attacks are a problem. So: password authentication into an enclave, with only a few attempts allowed. However, you want to replicate the enclave without allowing parallel password guesses, so they build a consensus protocol out of SGX attestation operations.

                                                        This is really cool, well-developed applied crypto research. My only major concern is how much it relies on SGX, which has been broken seven ways to Sunday.
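As an illustration (not Signal’s actual code — all names here are made up), here’s a minimal sketch of why the enclave’s guess limit matters: with password-based encryption alone, anyone holding the ciphertext can grind through a dictionary offline, whereas a vault that enforces a small attempt budget caps the total number of guesses.

```python
import hashlib
import hmac
import os

def derive_key(password: bytes, salt: bytes) -> bytes:
    # A slow KDF raises the per-guess cost, but an attacker who has
    # the ciphertext can still try an entire dictionary offline.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

class RateLimitedVault:
    """Stands in for the enclave: it holds the secret and enforces a
    small number of password attempts, which an offline attacker with
    only the ciphertext cannot bypass."""

    def __init__(self, password: bytes, secret: bytes, max_attempts: int = 5):
        self._salt = os.urandom(16)
        self._tag = derive_key(password, self._salt)
        self._secret = secret
        self._attempts_left = max_attempts

    def recover(self, password: bytes) -> bytes:
        if self._attempts_left == 0:
            raise PermissionError("vault locked after too many guesses")
        self._attempts_left -= 1
        if hmac.compare_digest(derive_key(password, self._salt), self._tag):
            return self._secret
        raise ValueError("wrong password")
```

The hard part — and the subject of the article — is making several replicas of such a vault agree on the remaining attempt count without letting each replica grant its own guess budget.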

                                                    1. 6

                                                      I am surprised by how much the folks at Signal trust SGX. Frankly I don’t understand it. I don’t believe that it’s been maliciously compromised — I just don’t believe that SGX is bug-free in both design and implementation.

                                                      Likewise, this secure value recovery proposal sounds neat and great, but also really complex. That complexity means lots of opportunity to fail. One of those clear failure points is SGX: if that fails then I believe the entire system fails completely.

                                                      1. 10

                                                        A comment I read a few years ago stated that Moxie (at least) had absolutely no trust in the government but had no issue with large companies, making him a fairly typical American in this regard. I don’t like classifying people that way and I didn’t expect such a thing, but it matches reality pretty well: trying to protect against government spying, but trusting Intel, Intel SGX (*), and cloud providers (Amazon, Google, Microsoft IIRC), …

                                                        (*) Trusting Intel SGX and publishing an article only a few days after https://pludervolt.com was announced is so unrealistic that it’s actually almost funny.

                                                        1. 3

                                                          It’s funnier when you consider Intel operates in a police state that can secretly compel backdoors and targeted surveillance. They’re also one of the most likely to be cooperating with that. That said, they currently use the secret capabilities for fewer targets than the broader, more-public enforcement goes after.

                                                          1. 3

                                                            Non-typo’d version of that link: https://plundervolt.com/

                                                          2. 4

                                                            That’s what I told Moxie. He also didn’t seem to know why there’s a preference for tamper-resistant HSM’s in these use cases.

                                                            1. 2

                                                              IIRC he also strongly trusted google, choosing to only work with GCM and nothing more. It took a huge amount of input from users and no less than 2 forks for him to finally support other means of message delivery that don’t rely on google.

                                                          1. 1

                                                            Y2K is fascinating to me as someone born afterward. Such a silly thing looking back, but I’m curious as to why many people found the mythical bug a serious issue. Perhaps it was that knowledge of computers was not yet ‘mainstream,’ and people just didn’t understand computer systems in general?

                                                            1. 27

                                                              It was a serious issue, and we fixed it.

                                                              A lot of folks look at the fact pattern as: people said Y2K was a problem; we took them seriously and spent a lot of money addressing it; nothing happened — and therefore there was no problem to begin with. That’s just not the case: there was a problem, and those sums of money solved it.

It’s as though the boy cried ‘wolf!’ and the villagers banded together and drove the wolf off, successfully defending their flocks, and then got angry at him because the wolf didn’t eat any sheep.

                                                              What really worries me is that the next Y2K issue won’t be fixed, and will result in death and destruction, precisely because folks think that the first one was a hoax.

                                                              1. 6

                                                                This, exactly. A couple weeks ago, my daughter was raving about the cleanliness of the floors in our house, as if this sort of thing happened naturally. I had to remind her that she’s just absent when I spend a lot of time taking care of home things like cleaning the floors. Not so different.

                                                                1. 4

                                                                  What really worries me is that the next Y2K issue won’t be fixed, and will result in death and destruction, precisely because folks think that the first one was a hoax.

I’m not really that worried about that; anyone who knows how computers work would find an argument like “these old machines count time as a 32-bit number of seconds, which overflows in a few years” convincing. When the entire IT department takes the issue seriously, I can only assume the people above them would let them do what they deem necessary to keep the critical systems running. This isn’t really something the general public needs to believe in to fix.

                                                                  Or maybe I’m just naive and people in charge don’t trust their IT staff to know what’s best for IT infrastructure.

I’m worried about sporadic failures going forward due to the hacks intended to fix Y2K, though. If some people’s solution to Y2K was to read all two-digit years below 20 as 20xx and all years above it as 19xx, because those 20 years ought to be enough to fix the issue properly…
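That windowing hack can be sketched in a few lines of Go; the pivot of 20 here just mirrors the example above:

```go
package main

import "fmt"

// expandYear applies the classic Y2K "windowing" fix: two-digit years
// below the pivot are read as 20xx, the rest as 19xx. Systems patched
// this way start failing again as soon as real years cross the pivot.
func expandYear(yy, pivot int) int {
	if yy < pivot {
		return 2000 + yy
	}
	return 1900 + yy
}

func main() {
	fmt.Println(expandYear(5, 20))  // 2005, fine
	fmt.Println(expandYear(99, 20)) // 1999, fine
	fmt.Println(expandYear(25, 20)) // 1925, wrong once we actually reach 2025
}
```

The window silently shrinks every year, which is exactly the sporadic-failure mode described above.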

                                                                  1. 3

This is why good (and public) history sources are something I will always champion. A post this week comparing these attitudes on Y2K to the 1987 treaty banning CFCs really resonated with me, having grown up witnessing both events first-hand.

                                                                    Even here in Seattle where newcomers love the views: Metro didn’t start as a bus service and many of our beaches were unsafe for swimming until the 80s.

                                                                    1. 2

                                                                      I had no idea it wasn’t a hoax. Thanks for filling me in; I’ll go research it more for myself.

                                                                    2. 6

                                                                      There were some people who were scared that old computers running deep down in cold war nuclear silos might go haywire and launch missiles (I kid you not). I guess it’s the unpredictability of the whole thing that scared people, mostly. Nobody was able to tell exactly what would happen when these date counters would overflow, which kind of makes sense because overflow bugs can cause really strange effects.

                                                                    1. 1

                                                                      Evidence of an early proto-dark-theme, too.

                                                                      1. 2

Nitpick: Advent consists of the four weeks before Christmas (starting with Sunday), rather than always starting on December first. (Source: https://www.fisheaters.com/customsadvent1.html)

                                                                        (Yes, I know. This year Advent does start on December first. Your point?)

                                                                        “First get your facts straight. Then distort them at your leisure.”
                                                                        -Neil deGrasse Tyson (https://twitter.com/neiltyson/status/835938739784314880?lang=en)

                                                                        1. 7

                                                                          You are technically correct, but most advent calendars start from 1 Dec and end on Christmas Eve or Day.

                                                                          Using the secular form (i.e. all days in December up to 24 or 25) prevents acrimonious debate about when, exactly, Advent Sundays fall. For example, according to the fount of all human knowledge:

                                                                          In the Ambrosian Rite and the Mozarabic Rite, the First Sunday in Advent comes two weeks earlier than in the Roman, being on the Sunday after St. Martin’s Day (11 November), six weeks before Christmas.

                                                                          1. 4

                                                                            And in the Orthodox world the Christmas fast starts 40 days before Christmas!

                                                                            1. 2

                                                                              In the Ambrosian Rite and the Mozarabic Rite, the First Sunday in Advent comes two weeks earlier than in the Roman

                                                                              I didn’t know that until today (I’m a member of the Roman Rite). However, this only strengthens the point that Advent is a religious thing and secularization is diluting the meaning of such names.

                                                                              Using the secular form (i.e. all days in December up to 24 or 25) prevents acrimonious debate about when,

                                                                              I’m not angry at anyone in particular over this. It would be nice if secular customs didn’t use religious names, but I think it’s a bit late for that.

                                                                              1. 4

                                                                                These rites were news to me too!

The creator of the project seems to have German Lutheran (cultural) roots, so the project was inspired by his memory of those calendars. See this talk: https://lobste.rs/s/ay9oft/advent_code_behind_scenes

                                                                                1. 2

                                                                                  See this talk: https://lobste.rs/s/ay9oft/advent_code_behind_scenes

                                                                                  I don’t know if I’ll watch the whole thing, but it’s good to know this was addressed. Thanks for the link!

                                                                                  1. 2

                                                                                    A big part of the talk was that the planned audience for this contest was ~70 people. Virality expanded that to 100K (?) during the first year. No doubt if this had been a corporate project it would have been focus-grouped and someone might have raised the issues with naming it after Advent.

                                                                          1. 2

                                                                            Rust and Erlang are not the only programming languages with a good concurrency model. C is not dead or dying. Type systems do not objectively make it easier to program. Dynamically typed languages are not on the way out, obsolete or in any way bad. People do not need parametric polymorphism in their fucking text editor extension language. Emacs does not need a fucking web browser to ‘keep up with the kids’ or whatever ridiculous reasoning is being given. And Visual Studio Code isn’t even remotely close to Emacs in any respect at all, it’s a text editor you can write extensions for, just like virtually every other popular text editor in history. Taking lessons from Visual Studio Code is like taking lessons from Sublime Text or any other momentarily popular editor that is and will be forgotten about within a couple of years.

                                                                            What Emacs should be taking note from is not Visual Studio Code, it’s vim. Vim is everything Emacs is not: fast, ergonomic and with a language that’s PERFECT for writing very small snippets of but an awful pain to write extensions with. Emacs Lisp is what you get if you write commands to your text editor in a proper programming language (100s of characters just to swap two shortcuts) while Vimscript is what you get if you write extensions to your text editor in a command language (sometimes harder to understand than TECO).

                                                                            Vim is also evidence that trying to fix extensions with bindings to extra languages is a terrible idea. Vimscript needs to be improved, not replaced. Emacs Lisp is the same: improvements are necessary, not its replacement with something else. It’s not just bad to replace Emacs Lisp entirely, adding a new language beside it and making Emacs Lisp ‘legacy’ also means that in order to access new parts of the Emacs infrastructure, extensions/modes need to be rewritten anyway.

The contention that C needs to go to make contributing to Emacs more accessible is, frankly, insane. C is one of the most widely known languages in the world. It’s very easy and accessible to learn. Rewriting core parts of Emacs in an obscure, esoteric language like Rust is not going to make contributing to Emacs easier; it’s going to make it much harder. It should also be made quite clear that Rust is not a ‘guaranteed safe language’ as is claimed here. Not even close. It’s not at all safe. It literally has a keyword called unsafe that escapes all the safety that the language provides. Under certain conditions that are very easy to check, Rust is totally safe, which is what it provides over C, and even outside those conditions, any spots that introduce unsafety can be enumerated with grep -Re unsafe. But it’s absolutely untrue that Rust is safe, and any gradual move from C to Rust in Emacs would involve HUGE swathes of unsafe Rust, to the point where it’s probably safer to do it in C++ than in Rust, simply because the infrastructure for checking C++ programs for safety is stronger than the infrastructure for checking Rust-programs-that-are-full-of-unsafe for safety.

                                                                            A very statically typed language like Rust makes no sense for a text editor like Emacs. The strength of Emacs is that it’s insanely dynamic. It has a relatively small bit written in C, and most of it is actually written in Elisp.

                                                                            1. 4

                                                                              Yeah, I very much disagree with his desire to get away from a real Lisp. The value proposition of Emacs is very much about having a dynamic, dynamically-typed, dynamically-scoped (yes, I am aware of the optional lexical scoping) Lisp extension language.

                                                                              You are right that vim makes simple things simple, but man-oh-man is VimScript a hideous misfeature of an extension language. I do think that Emacs could probably stand to have some more sugared ways to do some basic config out of the box — use-package is an example of improved keybinding, for example.

I wouldn’t mind a Rust-based runtime, but what I would really love to see is a Lisp-based runtime: a core in Common Lisp, with a full Elisp engine to run all the code from the last forty-three years, and with full programmability in Common Lisp.

                                                                              But then, I’d also like to see an updated Common Lisp which fixes things like case, pathnames, argument ordering and a few other small bits. And I want a pony.

                                                                              1. 3

                                                                                Lisp is probably one of the few things that makes emacs actually interesting, and I’m not even an emacs user or a lisp user…

                                                                              2. 3

                                                                                It literally has a keyword called unsafe that escapes all the safety that the language provides

This is not true, and this (very common) misunderstanding highlights the fact that one should think about – for lack of a better term – marketing, even when creating the syntax for a programming language. unsafe sure makes it sound like all the safety features are turned off, when all it does is allow you to dereference raw pointers and read/write mutable static variables – all while the borrow checker remains active.

                                                                                1. 2

                                                                                  It is true. As soon as unsafe appears in a program all safety guarantees disappear. That’s literally why it’s called ‘unsafe’. The rust meme that it doesn’t take away all the guarantees ignores that.

                                                                                  It turns off enough safety guarantees that you no longer can guarantee safety…

                                                                              1. 14

                                                                                I don’t like the direction he wants to go with HTML and web integration. If I want webmail and web chat and perfect HTML rendering, I’ll open my browser. I don’t think the core of Emacs should be bogged down with a browser engine. I understand that some people want to use Emacs for everything, but at the end of the day it is supposed to be a text editor and not a web browser.

                                                                                It’s easy enough to say “Just turn it off”, but it’s bound to be a security mess, bloat the memory use, and make the code base even more complicated, so there are side effects even for people who don’t want these features.

                                                                                1. 13

                                                                                  at the end of the day it is supposed to be a text editor and not a web browser.

                                                                                  It’s supposed to be an operating system; more of a Lisp Machine refugee camp, making do with what’s available on Unix.

                                                                                  1. 2

                                                                                    Maybe, but the modern web is a product of Unix, continued on with technical compromises and consumer friendliness. If Emacs were to become a text browser, sure I would not have to leave Emacs anymore, but the product would really not be the computational environment built on free software any more – rather it would just be reduced to another, probably sub-par browser.

                                                                                    1. 2

                                                                                      Done right, I think that it would very much not be just another browser, and done right it need not be sub-par. I can easily imagine a Lisp-extensible editor with enough knowledge of HTML, JavaScript and CSS to be indistinguishable from Firefox or Chrome.

                                                                                      What I can’t easily imagine is the FSF, GNU or the Emacs project itself having the resources to build such a thing. And sadly, no-one with those resources cares enough about Lisp or Emacs to do it either.

                                                                                      1. 1

                                                                                        I can easily imagine a Lisp-extensible editor with enough knowledge of HTML, JavaScript and CSS to be indistinguishable from Firefox or Chrome.

                                                                                        If it’s indistinguishable, what’s the point?

                                                                                    2. 1

                                                                                      Guixsd already exists, and uses a better lisp!

                                                                                    3. 4

                                                                                      I don’t think the core of Emacs should be bogged down with a browser engine.

Emacs already has a web browser, and if you haven’t noticed it by now then I doubt a new one getting added is going to bog down anything you already use.

                                                                                      1. 3

I know, and I use it almost every day. I have it set as my default browser in Emacs. I use it to open Org mode web links, browse documentation, do web searches/research while working, and occasionally use github and sourcehut. It’s nice because I can avoid the distraction of Slack, email, Lobste.rs, and the rest of the web.

                                                                                        I guess my argument is that Eww already supports the right amount of HTML for a browser embedded in a text editor. Obviously, very much IMO.

                                                                                      1. 3

                                                                                        I still can’t get why there’s no competitive implementation of Org, especially without Emacs dependency.

                                                                                        1. 1

                                                                                          There’s an Android App. I think there are some Vim implementations?

                                                                                          1. 1

                                                                                            While Turing-completeness means one can do anything in one language you can in another, it doesn’t mean that all languages are equally productive, and it doesn’t mean all environments are equally productive. Emacs and Elisp make a great, productive pair.

                                                                                            Also, among Org Mode’s features are the ability to execute Elisp with Emacs: a compatible implementation would thus need to include a whole Emacs.

                                                                                            1. 1

                                                                                              It’s not that hard to imagine using a subset of org mode that doesn’t include executing elisp. Most org files I’ve seen don’t even have elisp. Perhaps you could write an article on how to use elisp with orgmode for good.

                                                                                          1. 5

                                                                                            panic() is the equivalent of the exception mechanism many languages use to great effect. Idiomatically it’s a last resort, but it’s a superior mechanism in many ways (e.g. tracebacks for debugging, instead of Go’s idiomatic ‘here’s an error message, good luck finding where it came from’ default.)
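A minimal sketch of that trade-off in Go (mustParse and safeCall are hypothetical names for illustration): a panic carries a full call stack to wherever it is recovered, which a bare returned error string lacks.

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// mustParse panics on bad input; the panic records the call stack,
// much like throwing an exception in other languages.
func mustParse(s string) {
	if s == "" {
		panic("empty input")
	}
}

// safeCall recovers from a panic in fn and returns its message,
// printing the traceback that a plain returned error would not carry.
func safeCall(fn func()) (msg string) {
	defer func() {
		if r := recover(); r != nil {
			msg = fmt.Sprint(r)
			fmt.Printf("%s", debug.Stack()) // traceback for debugging
		}
	}()
	fn()
	return ""
}

func main() {
	fmt.Println("recovered:", safeCall(func() { mustParse("") }))
}
```

The traceback printed by debug.Stack pinpoints exactly where the panic originated, which is the debugging advantage claimed above.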

                                                                                            1. 5

                                                                                              Go’s idiomatic ‘here’s an error message, good luck finding where it came from’

I think the biggest problem here is that too often if err != nil { return err } is used mindlessly. You then run into things like open foo: no such file or directory, which is indeed pretty worthless. Even just return fmt.Errorf("pkg.funcName: %s", err) is a vast improvement (although there are better ways, such as github.com/pkg/errors or the new Go 1.13 error system).

                                                                                              I actually included return err in a draft of this article, but decided to remove it as it’s not really a “feature” and how to effectively deal with errors in Go is probably worth an article on its own (if one doesn’t exist yet).

                                                                                              1. 6

                                                                                                it’s pretty straightforward to decorate an error to know where it’s coming from. The most idiomatic way to pass on an error with go code is to decorate it, not pass it unmodified. You are supposed to handle errors you receive after all.

                                                                                                if err != nil {
                                                                                                    return fmt.Errof("%s: when doing whatever", err)
                                                                                                }
                                                                                                

                                                                                                not the common misassumption

                                                                                                if err != nil {
                                                                                                    return err
                                                                                                }
                                                                                                

in fact, the 1.13 release of Go formally adds error chains using a new Errorf verb, %w, that formalises wrapping error values in a manner similar to a few previous library approaches, so you can interrogate the chain if you want to use it in logic (rather than string matching).

                                                                                                1. 5

                                                                                                  It’s unfortunate IMO that interrogating errors using logic in Go amounts to performing a type assertion, which, while idiomatic and cheap, is something I think a lot of programmers coming from other languages will have to overcome their discomfort with. Errors as values is a great idea, but I personally find it to be a frustratingly incomplete mechanism without sum types and pattern matching, the absence of which I think is partly to blame for careless anti-patterns like return err.

                                                                                                  1. 3

                                                                                                    You can now use errors.Is to test the error type and they added error wrapping to fmt.Errorf. Same mechanics underneath but easier to use. (you could just do a switch with a default case)

                                                                                                  2. 4

                                                                                                    I guess you mean

                                                                                                    if err != nil {
                                                                                                        return fmt.Errorf("error doing whatever: %w", err)
                                                                                                    }
                                                                                                    

                                                                                                    but yes point taken :)

                                                                                                    1. 3

                                                                                                      Sure, but in other languages you don’t have to do all this extra work, you just get good tracebacks for free.

                                                                                                      1. 1

                                                                                                        I greatly prefer the pithy, domain-oriented error decoration that you get with this scheme to the verbose, obtuse set of files and line numbers that you get with stack traces.

                                                                                                    2. 1

I built a basic Common-Lisp-style condition system atop Go’s panic/defer/recover. It is simple and lacking a lot of the syntactic advantages of Lisp, and it is definitely not ready for prime time, at all, but I think maybe there’s a useful core in there.

                                                                                                      But seriously, it’s a hack.
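For a rough idea of the general shape (this is a minimal sketch of the approach, not the commenter’s actual library): a “signal” is a panic carrying a value, and a deferred recover plays the role of the handler.

```go
package main

import "fmt"

// condition is a hypothetical signal value, loosely mimicking a
// Common Lisp condition: panicking with it "signals" it.
type condition struct{ msg string }

func signal(msg string) { panic(condition{msg}) }

// withHandler runs body, and if body signals a condition, invokes
// handler with it; any other panic is re-raised untouched.
func withHandler(handler func(condition), body func()) {
	defer func() {
		if r := recover(); r != nil {
			if c, ok := r.(condition); ok {
				handler(c) // handle our conditions...
				return
			}
			panic(r) // ...but re-panic on anything else
		}
	}()
	body()
}

func main() {
	withHandler(
		func(c condition) { fmt.Println("handled:", c.msg) },
		func() { signal("something went sideways") },
	)
}
```

Unlike real Common Lisp conditions, the stack has already unwound by the time the handler runs, so restarts are impossible; that limitation is a big part of why it’s a hack.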

                                                                                                    1. 13

                                                                                                      Okay, the real solution is protocols like SRP or the new OPAQUE draft. The even more real solution is something better than passwords. It’s a shame SQRL did not take off (I’m not aware of any public services using it exactly, but Yandex does support a very very similar but custom scheme). But the push for U2F is very good, push notification confirmations are also not bad…

                                                                                                      But when you use the classic password auth, just use scrypt/argon2, abandoning good password hashes for silly concerns about computation time is not a good idea.

                                                                                                      1. 6

                                                                                                        Okay, the real solution is protocols like SRP or the new OPAQUE draft.

                                                                                                        You may find this thread on /r/crypto interesting, in particular since some people seem to believe an adjusted B-SPEKE is the better PAKE than OPAQUE or SRP. CC @Loup-Vaillant since you asked about that originally and probably have a somewhat educated opinion by now.

                                                                                                        If PAKE functions take off, I sincerely hope it won’t require JavaScript in browsers. The NoScript crowd is the one that cares most about security—thus ironically also the one most likely to resist using a JavaScript-based method of authentication. This has already been raised as an issue in the WebAuthn spec, but not yet addressed there.

                                                                                                        But the push for U2F is very good, push notification confirmations are also not bad…

                                                                                                        $36 for two Yubico Security Keys (let’s be real, you need two of them, one to use, one in your bank safe in case the first one is lost or breaks) is a non-trivial investment for the masses. Though I suppose Windows Hello (and whatever browser vendors accept from Apple) will help out with adoption. The JavaScript requirement is still iffy.

                                                                                                        1. 4

                                                                                                          (Mentioning me didn’t trigger any notification like replies do…)

As far as I can tell, the only way to avoid having the server perform a slow hash is client-side computation. On the web, that means JavaScript, WebAssembly, or some standard added to HTML itself. No way around it. Personally, I think using JavaScript in this case would be justified. It sucks, but good PAKEs have advantages that benefit the user directly, such as not giving away their password to the server.

                                                                                                          The (modified) B-SPEKE that was proposed on the thread I started on /r/crypto is excellent. I’m sold. The biggest advantage over OPAQUE is that it doesn’t require point addition. This means we can use Montgomery curves, which take less code to implement than Edwards curves, without killing efficiency. And I love small crypto libraries (sorry, couldn’t resist). It does, however, require some non-trivial primitives:

                                                                                                          • Scalar multiplication (which you need for key exchange anyway)
                                                                                                          • Hash to point (which you have if you use Elligator2 to hide the fact that you’re transmitting a public key)
                                                                                                          • Inversion (modulo the order of the curve), for blinding. Not needed elsewhere, but fairly straightforward.
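
For the third primitive, here is a sketch of what inversion modulo the group order looks like, assuming Curve25519’s prime-order subgroup (the constant below is from RFC 7748; the blinding scalar and function name are made up for illustration):

```python
# Order of Curve25519's prime-order subgroup, per RFC 7748.
L = 2**252 + 27742317777372353535851937790883648493

def invert_mod_order(k: int) -> int:
    # Python 3.8+ computes modular inverses directly; since L is prime,
    # Fermat's little theorem (pow(k, L - 2, L)) gives the same result.
    return pow(k, -1, L)

b = 0x1234567890ABCDEF          # hypothetical blinding scalar
b_inv = invert_mod_order(b)
assert (b * b_inv) % L == 1     # unblinding undoes blinding
```

A real crypto library would of course want a constant-time inversion rather than a bignum shortcut, but the math is exactly this.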

                                                                                                          I personally plan to add it to Monocypher.

                                                                                                          1. 3

                                                                                                            some standard added to HTML itself

                                                                                                            Or to HTTP instead! It would be awesome if HTTP Authentication supported one of these modern PAKEs in addition to Basic and Digest.

                                                                                                          2. 1

                                                                                                            The NoScript crowd

                                                                                                            Does it really exist anymore? Do people still try to disable all JS? (heck, back when the NoScript addon was a thing, you’d usually configure it to only block 3rd party scripts or only block everything on random blogs and stuff where you don’t ever log in)

                                                                                                            There’s a simple solution for the hypothetical “you’re stuck on an island with w3m” situation:

                                                                                                            <noscript>
                                                                                                              <b>WARNING WARNING WARNING you have JS disabled!
                                                                                                              this fallback form is reduced security
                                                                                                              only use if stuck on an island without a JS capable browser</b>
                                                                                                              <form action="/login-legacy-style-with-the-pake-client-on-the-server">…</form>
                                                                                                            </noscript>
                                                                                                            
                                                                                                            1. 2

                                                                                                              These people do still exist, but they’re very rare. More likely reasons for (transient) lack of JavaScript execution are enumerated in: https://kryogenix.org/code/browser/everyonehasjs.html

                                                                                                              /login-legacy-style-with-the-pake-client-on-the-server

                                                                                                              That would go contrary to server relief (since bad actors could stress the server again through that).

                                                                                                              1. 2

                                                                                                                server relief

                                                                                                                Just rate limit it. I honestly haven’t heard concerns about “server relief” from anyone who actually runs scrypt/etc :D

                                                                                                                Also isn’t PAKE client side lighter than scrypt/etc?
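
For what it’s worth, the rate limiting suggested above can be as simple as a per-client token bucket in front of the expensive endpoint; a minimal sketch (class name and parameters are made up):

```python
import time

class TokenBucket:
    """Naive token bucket: refills `rate` tokens per second, bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=3)          # one slow hash per second, burst of 3
results = [bucket.allow() for _ in range(5)]      # burst exhausts after 3 requests
```

In practice you would key one bucket per client IP or per account, but the shape of the mechanism is the same.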

                                                                                                          3. 3

                                                                                                            The even more real solution is something better than passwords. It’s a shame SQRL did not take off (I’m not aware of any public services using it exactly, but Yandex does support a very very similar but custom scheme).

                                                                                                            We find it amazing that only 20 years ago, there was very little encryption on most web sites. Most of the time, the only pages anyone bothered to encrypt in-transit were credit card forms and often not even then. What fools we were! I feel like 20 years from now, we will look back and shake our heads with a sensible chuckle and wonder how anyone was ever expected to remember one long high-entropy password, let alone dozens at a time.

                                                                                                            FWIW, SQRL is not dead, it’s just now finally being considered “done” by its creator. The reference implementation and docs are done and Steve Gibson is traveling and doing talks about it. I believe his intent is to hand off maintenance and further development to the SQRL community so he can get back to working on things that make him money.

                                                                                                            1. 1

                                                                                                              Another approach from the engineering side of things (rather than using more advanced crypto, a la OPAQUE) is to use something like Tidas which effectively just takes the iOS password manager out of the loop and lets you auth directly with public key authentication using touchID/faceID.

                                                                                                            1. 14

                                                                                                              I agree that Windows is a pain for Linux development — but if one wants to do Linux development, why not … just use Linux? I’ve been using it for two decades now, and I would never willingly switch to Windows or macOS.

                                                                                                              I have a desktop which is finely tuned to exactly the way I work, which enhances my efficiency and productivity, and is fun. Isn’t that the goal?

                                                                                                              1. 5

                                                                                                                I’ve been running Linux for decades and for the most part I couldn’t imagine using anything else. I’m very comfortable with all of the tools available and the highly configurable desktops. The main pain point for me these days is that the more popular desktop environments handle hotplugging of peripherals extremely poorly.

                                                                                                                My main workstation is a laptop with a hardware dock. When undocked, it’s a regular laptop. When docked, the laptop sees (at least) another screen, another mouse, another keyboard, and sometimes a few other things like USB sound cards and scanners. Modern DEs handle this poorly, and every time I dock the thing (which can be multiple times per day), I have to spend up to 30 seconds fixing the display layout, window placement, keyboard repeat rate, or audio configuration. I suspect Mac and Windows do better with this but wouldn’t know. I know it can work because a decade ago, GNOME 2 had this all figured out. (Unfortunately, its successor MATE has other issues.)

                                                                                                                I can’t imagine the pain I will experience when I have to switch to a USB 3 or Thunderchicken dock because that’s what all laptops seem to be moving to.

                                                                                                                1. 1

                                                                                                                  The post is from DHH, the creator of Ruby on Rails. He doesn’t want to do Linux development, he wants to work on a Rails application. Using the *nix toolset is just a proxy, because he heard that this works best on Windows using WSL.

                                                                                                                1. 7

                                                                                                                  Signal: It straight up sucks on Android.

                                                                                                                  That’s completely not my experience. It does everything I want or know I need from a messaging client.

                                                                                                                  I completely agree regarding the lack of federation though.

                                                                                                                  1. 3

                                                                                                                    I completely agree regarding the lack of federation though.

                                                                                                                    That said, IRC is no better in this regard; I can’t just federate with Rizon or freenode, since IRC is not set up for a distributed system of untrusted server instances.

                                                                                                                    1. 1

                                                                                                                      IRC was originally a single network, but an incident occurred because the net was too trusting…

                                                                                                                  1. 9

                                                                                                                    This is not a post complaining about how bad Slack/Skype/YourFavoriteMessenger™ is compared to IRC and that we should continue using IRC instead of them. I’m a pragmatic person and that battle is lost.

                                                                                                                    Well, of course it is with that kind of attitude…

                                                                                                                    https://p.hagelb.org/line.jpg

                                                                                                                    I’ve been using Slack thru https://github.com/wee-slack/wee-slack for a couple years now and while it’s significantly better than using it in a browser (or heaven forbid, Electron), it’s still nowhere near as nice as the chat clients I have in Emacs. Unfortunately bitlbee’s slack implementation still doesn’t support threads, which are used heavily at work, but when they fix that I’ll definitely give it a spin: https://github.com/dylex/slack-libpurple/issues/76

                                                                                                                    1. 4

                                                                                                                      Have you tried https://github.com/yuya373/emacs-slack? I’m giving it a trial run and so far so good.

                                                                                                                      1. 4

                                                                                                                        I tried it but could not get it to work reliably. I also tried using weechat.el as a better frontend to weechat and it was more usable but had issues keeping unread messages in sync. I should give it another shot.

                                                                                                                        1. 3

                                                                                                                          FWIW I have been using emacs-slack for several years now and am extremely happy with it. Not completely perfect, but … no Electron, no browser, no JavaScript. That’s close enough.

                                                                                                                      1. 3

                                                                                                                        So what should people have used instead?

                                                                                                                        1. 2

                                                                                                                          S-expressions for data encoding. Seriously.

                                                                                                                          1. 1

                                                                                                                            There’s plenty of standards to choose from:

                                                                                                                            https://en.wikipedia.org/wiki/Comparison_of_data-serialization_formats

                                                                                                                            1. 2

                                                                                                              The table in that article doesn’t list the dates when the formats were conceived, but from eyeballing it, many of them are newer than XML. So they were not an option at XML’s inception.

                                                                                                                            2. 1

                                                                                                              For structured data? JSON, CBOR, or MessagePack are probably fine. The author mentioned JSON, which is good enough most of the time.

                                                                                                                              1. 3

                                                                                                                                The author was saying that people made the wrong choice with xml right from the start. JSON (etc) weren’t invented then. My question was what those folks should have done.

                                                                                                                                1. 1
                                                                                                                                  1. 1

                                                                                                                                    How does netstrings handle Unicode data?

                                                                                                                                    1. 1

                                                                                                                                      Any string of 8-bit bytes may be encoded as [len]":"[string]",".

                                                                                                                                      It sounds like that means netstrings are an 8-bit safe transport for arbitrary binary data. That means that, while the netstring spec itself prescribes no encoding for its data payloads, it can transport UTF-8 with no problems.
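
To make that concrete, here is a minimal netstring round-trip (the function names are my own):

```python
def encode_netstring(data: bytes) -> bytes:
    # [len]":"[string]"," -- the length counts bytes, not characters.
    return str(len(data)).encode("ascii") + b":" + data + b","

def decode_netstring(buf: bytes) -> bytes:
    length, _, rest = buf.partition(b":")
    n = int(length)
    if rest[n:n + 1] != b",":
        raise ValueError("malformed netstring")
    return rest[:n]

msg = "héllo".encode("utf-8")     # 6 bytes for 5 characters
wire = encode_netstring(msg)      # b'6:h\xc3\xa9llo,'
assert decode_netstring(wire).decode("utf-8") == "héllo"
```

Because the length prefix is in bytes, the payload can be any encoding (or raw binary); the format never has to inspect it.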

                                                                                                                                  2. 1

                                                                                                                                    Oh. I have no clue.

                                                                                                                              1. 4

                                                                                                                                One of the major issues with XHTML in practice was its requirement for strict error handling, which it inherited from XML and which was generally observed by the browsers. If an XHTML web page didn’t validate, the browser showed you only an error message. As reported in other comments, most ‘XHTML’ web pages didn’t (and were saved from this draconian fate only because they were served in a way that caused browsers to interpret them as HTML instead of XHTML).

                                                                                                                                The direct problem with this is that it is both a terrible user experience and directed at the wrong person; it is directly punishing you (the browser user), while only the page authors and site owners have the power to correct the problem. If their entire site is broken, they may get punished indirectly by a traffic volume drop, but if it’s only some pages, well, you get screwed.

                                                                                                                                The indirect problem is the implications for this. Because the consequences of invalid XHTML are so severe, the W3C was essentially demanding that everyone change how they created web pages so that they only created valid XHTML. In a world where web pages are uncommon and mostly hand written, perhaps this looked viable. In a world where a very large number of web pages are dynamically generated on the fly, it is not. Major sites with dynamically generated pages were never going to rewrite their page generation systems just to produce assured-valid XHTML, when XHTML gave them essentially nothing in practice except a worse user experience for visitors if something ever went wrong. And even by the mid 00s, the web was far more like the latter than the former.

                                                                                                                                (How well people do even today at creating valid XML can be seen by observing how frequently Atom format syndication feeds in the wild are not fully valid XML. Every feed reader that wants to do a really good job of handling feeds does non-strict parsing.)
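
The strictness is easy to demonstrate: a non-validating XML parser must reject the HTML entity &copy;, since it is not among the five entities XML predefines. Using Python’s stdlib parser for illustration:

```python
import xml.etree.ElementTree as ET

# &amp; &lt; &gt; &apos; &quot; are the only entities XML predefines;
# &copy; is an HTML-ism, so a strict parser treats it as a fatal error.
try:
    ET.fromstring("<feed><title>&copy; 2009 Example</title></feed>")
    parsed_ok = True
except ET.ParseError:
    parsed_ok = False
```

An HTML tag-soup parser would render the © and move on; the XML parser is required to give up entirely.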

                                                                                                                                1. 5

                                                                                                                                  If an XHTML web page didn’t validate, the browser showed you only an error message. As reported in other comments, most ‘XHTML’ web pages didn’t (and were saved from this draconian fate only because they were served in a way that caused browsers to interpret them as HTML instead of XHTML).

                                                                                                                                  This is worth elaborating on a bit. People now mostly think of it in terms of obvious errors, like you forgot to close a tag or quote an attribute. But XHTML had some truly nasty hidden failure modes:

                                                                                                                                  • Using named character entities, like &copy; for a copyright symbol? You’re now at the mercy of whatever parses your site; a tag-soup HTML parser or a validating XML parser will load and understand the extra named entities in XHTML, but a non-validating XML parser isn’t required to and can error on you (and remember, every error is a fatal error in XML) for using any named entity other than the base five defined in XML itself.
                                                                                                                                  • Using inline JavaScript (which was common back then)? Well, the content of the script element is declared in the XHTML DTD as PCDATA. Which means you now have to wrap your JavaScript in an explicit CDATA block or else risk well-formedness errors if you use any characters with special meanings. You know, like that < in your for loop.
                                                                                                                                  • Oh, and speaking of JavaScript: the XHTML DOM is not the same as the HTML DOM. Methods you’re used to for manipulating the HTML DOM will not work in an XHTML document parsed as XHTML, and vice-versa. But you still have to support processing as HTML because not all browsers can handle XHTML-as-XML. Good luck!
                                                                                                                  • And while we’re on the subject of content types: did you know the Content-Type header can suddenly make your XHTML documents not be well-formed? Turns out, if you serve as text/xml and don’t also specify the charset in your Content-Type header, the consumer on the other end is required (per RFC 3023) to parse your document as US-ASCII. Even if your XML prolog declares UTF-8. Really. So better not have any bytes in your document outside the ASCII range or else you’ll get an error.

                                                                                                                                  And that’s just some of the stuff I still remember a decade and a half later. XHTML was a mess.
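
To illustrate the inline-script pitfall from the second bullet, a hypothetical snippet: without the CDATA wrapper, the `<` in the loop condition is a well-formedness error when the document is parsed as XML.

```html
<script type="text/javascript">
//<![CDATA[
for (var i = 0; i < 10; i++) {   /* '<' is safe only inside CDATA */
  document.title = "tick " + i;
}
//]]>
</script>
```

The `//` comments keep the same file working when a browser falls back to parsing it as plain HTML, where CDATA markers have no meaning.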

                                                                                                                                  1. 1

                                                                                                                                    XHTML DOM is not the same as the HTML DOM. Methods you’re used to for manipulating the HTML DOM

                                                                                                                    Personally, I think client-side mangling of the DOM was one of those places where the WWW truly jumped the shark. Then came client-side mangling and animation of CSS…

                                                                                                                                    Shudder.

                                                                                                                                  2. 3

                                                                                                                    I drank the Kool-Aid and really, really tried to do it well. I even used XSLT and XML serializers to generate proper markup. But even when I did everything right, it was undone by proxy servers that “optimized” markup or injected ads (those were dark times for HTTPS). First-party ads didn’t work with the XHTML DOM. Hardly anything worked.

                                                                                                                                    So in the end users were unhappy, stakeholders were unhappy, and I could have used simpler tools.

                                                                                                                                    1. 1

                                                                                                                                      it was undone by proxy servers that “optimized” markup or injected ads (those were dark times for HTTPS). First-party ads didn’t work with the XHTML DOM.

                                                                                                                      Well, to be honest, you weren’t serving up XHTML then, so you can’t blame XHTML for that.

                                                                                                                      If there was a flaw in the XHTML design, it was the inability to insert standalone sub-documents. That is, something like <img src="foo.png">: no matter what was inside there, your document rendered, maybe with a “broken image” icon, but your outer document rendered. What you needed for what you’re talking about is a <notMyShit src="…"> tag that would render whatever was at the other end of that URL in a hermetically sealed box, same as an image. And if the other shit was shit, a borked-doc icon would be fine.

                                                                                                                                      1. 1

                                                                                                                                        You mean an iframe?

                                                                                                                                        1. 1

                                                                                                                          Ok, my memory is fading about those dark days… I see it was available from HTML 4 / XHTML 1.0, so basically he had no excuse.

                                                                                                                          /u/kornel’s problems didn’t arise from XHTML; they arose from his service providers doing hideous things. So don’t blame XHTML.

                                                                                                                                          1. 2

                                                                                                                                            The blame that can be placed squarely on XHTML is, I think, that of being an unrealistic match for its ecosystem. Hideous behavior from service providers may have occasionally been part of the picture, but a small one compared to a lot of what’s been brought up in this thread.

                                                                                                                                            1. 2

                                                                                                                                              It’s clear from your other comments that you view the existence of any type of scriptable interface to an HTML or XHTML document as a mistake, but the simple fact is that it was already a baseline expected feature of the web platform, which consisted of:

                                                                                                                                              • Markup language for document authoring (HTML/XHTML)
                                                                                                                                              • Style language for document formatting (CSS)
                                                                                                                                              • An API for document manipulation (DOM)

                                                                                                                                              Ad networks, and many other things, already made use of the DOM API for the features they needed/wanted.

                                                                                                                                              And then XHTML came along, and when served as XHTML it had a DOM which was different from and incompatible with the HTML DOM, which meant it was difficult and complex to write third-party code which could be dropped into either an HTML document, an XHTML-served-as-HTML document, or an XHTML-served-as-XHTML document.

                                                                                                                                              1. 2

                                                                                                                                                Ad networks, and many other things, already made use of the DOM API for the features they needed/wanted.

                                                                                                                                                Yup. Ad networks as well as those features were never a thing I have needed or wanted….

                                                                                                                                                API for document manipulation

                                                                                                                                Never needed or wanted that, except as a rubber crutch given to a cripple to overcome the fact that HTML as a standard had completely stalled and stopped advancing on any front anybody wanted.

                                                                                                                                                difficult and complex to write third-party code which could be dropped into either an HTML document

                                                                                                                                                And the 3rd party code was written as a hideous kludge to overcome the stalled html standard.

                                                                                                                                It’s gobsmacking what they have achieved with, say, d3.js… but that is despite the limitations rather than because of them. When I look at the code for d3 and see the insane kludges and hacks they do, compared to other, better graphics APIs… I literally cry for the wasted time and resources.

                                                                                                                                          2. 1

                                                                                                                            I was serving application/xhtml+xml, but the evil proxies either thought they supported it or sniffed the content.

                                                                                                                                            HTML5 actually added <iframe srcdoc="">, but it’s underwhelming due to iframe’s frameness.

                                                                                                                                        2. 2

                                                                                                                                          In a world where web pages are uncommon and mostly hand written, perhaps this looked viable. In a world where a very large number of web pages are dynamically generated on the fly, it is not.

                                                                                                                                          On the contrary, I would expect that any dynamically-generated site should be able to quite easily generate valid XML, while any sufficiently-complex hand-written XML will likely have at least one validation error.

                                                                                                                                          If it’s really that difficult to generate well-formed XML … maybe we should have just dumped it and stuck with S-expressions?

                                                                                                                                          Correctness matters, particularly with computers which handle people’s sensitive information.

                                                                                                                                          1. 3

                                                                                                                                            People who hand write web pages do so in small volume, and can reasonably be pushed to run validators after they save or switch to an XHTML-aware editing mode or web page editor. Or at least that is or was the theory, and somewhat the practice of people who did produce valid XHTML.

                                                                                                                                            Software that produces HTML through templates, which is extremely common, must generally be majorly rewritten to restructure its generation process to always produce valid XHTML. At scale, text templating is not compatible with always valid XHTML; the chance for mistakes, both in markup and in character sets, is too great. You need software that simply doesn’t allow invalid XHTML to be created no matter what, and that means a complete change in template systems and template APIs. Even if you can get away without that, you likely need to do major rewrites inside the template engine itself. Major rewrites are not popular, especially when they get you nothing in practice.
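                                                                                                                                            A minimal sketch of the distinction drawn above, using Python's stdlib (the snippet and its names are illustrative, not from the thread): text templating pastes values in verbatim, while serializing a data structure escapes markup characters by construction, so invalid output simply can't be created.

```python
# Contrast: text templating vs. serialization of a data structure.
import xml.etree.ElementTree as ET

user_input = 'Tom & Jerry <3'

# Text templating: the value is substituted verbatim, leaving a bare
# "&" and "<" in the output -- not well-formed XML/XHTML.
templated = '<p class="note">{}</p>'.format(user_input)

# Serialization: the tree treats "&" and "<" as markup and escapes
# them on output, so the result is well-formed no matter the input.
p = ET.Element('p', attrib={'class': 'note'})
p.text = user_input
serialized = ET.tostring(p, encoding='unicode')

print(templated)   # <p class="note">Tom & Jerry <3</p>   (invalid)
print(serialized)  # <p class="note">Tom &amp; Jerry &lt;3</p>
```

                                                                                                                                            This is exactly the "complete change in template systems and template APIs" the comment describes: the safe path isn't a better template syntax, it's an API where the serializer owns the escaping.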

                                                                                                                                            1. 2

                                                                                                                                              Correctness matters

                                                                                                                                              It does, but the presentation layer is not the right place to enforce it.

                                                                                                                                            2. 1

                                                                                                                                              It was ahead of its time. I think strict validation will maybe be an option in… let’s say 2025 or 2030. We still have a long way to go before people consistently use software that produces HTML the same way as JSON—which is to say, via serialization of a data structure.

                                                                                                                                              We’re slowly, slowly getting there.

                                                                                                                                              1. 4

                                                                                                                                                I don’t think it’s ever going to happen for HTML, because there’s no benefit. At all.

                                                                                                                                                XML was meant to solve the problem of unspecified and wildly different error handling across HTML clients, but HTML5 solved it by specifying how to parse every kind of garbage instead (including exactly what happens on the 511th nested &lt;font&gt; tag).

                                                                                                                                                XML parsers were supposed to be simpler than parsers that handle all the garbage precisely, but we’ve paid that cost already. Now it’s actually easier to run the html5ever parser than to fiddle with DTD catalogs just to stop XML from choking on &nbsp;.
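                                                                                                                                                The &nbsp; point can be sketched with Python's stdlib parsers standing in for the contrast (the thread's html5ever is a Rust library; this is just an analogous illustration): a strict XML parser knows only the five built-in entities and fails without a DTD, while an HTML parser resolves &nbsp; from the fixed HTML entity table and never errors.

```python
# Strict XML parsing vs. lenient HTML parsing of the same fragment.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

doc = '<p>a&nbsp;b</p>'

# XML: &nbsp; is an undefined entity without a DTD declaring it,
# so parsing fails outright.
try:
    ET.fromstring(doc)
except ET.ParseError as err:
    print('XML parser choked:', err)

# HTML: the parser resolves &nbsp; to U+00A0 from the spec's entity
# table; every input, however broken, has a defined parse.
class TextCollector(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.text = []

    def handle_data(self, data):
        self.text.append(data)

h = TextCollector()
h.feed(doc)
h.close()
print('HTML parser got:', repr(''.join(h.text)))  # 'a\xa0b'
```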

                                                                                                                                                We have some validation in some template engines and in JSX, but that’s where it should be: in developer tooling, not in browsers.