1. 90
    1. 16

      I think Drew would be the first one to be glad for that statement to be proven wrong. :) Currently he’s also implementing another of those “impossible” projects: a new OS.

      Also I think the “impossible”-ness was referring to a mass appeal product (either the browser, or the OS) that can compete on an equal footing with the Chrome/Firefox behemoths.

      A small team with limited resources will probably lose steam (as Kling points out in TFA) when faced with the massive amount of things that a browser needs to do. Building a browser for a niche group of enthusiasts who are willing to accept that, even though the Discord login page loads, they can’t operate their bank’s website is a smaller endeavour, and I imagine everyone is rooting for them to succeed at it.

      1. 22

        I think a big part of it is the 90% problem. You can probably make a browser that works for most sites with acceptable performance but for every user there will be one critical thing that doesn’t work. This has been a problem for Firefox for a long time.

        1. 4

          Yes, although it might not affect Ladybird strongly.

          Firefox is supposed to be a browser for everyone, which means that everyone’s single broken site is Mozilla’s problem. Ladybird is a browser for the subset of people who are attracted to SerenityOS. A smaller set of people, and I assume it’s composed of people who won’t complain about the browser when they run into a problematic site.

          1. 6

            Another new from-scratch browser is Flow https://www.ekioh.com/flow-browser/ though it is closed source. Their target market is set-top boxes or in-TV software, where the browser does not need to be compatible with the entire web, just a few select sites.

            I haven’t heard much about Flow recently; their last blog post was in 2021, so I hope they are still going strong.

          2. 3

            I’m not a SerenityOS user but would love to have an alternative to Chrome derivatives once Firefox finishes its slide into obscurity.

          3. 1

            Ladybird seems to have broader scope though. The push to get LibWeb running and capable on Linux was strong from when it was first shown off, with enough pressure that the author took the time to make a Qt chrome for the engine. But it’s still a similar audience, in that it’s the same crowd that won’t get mad if one site is broken.

      2. 10

        There are issues though. You may be able to implement large parts, and then people can use large parts of the web.

        But then you hit a hard wall with DRM, so you’ll have things like Netflix that you simply won’t support, because there’s a gatekeeper for a binary, and since they don’t even support some reasonably big browsers, you’re out of luck. That part of the web is not free and open, but it has become a huge part of it in terms of usage.

        So that’s where you’ll be stuck even when you implement all the specs.

        I also don’t think this issue will become smaller. My main browser is configured not to use DRM even though it’s just a click away. There’s an upwards trend, largely spreading due to the shift towards outsourced streaming solutions.

        If other things (software, games, etc.) end up being streamed as well, I think it may get worse.

        And then you have other topics. After some time where devs simply followed standards, we now seem to be drifting back into a situation where websites will tell you to use Chrome and refuse to work on other browsers even when they’d be capable.

        I think the project of making a new browser is great, but I think there needs to be a push away from the many developments where, despite standards, there is effectively only one browser.

        This time I think it would be even harder to get out of it. Google and others used to have incentives to help break up the market share, and they certainly profited a lot in the long run. I don’t see such incentives anymore. At least not right now.

        1. 1

          If other things (software, games, etc.) end up being streamed as well, I think it may get worse.

          This is only true if consumers of the content are willing to only operate inside of those ecosystems. If consumers want to watch something without DRM, then they can visit a streaming site that does not require DRM. This may be viewed as piracy on the side of the conglomerate or free access on the side of the consumer.

          Just because corpos tell you that only their world is safe and legal doesn’t make it true.

          Corpos will only take action if they lose enough money, otherwise there will always be a freer solution even if it is only known by a minority.

        2. 1

          I’m not familiar with what you mean for Netflix DRM that is somehow tied to browser tech. Could you be more specific?

          because there’s a gatekeeper for a binary, and since they don’t even support some reasonably big browsers, you’re out of luck.

          What does this mean? What does binary mean in the context of web apps?

          1. 9

            If you want to use Netflix or others, you have to use DRM. There is a standard for the interface, but the implementations are closed-source binaries.

            Currently the biggest implementation is Widevine. It’s pretty much the standard way that video streaming platforms implement DRM these days.

            Widevine is made by Google. So they are the gatekeepers, and they don’t, for example, build for Brave, Electron applications, and many, many other browsers, operating systems, architectures, etc. out there.
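
            To make the split concrete: the standard interface here is Encrypted Media Extensions, which any browser can implement, but the call below only does anything useful if a closed-source CDM binary such as Widevine actually ships with the browser. This is just a minimal sketch of how a site might feature-detect Widevine; the exact configuration (codecs, robustness levels) varies by service and is assumed here.

            // Sketch: feature-detecting a Widevine CDM through the standard EME entry point.
            // The API (navigator.requestMediaKeySystemAccess) is part of the open spec;
            // whether the promise resolves depends on a closed-source CDM being present.
            const config = [{
              initDataTypes: ["cenc"], // common-encryption initialization data
              videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
              audioCapabilities: [{ contentType: 'audio/mp4; codecs="mp4a.40.2"' }],
            }];

            navigator.requestMediaKeySystemAccess("com.widevine.alpha", config)
              .then((access) => console.log("Widevine available:", access.keySystem))
              .catch(() => console.log("No Widevine CDM; DRM-only sites will not play"));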

            1. 1

              fair enough, thanks for sharing

    2. 14

      Remember that KHTML started as an independent hobby project to make a new web browser. In part because it was very cleanly architected, it became the basis of Safari and Chrome, even though Gecko was already mature and open source at that time.

      1. 23

        and that’s why you should use GPLv3 for your hobby projects, folks

        1. 10

          Also as far as I know Apple and Google never hired any of the KHTML folks. So even if your hobby project is wildly successful, don’t expect it to lead to a job!

          1. 7

            Yeah, Creative Selection by Ken Kocienda goes into detail as to how KHTML turned into WebKit. Basically they hired a guy to make Safari, he went on the internet, found Konqueror, and decided to steal it.

            1. 6

              The narrative that companies project around their use of open source is so much different from the reality. Marketing is truly incredible.

              Safari/WebKit was pivotal to the value of the iPhone early on. Imagine dedicating thousands of hours of your life to a project for the purpose of lifting up the community, just to have some private corporation rip off your work, literally sell hundreds of billions of dollars of product based in large part on the value of that work, and then give neither you nor the community anything in return. Well, actually, lots of open source developers would have no issue with that. It’s strange to me. Even monkeys, not even apes, demonstrate an instinct towards reciprocity.

              What’s funny to me is that if I were that guy working for Apple, I would have done exactly the same thing. The pressure to quickly put out useful products incentivizes economical engineering. It’s a systemic outcome. I value reciprocity, even if I don’t desire to gain, so I have to agree with @caleb, if I were the original authors, I would have chosen GPL. I’m happy to get cucumbers but if the other guy is getting grapes, that’s not fair.

          2. 1

            You’re right, in the sense that the job it led to was neither at Apple nor at Google, and that getting a job like that is far from automatic.

            FWIW he took this picture on the weekend after the job interviews. I took him and another new hire for a stroll in the mountains.

        2. 4

          So that they can languish in obscurity with no developers and die when you get bored of them?

          1. 9

            for KHTML, that would have been preferable compared to what happened

            1. 6

              You mean “better that no one use my code than a fork become more popular”?

              1. 6

                better that no one use my code than a fork be used to further the degradation of the open web

                1. 4

                  by adding all the features that make web apps actually possible?

                  by making the web actually meaningfully usable on mobile phones?

                  by actually having the resources to make the various web specs actually usable for the purpose of implementing a browser?

                  1. 1

                    I don’t understand the last one, but yes to the first two. And more than that, doing it in a way that locks out independent developers due to the tremendous and unnecessary complexity.

                    1. 3

                      I don’t understand the last one

                      Writing a fully compatible browser engine in the era of khtml was not feasible either, and in many respects was harder.

                      As this article actually notes, the web specs of the era were incomplete and frequently outright inaccurate.

                      One of the big parts of working on webkit for many years was investing the time and effort into actually fixing those shortcomings. Without that work, developing a new engine that actually worked was not really feasible. To put this in perspective:

                      • When WebKit was first developed, khtml could not render yahoo.com correctly. A lot of people overstate the power of khtml. While it was certainly better than other “browser engines”, it was a long way from gecko, trident, or even presto.

                      • Actually fixing page and site rendering required mammoth amounts of work: the specifications were close to useless for everything other than SVG, and that was largely because the lack of support for SVG meant there wasn’t a huge amount of existing content to hit the ambiguities in the spec. A rendering issue on a site generally meant working out what exact piece of content was being rendered differently, then spending weeks trying to reverse engineer what the actual semantics of that were in the existing engines.

                      • You get similar issues with the DOM and with JS (where the language specification failed to define everything, or defined things in a way I might call “aspirational”)

                      The whole point of the old “html5”/whatwg (and the parallel es3.1+es5) process that a lot of people then complained about was the recognition that the “standards” were not remotely good enough to actually build a browser. Prior to the huge amount of work done by paid engineers from Apple, Mozilla, Google, Opera, and Microsoft you simply could not implement a browser just by following the specs.

                      doing it in a way that locks out independent developers due to the tremendous and unnecessary complexity.

                      I do not understand what you’re talking about here. Are you saying that browsers/“the open web” should have stopped gaining any features a decade or two ago?

                      1. 1

                        That background on specifications is interesting, but I doubt that the tradeoff for increased complexity has made it easier to implement a browser. According to @ddevault, W3C specifications now total over 100 million words [0].

                        doing it in a way that locks out independent developers due to the tremendous and unnecessary complexity.

                        I do not understand what you’re talking about here. Are you saying that browsers/“the open web” should have stopped gaining any features a decade or two ago?

                        Safari and Chrome added features for mobile browsing and web apps in an unnecessarily complex way. KHTML enabled them to do it more quickly than they otherwise would have. Hopefully that clarifies my previous comment?

                        [0] https://drewdevault.com/2020/03/18/Reckless-limitless-scope

                        1. 3

                          That background on specifications is interesting, but I doubt that the tradeoff for increased complexity has made it easier to implement a browser.

                          Um, no, prior to the “let’s make the browser specs correct and complete” work, building a browser was vastly more difficult. Fixing the specs was a significant amount of work, and why would we have done that if it was not beneficial?

                          There are two reasons for the increased “complexity” of modern specs:

                          Firstly, word count is being counted as complexity. This is plainly and obviously false. Modern web specs are complete and precise, and leave no room for ambiguity. That necessarily makes them significantly more verbose. The old ecmascript (pre-3.1) said something along the following lines for for-in enumeration in ES3:

                          ‘Get the name of the next property of Result(3) that doesn’t have the DontEnum attribute. If there is no such property, go to step 14.’

                          In the current ES specification, which actually contains the information required to implement a JS engine that actually works, this is:

                          a. If exprValue is either undefined or null, then
                              i. Return Completion Record { [[Type]]: break, [[Value]]: empty, [[Target]]: empty }.
                          b. Let obj be ! ToObject(exprValue).
                          c. Let iterator be EnumerateObjectProperties(obj).
                          d. Let nextMethod be ! GetV(iterator, "next").
                          e. Return the Iterator Record { [[Iterator]]: iterator, [[NextMethod]]: nextMethod, [[Done]]: false }.
                          

                          Which references EnumerateObjectProperties (https://tc39.es/ecma262/#sec-enumerate-object-properties) which is multiple paragraphs of text.

                          Note that this is just the head in the modern specification; the body logic for a for-in includes the logic required to correctly handle subject mutation and whatnot, which again the old “simple” specification did not mention. I had to spend huge amounts of time working out/reverse engineering what SpiderMonkey and JScript did for for-in, as the specification failed to actually describe what was necessary. Because that wasn’t in the specification, KJS didn’t do anything remotely correctly - not because the KJS folk weren’t talented but because the spec was incomplete and they didn’t have the time or resources required to work out what actually needed to happen.
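
                          To make the “subject mutation” point concrete, here is a small sketch (my own illustration, not an example from either spec) of the kind of case the ES3 wording left undefined but EnumerateObjectProperties now pins down: properties deleted before they are visited must not be visited, while properties added during enumeration may or may not be.

                          // Sketch: for-in behaviour while the enumerated object is mutated.
                          const obj = { a: 1, b: 2, c: 3 };

                          for (const key in obj) {
                            if (key === "a") {
                              delete obj.b; // a not-yet-visited property that is deleted must be skipped
                              obj.d = 4;    // a property added mid-enumeration may or may not be visited
                            }
                            console.log(key); // logs "a" then "c"; whether "d" appears is implementation-defined
                          }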

                          This is all for ecmascript which was honestly probably the best and most complete and “accurate” of the specs for the web. I want to be extremely clear here:

                          You could not use the old specs to implement a browser that was compatible with the web as it existed even at that time.

                          If you matched the specs exactly, your browser would simply be wrong in many cases, but even that word “exactly” breaks down: the specs did not provide enough information to create a compatible implementation.

                          The modern specifications are much much more verbose, because any ambiguity in the specification, or any gaps in possible behaviour are now understood to be spec errors, and so the specifications are necessarily “verbose”.

                          Safari and Chrome added features for mobile browsing and web apps in an unnecessarily complex way.

                          I am curious what you think would make them less complex? You keep making variations of this claim but you haven’t provided any examples, nor shown how such an example would be made less complex.

                          KHTML enabled them to do it more quickly than they otherwise would have.

                          I would put the time savings at not more than about a year, if that. [*]

                          I think the long term win for everyone came about from khtml being open source, and then from the blog post complaining about apple’s initial “here’s a tarball” approach to development, which led to the more reasonable webkit development model we have today. Even before the modern “let’s bundle a 400Mb browser with our 500kb app”/electron development model, having an actually usable embeddable web component was useful - that’s a significant part of why khtml had the architecture it had. The only other browser engine that had that was MSHTML - Gecko was integrated deeply into the UI of the browser (in both directions), as were the other engines like Presto and omniweb, but they were also closed source and the latter was only marginally better at web compatibility than khtml.

                          WebKit being based on KHTML forced it to be open source, and that meant QtWebKit, WebKitGTK, etc. were able to exist, giving KDE and Gnome robust embeddable web views that were production quality and could actually be used on the real web, which seems like quite a big win vs KHTML based components. One of the big architectural features of webkit is the way it integrates with the target platform, and by big I mean that at the time of the blink fork someone published something about how much code blink was able to remove by forking from webkit, except more or less all the code they removed was support for Gtk, Qt, Wx, etc. (and JSC of course) and the architecture that made supporting those platforms possible.

                          Anyway, I get that you apparently hate Apple, and presumably google, and that’s fair, they’re corporations and generally corporations aren’t your friend - but you are making a bunch of claims that aren’t really true. First off the specs aren’t more complex for no reason. There’s more text now because the specs are actually usable, and therefore have to actually specify everything rather than leaving out details. There are more specs, because people want browsers to do more. If the browser is going to do something new, there needs to be a specification for that thing, and so “more words”. The old specs that weren’t as large and complex weren’t meaningfully complete, so while there might be fewer features to implement than a modern browser, it was much harder to implement any of them. Finally, the general claim that open source or other developers didn’t get anything out of the webkit fork seems to me to be fairly clearly false: a large number of modern OSS projects use webkit, blink, and v8 (some use JSC, but JSC’s much unloved C API is super unwieldy and out of date :-/)

                          [*] KSVG2 was a much bigger saving, simply due to SVG’s spec actually being usable, so implementing SVG did not necessitate weeks or months of work determining the actual behaviour of every line in the specification. That said, if webkit hadn’t picked up SVG, then I don’t really see it being as common as it is now - webkit/safari was the first implementation of SVG in a mobile browser at all, and for an extended period I was the only browser representative on that committee, a lot of which consisted of me trying to stop terrible decisions or correct false assumptions about what was/was not possible. As with Hixie, I eventually left as a result of BS behavior (though not as bad as Hixie’s experience), but if not for Hixie and me, and WK shipping SVG, I suspect SVG would have been supplanted by another format.

                          1. 3

                            Actually, out of curiosity I did some bugzilla spelunking and found one of the first incredibly complex (:D) changes I ever made in webkit (you can even see my old student id in the changelog): https://bugs.webkit.org/show_bug.cgi?id=3539

                            What I think is particularly great is that this is a trivial bug, brought about entirely by the low quality of the specifications of the time. It even has Brendan Eich pop up and explicitly call it out as an error in the specification. If you follow the conversation you can see I was having to manually test the behaviour of other browsers, and come up with related test cases to see what was happening in those cases.

                            This is a trivial API (Array.join()) used in a core behaviour (Array.toString is defined in terms of join()), yet the spec didn’t specify exception propagation, and completely failed to specify how recursion should be handled.
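
                            For anyone who hasn’t run into those corners, here is a quick sketch (my own illustration, not taken from the bug) of the two behaviours in question: exception propagation out of an element’s toString, and self-referential arrays, which current engines handle by rendering the cyclic reference as an empty string.

                            // Sketch: the two under-specified corners of Array.prototype.join.

                            // 1. Exception propagation: an element whose toString throws.
                            const bad = [1, { toString() { throw new Error("boom"); } }, 3];
                            try {
                              bad.join(",");
                            } catch (e) {
                              console.log("join propagated the exception:", e.message); // "boom"
                            }

                            // 2. Recursion: an array that contains itself.
                            const cyc = [1, 2];
                            cyc.push(cyc);
                            console.log(cyc.join("-")); // "1-2-" in current engines: the cycle renders as ""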

                            Another fun thing: in looking for my ancient commits I found a bunch of the webkit svg work backported into ksvg and khtml, directly benefiting khtml.

                          2. 1

                            Um, no, prior to the “let’s make the browser specs correct and complete” work building a browser was vastly more difficult.

                            I don’t know the last time a browser having parity with established alternatives was developed from scratch, do you? It would certainly be illuminating if there were a case in the past 5 years and we could compare the man hours of development with that of an earlier example. You think it would be less for a modern browser?

                            Fixing the specs was a significant amount of work, and why would we do that if it was not beneficial.

                            I hope you didn’t think I was claiming that more rigorous specs for the same standard would not be beneficial.

                            I am curious what you think would make them less complex? You keep making variations of this claim but you haven’t provided any examples, nor shown how such an example would be made less complex.

                            I thought it was common knowledge that the web was unnecessarily complex. A priori it makes sense that rapid incremental development combined with the need to support every previous way of doing things would create a lot of complexity that could be avoided if development was slower and more carefully planned out.*

                            You seem to know a lot more about browser development than me, so perhaps you can be the one to offer specifics. What’s an example of user-facing functionality which is possible today but was not possible with the web standards of 10-15 years ago? Or what complex additions to web standards in the past 10 years were necessary to enable some useful functionality?

                            Your reply to your own comment mentions bugs due to under-specified exception propagation in Array.join(). Perhaps one way the web could be simpler would be if JavaScript used return values for error handling, similar to C or go, rather than exceptions.

                            Anyway, I get that you apparently hate Apple, and presumably google, and that’s fair, they’re corporations and generally corporations aren’t your friend - but you are making a bunch of claims that aren’t really true. First off the specs aren’t more complex for no reason. There’s more text now because the specs are actually usable, and therefore have to actually specify everything rather than leaving out details. There are more specs, because people want browsers to do more.

                            Just flagging that “people” can mean a lot of things, and if you want to argue that a majority of web users wanted the features to be added, that seems quite bold and requires some justification. Did web users want publishers to regain the ability to show them popups, which were effectively eliminated in the early 2000s by popup blockers?

                            Finally, the general claim that open source or other developers didn’t get anything out of the webkit fork seems to me to be fairly clearly false: a large number of modern OSS projects use webkit, blink, and v8 (some use JSC, but JSC’s much unloved C API is super unwieldy and out of date :-/)

                            I don’t think I made that claim, did I?

                            * - To clarify, I’m not saying KHTML being GPL would have made browser development more careful, just that it would have delayed the reckless development that has been happening and making things worse.

                            1. 3

                              Ok, at this point I’m not sure what you’re trying to say, so I’m going to try to make what I am saying very clear:

                              • It is exponentially easier to implement a browser using the modern specs than the old specs.

                              • There is more work to implement a browser now, because they have more features. Those features are all individually something people can reasonably implement.

                              What Kling is doing with ladybird would simply not be possible with the specs from 15 years ago. Again, at the time apple started working on webkit/khtml, it could not render yahoo.com correctly, despite that being one of the most popular sites at the time. The first 5-8 years of webkit and safari’s existence were essentially spent trying to reach consistent web compatibility with gecko, and that was a funded team. This is because pretty much every site that was not rendering correctly meant having to spend weeks or months working out what was actually happening - because the specs were incomplete or wrong - and then make changes, while trying to ensure that the change in behavior didn’t result in things going wrong somewhere else.

                              I cannot emphasize how hard it was to fix any rendering issue.

                              Meanwhile, in a few years with the modern specs, Kling has, almost on his own, got websites vastly more complex than anything khtml ever faced to render much more correctly than khtml ever did. That was only possible because of the vastly more precise wording in the modern specs, even though that makes the specs “more complex” by the addition of words.

                              I thought it was common knowledge that the web was unnecessarily complex.

                              No. It’s common knowledge that the web is complex. “Unnecessary” is a subjective value judgement. There are plenty of specs in the web I don’t care about, but that others do, and vice versa. Just because you don’t use something doesn’t mean someone else doesn’t.

                              A priori it makes sense that rapid incremental development combined with the need to support every previous way of doing things would create a lot of complexity that could be avoided if development was slower and more carefully planned out.

                              Modern features are carefully thought out, and often take months if not years to specify, specifically to avoid unnecessary complexity. The problem is many people will say “I want to do X, and that only requires Y”, and then spec authors and engine developers have to ask the ignored follow-up “what if you want to do related Z?” or “how does that interact with these other features?”. Because the specifications need to solve the problem of “how do we make X possible”, not “how do we do X only”. A lot of the pretty terrible misfeatures of the web come down to engines saying “we want to do X and that only requires Y” and then just doing that rather than considering any other factors.

                              The modern web is not “rapid incremental development”. It’s obviously incremental because we aren’t starting from scratch every few years. There aren’t versioned releases of the spec because what we learned from the past is that that doesn’t actually work, for many reasons, but at the most basic level: what does it mean if a single feature in the spec is complete and fully specified and a browser ships said feature? Should that require a versioned release of the spec? What if a different engine has a different set of fully implemented features? The result is the realization that version numbering a spec is meaningless.

                              What’s an example of user-facing functionality which is possible today but was not possible with the web standards of 10-15 years ago? Or what complex additions to web standards in the past 10 years were necessary to enable some useful functionality?

                              This is a subjective complaint - specifically preferring the “the web was good enough then why did we not just freeze it in time” model of development. The answer is a lot of things on the web aren’t necessary. But that applies to everything: did the web really need more than ncsa mosaic?

                              For example:

                              • CSS has many more features that make more complex design possible without being unmanageably complex (or just outright impossible). But all you need to do is be able to show text and images, so by your position none of that is necessary.

                              • CSS animations are totally unnecessary, and were achievable using JS before then - except that was incredibly slow and laggy on all but the highest-end PCs, and basically unworkable on phones, where the touch screen interface tightens the time constraints for human-perceptible latency.

                              • XHR was clearly not needed for a long time - I think it was only introduced in IE5? So that’s 99/2000, and it wasn’t copied by Mozilla for some time after that; the web was big enough at the time and no one needed it before then.

                              • Canvas could easily be handled server side - it was only invented because a number of core widgets in Tiger’s dashboard couldn’t leverage servers for rendering, but it was accidentally made available to the web at large and aggressively adopted by people, even though the API was kind of bad because it was meant to be an internal feature and so essentially just modeled the CoreGraphics API.

                              And I could do this for almost every feature introduced over the last 30 years. Are there some features I detest? Absolutely. Notifications are basically just for spam, but web apps want to be able to act like real apps, and apparently people want that. Personally I turn off notifications for non-web apps as well. I hate them with a passion, but clearly people do want those. Google constantly wants to give web access to arbitrary hardware because they literally want chrome to be the only thing running on people’s computers lest they do something that google can’t monetize (WebBluetooth, MIDI, ….)

                              There are a few specs that have been added that I don’t think you could reasonably pretend could be done on older specs:

                              • The media specifications - it’s reasonable that you would want JS to start/stop audio at least, and to get playback position for the UI. That means you need additional specification. Alas, DRM happened. OTOH if DRM had not happened, then you’d either require plugins or you’d have an unspecified feature that everyone implements. Still, DRM remains BS.

                              • The text event specs. JS, and so key events (and events in general), entered browsers as developed in the English-speaking world. As such they utterly failed to handle any language that isn’t one key press equals one character. They added “input” events at some point, which were still bad. The new text composition event spec is not something you could emulate. That said, in my opinion people should not be trying to handle text entry themselves, and should instead use the builtin text area or content editable (itself a new spec, but also awful to use if you were trying to make a word processor).

                              Your reply to your own comment mentions bugs due to under-specified exception propagation in Array.join(). Perhaps one way the web could be simpler would be if JavaScript used return values for error handling, similar to C or go, rather than exceptions.

                              No, the error is in the spec, not the language. In a language specification you have to say exactly how every step occurs, and that includes error/exception propagation. In fact the spec is entirely in terms of returned “Completions”; IIRC a completion consists of a value and a completion kind, which could be normal, return, break, continue, or exception. Any step that gets a completion should be testing the kind of the completion; in this case it should have been checking for an exception. If JS was instead error-value based (which it essentially was originally, and is part of why JS is so lenient) the only change would be checking for an error return instead of an exception.
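
                              As a toy sketch of that plumbing (the names here are mine, not the spec’s editorial conventions): every step hands back a record, and callers are expected to check for abrupt completions before carrying on.

                              // Toy model of spec-style Completion Records (illustrative only).
                              const NormalCompletion = (value) => ({ type: "normal", value });
                              const ThrowCompletion = (error) => ({ type: "throw", value: error });

                              // Every "step" returns a completion; callers must check for abrupt ones.
                              function step(fn) {
                                try { return NormalCompletion(fn()); }
                                catch (e) { return ThrowCompletion(e); }
                              }

                              const completion = step(() => String({ toString() { throw new Error("boom"); } }));
                              if (completion.type === "throw") {
                                // The bug class discussed above: forgetting this check silently drops the exception.
                                console.log("abrupt completion:", completion.value.message);
                              }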

                              Just flagging that “people” can mean a lot of things, and if you want to argue that a majority of web users wanted the features to be added, that seems quite bold and requires some justification.

                              No, web developers want features, because they believe those features will let them make things that users want, or they’ll make more complex things easy and/or possible. To be clear, engine developers don’t just say yes to every proposal, because sometimes webdevs want things that have tremendous privacy or security implications. But then if a feature does seem like it will be generally useful, engine devs and web devs work together to create an actual specification. Engine developers also propose features, generally based on making it easier, more efficient, or safer for web developers to do the things that engine developers see happening at scale.

                              Did web users want publishers to regain the ability to show them popups, which were effectively eliminated in the early 2000s by popup blockers?

                              I’m unsure what you’re talking about here? I’m guessing notifications or similar, in which case: yeah, actually people do want them. The problem you hit is the general webapp problem: a browser can’t distinguish a notification request from a site you actually want to be at from one you landed on through a misclick or an ad. But this applies to many of the more terrible features web developers propose, which boil down to “well a native app can do X safely, why can’t my website?” followed by “then just have a permission dialog”, which demonstrably does not work.

                              I don’t think I made that claim, did I?

                              I’m unsure what your underlying complaint is then? You said it was bad that apple (and eventually by proxy google) forked khtml, even though apple followed all the rules of open source, and even though apple’s work ended up back in khtml directly. The existence of safari and webkit was a significant driver of the “let’s make the specifications of the web actually usable” work, so if your argument is “apple should not have made a browser engine” then you’re saying “we should have stuck with the specifications of the 90s and early 2000s”, the ones that could not be used to implement a browser.

                              1. 2

                                I just want to say that I’ve really enjoyed your comments in this thread. They’ve added some context to the development of web standards I have not thought about before. This is this site at its best.

                              2. 1

                                Ok, at this point I’m not sure what you’re trying to say, so I’m going to try to make what I am saying very clear:

                                • It is exponentially easier to implement a browser using the modern specs than the old specs.

                                • There is more work to implement a browser now, because they have more features. Those features are all individually something people can reasonably implement.

                                This still lacks clarity to be honest. Are you conceding that implementing a modern browser would take more man hours, while taking “difficulty” to refer to the character of the work (reading specs vs. reverse engineering) rather than the sheer amount?

                                I’m unsure what your underlying complaint is then? You said it was bad that apple (and eventually by proxy google) forked khtml, even though apple followed all the rules of open source, and even though apple’s work ended up back in khtml directly.

                                I said that WebKit has been used to further the degradation of the open web, in part by adding features for web apps and mobile usage in a way that locks out independent developers due to unnecessary complexity. This is far from saying “open source or other developers didn’t get anything out of the webkit fork.” Can you see the difference?

                                The existence of safari and webkit was a significant driver of the “let’s make the specifications of the web actually usable” work, so if your argument is “apple should not have made a browser engine” then you’re saying “we should have stuck with the specifications of the 90s and early 2000s”, the ones that could not be used to implement a browser.

                                That’s a complete non-sequitur. Apple not forking KHTML would not have prevented them from making a browser engine, and it would not have prevented them or anyone else from improving the specifications. I have no idea why you would think that.

                                I can address some of the other things in your comment but I want to be sure I’m being understood.

                                1. 3

                                  This still lacks clarity to be honest. Are you conceding that implementing a modern browser would take more man hours, while taking “difficulty” to refer to the character of the work (reading specs vs. reverse engineering) rather than the sheer amount?

                                  There are more features to implement, but it is much much easier to implement those features. Hence it is easier to implement a browser. For more or less any feature you want to implement from 15 or 20 years ago, if you say “I’m going to open the spec and implement the feature” you will be unable to do so. For more or less every feature today, you can open the spec, essentially copy line for line, and you will have a correct implementation of that feature. This is exactly what Kling has been doing, and has been a demonstrably successful approach.

                                  I said that WebKit has been used to further the degradation of the open web, in part by adding features for web apps and mobile usage in a way that locks out independent developers due to unnecessary complexity.

                                  Again this reads like “the internet was fine and didn’t need any more features than it had 20 years ago” and “the only reason the internet has more features is because of webkit”. The first is more subjective, but I’d say it’s not true; the latter is simply false.

                                  That’s a complete non-sequitur. Apple not forking KHTML would not have prevented them from making a browser engine, and it would not have prevented them or anyone else from improving the specifications.

                                  In that case the former means apple forking/not-forking khtml has no impact on the growth of the web as a platform, and to the second: one of the biggest drivers for improving the specs that make up the web platform was apple and Mozilla trying to ensure compatibility between webkit and gecko. If you remove an apple-created browser engine, a huge amount of that effort disappears, and while, sure, nothing was stopping anyone else from doing that work, there was also nothing making anyone actually start it.

                                  Honestly this entire argument seems to be predicated on webkit being solely responsible for the amount of stuff in the modern web platform (which isn’t true), on that only having happened because webkit forked khtml (which isn’t true), and on the web platform having been fine 20 years ago with none of the features added since being of any value.

                                  The motivating factor for the vast majority of new specifications is not “mobile”, it is web developers trying to make more and more powerful apps. Historically driven by MS (that’s how we got XHR), and then by Google. Chrome was developed largely because it is in Google’s interest to have people in a browser as close to 100% of the time as possible, and so they needed browsers to be able to basically be an app platform. That is why Chrome exists. When they were starting they did spend time deciding between gecko and webkit, and the only reason they chose webkit was because the architecture was not so heavily tied to the embedding app. If webkit was not there, they would have just based chromium on top of gecko, only without the preceding 5-8 years of specification cleanup. They’ve also historically been pretty bad about trying to add random half-thought out features to the web without considering anything beyond their own immediate use case, something that only apple’s webkit team really successfully pushed back on. So again removing apple and webkit means worse specs.

                                  You keep simply claiming “large spec == bad”, or even somehow “anticompetitive”, but if you’re going to make claims like that you really need to have done more research than just appealing to things being “obvious” while dismissing the actual history and technical details involved.

                                  1. 1

                                    This still lacks clarity to be honest. Are you conceding that implementing a modern browser would take more man hours, while taking “difficulty” to refer to the character of the work (reading specs vs. reverse engineering) rather than the sheer amount?

                                    There are more features to implement, but it is much much easier to implement those features. Hence it is easier to implement a browser. For more or less any feature you want to implement from 15 or 20 years ago, if you say “I’m going to open the spec and implement the feature” you will be unable to do so. For more or less every feature today, you can open the spec, essentially copy line for line, and you will have a correct implementation of that feature. This is exactly what Kling has been doing, and has been a demonstrably successful approach.

                                    Should I infer an answer to my question from that?

                                    I said that WebKit has been used to further the degradation of the open web, in part by adding features for web apps and mobile usage in a way that locks out independent developers due to unnecessary complexity.

                                    Again this reads like “the internet was fine and didn’t need any more features than it had 20 years ago” and “the only reason the internet has more features is because of webkit”. The first is more subjective but I’d say is not true, the latter is simply false.

                                    If that’s how you read it, where did you get “open source or other developers didn’t get anything out of the webkit fork”?

                                    I will leave it as an exercise to the reader to identify the difference between what it “reads like” and what it actually says.

                                    That’s a complete non-sequitur. Apple not forking KHTML would not have prevented them from making a browser engine, and it would not have prevented them or anyone else from improving the specifications.

                                    In that case the former means apple forking/not-forking khtml has no impact on the growth of the web as a platform

                                    It does not. Usually when it gets to the point of deconstructing sentences word by word to identify the invention of a new strawman position, the opportunity for fruitful discussion has passed.

                                    At any rate it seems your disagreement is with someone you’ve invented in your head, so I invite you to carry on the argument there.

                                    1. 3

                                      At any rate it seems your disagreement is with someone you’ve invented in your head, so I invite you to carry on the argument there.

                                      Ok. I tried to answer repeatedly, and you just kept coming back round to claim that specs are bigger and harder to implement Because Apple. You showed zero interest in actually learning anything about what you were claiming. My apparent “straw men” were me trying to understand what you were actually trying to say, as I had answered your original claim of “apple forking khtml resulted in much anticompetitively complex specifications”, so I assumed I was misunderstanding.

                                      But yeah, let’s end this: as far as I can tell there is nothing I can say - no history, no information, nothing - that will sway you from that position, as you just dismiss anything I do say without any evidence to support your own position.

                                      1. 1

                                        And you showed zero interest in understanding my position. Nice use of quotes without an actual quote by the way. Perhaps there is some solace in knowing that I don’t actually hold any of the positions you have ascribed to me in your last few comments.

                  2. 1

                    All those things can be done with a GPLed code base.

                    Companies do invest in truly free software. Linux, for example.

                    1. 1

                      I’m unsure what you’re trying to say here? WebKit is BSD and LGPL licensed, has had significant investment by numerous companies, and is used by a variety of non-commercial products.

            2. 4

              As a direct result of the permissive license on KHTML, KDE was able to adopt a WebKit-based web view (built around QTWebKit) a few years after Apple picked up the code. The new web view, unlike KHTML, was able to correctly render most pages on the web at the time.

              But, sure, it would have been better if KHTML had remained a partial implementation used only by KDE. I’m sure Apple developing a proprietary web engine rather than working on an open source codebase would have been much better for the open web.

              1. 5

                Your comments and reasoning seem to imply that we should be grateful to Apple for taking the KHTML code and turning it into something much better. I don’t necessarily take that premise for granted. I think there is an argument to be made that a non-insignificant part of the intention behind Apple’s continuation of WebKit as an open source project was so that they could rally free (as in beer) open source labor and expertise towards their purposes. Or put another way, I don’t think it’s a given that Apple alone would have been able to create or sustain a high-quality proprietary web rendering engine.

                We have a pretty representative example of this: Microsoft was unable to maintain Internet Explorer alone. So much so that they abandoned their proprietary engine in favor of an open source engine. I don’t think it’s unreasonable to assume the same fate would have befallen Safari. Safari’s death is still often predicted.

                In many cases it may be that large corporations need open source ecosystems more than open source ecosystems need large corporations. I would guess that that is the situation in most cases when I consider the events of the past twenty years. If so, open source developers and communities shouldn’t be so quick to sell themselves short or sacrifice the terms on which they allow others to use their IP.

                1. 4

                  Your comments and reasoning seem to imply that we should be grateful to Apple for taking the KHTML code and turning it into something much better.

                  If the goal of releasing software under an open license is not to allow people to improve it so that the amount of useful code under F/OSS licenses increases, then what is the goal?

                  I think there is an argument to be made that a non-insignificant part of the intention behind Apple’s continuation of WebKit as an open source project was so that they could rally free (as in beer) open source labor and expertise towards their purposes

                  The goal for Apple was to share the cost of maintaining the engine with other people. They made no secret of this. A lot of the early contributions were from Nokia, who had a similar need for their Series 60 mobile platform. Apple maintained abstraction layers that allowed Nokia and others to plug in their own rendering back ends and widget sets. KDE benefitted from this because it made it easy to maintain the Qt support, Nokia benefited because they got a web engine for a fraction of the cost of writing one and cheaper than licensing one from Opera, Apple benefitted because other people were improving WebKit.

                  The vast majority of contributors to WebKit were not ‘free (as in beer) open source labor’, they were paid employees of Apple, Nokia, Google, and so on. If you looked at the WebKit repo around 2010, you’d have seen around a dozen different platform integrations, maintained by different groups.

                  If you are opposed to corporations contributing to F/OSS codebases for reasons of self interest, what economic model do you propose instead to fund F/OSS development?

                  We have a pretty representative example of this, Microsoft was unable to maintain Internet Explorer alone. So much so that they abandoned their proprietary engine in favor of an open source engine.

                  This is not what I’ve heard from folks on that team (disclaimer: I am a researcher at Microsoft, I don’t work on anything related to Windows or Edge). The reason that Microsoft invested in Blink was largely driven by Electron. It is used in Office and a bunch of third-party applications. The choices were:

                  • Invest in Blink / Chromium for Electron apps and invest in Edge’s HTML engine for the web browser.
                  • Invest in Blink / Chromium for Electron apps, the Windows WebView and the web browser.

                  There is no possible world in which the second is not cheaper than the first because it is a subset of the work.

                  Again, Microsoft invests in an open source project because it allows sharing costs with Google and other contributors. Some of these people may be volunteers doing it for fun, most are not.

                  I started working on LLVM because I wanted a modern Objective-C compiler for non-Apple platforms. The GCC codebase was an unmaintainable mess. Clang had Objective-C parsing support but not code generation support. I was able to add enough support for Objective-C code generation for GNU Objective-C runtimes that we could build all of GNUstep in a few weeks. I gained far more from Apple’s contributions to LLVM than Apple gained from mine. Should I be mad because Apple shipped a load of system frameworks compiled with code that I worked on and all I got back in exchange was the free use of a few tens of millions of dollars worth of code written by their engineers?

                  1. 1

                    I am primarily a pragmatist. I think both the restrictive and permissive F/OSS licenses are appropriate in different circumstances. In general, however, for individual hobbyist programmers who may not yet understand why they are doing what they are doing, I would recommend they use GPL, or even a more restrictive license, until they have a well-understood reason to use a more permissive license. Often you don’t know how your code will end up being used or by whom so it’s more prudent to reserve IP rights early on and release them later once you have the benefit of hindsight.

                    If the goal of releasing software under an open license is not to allow people to improve it so that the amount of useful code under F/OSS licenses increases, then what is the goal?

                    I think this is a classic misunderstanding between the “free software” and the “open source” camps. The first camp is more oriented towards social good and the second camp is oriented towards utilitarianism.

                    If you are opposed to corporations contributing to F/OSS codebases for reasons of self interest, what economic model do you propose instead to fund F/OSS development?

                    As above, I am not opposed in principle to corporations contributing to F/OSS codebases for reasons of self-interest. As far as economic models to fund F/OSS development go, that’s a separate question, but there are many answers. Linux is a GPL project; that has never stopped corporations from using it and contributing to it. That development is funded in two primary ways as far as I know: through fundraising and through paying developers to work on it.

                    The choices were:

                    There is a third choice: re-base Electron on top of the IE browser engine for their own products. That option was probably out of the question because it was probably cheaper for Microsoft to make use of an open source engine than to continue maintaining their own proprietary engine. This was part of my thesis above. It’s not a given that even large corporations are able to efficiently maintain their own proprietary web browsers / rendering engines; they are reliant upon F/OSS communities.

                    Should I be mad because Apple shipped a load of system frameworks compiled with code that I worked on and all I got back in exchange was the free use of a few tens of millions of dollars worth of code written by their engineers?

                    I think this comment was based on the premise that I was opposed in principle to corporations contributing to F/OSS codebases for reasons of self-interest. Since I’m not, I won’t address it.

                    My primary contribution to this discussion is only to counter the premise that F/OSS developers necessarily gain more than large corporations do when large corporations take their code and use it in proprietary products, or that F/OSS software would fade into obscurity without usage by large corporations in proprietary products. This premise is often what drives a F/OSS developer who is on the fence between choosing a permissive BSD-style license and a restrictive GPL-style license. The license does have some effect, but ultimately I think large corporations will find a way to use GPL software if they need to; even Apple used GCC for many years before LLVM came around. Under those circumstances, F/OSS developers who value the terms of the GPL should not choose against it just because they fear the license will drive contributors away or inhibit their opportunities. It may be the case that your work is more valuable to them than their use of your work is valuable to you.

                    1. 1

                      I think this is a classic misunderstanding between the “free software” and the “open source” camps. The first camp is more oriented towards social good and the second camp is oriented towards utilitarianism.

                      Note that the concept of “social good” with regard to the FSF is very narrow – it is: “contribute back the source code if modified”. Thanks to Freedom Zero, if a repressive regime were to use GPL’d software to regulate its gas chambers, and made sure to contribute back the source for other gas chamber operators, the FSF would be A-OK with it.

                      1. 3

                        Thanks to Freedom Zero, if a repressive regime were to use GPL’d software to regulate its gas chambers, and made sure to contribute back the source for other gas chamber operators, the FSF would be A-OK with it.

                        I get the point you’re trying to make but this example is excessive and most likely untrue to the point of making your point difficult to take seriously. I highly doubt the FSF would be A-OK with their software being used to commit murder. I think you mean to say that the GPL would in theory permit that usage.

                        In any case, the intention and spirit that motivated the creation of the GPL was the promotion of social good in general (just read RMS’s blog to get a sense of his general priorities). The specific terms were just their best tactic for accomplishing that in the context in which they were operating.

                  2. 1

                    This is not what I’ve heard from folks on that team (disclaimer: I am a researcher at Microsoft; I don’t work on anything related to Windows or Edge). Microsoft’s investment in Blink was largely driven by Electron, which is used in Office and a bunch of third-party applications.

                    That matches my understanding from the engine dev grapevine. MS was trying to deal with Electron apps being gigantic and wanted to be able to essentially have the engine built into the OS (a la WebKit.framework on Darwin), because otherwise the resource costs are huge: every app is essentially a full copy of Chrome and so has a huge dirty memory footprint. My understanding is that adopting Blink didn’t actually change anything in that regard, because Blink doesn’t have any real API stability guarantees, and people just bundle whichever version of Blink they built on anyway rather than trying to rely on a system framework.

              2. 3

                To clarify, I would call LGPL a weak copyleft license. If KHTML had a truly permissive license, WebKit could have been proprietary.

                My heuristic is that anything that would have slowed down the process by which the web became the intractable Skinner box it is today would have been for the good, and any change that would have given less help to the companies who drove that process probably would have helped.

                Being more rigorous, you can’t really predict what would have happened if KHTML were GPL rather than LGPL, and I grant it’s possible that it would have hurt. But my sense is that it probably would have helped slow things down.

                As a direct result of the permissive license on KHTML, KDE was able to adopt a WebKit-based web view (built around QtWebKit) a few years after Apple picked up the code. The new web view, unlike KHTML, was able to correctly render most pages on the web at the time.

                This overlooks that if Apple developed their own engine, it would have taken longer for them to implement the same features that ended up in WebKit, so it would have taken longer for the web to reach the same level of complexity, and maybe KDE could have kept pace with web standards via in-house development.

                On the other hand, maybe Apple would have adopted Gecko instead of KHTML, and who knows how that would affect things. Maybe web code bases would have converged even faster, leading to a faster transformation of the web into something like we have today, ultimately worsening our present situation.

                But, sure, it would have been better if KHTML had remained a partial implementation used only by KDE. I’m sure Apple developing a proprietary web engine rather than working on an open source codebase would have been much better for the open web.

                I honestly don’t know. I guess if WebKit were proprietary, Google may have adopted Gecko, which could have hurt in the long run as above. But if Gecko were also GPL, then Google would not have been able to develop Chrome as quickly, and making it proprietary would have made it harder for them to promote it to people who care about web openness.

                Anyway, the point is that it’s generally better to avoid helping companies whose interests are opposed to effective web or software openness. I will defend that heuristic against the one you appear to be using, that it’s better to have more free software even if it’s used by tech giants to impose costs and harms on the rest of us.

            3. 2

              Have the original KHTML developers said so? If not, we have no right to be indignant on their behalf at what Apple did. For all I know, the KHTML developers may be happy with the outcome, not least because they got to benefit from Apple’s work, even if not monetarily.

              1. 2

                It led to an acceleration of a problem that affects all of us, so yes, we have a right to be indignant.

          2. 3

            Let’s be realistic: that’s what is overwhelmingly likely to happen in any case.

          3. 1

            Plenty of GPLed projects are quite popular, and the great thing is that their users’ rights are respected, too.

        3. 2

          The patches Apple contributed back were a bit shitty. Giant diffs with useless commit messages, and often quite late.

          If Lars had used the GPLv3, what would Apple have done? My guess is that its patches would have looked like the patches various hardware vendors contribute back to the Linux kernel: giant diffs with useless commit messages, late.

          1. 1

            My guess is that they would have used Gecko.

            1. 1

              I worked at Trolltech at the time; we discussed this with them. They didn’t sound like fans of Gecko.

              1. 1

                ?

                1. 1

                  A colleague of mine spoke with them in person, then came back and told us that they were nice, and that they thought Konqueror was nice but had performance problems, compared to Gecko’s million lines of unfixable sloth. They also offered to contribute fixes back, but it was clear that it would happen a little late (it was very far down on their management’s priority list).

                  1. 1

                    That makes sense. But as the web gets more complex, the cost of starting anew also increases, so by the time Apple was deciding on a browser engine to adopt, it may have been impractical to start their own. If KHTML had been GPLv3, I feel like that would have been a deal breaker, so they might have been stuck with Gecko.

                    1. 1

                      Well, there was also another browser engine Apple might have bought, like how Apple bought EasySW to use CUPS: Presto was small, fast, and maintainable, and all the rights were owned by Opera.

                      1. 1

                        ah true

    3. 15

      A bit misleading to claim they are doing what others have claimed is impossible. In the case of @ddevault’s article, which they don’t even link to, he says: “Starting a bespoke browser engine with the intention of competing with Google or Mozilla is a fool’s errand.” And that’s not what they’re doing, as far as I can tell.

      People haven’t been saying “for years” that it’s impossible to implement something like NetSurf, which seems to be about where this is headed.

      1. 9

        Sorry to burst your critical bubble but he also says, without qualification:

        I conclude that it is impossible to build a new web browser.

        Which is obviously what the article is referencing.

        1. 6

          Still misleading for them to base their framing on a deliberate misreading, adopting a definition of “web browser” which is clearly different from the writer’s intent.

          1. 4

            It may have misled you, apparently, but I didn’t feel misled at all. It’s pretty obviously a common sentiment that building something like a web browser is a monumental task outside the reach of just about everyone, even without aiming for parity with existing ones. In fact, the authors mention that people regularly say this to them online; are you saying that they are lying about their anecdotal experience?

            1. 4

              I don’t follow your premise. Are you saying there’s somewhere in the article where the authors claim people told them building a web browser without aiming for parity with existing ones is a monumental task? I don’t see that, but I certainly wouldn’t think it was a lie if it were there.

              I was misled because the title, image, and first sentence made me think they were attempting what others claim is impossible, namely building a browser which can compete with Firefox and Chrome.

              1. 4

                It is the first two lines of the article…

                “How is the SerenityOS team making such good progress on building their Ladybird browser, when we’ve heard for years that it’s impossible”?

                I’ve seen this question a few times on sites like Hacker News and Reddit, and I thought I’d offer my own personal take on it.

                I feel like you’re just trolling at this point lol.

                1. 5

                  I distinguish between “impossible” and “a monumental task,” and the first sentence says nothing about “without aiming for parity with existing ones.” You really didn’t think they were referring to the same “impossible” task that Drew describes in the screenshotted article?

                  1. 5

                    This is just incredibly silly pedantry and I’ve really nothing to say except it feels like you’re playing a joke. At least it gave me a laugh.

                    1. 1

                      Yet it wasn’t silly for you to say I was mistaken to begin with? Or to make it seem like I was accusing them of lying about their personal experience?

                      1. 2

                        I’d say this entire thread is impossibly silly. Monumentally, even.

                        1. 2

                          Well then I hope you made yourself laugh with your first comment.

                          The difference between building a browser that can compete with Firefox/Chrome and one that can’t is not “pedantry.”

                          1. 1

                            Ok.

                2. 1

                  I think you didn’t properly read his original comment. It was a clarification that the original claim was that starting a new browser that can compete with Google or Mozilla is impossible. That claim never denied that something like Ladybird, in its current form, could exist. It is still to be seen whether Ladybird can reach the level of maturity and popularity of something like Chrome; only then would the original claim be invalidated.

      2. 6

        “Starting a bespoke browser engine with the intention of competing with Google or Mozilla is a fool’s errand.” And that’s not what they’re doing, as far as I can tell.

        Yes, they are. Drew is talking about the never-ending specs and compatibility. So could you please stop comparing Ladybird to NetSurf? They (the Ladybird developers) aim for correctness and compatibility: they pass the Acid tests [0] and adhere to the standards. And Drew [1] is saying:

        It is impossible to:

        • Implement the web correctly
        • Implement the web securely
        • Implement the web at all

        and if you are talking about projects like Dillo, NetSurf, and co., that is the case. They can’t render the modern web at all, and they don’t want to (as far as I can tell). NetSurf’s JavaScript implementation is more than lacking (they use Duktape, which is fine if you want minimal JavaScript but not enough for the web). Ladybird has its own rendering AND JavaScript engine, and it not only passes the Acid3 test but also gets 279 points (which is a great result) on html5test [2] (my build is a week old, so it could be a few points more or less).
        Ladybird is by far the most standards-compliant open source web browser that is not built on WebKit (Blink) or Gecko, and it gets better by the day (or week). It’s still slow and inefficient and has some problems with its implementation, but they are indeed trying to compete with Google and Mozilla on compatibility. The code they write is full of spec references, and in this short while they have made more progress than any other browser I know of.

        [0] https://de.wikipedia.org/wiki/Acid_(Browsertests)
        [1] https://drewdevault.com/2020/03/18/Reckless-limitless-scope.html
        [2] http://html5test.com/

        1. 1

          Thanks for the correction. This part of the thread gave me the impression that they had more modest ambitions. I was unable to find a definitive statement on whether they expect to maintain parity with Chrome/Firefox, e.g. whether they expect to be able to use Ladybird for online banking, etc., but the project does seem more ambitious than I thought, so maybe their statements are more foolhardy than misleading.

    4. 7

      As an aside, I would actually use a minimal HTML/CSS-only browser if it consumed orders of magnitude less memory than Chrome/Firefox. As it is, I already use Chrome in “no JavaScript” mode for the vast majority of my browsing (which is casual), since I don’t trust Chrome to execute JavaScript from random websites without being exploited. I have a separate profile to use JavaScript-heavy web applications from websites I trust, like my bank.

      As another aside, people often ask me what I do when I encounter a random untrusted website that requires JavaScript. The answer is that I click the back button. I find that I’m usually not missing much. It’s funny how many tech e-celebs use mandatory JavaScript on their literal blogs, the very ones that sites like this and Hacker News link to. It’s a microcosm of a serious problem that plagues consumer tech.

    5. 4

      It’s not impossible to build a custom web browser; it’s impossible to get a significant number of people to use your custom web browser, or to influence web “standards” in any way, shape, or form.

    6. 2

      Reading the headline I thought this was going to be about Arc.

    7. 2

      This approach is in part possible because the web itself is designed around graceful degradation, meaning that browsers can render web content while only supporting some of the features used by the site.

      I’m not buying it. This is the ideal, and it used to largely be true, but in the “Web Platform” era I don’t think it is. In my experience of the last half-decade or so, it’s much more common for sites to fail outright: not loading at all, showing partial or wrong content, or presenting non-functional interface elements. The only place I’ve seen graceful degradation is on sites that are specifically designed for it, often because they have legal accessibility requirements.
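      For what it’s worth, the ideal being described looks roughly like the sketch below (a hypothetical lazy-autoplay enhancement of my own, not something from the article): the baseline page works in any engine, and a missing API just means the upgrade is skipped. That pattern is exactly what you rarely see anymore.

      ```typescript
      // Hypothetical example of graceful degradation: the <video> elements already
      // work with their native controls, so an engine without IntersectionObserver
      // simply misses the autoplay-on-scroll nicety instead of breaking the page.
      function upgradeVideos(): void {
        if (!("IntersectionObserver" in window)) {
          return; // no observer support: baseline behaviour is still fine
        }
        const observer = new IntersectionObserver((entries) => {
          for (const entry of entries) {
            const video = entry.target as HTMLVideoElement;
            if (entry.isIntersecting) {
              // Autoplay may be blocked by policy; ignoring the rejection keeps
              // the degradation graceful here too.
              video.play().catch(() => {});
            } else {
              video.pause();
            }
          }
        });
        document.querySelectorAll("video").forEach((video) => observer.observe(video));
      }

      upgradeVideos();
      ```

      A partial engine can still render a page built in that spirit; the sites I’m complaining about instead put the entire page behind the script.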

      Go on the orange site and read any article about Safari. See how much absolute hate and vitriol web developers have for it, because its behavior differs in excruciatingly minor ways from Chrome, and it doesn’t implement proposed standards that are only implemented in Chrome. That’s a better perspective on what people mean when they say building a web browser is impossible.