Threads for syn-ack

    1. 8

      One thing I don’t see being discussed much is structural (“duck”) typing vs. nominal typing. In my opinion, leaving the actual identity and behavior of the value (object, if you will) to the value itself gives the best flexibility: you don’t need to know the actual type of the value in question; you only care about the aspects of the value relevant to the code that demands the interface. This doesn’t necessarily need to be about runtime interfaces, either: Rust’s traits are an excellent solution for giving values extra functionality, provided they satisfy the interface the trait expects. (Although I prefer interfaces that work at runtime, together with the runtime type information that would be available to an integrated programming environment running in the same context as the program being developed.)
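      For a concrete (if cross-language) illustration of the structural side: C++20 concepts work this way — any type with the right members satisfies the constraint, with no declaration tying the type to it. The Quacks/Duck/Robot names below are made up for the example:

      ```cpp
      #include <concepts>
      #include <iostream>
      #include <string>

      // Structural: any type whose quack() returns something convertible to
      // std::string satisfies Quacks -- no "implements" declaration anywhere.
      template <typename T>
      concept Quacks = requires(T t) {
          { t.quack() } -> std::convertible_to<std::string>;
      };

      struct Duck  { std::string quack() const { return "quack"; } };
      struct Robot { std::string quack() const { return "beep"; } };  // never heard of Quacks

      template <Quacks Q>
      void make_noise(const Q& q) { std::cout << q.quack() << '\n'; }

      int main() {
          make_noise(Duck{});   // prints "quack"
          make_noise(Robot{});  // prints "beep"
      }
      ```

      Rust traits sit on the nominal side of the same trade-off: having the right shape isn’t enough, the impl declaration is what opts a type in.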

      1. 2

        Rust’s traits are nominal, you have to explicitly declare that a type implements them.

    2. 1

      I’ve done this at my current job. We had to move fast in order to survive and I built a One Abstraction To Rule Them All™, similarly for our CRUD controllers (well, more like django-admin screens, but same idea). It has also gotten unwieldy because of the numerous edge cases I had to support due to business requirements (mainly, I had to graft API endpoints on top of Django’s admin system because we had to implement things in JavaScript, and oh boy does it suck).

      The main issue we had is that we had to work with django-admin’s existing assumptions. For instance, django-admin only accepts a well-known set of parameters on the “change list” page (object list page of your CRUD app). Adding any other parameters wipes them and replaces them with e=1, which is pretty annoying. Another big one is the fact that you relinquish control over your page layout when you let django-admin generate it for you. I worked around this by integrating django-crispy-forms to build the form layout in code but it’s very buggy due to it clashing with how django-admin thinks forms should work (it bypasses the AdminForm/AdminFieldset stuff entirely, which causes a lot of fun).

      Another one of the major pain-points of django-admin is that it lives in a very HTML-templatey world, which makes adding things like computed/dependent fields in your forms (which needs JS) very painful. Currently what we do is instantiate the JS code by reading all of the formset data off the page (deserializing as necessary) and then shoving all of that into a MobX object so we can have some semblance of sanity, but the one page where we do this accounts for over 50% of the bugs within the system, it’s so very painful.

      (One big thing I’m not getting into is Django forms. I can write a whole article about its major design problems, and probably will at some point.)

      Having learned my lessons from all of this, I plan to write a new version of our admin dashboard system, this time focused on composing things together and not basing it on django-admin but rather just solving our problems ourselves. I’m also thinking of making the server a simple API which describes which pages are available and how pages should be laid out to the client (which itself is an SPA). I believe the old system is an experience we had to go through to know what we need for our real system.

    3. 19

      More than the feature itself, it’s the air of mystery around it that baffles me. What are you protecting me against? On which sites? Why does Mozilla’s seal of approval bypass this, rather than giving the user the option to selectively enable those add-ons on specific sites? The permissions system already has a way to grant add-ons permissions to specific pages only, why is this additional layer necessary?

      1. 22

        One of the quirks of working in the open and of working in a large organization is that often the code and the communications prep get out of sync. If you decide you’re shipping on a fixed schedule and your communications event falls off-cycle then you have to decide whether to ship something and then explain it or explain something and then ship it.

        From the language of “quarantined domains” and browsing through the related bugzilla entries I get a very different sense than the author of this post has. It isn’t “quarantined extensions” and the user will have the ability to override so this really seems like a usability kludge rather than a market power grab. The author of this post shows the extension panel with the warning but doesn’t seem to link to the same page as that “Learn more” link. That page clearly reiterates that these are to improve security for users and the user will have the ultimate decision.

        If I had to guess, I would say that these are related to accidental disclosures of sensitive information from sites to extensions, and the confidentiality is to allow the affected sites to finalize their own messaging about the potential security issues for their users. Just a guess.

      2. 3

        My guess is that google/youtube said to mozilla: “look, you have to stop with these download extensions because copyright blablablabla or otherwise we lock you out of youtube and you will be irrelevant.” I am not saying that is what happened, but that is how it feels to me.

        1. 12

          Where is the youtube thing coming from? Youtube only appeared in the blog post because it was manually added by the author for demonstration purposes. Mozilla isn’t actually blocking any extensions on youtube. Right now the restricted domains list is empty, but if I had to guess I’d think the restricted domains would be comprised of mozilla owned domains or banking sites (since the list is ostensibly for security purposes).

          1. 6

            Google is experimenting with blocking adblockers on Youtube. Maybe the thinking is, instead of getting in a rat race with blockers, it’ll just get Firefox to turn them off.

            1. 7

              Firefox already has a tiny market share, making it less useful will only decrease that.

              Why not simply disable adblocking on Google properties in Chrome?

              1. 2

                because then people would flock to firefox

                1. 1

                  Nah, they’d switch to Edge. It’s Chromium based so it would be more familiar.

                  1. 1

                    yeah something like that

        2. 6

          Rather than threatening to lock them out of Youtube, a much more direct (and omnipresent) threat is that they could just cut Mozilla’s funding.

      3. 2

        Protecting against losing money or logins, on sites dealing with money or logins. Mozilla’s seal of approval is given as part of Mozilla actively reviewing those specific extensions and their changes. AFAIK the permissions system doesn’t have a way to grant permissions to all-except-specific-pages, hence requiring additional changes there.

    4. 3

      A prime example of nice c++. Operator/ is also used as a path separator in std::filesystem: https://en.cppreference.com/w/cpp/filesystem/path/operator_slash

      1. 8

        I disagree. Language features that hide control flow and make things non-obvious such as changing the meaning of the division operator lean towards abuse over use, in my opinion. It’s not like the result looks particularly good from a syntactic perspective, either (imo).

        1. 6

          This. It’s a cute trick, but not one that is particularly more useful than auto ymd {2021, January, 23};, especially since there are plenty of countries out there that use things other than / as a date separator.
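          For comparison, here’s roughly what the two spellings look like with the actual C++20 <chrono> types (both construct the same year_month_day); which one reads better is exactly the matter of taste at hand:

          ```cpp
          #include <cassert>
          #include <chrono>

          int main() {
              using namespace std::chrono;

              // The operator/ spelling the article shows off:
              year_month_day a = 2021y / January / 23;

              // The plain constructor spelling, no operator overloading tricks:
              year_month_day b{year{2021}, month{1}, day{23}};

              assert(a == b);  // same date either way
          }
          ```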

        2. 3

          How does it hide control flow? An operator is just a different way to write a function.

      2. 2

        It’s nice syntax, but a bit over engineered IMO. How often does one need to construct Gregorian calendar dates this way? Regular struct declaration would look nearly as good.

        The / operator for concatenating filesystem paths is great, though.
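        For anyone who hasn’t used it, the std::filesystem version looks like this; operator/ inserts a separator as needed, which is arguably a better fit for the “division” glyph than dates are:

        ```cpp
        #include <cassert>
        #include <filesystem>

        int main() {
            namespace fs = std::filesystem;

            // operator/ joins components, inserting a separator where needed.
            fs::path p = fs::path("usr") / "local" / "bin";
            assert(p.generic_string() == "usr/local/bin");

            // operator/= appends in place.
            fs::path q("usr");
            q /= "local";
            q /= "bin";
            assert(p == q);
        }
        ```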

    5. 3

      I believe this is because the current set of dynamically typed languages we have are at a local maximum, where the type information is perfectly clear and fully introspectable at runtime (even more so than statically-typed, compiled languages), but the language has no tools to introspect or take advantage of these types, because the development context is outside of the runtime where such types are unavailable. And for that matter, I don’t think it’s types that we actually wish for either; what we’re looking for is interfaces that allow us to ask questions to objects and receive responses, and concrete types happen to be one common use of such a construct.

      From these two points, I envision the next step in dynamically typed programming to be a system where the development workspace executes in the same context of the program being modified. The workspace allows you to modify and extend a program while it’s running, and the type information that’s available at runtime guides the programmer in creating interface definitions for both documentation and as guides while editing.

      Obviously, such a system needs to have proper isolation mechanisms in order to prevent the workspace from being affected by any errors that occur within the program. A process abstraction would be in order here, as well as a way to prevent untrusted programs from doing things that the user didn’t intend (a capability based system sounds nice).

      1. 5

        yes! So… Common Lisp and its image-based development?

        1. 1

          Either Lisp or Smalltalk (because the two paradigms are isomorphic). I’m working on a version of the Self programming language that I want to eventually develop to the point I’ve described.

    6. 12

      This was almost 10 months in the making, with 2 attempts and coming back from the PR being closed by Stalebot multiple times. So glad it’s finally done.

      1. 13

        I hate stale bots so much. They generate way too much noise and make it hard to work on larger, longer features.

        1. 3

          Exactly. Make use of sorting/filters/labels instead.

        2. 3

          One more argument for leaving GitHub! I haven’t seen stale bots on any of the FOSS code forges yet, and maybe we can build a culture that has better ways of dealing with the (entirely valid) problems of overwhelmed and demotivated maintainers.

        3. 1

          They are a necessary evil, especially for a project like Serenity. People love to invent or port a feature, or start some grand refactoring, and then they get tired of it and leave their work to rot forever. The Serenity stale bot has closed ~500 PRs over its lifetime that were left with no activity for over a month.

          1. 3

            In my opinion, pull requests that are not ready for review (i.e. drafts or failing CI checks) are fair game to be closed as stale. However, bug reports, feature requests, and PRs that pass CI checks should remain open until they get attention from a maintainer.

            1. 3

              I totally agree with that. We explicitly don’t enable stale bot for any issues, only PRs. There have been cases where PRs slipped through the cracks in SerenityOS and stale bot happened to close them. It’s just one click to bring them back and get them merged, so it always seemed like a reasonable trade-off to me. The community is generally watching for what stale bot reaps and have been good about raising anything that should be given attention.

              1. 1

                Then you have a good community and I’m glad that it works for you. A wasteland of open-but-stale stuff is no fun to wade through.

    7. 6

      Hear me out. I love the ZigSelf project, but I really dislike it when projects have taglines or names that reference the language they’re written in when that is especially irrelevant. An implementation of Self can stand alone without being “propped up” by the fact that it’s written in Zig.

      Likewise, a CLI tool does not need to specify “ls, but in rust!”

      Of course, who am I to tell someone what to name their project? I’m just a grump.

      As an aside, I really enjoyed this article, and subscribed to read the next ones in the series. It’s a really neat project.

      1. 3

        I agree! The main problem is that I really suck at finding a name for projects. The choice was between “ZigSelf” and “Untitled Project”. The name is very much temporary, however.

        1. 1

          I’m normally pretty good at coming up with names for things, but “self” is a very difficult word. My immediate thought is “selfish” which is fun because “it’s self, more or less” but it could easily have a negative connotation which might not be great.

    8. 4

      Reading about these influential systems, I always wonder what they lacked that popular languages had. A marketing department that sold to business needs instead of developers? Random chance? Committed consumers instead of argumentative academics?

      1. 12

        Smalltalk and Self both suffered from living in a VM that tried to deny the existence of the outside world. Successful languages benefit from ecosystems in other languages. Swift built on existing Objective-C libraries, C++ had seamless C interop, C# and VB could import COM components, and Python made it easy to wrap C components. The only recent exceptions to this are Java and JavaScript. The latter succeeded by being the only option in its domain. Java is a bit more interesting, since it failed in the Applet space where it was originally pushed but managed to displace COBOL for business-logic programming, largely due to IBM’s backing.

        1. 8

          While I agree fully on Self, Smalltalks in the mid-90s that fully integrated with their environment existed and were quite popular. VisualWorks, Smalltalk/X, VisualAge Smalltalk, and even things like Pocket Smalltalk (for Palm OS) all integrated well with their operating systems and did not try to live in their own bubble at all. Most of those products died once Java came on the scene, leaving the FOSS implementations like Squeak (which 100% were antisocial islands), but blaming Smalltalk for being isolationist in the mid-90s when Java came out just isn’t quite right.

        2. 4

          I think the secretly big thing about Java is a massive standard library including proper GUI things that … well they look ugly but you could build apps with it. There are so many things people get done in Java with very few-to-no dependencies.

        3. 2

          Indeed. It will probably never gain popularity in its true form but I think the “closed off VM” approach is actually really interesting for recreational programming purposes.

          1. 1

            WebAssembly is a closed off VM. The host determines which host functions will be available to a WebAssembly module.

      2. 6

        Self ran on (expensive, niche) Sun workstations, not ordinary personal computers. I’m guessing that its memory requirements (due to all the JITted code) would have made it infeasible to port in the early 90s, when RAM still cost ~$100/MB and almost no PCs had more than 4MB.

      3. 3

        It was not as easy to find out about new and cool programming languages in the late 80s. (source: was alive and programming at the time).

    9. 1

      Yeah… Serenity is fundamentally a C++ project and it shows. The C standard library itself uses C++’s runtime type information in order to link properly. It’s unfortunate, but it works, so it’s fine.

      I would love to hear more details about this. I’ve never seen RTTI used in linking before!

      1. 2

        I probably misspoke there and meant “needs C++ RTTI symbols in order to link”.

        1. 1

          I’m still not sure what this means. Which type_info structures does it need to exist in the linked code?

          1. 2

            I had to dig it up, but here’s the exact original error:

            ld.lld: error: undefined symbol: vtable for __cxxabiv1::__class_type_info
            >>> referenced by spawn.cpp
            >>>               spawn.cpp.o:(typeinfo for AK::Function<int ()>::CallableWrapperBase) in archive /bitplane/Serenity/Build/i686/Root/usr/lib/libc.a
            >>> the vtable symbol may be undefined because the class is missing its key function (see https://lld.llvm.org/missingkeyfunction)
            
            ld.lld: error: undefined symbol: vtable for __cxxabiv1::__si_class_type_info
            >>> referenced by spawn.cpp
            >>>               spawn.cpp.o:(typeinfo for AK::Function<int ()>::CallableWrapper<posix_spawn_file_actions_addchdir::'lambda'()>) in archive /bitplane/Serenity/Build/i686/Root/usr/lib/libc.a
            >>> referenced by spawn.cpp
            >>>               spawn.cpp.o:(typeinfo for AK::Function<int ()>::CallableWrapper<posix_spawn_file_actions_addfchdir::'lambda'()>) in archive /bitplane/Serenity/Build/i686/Root/usr/lib/libc.a
            >>> referenced by spawn.cpp
            >>>               spawn.cpp.o:(typeinfo for AK::Function<int ()>::CallableWrapper<posix_spawn_file_actions_addclose::'lambda'()>) in archive /bitplane/Serenity/Build/i686/Root/usr/lib/libc.a
            >>> referenced 2 more times
            >>> the vtable symbol may be undefined because the class is missing its key function (see https://lld.llvm.org/missingkeyfunction)
            
            1. 2

              Okay, so it looks like their libc needs to be linked to a C++ runtime library (libsupc++, libcxxrt, libc++abi)? If you’re static linking, you need to add this explicitly because *NIX static libraries aren’t really libraries, they’re just archives of .o files. That doesn’t mean that it requires RTTI for linking, it just means that it depends on a C++ runtime. I’m a bit surprised that they enable RTTI in libc, I would generally expect libc code to be compiled with -fno-rtti -fno-exceptions, but it is useful to have C++ thread-safe statics in libc, so you do want at least the __cxa_guard_* functions from the C++ runtime.

              1. 1

                It’s pretty all-in on C++ afaict, lambdas and everything. Not that those require RTTI (I don’t think), but I wouldn’t be surprised by an internal use of exceptions or RTTI.

                1. 2

                  SerenityOS does not use exceptions, but it does make use of RTTI via its use of AK::Function (similar to std::function) within LibC.

                  1. 2

                    What does that use RTTI for? Most implementations of std::function (all of the ones that I’ve read, but I haven’t read all of them) work fine without RTTI. They use a templated constructor that wraps the statically typed lambda in a class with a virtual invoke function that calls the lambda’s call operator, and either embeds the lambda (via a move or copy constructor) in the object or puts it in a separate heap allocation.
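                    A minimal sketch of that type-erasure pattern (simplified: always heap-allocates, no copy support or small-object optimization), showing that no RTTI is involved anywhere:

                    ```cpp
                    #include <cassert>
                    #include <memory>
                    #include <utility>

                    template <typename Sig> class Function;

                    template <typename R, typename... Args>
                    class Function<R(Args...)> {
                        // Virtual invoke instead of typeid/dynamic_cast: the callable's
                        // real type is captured by the templated constructor and erased.
                        struct CallableBase {
                            virtual ~CallableBase() = default;
                            virtual R invoke(Args... args) = 0;
                        };

                        template <typename F>
                        struct Callable final : CallableBase {
                            F f;
                            explicit Callable(F fn) : f(std::move(fn)) {}
                            R invoke(Args... args) override { return f(std::forward<Args>(args)...); }
                        };

                        std::unique_ptr<CallableBase> impl_;

                    public:
                        template <typename F>
                        Function(F f) : impl_(std::make_unique<Callable<F>>(std::move(f))) {}

                        R operator()(Args... args) { return impl_->invoke(std::forward<Args>(args)...); }
                    };

                    int main() {
                        int captured = 40;
                        Function<int(int)> add = [captured](int x) { return captured + x; };
                        assert(add(2) == 42);
                    }
                    ```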

                    The only things that use RTTI in C++ are exceptions (which dynamically map the thrown type to one of the caught types), dynamic_cast and a dynamic typeid statement. If you don’t use exceptions, then that just leaves dynamic_cast and typeid.

                    Most modern C++ codebases avoid dynamic_cast because it’s very slow and you can get better and faster code with an explicit virtual cast method for the classes that actually need it. The only place where this is difficult is diamond inheritance (moving from one branch to another) and dynamic cast is very slow there (and it’s usually a bad idea).
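                    The explicit-virtual-cast alternative looks roughly like this (Shape/Circle are made-up names for illustration): one ordinary virtual call instead of dynamic_cast’s class-hierarchy walk, and it compiles fine under -fno-rtti:

                    ```cpp
                    #include <cassert>

                    struct Circle;

                    struct Shape {
                        virtual ~Shape() = default;
                        // One virtual per downcast target that actually needs it,
                        // instead of a general-purpose dynamic_cast.
                        virtual Circle* as_circle() { return nullptr; }
                    };

                    struct Circle final : Shape {
                        double radius = 1.0;
                        Circle* as_circle() override { return this; }
                    };

                    struct Square final : Shape {};

                    int main() {
                        Circle c;
                        Square s;
                        Shape* a = &c;
                        Shape* b = &s;
                        assert(a->as_circle() != nullptr);  // cheap virtual dispatch
                        assert(b->as_circle() == nullptr);  // "cast" failure, no RTTI needed
                    }
                    ```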

                    There are also problems with typeid. It returns a std::type_info object, which has a name method that returns a char*. The contents of this string are implementation defined (though it must be unique), but the Itanium ABI specifies that it is the mangled type encoding. This means that you end up with some very large strings embedded in binaries. You often see 20% of the total binary size of a C++ library made up of type info strings, which is the main reason that you’d want to disable them. Personally, I’d love to see an ABI that replaced them with 64-bit integers formed from a cryptographic hash of the mangled name and emitted a map from integer value to string in a separate section that could be stripped in release builds.

              2. 1

                You’re right, I couldn’t explain myself very well.

    10. 5

      I’ve been wanting to write publicly for years, but haven’t gotten around to it until now.

      Independent of the content, I would greatly appreciate any feedback on writing style, voice, the included diagrams, etc!

      1. 2

        Congratulations! I think you make your point well.

        Since you requested some feedback: while reading the piece, I did notice that you’re sometimes spending more sentences than necessary, sometimes on discussing what you’re not discussing.

        • E.g. in the section “Design”, you might consider mostly deleting the first three paragraphs and the image, leaving only “when working with striped development, I often like to think of it as a three-step process: (…) begun to stabilize”.
        • E.g. in the section “Implementation”, you could simply delete “As this isn’t meant to be a post about how to write code, there isn’t all that much to say about the implementation phase.”

        Nonetheless, I do - again - think you make your point well, and I hope you continue to submit relevant posts here. ;-)

        1. 2

          Thank you! This is exactly the type of feedback I’m looking for. Greatly appreciated.

      2. 2

        FYI, on smaller screens, the top and bottom halves of the side navigation crash into each other and cause a layout issue.

        1. 1

          Interesting, thank you. I checked it on mobile but not a small display. I’ll have to fix this tonight!

          Edit: top half of the sidebar has been disabled until I can fix this, thanks again!

      3. 2

        I especially like the “Stripes are completed left to right” diagram. It makes it really clear what this is all about.

    11. 9

      Q: Why bother? You can’t make a new browser engine without billions of dollars and hundreds of staff.

      Sure you can. Don’t listen to armchair defeatists who never worked on a browser.

      Armchair defeatist here 👋 I don’t believe it takes “billions of dollars” to create a new basic browser engine (i.e. HTML, CSS, JS); after all, there are already multiple projects doing exactly that (e.g. Netsurf and Dillo). I’m unsure however that the newer technologies like WebGL, WebDRM, WASM, etc. can be implemented completely in a feasible timeframe. You’d wind up with a browser that’s nice for reading news sites and maybe watching Youtube, but anything more complex would be at least partially broken. Maybe someone more knowledgeable can correct me on this.

      1. 28

        You’d wind up with a browser that’s nice for reading news sites and maybe watching Youtube, but anything more complex would be at least partially broken.

        Sounds great to me.

        1. 22

          Worth noting that the SerenityOS browser has some support for JavaScript, WebAssembly, WebGL, websockets, and other “modern” Web features. They plan to eventually support web apps like Discord, since that’s where they chose to host their community (/me sighs).

          Wrote my thoughts over at https://seirdy.one/notes/2022/07/08/re-trying-real-websites-in-the-serenityos-browser/

        2. 7

          Sounds great to me.

          Indeed. Sites that qualify for 1MB Club would probably work well.

          Case in point: My own site is generated via Hugo. The markup is very very simple. I’ve added a splash of (ready-made) CSS, but that’s mostly to get a nice typeface and neat margins – the stylesheet is not at all required to read the text, and there’s no JavaScript in use.

          And I’m far from alone in building sites like this.

        3. 6

          While it may sound great to you, it’s going to kill adoption if a new browser doesn’t have sufficient parity. And given how much Google is driving the specs these days and forcing everyone else to play catch-up, I’m not really sure that independent browser engines can maintain meaningful parity.

          I also worry that the final Chrome/Chromium monoculture will arrive pretty soon regardless of what anyone does at this point.

          1. 20

            I highly doubt their goals or expectations are mass adoption. And if, like you say, there is no way to beat google anyways they might as well not worry about it and just make whatever they enjoy.

          2. 3

            I also worry that the final Chrome/Chromium monoculture will arrive pretty soon regardless of what anyone does at this point.

            ya me too. but efforts like these will do one of three things: 1) nothing at all to slow the march towards a chrome/chromium “death”, 2) delay it, 3) provide a viable alternative and a way out from it

            #2 and #3 seem highly unlikely, but I’d rather not give up all hope and accept #1 as our fate. But I’m one of those crazy people who would rather use/promote webkit, even if it’s not perfect, since its survival is absolutely necessary to reach #2 or #3 (even if #1 is much more likely at this point…). Ya, it’s a sad situation out there.

      2. 18

        The author worked on WebKit / Safari for a long time, so I’d trust his judgement a lot more than mine on the amount of work. I wonder how many of the older web technologies can be implemented in the newer ones. Firefox, for example, decided not to implement a native PDF renderer and instead built a PDF viewer in JavaScript (which had the added advantage that it was memory safe). It would be very interesting to see if you could implement the whole of the DOM in JavaScript, for example.

      3. 17

        You could have said the same thing about Linux. How is it possible for a hobbyist who had never had a real job to create an operating system that’s fast and portable? That’s for companies like Sun, IBM, and HP, which were huge Unix vendors at the time.

        I also found it funny that as recently as ~2009 there were knowledgeable people saying that Clang/LLVM were impossible. You could never re-do all the work that GCC had built up over decades.

      4. 10

        That’s completely alright. Ladybird is a system made by its developers, for its developers. It does not intend to compete with other web browsers, and that’s okay. It’s the epitome of the https://justforfunnoreally.dev/ mindset.

      5. 5

        I also have a defeatist stance here. Various streaming services such as Netflix and Co are a hard wall since the web was made non-free and gatekeepers like Widevine (Google) don’t even grant pretty successful browser projects entry.

        But then maybe it’s time to just leave that stuff behind us anyways.

      6. 4

        While this is certainly true, using a browser as a user agent for hypertext documents and not as a cross-platform application runtime is a worthy exercise on its own. IMO, of course.

      7. 2

        WebDRM is likely the killer because it’s stupid :-/

        But Kling spent many years working on webkit and khtml, so the layout and rendering of the bulk of html and css shouldn’t be a problem for him alone. Bigger issues I suspect will be xml, xslt, and xpath :-D

    12. 4

      They actually have rewritten the compiler from Rust to Jakt! “Rewriting it in Jakt” is now a thing.

      1. 6

        Self-hosting was a really big priority as soon as the Rust compiler was mature enough to self-host. They entered about a month of feature freeze. The moment where all tests passed was great.

    13. 10

      Git worktrees and git notes are two of the most slept-on features of Git. Incredibly useful, and easy to use iff you are using the command line. For some reason, neither is often supported by third-party tools, even relatively fully featured ones like Sourcetree (well, worktrees are kinda supported, but you need to make the worktree in the CLI, then add it as a repository in Sourcetree). Which is incredibly frustrating when trying to pitch a useful workflow to the rest of the team if that’s their primary mode of interaction with Git.

      1. 5

        I am curious: What do you use git notes for?

      2. 2

        GitHub once supported notes but they removed that feature. I never used them and I do not really see much point in using them.

      3. 2

        Magit seems to support worktrees really well, allowing you to list worktrees, create/delete them, and even allows you to create a new pull-request worktree if you have magit-forge setup (i.e. it will create a new worktree from a pull-request on the repository, extremely useful if you’re a maintainer).

      4. 1

        Any tutorials? It’s not obvious if I have to clone the whole repo into the new worktree or if I can have different parts of a repo at different commits without creating new directories.

    14. 14

      Terrible article. The author simply mashes together concepts while apparently having only a superficial understanding of any of them. The comparison of uxn to urbit is particularly hilarious, considering that they have totally different goals. Well, yes, both are virtual machines and that is where it ends.

      Simplicity and elegance have appeals beyond performance, like ease of understanding and implementation, like straightforward development tool design. Judging the performance of a VM that (from what I see) has never been intended to be high-speed based on some arbitrary micro benchmark also doesn’t really demonstrate a particularly thorough methodology (the pervasive use of “we” does make the article sound somewhat scientific, I do grant that…)

      I suggest the author invest some serious effort into studying C. Moore’s CPU designs, the true meaning of “simplicity”, the fact that it can be very liberating to understand a software system inside out, and that not everybody has the same goals when it comes to envisioning the ideal piece of software. The article just criticizes, which is easy, but doesn’t present anything beyond that.

      1. 13

        Terrible article. The author simply mashes together concepts while apparently having only a superficial understanding of any of them.

        I suggest the author invests some serious effort into studying C. Moore’s CPU designs, the true meaning of “simplicity”, the fact that it can be very liberating to understand a software system inside out

        I don’t exactly agree with the author’s criticism of uxn (probably because I see it purely as a fun project, and not a serious endeavor), but let’s not descend into personal attacks please.

        1. 15

          Thanks.

          Now, with that out of way - this is not at all personal, the author is simply misrepresenting or confused, because there are numerous claims that have no basis;

          It is claimed this assembler is like Forth, but it is not interactive, nor it have the ability to define new immediate words; calling and returning are explicit instructions. The uxntal language is merely an assembler for a stack machine.

          Must Forth be interactive? What sense do immediate words make in an assembler? Returning is an explicit instruction in Forth (EXIT, ;). That sentence suggests some wild claim has been made, but I can’t see where.

          Using software design techniques to reduce power usage, and to allow continued use of old computers is a good idea, but the uxn machine has quite the opposite effect, due to inefficient implementations and a poorly designed virtual machine, which does not lend itself to writing an efficient implementation easily.

          Again, I suggest studying Moore’s works, Koopman’s book and checking out the “Mill” to see that stacks can be very fast. The encoding scheme is possibly the simplest I’ve ever seen, the instruction set is fully orthogonal. A hardware implementation of the design would be orders of magnitude simpler than any other VM/CPU. Dynamic translation (which seems to be the author’s technique of choice) would be particularly straightforward. I see no poor design here.
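          To make the “straightforward to implement” point concrete, here is a toy stack machine (not uxn’s actual instruction set) whose entire dispatch loop fits in a few lines; a dynamic translator would map each case to a short native sequence in much the same way:

          ```cpp
          #include <cassert>
          #include <cstdint>
          #include <vector>

          // A toy stack machine (NOT uxn's real encoding) illustrating how little
          // machinery a small, orthogonal stack instruction set needs.
          enum Op : uint8_t { LIT, ADD, MUL, HALT };

          int run(const std::vector<uint8_t>& code) {
              std::vector<int> stack;
              for (size_t pc = 0; pc < code.size(); ++pc) {
                  switch (code[pc]) {
                  case LIT: stack.push_back(code[++pc]); break;
                  case ADD: { int b = stack.back(); stack.pop_back(); stack.back() += b; break; }
                  case MUL: { int b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
                  case HALT: return stack.back();
                  }
              }
              return stack.empty() ? 0 : stack.back();
          }

          int main() {
              // (2 + 3) * 7 = 35
              std::vector<uint8_t> prog = {LIT, 2, LIT, 3, ADD, LIT, 7, MUL, HALT};
              assert(run(prog) == 35);
          }
          ```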

          The uxn platform has been ported to other non-Unix-like systems, but it is still not self-hosting, which has been routinely ignored as a part of bootstrapping.

          This makes no sense. Why self-host a VM? “Routinely ignored”? What is he trying to say?

          After that the author discusses the performance of the uxn VM implementation, somehow assuming that it is the only metric important enough to warrant an assessment of the quality of uxn (the “disaster”).

          Vectorisation is also out of the question, because there are no compilers for uxn code, let alone vectorising compilers.

          What does the author expect here?

          We can only conclude that the provided instruction sizes are arbitrary, and not optimised for performance or portability, yet they are not suitable for many applications either.

          (I assume data sizes are meant here, not instruction sizes, as the latter are totally uniform.) Uxn is an 8/16-bit CPU model and supports the same data sizes as any historical CPU with a similar word size. Again, I get the impression the author is just trying very hard to find things to complain about.

          Next the author goes to great lengths to evaluate uxn assembly as a high level programming tool, naturally finding numerous flaws in the untyped nature of assembly (surprise!).

          a performant implementation of uxn requires much of the complexity of modern optimising compilers.

          The same could be said about the JVM, I guess.

          Getting to the end, I can only say this article is an overly strenuous attempt to find shortcomings of whatever nature, mixing design issues, implementation details, the author’s ideas about VM implementation, and security topics, at one moment taking uxn as a VM design, then as a language, then as a compiler target, then as a particular VM implementation, then as a general computing platform.

          I like writing compilers, I have written compilers that target uxn, and it is as good a target as any other (small) CPU (in fact, it is much easier than, say, the 6502). Claiming that “the design of uxn makes it unsuitable for personal computing, be it on new or old hardware” is simply false, as I can say from personal experience. This article is pure rambling, especially the end, where sentences like the following make me doubt whether the author is capable of the required mental detachment to discuss technical issues:

          Minimalist computing is theoretically about “more with less”, but rather than being provided with “more”, we are instead being guilt-tripped and told that any “more” is sinful: that it is going to cause the collapse of civilisation, that it is going to ruin the environment, that it increases the labour required by programmers, and so on. Yet it is precisely those minimalist devices which are committing these sins right now; the hypocritical Church of Minimalism calls us the sinners, while it harbours its most sinful priests, and gives them a promotion every so often.

          No, brother, they are not out there to get you. They just want simple systems, that’s all. Relax.

          As O’Keefe said about Prolog: “Elegance is not optional”. This also applies to CPU and VM design. You can write an uxn assembler in 20 lines of Forth. There you have a direct proof that simplicity and elegance have engineering implications in terms of maintenance, understandability and (a certain measure of) performance.
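The “tiny assembler” claim is easy to believe once you see the shape of the program. Here is a hedged sketch of the same idea in Python rather than Forth: a two-pass assembler for an invented stack-machine dialect. The opcode names and encodings below are made up for illustration; this is not real uxntal.

```python
# Toy two-pass assembler for an invented stack-machine dialect.
# Opcode names and byte encodings are illustrative only, not uxntal.
OPCODES = {"LIT": 0x01, "ADD": 0x02, "JMP": 0x03, "BRK": 0x00}

def assemble(source):
    labels, words = {}, source.split()
    # Pass 1: record label addresses (tokens ending in ':').
    pc = 0
    for w in words:
        if w.endswith(":"):
            labels[w[:-1]] = pc
        else:
            pc += 1  # every other token assembles to one byte
    # Pass 2: emit bytes, resolving label references.
    out = []
    for w in words:
        if w.endswith(":"):
            continue
        if w in OPCODES:
            out.append(OPCODES[w])
        elif w in labels:
            out.append(labels[w])
        else:
            out.append(int(w, 16) & 0xFF)  # hex literal byte
    return bytes(out)
```

Even with labels and literals, the whole thing is two loops over the token stream, which is the point: the simplicity of the encoding carries directly into the simplicity of the tooling.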

          1. 16

            I agree with you in the sense that doing something for fun is obviously allowed, but I feel like the criticism in the article is not that you shouldn’t build anything simple and minimalist for fun, but that the things we build are usually not as revolutionary as some may claim just because they’re simple. Now, if the author of uxn made no such claims then that’s fine; however, that doesn’t mean something cannot be criticized for its perceived flaws (whether you agree with the style and tone of the criticism or not).

            I also agree that the Church of Minimalism stuff is a bit over-the-top.

          2. 6

            FWIW I had exactly the same reaction to this article as you, and I haven’t even heard of any of these projects. The article seems like it is in bad faith.

            Minimalist computing is theoretically about “more with less”, but rather than being provided with “more”, we are instead being guilt-tripped and told that any “more” is sinful: that it is going to cause the collapse of civilisation, that it is going to ruin the environment, that it increases the labour required by programmers, and so on. Yet it is precisely those minimalist devices which are committing these sins right now; the hypocritical Church of Minimalism calls us the sinners, while it harbours its most sinful priests, and gives them a promotion every so often.

            This part in particular is so hyperbolic as to be absurd. Completely unnecessary. Still, I guess if your goal is to garner attention, hyperbole sells.

            1. 12

              This part in particular is so hyperbolic as to be absurd. Completely unnecessary. Still, I guess if your goal is to garner attention, hyperbole sells.

              I wouldn’t say so. I’ve had folks tell me on tech news aggregators that the only way to make computing ethical is for computing to be reimplemented on uxn stacks so that we can all understand our code, or else the code we use can be used for exploitation. Now this may not be the actual uxn project’s stance on the matter at all, but much like Rust seems to have a bit of a reputation of really pushy fans, I think it’s fair to say that uxn has attracted a fanbase that often pushes this narrative of “sinful computing”.

              1. 2

                Oh interesting, do you have any links? I’m intrigued by this insanity.

                Edit: though presumably this is a vocal minority, making this still quite a hyperbolic statement.

                1. 3

                  I did a light search and found nothing off-hand. I’ll DM you if I manage to find this since I don’t like naming and shaming in public.

                  Edit: And yeah I’m not saying this has been my experience with a majority at all. The folks I’ve heard talk about uxn have been mixed with most having fun with the architecture the same way folks seem to like writing PICO-8. It just has some… pushy folks involved also.

                  1. 7

                    I believe the only way to make computing ethical is to reinvent computing to do more with less. I also believe uxn is trying to reinvent computing (for a very specific use case) to do more with less. But those two statements still don’t add up to any claim that it’s the only way out, or even that it’s been shown to work in broader use cases.

                    Disclaimer: I’ve also tried to reinvent computing to do more with less. So I have a knife in this fight.

            2. 4

              Actually, we regularly get posts here on lobste.rs espousing exactly that sort of ideology. I think there are one or two trending right now. Perhaps you’ve hit on the right set of tag filters, so you never see them?

        2. 2

          “C. Moore…” as in Chuck Moore…

          1. 1

            The paste was cut off. Fixed.

      2. 13

        The comparison of uxn to urbit is particularly hilarious, considering that they have totally different goals.

        they both market themselves as “clean-slate computing stacks”, they both begin with a basic admission that no new OS will ever exist (so you have to host your OS on something else), they both are supported by a cult of personality, they both are obsessed with ‘simplicity’ to the point of losing pragmatic use and speed. I’d say they’re pretty similar!

        1. 3

          they both are supported by a cult of personality

          Strong disagree, from someone who’s been moderately involved with uxn community previously. Who is the cult leader in this scenario?

    15. 12

      There exists a formatting that has none of these problems. I’m of course talking about the self-evident pythonic/rustic formatting (which probably has many more names):

      Yes, and that style usually contributes to files being overly long and sparse. It is not a panacea, and many developers (including myself) don’t prefer that style when having more things on the screen is more valuable.

      I remember a time before clang-format: I would say that professional developers did at least as good of a job as clang-format to begin with.

      Hahahahaha. No. I also remember the time when other developers checked things in with their customized tab width and used spaces for alignment on top of tabs for indentation. I do not wish to go back to that time.

      In fact, in some ways better than any autoformatter could ever come up with, because the human knows best,

      I’m happy that you haven’t met some people I’ve worked with.

      Freedom of expression!

      What.

      If the purpose of automatic formatting is to avoid style disputes in code review, it doesn’t work, because too few people know the importance it gives to trailing comma – I have to nag people about it.

      Instead of being glad that it’s the only thing you have to fight over, you choose to complain because clang-format doesn’t solve everything and anything.

      I’ll give you point 3 in that it would be nice to give the option to have non-dependent syntax in clang-format for those who want it.

      If you haven’t noticed the trend, everything is wrapped in impenetrable all-encompassing dockerized CI-scripts that can’t just check a small change quickly.

      Sounds like a you issue. Our pre-commit linters at work finish in a couple seconds.

      I find your rant to be opinionated and not considerate of the other side of the aisle. By all means, feel free to fork clang-format to adjust it to your needs, but to complain without considering the environment which spawned clang-format (many different C++ projects with wildly different standards had to be appeased for adoption) is ignorant.

      1. 5

        I absolutely agree with the OP about clang-format being less than ideal; that being said, their take is insanely self-centered, and practically nothing in their rant is actually about clang-format.

        Some real issues I have with clang-format:

        1. Different versions of clang-format with the same config produce conflicting output. Thus, if you run both versions on the same code back to back, they can flip-flop between outputs. In many other auto-formatters the newer version is strictly a superset of the previous versions.
        2. Getting everyone running the exact same version is non-trivial. This is largely a problem with C++ and its overall tooling story, not clang-format itself. But, especially when compared to built-in formatters like go fmt and rustfmt, or self-contained virtualized toolchains on smaller projects like the JS/Node or Python ecosystems via docker/venv, it’s a pain.
        3. It can be insanely slow. Ideally your editor is auto-formatting on save, but we have not found that to work well with clang-format on Windows with VS2019 at least. It causes frequent very noticeable hangs in the seconds range compared to the built-in VS2019 code formatter. What’s weird though is it’s not consistent, you can write a bunch of code, save, clang-format runs and you hardly notice it, then ctrl+z to undo the format change, save again, and despite the exact same input as last time, clang-format will run for 2+ seconds. We do not have this issue with any other language + editor + formatter combo.
        4. How options inside .clang-format behave together can be surprising and confusing. Sometimes to get the output you want you almost need the inverse of the options you think are intuitive. It’s nice that it’s so configurable, but the usability could definitely be improved.
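Point 2 above can be mitigated by pinning a version and refusing to format with anything else. Here is a sketch of such a wrapper in Python; the pinned version string is a made-up example, and the only real behavior relied on is that `clang-format --version` prints a line of the form `clang-format version X.Y.Z`:

```python
import re
import subprocess
import sys

PINNED = "14.0.6"  # hypothetical version this project standardises on

def parse_version(version_output):
    """Extract "X.Y.Z" from `clang-format --version` output."""
    m = re.search(r"version (\d+\.\d+\.\d+)", version_output)
    return m.group(1) if m else None

def format_files(paths):
    """Refuse to format unless the pinned clang-format is on PATH."""
    out = subprocess.run(["clang-format", "--version"],
                         capture_output=True, text=True).stdout
    found = parse_version(out)
    if found != PINNED:
        sys.exit(f"expected clang-format {PINNED}, found {found}")
    subprocess.run(["clang-format", "-i", *paths], check=True)
```

Committing a wrapper like this next to the `.clang-format` file at least turns a silent Point-1 flip-flop into a loud error.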

        That being said, we’re on a brand new project at work now, and I’m very excited to have time to get clang-format working again. Our code reviews have gone to shit without it. Everyone wants to feel like they are contributing to reviews by pointing out /something/ if they can, but we’re missing real issues because the [legitimate] style issues are easy to spot, then those people feel accomplished, and move on. Not only are we frequently missing the forest for the trees, we’re also wasting time iterating reviews on style, not to mention the time put into styling before the reviews even go up.

        1. 1

          I agree on point 1. Point 2 can largely be solved with CI tooling. With GitHub Actions, you can allow actions to push to the PR branch and so it’s fairly easy to set up a CI job that just applies clang-format and adds an extra commit if it generates any changes.

          Point 3 is surprising though. I mostly use clang-format via ALE in Vim and it’s more or less instant, even on some of the 10 KLoC files in the LLVM repo on a decent machine. On a slow machine (low-power Pentium Silver core) it takes a couple of seconds on a small project, but that machine is a lot slower than my 10-year-old laptop.

          1. 1

            Point 3 could be specific to Windows or VS2019 which is why I called them out so much :) We’re all running brand new Threadrippers, and it’s just weird that it’s often instant, and other times it takes 3-5+ seconds (for one file).

            Any idea how you would solve Point 2 in conjunction with ALE in Vim? We’re primarily using Perforce, so we don’t quite have the modern luxuries of Git hooks and GitHub Actions. What’s always been most important to me is that everyone is set up to format-on-save (with the same version of clang-format!!) à la what you have with ALE in Vim; then we hardly even need validation on the PR side. But even just including a .clang-format without dealing with Point 2 could cause people with different versions of clang-format to start tripping over Point 1 (which, it’s important to note, is really the only reason I care about Point 2). It wouldn’t be fun to say “Can you install the older version of clang-format that we use on this project with ALE, so you get the formatting right in the first place?” It’s nice in other languages/ecosystems where you can clone and just get to writing code with the same tooling as everyone else on the project.

            I think my two issues with solving Point 2 as you noted are: A: unnecessary formatting commits (unless you squash the PR on merge); B: it could be confusing to people if they have to fetch/pull after pushing up a PR (especially if they started committing more code locally and end up with a merge conflict). These are just my opinion though, and it’s definitely better than nothing!

            1. 2

              Point 3 could be specific to Windows or VS2019 which is why I called them out so much :) We’re all running brand new Threadrippers, and it’s just weird that it’s often instant, and other times it takes 3-5+ seconds (for one file).

              I suspect this is the process start time on Windows. I wonder if clangd can be used instead there?

              Any idea how would you solve Point 2 in conjunction with ALE in Vim?

              The only way of solving this is to move responsibility of solving this away from individual developers and to a central location. Locally, you can format the code however you want, but on push it gets formatted correctly.

              In my ideal world, the solution to this would be to store an AST in your revision control system and have formatting the code be something that happened only locally (and in things like GitHub’s web UI). That way, anyone could have their own favourite presentation style for the code and not worry about how it looks on anyone else’s screen. Things like the colours for syntax highlighting are not stored in the repo, why is the number of non-significant whitespace characters?

              1. 1

                Hah very fair, I’m completely onboard with that, and I will too slowly work towards that ideal world.

                Thanks for sharing!

    16. 7

      When I first started building i_solated, I never could have predicted this current situation. This experience is a reminder that proprietary things come and go. I have already started the long journey towards future-proofing projects that I work on.

      To me, this is one of the best arguments for open source software.

      1. 4

        I agree, but the claim that the author never could have predicted the current situation is … come on! What did you think was going to happen? This was the most obvious outcome from the very start.

      2. 2

        Maybe not Free Software that is tied to a specific platform, but it definitely is an argument towards Free libraries and development kits. That’s why projects that I make are always GPL for applications and LGPL for libraries (with the occasional BSD where the platform does not allow for replacing parts of the program i.e. JS bundlers).

      3. 1

        What was the open source equivalent of Flash in its heyday and why didn’t it catch on?

        1. 1

          Java applets were an alternative, but they were heavy, slow to load, and cumbersome to write.

          SMIL / SVG flopped for the same uses as Flash.

      4. 1

        Open source is no guarantee either. How many projects depend on abandoned libraries, let alone ecosystems? Good luck getting say, XMMS to build today.

        1. 1

          A very niche use-case I’ve been getting into is ripping local subtitles off DVDs that I upgraded to BD/UHD, which usually have English subtitles only.

          Everything I could find, starting with an old enough version of Avidemux, which hadn’t dropped support for ripping subtitles, was completely unbuildable.

          Solution: use Windows software in Wine and work from there.

          Open-source lost this one, but maybe the use-case simply is so niche no-one wants to maintain the packages.

          1. 1

            Just because there’s interest, doesn’t mean you have the maintainers able to maintain it. A lot of Flash users were inherently not C++ experts, and probably not able to maintain that codebase.

        2. 1

          That is true, but there is little interest in XMMS1, since there are such a wealth of good alternatives.

          Open source software that there is interest in keeping alive is pretty much guaranteed to live on for as long as there is interest.

          At the height of the Flash player popularity, if there had been an open source version of it, it would maybe not have died as abruptly as it did.

    17. 3

      Hey this is awesome! I’ll definitely download and play. You should really join the Self mailing list and let people there know.

      A couple of off the cuff questions -

      Are you doing any sort of JIT or only an interpreter at this stage? What bytecodes are you using, your own or ones from the existing Self implementations?

      Is the VM 64 bit clean? ie are object pointers 64 bits and if so, what are smallInts? Self VM of course has 31 bit smallInts (with 1 bit for tagging)

      1. 2

        Hey there, good to see you! I will join the mailing list, thank you.

        Are you doing any sort of JIT or only an interpreter at this stage?

        I’m currently focusing on a pure interpreter. I really want to eventually make a JIT incorporating the techniques used in the Self implementation paper, but that will only be after the language becomes stable and I have a good test suite which can smoke out any bugs.

        What bytecodes are you using, your own or ones from the existing Self implementations?

        No bytecodes for the time being, the VM interprets the AST. A bytecode would probably happen on a similar timeframe as the JIT above.

        Is the VM 64 bit clean? ie are object pointers 64 bits and if so, what are smallInts?

        Yep! Object pointers are 64 bits and object memory aligns itself to 8 bytes (so any object reference has 000 as its last 3 bits). I took a look at Java’s compressed OOPs, and perhaps that could be interesting, but right now I’m working towards stability instead of memory footprint and performance improvements (those are also welcome though!) I reserved the bottom two bits, so smallInts (which I call just integers) are 62 bits.
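That tagging scheme can be sketched as follows. This is a simplified model under stated assumptions: the integer tag value (01) is invented for illustration, since the comment above only establishes that the bottom two bits are reserved and that object references end in 000.

```python
TAG_MASK = 0b11   # bottom two bits reserved for the tag
INT_TAG = 0b01    # assumption: 01 marks a 62-bit small integer

def box_int(n):
    """Pack a signed small integer into a tagged 64-bit word."""
    assert -(2**61) <= n < 2**61, "must fit in 62 bits"
    return ((n << 2) | INT_TAG) & (2**64 - 1)

def unbox_int(word):
    """Recover the signed integer from a tagged word."""
    assert word & TAG_MASK == INT_TAG
    n = word >> 2
    return n - 2**62 if n >= 2**61 else n  # undo two's-complement wrap

def is_object_ref(word):
    # 8-byte alignment means real object references end in 000,
    # so a clear tag distinguishes them from boxed integers.
    return word & TAG_MASK == 0
```

The nice property is that the tag check is a single AND, and unboxing is a single arithmetic shift, which is why schemes like this (and the 31-bit Self smallInts mentioned upthread) are so common.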

        1. 2

          A pure interpreter should be able to run faster than jitted Self on 1980s hardware :)

          I’m not sure that exposing the byte codes to the running Self code adds much over exposing the AST, but I guess you will need to do one or the other to get a Self-level debugger.

          The existing Self VM is single-threaded with preemptive multitasking done in Self code (with the help of a couple of primitives). Do you have any thought on using more than one core?

          1. 1

            Yes, actually, I do! It’s going to be the next thing I will be tackling.

            The idea can be summarized as: Put mutation of the global tree (anything reachable from lobby) under a lock, and have actors who communicate by message passing only (no shared state). Then, use a non-preemptive scheduler (closer to an event loop) which would run these actors that would pass messages to each other.

            No shared state allows me to run these actors across multiple cores with no locks, since globally reachable objects are locked (you would be able to unlock them with a special primitive that stops the world while you add stuff to the global tree, for example). Reading from the global tree would be completely lockless since you’re not mutating anything. I’d still have to figure out a way to make the message queues for each actor lockless somehow (not sure if it’s possible, but I’ll look into it).
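A minimal sketch of that design in Python, using threads and blocking queues as stand-ins (a real implementation would presumably want lock-free mailboxes and a cooperative scheduler rather than OS threads):

```python
import threading
import queue

GLOBAL_TREE = {"lobby": {}}     # anything reachable from lobby
GLOBAL_LOCK = threading.Lock()  # writers stop the world; readers don't lock

class Actor(threading.Thread):
    """An actor with a private mailbox; no state is shared between actors."""
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name = name
        self.mailbox = queue.Queue()
        self.received = []            # private state, touched only by run()

    def send(self, msg):
        self.mailbox.put(msg)         # message passing is the only channel

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:           # poison pill: shut the actor down
                break
            self.received.append(msg)

def mutate_global(key, value):
    # Mutation of the global tree goes through the lock;
    # plain reads of GLOBAL_TREE stay lockless.
    with GLOBAL_LOCK:
        GLOBAL_TREE[key] = value
```

The key invariant is that an actor’s private state is only ever touched from its own run loop, so no per-actor locking is needed at all; only the shared global tree pays for synchronisation, and only on writes.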

            I got inspired by Erlang’s actors which use message passing for concurrency, and thought it would be a great fit for the natural message passing-based style of Self.

            1. 3

              You might be interested in the Verona runtime (MIT licensed) for this. The Verona model is a generalisation over the actor model, where each cown (concurrent owner) is the root of a tree of regions and work can be scheduled to run with exclusive access to multiple cowns. Scheduling a behaviour to run over a single cown is the equivalent of sending a message to an actor (the runtime refers to scheduling a behaviour over multiple cowns as a ‘multimessage’ because it’s effectively sending a message that’s simultaneously received by multiple actors. The simultaneity makes it safe to mutate all of the actors while the message is being processed).

              You could implement your proposed model very easily on top of the Verona runtime by making the global world a cown and each actor a cown. When you wanted to access global objects, you would send a multimessage to the receiving actor and the global world. In the message handler it would then be safe to mutate both local and global objects.
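The multimessage idea can be approximated with plain locks: acquire every involved cown in a canonical order, run the behaviour, release. This is only a sketch of the semantics, not the actual Verona runtime (which schedules behaviours rather than blocking on locks), and the `Cown`/`when` names here mirror Verona’s only loosely:

```python
import threading

class Cown:
    """A concurrent owner: the root of data reachable only through it."""
    _next_id = 0

    def __init__(self, state):
        self.id = Cown._next_id
        Cown._next_id += 1
        self.lock = threading.Lock()
        self.state = state

def when(*cowns):
    """Run a behaviour with exclusive access to all given cowns at once."""
    def run(behaviour):
        # Acquire in canonical (id) order so overlapping multimessages
        # cannot deadlock against each other.
        ordered = sorted(cowns, key=lambda c: c.id)
        for c in ordered:
            c.lock.acquire()
        try:
            behaviour(*cowns)
        finally:
            for c in reversed(ordered):
                c.lock.release()
    return run
```

Sending a “multimessage” to an actor and the global world then looks like `when(actor, world)(handler)`, and inside the handler it is safe to mutate both.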

              1. 1

                Ah, that’s quite interesting. That means I could treat the global tree as the same as an actor’s own object tree, if I’m understanding you correctly. Thanks for sharing this.

                1. 3

                  Yes, exactly. It requires something with the multimessage abstraction - if you don’t want to use the Verona runtime then you could probably implement it yourself (though do read the code carefully to make sure you’re getting the deadlock-freedom guarantees). Exposing a Verona-like when clause in Self would be fairly easy, I think, you just need to make sure that you have a dynamic check to enforce the region invariants that Verona enforces statically (specifically, that an object in one region doesn’t contain pointers to another region). I know this is possible because I was doing it in Smalltalk 15 years ago. Verona is what happens when you start with Actor Smalltalk but want C++ performance.

    18. 2

      Probably gonna stay on 1440x900 + 1280x1024. Having the editor on the left and the preview/run window on the right is a pretty comfy workflow.

      As for my plans, I will probably try a change of employer abroad. Hoping to see more of the world this year. :^)

    19. 5

      Finally putting the new version of the frontend at work that I’ve been working on for about 3 months into production! It’s an exciting and also scary week, everything seems to work fine in staging but fingers crossed.

      1. 1

        Whew, good luck!

    20. 1

      After completing the Typescript conversion of the new frontend at work, the plan is to complete the work towards feature parity by the end of next week, so gonna spend half of the weekend on that, and the rest on hobby projects and resting.