Threads for ScriptDevil

    1. 5

      It’s cute to follow along while Go is slowly and meticulously catching up to 2004 Java.

      1. 8

        Go the language might be less feature-rich than Java. But Go’s tooling is surprisingly effective at building software. I have churned out more side-projects at work in Golang in a couple of months than I had in the few years prior, precisely because it isn’t very flexible. There is no need to find the “most elegant” way to do something - a rabbit hole I commonly find myself in while coding in Rust. Just checking the error and calling log.Fatal(err) is more than enough for most programs, and Go encourages you to just get the task done.

        The standard library is wonderful and feels “batteries-included”. go mod makes dependency management trivial - even if it isn’t the most elegant. Unlike Java, the code needs no JRE, the language isn’t as verbose, and the practicality of the tooling - struct tags for JSON/XML encoding/decoding, log, tabwriter, all without any external deps - is extremely underrated.
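
        To make that concrete, here is a minimal sketch of the pattern described above - struct tags driving JSON encoding plus the check-the-error-and-log.Fatal style (the User type is illustrative):

        package main

        import (
            "encoding/json"
            "log"
            "os"
        )

        // User shows struct tags steering field names, stdlib only.
        type User struct {
            Name  string `json:"name"`
            Email string `json:"email,omitempty"`
        }

        func main() {
            enc := json.NewEncoder(os.Stdout)
            // Check the error and bail; log.Fatal is enough for most programs.
            if err := enc.Encode(User{Name: "gopher"}); err != nil {
                log.Fatal(err)
            }
        }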

        I think Go is meant for shipping code and not designing programs.

      2. 3

        Unfortunately, by the time it achieves parity with Java 6, the language and especially the ecosystem will have to deal with so many legacy issues that eventually a new cool language will come around, making everything better this time(tm). Rinse, repeat.

      3. 2

        Maybe someday Java will mature and catch up with Perl and Python.

    2. 2

      Eternal Terminal is usually a better choice than mosh (unless you’re frequently using a slow, high-latency, or lossy link). ET offers native scrolling and tmux control mode.

      https://eternalterminal.dev/

      1. 4

        Last time I checked, ET needed to run as a system daemon on the remote server. At my company, I frequently need to work on shared remote Linux boxes where I don’t have sudo. Mosh can run as a regular user.

        1. 1

          This is a big win for me too

      2. 2

        Eternal Terminal is fantastic, but another downside is that it doesn’t support as many platforms as mosh does, e.g. Windows.

    3. 25

      This is significant because it’s been five years since the last Mosh release. Lots of changes have piled up since then. It’s great to see the project is still active.

      1. 4

        Truecolor support is finally here!

    4. 19

      Our first boards are expected to arrive tomorrow. I’m really excited to start playing with them. I started working on CHERI almost 10 years ago. Back then, we had a 100 MHz MIPS softcore in FPGA that was very similar to the MIPS R4K - state of the art circa 1991 (a useful age, since any patents required to implement it had expired). Software development on a 100MHz CPU with PIO access to a slow SD card is not a fun experience. We later got a QEMU implementation that was a lot faster (around 200-300MIPS where the 100MHz core managed about 0.7 IPC, and with very fast I/O via VirtIO). Still a long way away from a modern environment. The experimental platform that we’re mostly using today is Toooba, which is an out-of-order RISC-V core, similar to a Cortex-A8 in terms of pipeline structure, which runs in FPGA at 50 MHz (though is faster than the in-order MIPS core) in a dual-core configuration.

      Morello is a 2.5GHz modified Arm Neoverse N1, which was Arm’s flagship server core until quite recently. It’s the same core as in AWS Graviton2. Clock for clock, I expect it to be significantly faster than Toooba and the clock speed is 50 times greater (and it has twice as many cores). That’s going to be a massive improvement for software development.

      The CHERI architecture has come a long way since then as well. When I started, capabilities were 256 bits and the software stack was a tiny microkernel with all of the CHERI-specific bits in hand-written assembly. Capabilities didn’t have an offset / address field, so if you wanted to use them as pointers then you had to carry an integer offset around with the capability (which, with alignment requirements, made a 512-bit structure) - you could increase the base, but you couldn’t move it back again, so you couldn’t pass a pointer to the middle of an array / structure that allowed access to the whole allocation. Tag bits weren’t stored in registers, so you couldn’t implement memcpy (if you didn’t use capability instructions, you didn’t copy capabilities; if you did, then you’d trap on the first non-capability data that you saw).

      We made enough improvements that it’s now possible to compile large C/C++ codebases such as the FreeBSD base system and most of KDE as pure-capability CHERI code, with a fairly small amount of porting effort. I think our first port of tcpdump had more lines of code changed than the recent KDE port (including xlib and Qt) had in total.

      Morello has a few things that aren’t in the existing prototypes that I’m also excited to play with. There’s a new way of doing cross-domain calls that avoids using up space in the type field of capabilities by adding an indirection. All entry points can be sealed with the same type and they point to a pair of code and data capabilities. The jump instruction unseals the capability and loads the code and data capabilities at the target (one into the program counter, the other into a normal register).

      1. 2

        Do you know if it’s possible to get a board or two?

        I am working on a C/C++ build system/package manager (build2) and we have ~300 C/C++ packages (https://cppget.org) that are continuously built and tested on various platforms/compilers (https://cppget.org/?builds). Since all the packages are built with the same build system, it is pretty easy for us to try to build them for a new platform/compiler (normally all we have to do is add support in the build system). I think it would be interesting to try to test them on CHERI and see what it uncovers.

        1. 4

          The Digital Security by Design challenge fund has a Technology Access Programme, which gives successful applicants a £15K grant and a Morello board for 6 months. The University of Cambridge team is planning on doing ports build runs, which will see how many of the 30K things in the FreeBSD ports collection build.

          If you’ve got spare CPU cycles, there’s also an Arm Fixed Virtual Platform emulator and a qemu port that you can use to build and test things, though a lot slower than Morello. We’re planning on putting 30 Morello systems in a rack connected to GitHub Actions for CI for various things that we care about (probably around May), so if you ping me in a few months I can try running your test builds. Do you run tests for these packages as well? Roughly how long would you expect it to take to do a full build on a quad-core 2.5GHz machine? If it’s not too long, then we might be able to add a regular run.

          1. 1

            Thanks for the information, I will look into it.

            Do you run tests for these packages as well?

            Yes, we do. That would be the interesting part in this case.

            Roughly how long would you expect it to take to do a full build on a quad-core 2.5GHz machine?

            My back-of-the-envelope estimate is around 8 hours.

            If it’s not too long, then we might be able to add a regular run.

            Thanks, though we have our CI infra that runs on bare metal. Not sure it will be easy to integrate it with your setup.

            EDIT:

            The Digital Security by Design challenge fund has a Technology Access Programme, which gives successful applicants a £15K grant and a Morello board for 6 months.

            From their FAQ this is only available for UK-based businesses.

            1. 1

              Yes, we do. That would be the interesting part in this case.

              Yup. We’ve been working hard to make things that will break at run time at least emit warnings, but execution tests are much better.

              My back-of-the-envelope estimate is around 8 hours.

              That seems like something we could easily put in a weekly, possibly daily, CI job, and if someone is actively working on fixing a particular package then we could probably give them an account on a machine for a little while.

              From their FAQ this is only available for UK-based businesses.

              That’s true. You might try reaching out to a company like embecosm and see if they’d be interested in being the designated holder of the grant and giving you access to the systems?

              1. 1

                That seems like something we could easily put in a weekly, possibly daily, CI job, and if someone is actively working on fixing a particular package then we could probably give them an account on a machine for a little while.

                While this doesn’t fit our CI model well, I will ping you in a few months to see what’s available (we have more of an “online” CI service where anyone can submit a CI job at any time and expect to see the results quickly rather than the more commonly found “batch” CI).

                Also, are FreeBSD jails fully functional on CheriBSD (I assume that’s what you will be running)? Currently we run all our CI tasks in QEMU/KVM virtual machines and running them directly on the host doesn’t feel robust.

                You might try reaching out to a company like embecosm and see if they’d be interested in being the designated holder of the grant and giving you access to the systems?

                Thanks for the suggestion, but browsing the Technology Access Programme pages I got a distinct whiff of a dysfunctional bureaucracy that I would rather not get involved with.

                1. 1

                  Also, are FreeBSD jails fully functional on CheriBSD (I assume that’s what you will be running)? Currently we run all our CI tasks in QEMU/KVM virtual machines and running them directly on the host doesn’t feel robust.

                  Yes. We’re planning on network booting the pool from a read-only NFS share so that they can have a local scratch space on their disk but be completely reset between CI runs. We might also use jails to simplify some of the management parts.

      2. 1

        This is super cool. I work at NVIDIA as a CPU validation engineer. I have been meaning to understand CHERI since stumbling across it in the past. Now I probably have a work-related reason to do so.

      3. 1

        Thanks for this excellent post! I’m interested in CHERI and I looked at the announcement hoping to see some mouth-watering performance numbers, but didn’t find them. Your perspective here is quite interesting and also has nice numbers. Should be a blog post :-)

        1. 4

          I think Arm is quite nervous about performance numbers because there hasn’t really been any Morello-specific optimisation on the software stack yet. The C++ ABI, for example, is almost a direct transliteration from the Itanium ABI with s/address/capability/. There’s probably quite a bit of headroom for optimisation in the default calling conventions.

          When I did the original LLVM CHERI work, there were a few optimisations that didn’t work and were difficult to fix and so I just disabled them for CHERI targets. Several of these were related to vectorisation and so didn’t matter with the MIPS / RISC-V prototypes, where we didn’t have a vector unit at all, but will make a big difference with Morello where there is one. The Arm and Linaro folks have been doing superb work on these but I don’t know what the status is.

          It’s important to think of Morello as an upper bound on the overhead of CHERI. The software stack hasn’t been heavily optimised. The ISA was a really great bit of engineering work by Richard, Graeme, and friends at Arm, but it is not based on having any data on instruction mixes for large codebases with a moderately optimised compiler (getting this data is one of the goals of the Morello program). And the microarchitecture is a high-performance core optimised for non-CHERI workloads, with a very rapid turn-around to adapt it for CHERI (I am incredibly impressed that the Arm microarchitects managed to retrofit CHERI support to the Neoverse N1 in the incredibly tight timelines that UKRI gave them).

    5. 25

      Thank GOD. With all due respect to the fine folks who work on it, Gnome is a tower of tech debt and I can’t wait to see what System76 produce.

      I really feel for the small handful of devs who are paid by RH and Canonical to work on Gnome. Daniel Van Vugt’s comment at the end of this bug is a fine example of the staggering debt these folks are working against.

      1. 19

        I get where you’re coming from but a lot of that tech debt is in GTK & friends, and this is supposed to be based on gtk-rs, so it will carry over all that tech debt.

        Case in point: it seems it would still use mutter, so it’s still going to suffer from the bug you linked to.

        Also, gtk-rs is essentially a (very advanced) wrapper – some auto-generated, some manually implemented – over GTK that AFAIK is not actually developed by the GTK team, so it involves all the compatibility fun of working with GTK and obviously solves none of the problems System76 had with GTK in the past. It’s also the most mature GUI toolkit available for Rust, as someone else has pointed out here, but that’s a pretty low threshold to exceed. It still has a long way to go before it’s anything near stable.

        I don’t want to say this is bad and it’s gonna be a waste of time – quite the contrary! First, at the end of the day, this is FOSS so whatever makes these guys happy is what’s important. Second, if anything, it will at the very least give gtk-rs a lot more real-world exposure and more testing, which the whole Rust world will definitely benefit from. And, indeed, as someone else pointed out, it’s really cool that we’d see a desktop developed primarily by a company with paying customers, for its paying customers, something we haven’t seen in the Linux world in a very long time and which, IMHO, has significantly contributed to the decline in functionality and stability, in spite of the growth of “polish”, whatever that is. So I think this is really good news, I’m just… moderating my expectations a bit :-).

        1. 3

          Case in point: it seems it would still use mutter, so it’s still going to suffer from the bug you linked to.

          Boo :( So I guess my lovely System76 Thelio will continue running Windows 11 :)

          (I’m trying to run Fedora 35 KDE spin as well but that’s got some rather sincere teething pain happening)

      2. 4

        Gnome already has a dependency on Rust via librsvg and GStreamer. gtk-rs is the most mature GUI toolkit available for Rust.

        I wonder if Gnome could capitalize on this and fully embrace Rewrite-it-in-Rust with the help of System 76.

        1. 5

          That would be nice but I doubt it. Gnome has SOOOO much C code floating around.

          I could maybe see them do something like “All new development in Rust” but I seriously, SERIOUSLY doubt it :)

          1. 9

            Gnome is a tower of babel application-wise; everything from C to Python to JS.

              1. 1

                Hah! And nice seeing you here! 😊

    6. 4

      If you’re setting up i3, scratchpads will be a life-changer: Alt+c to show/hide a calendar, Alt+f to show/hide a file manager, Alt+` to show/hide a temporary console, etc. Also, look into file-manager driven vs. console driven workspaces.
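
      For anyone who hasn’t set one up: a scratchpad binding is only a few lines of i3 config. A minimal sketch, assuming urxvt and calcurse (the “calendar” instance name is arbitrary):

      # start a terminal dedicated to the calendar once, at i3 startup
      exec --no-startup-id urxvt -name calendar -e calcurse
      # park it in the scratchpad, then toggle it with Alt+c
      for_window [instance="calendar"] move scratchpad
      bindsym Mod1+c [instance="calendar"] scratchpad show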

      1. 2

        Unable to find any links by quickly searching for file-manager driven workspaces/console driven workspaces - could you point me to something?

        1. 1

          I don’t know if there are any posts about it, but the basic ideas are:

          • File-manager driven: Alt+enter brings up ranger rather than a console. You quickly traverse (via the t, f, or g commands) to where you need to be, then open the file you want to edit; the file manager closes and the file opens. You never cd or ls - you interact with everything via the file manager. If you need a shell, you press s.
          • Console driven: Alt+enter opens a console, then you use cd, ls, z, or asdf to navigate where you need, then vim file and work that way. This is the default approach.
          • Vim driven: you do everything via vim. Alt+enter opens vim, not a shell. If you need a shell, you :sh or :terminal.

          In the first case the file manager is the first-class object, in the second it’s the shell, and in the third it’s vim. My personal approach is file-manager driven workspaces.

          1. 1

            That is an interesting way to think about it - will give it a shot - thanks!

    7. 5

      Kate is the one editor that I always thought could have been. It is the most elegant and modern-looking of all the editors I used back between 2010-2015. But it had neither the hackability of Emacs (my daily go-to) nor the plugin ecosystem of Atom/Sublime/VSCode that came later. Even now, it ticks most of the boxes - looks great, runs natively on Windows, Mac and Linux. But the community behind it just does not exist.

      1. 2

        and it’s close enough to vi[m] to be useful, but sufficiently different to be annoying.

    8. 1

      I have tried to get into Emacs several times, it agrees with me in many ways.

      However, each time I am driven away by the keyboard bindings system.

      I find it tedious to keep hitting Ctrl and all those multi-key shortcuts.

      I also find myself dead in the water, barely able to even open a file, let alone do things like:

      • Global file search for a term (ctrl+shift+f usually)
      • Switch between open buffers (ctrl+tab)

      I’ve also lately been spoiled by IntelliJ’s handling of all these things with keyboard bindings I’m familiar with since VB in the 90’s, not to mention it having GUI wrappers for Git.

      Any advice?

      1. 2

        You could try something like Doom or Spacemacs, or even Prelude + evil-mode, if you don’t like hitting the control key. These are built atop evil-mode, which gives you vim-like keybindings. I genuinely believe knowing both Vim and Emacs keybindings is useful: for instance, readline - and by extension most CLI tools - uses Emacs keybindings like C-a/C-e for beginning/end of line and C-k/C-y for killing (cutting) and yanking (pasting).

        At the same time, why do you want to learn Emacs? If you are productive in IntelliJ, stick to it. You cannot grok the utility of Emacs unless you have been using it for a few months at least. I have been using it since 2005 and I still uncover features I was unaware of that are older than I am.

        YMMV, but I prefer the way emacs opens files to the Intellij/code way since it doesn’t open a file-open dialog box - I use ivy and that gives me fuzzy search on file opening.

        Global search is a different beast - Emacs doesn’t have a built-in notion of projects - but projectile-mode adds this in - and there are bindings in projectile to do global search in the current project. I also use swiper for searches and avy for on-screen jumps but as I said before - emacs is a gift that just keeps on giving - you could never have tried everything that is possible with it.

        C-Tab would be C-x <right> and C-S-Tab would be C-x <left> in Emacs by default. Buffers are ubiquitous in Emacs - as a newbie you might also want to use C-x C-b and pick the buffer you want. I personally use C-x b remapped to ivy-switch-buffer, which gives me fuzzy search on buffer names.
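
        For reference, that remap is a one-liner in your init file (a sketch, assuming the ivy package is installed):

        ;; replace the default buffer switcher with ivy's fuzzy-matching one
        (global-set-key (kbd "C-x b") #'ivy-switch-buffer)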

        1. 1

          BTW, https://github.com/bbatsov/guru-mode and https://github.com/justbur/emacs-which-key could also help a lot in discovering emacs keybindings

        2. 1

          Global search is a different beast - Emacs doesn’t have a built-in notion of projects

          Actually, this isn’t true anymore. Since Emacs 26 project.el has been bundled with Emacs, and depending on the version you’re using, C-x p f should be bound to project-find-file. If not, you can install project.el from ELPA. I’ve been using it for the last few months, and it does everything projectile used to do, just without the need for a minor mode and an external package.

          1. 1

            C-x p f doesn’t seem bound to anything in Emacs 26, perhaps that happened in Emacs 27? (I’m still on 26) I’ve been monitoring the development of project.el for a few years and it’s great that Emacs has some built-in project support these days, but there are still many things that Projectile does and project.el doesn’t. Of course, I’m obviously biased, being the author of Projectile. :D

            1. 1

              I’m not sure, but updating project includes

              ;;;###autoload (define-key ctl-x-map "p" project-prefix-map)
              

              so it will probably only be bound from 28 onwards :/

              there are still many things that Projectile does and project.el doesn’t. Of course, I’m obviously biased, being the author of Projectile. :D

              True, but it has been improving recently, with a lot of inspiration from Projectile. It would be interesting to see if Projectile would evolve to become an “extension package” for project.el.

              1. 1

                The approaches of project.el and Projectile are somewhat different, so rebasing Projectile on top of project.el is not something I ever plan to do. I’m happy that Emacs users are getting something out-of-the-box, but I don’t plan to change my vision or goals for my project just because of that. Even if I’m the only Projectile user at the end of the day, that’d be fine by me, as it covers my needs perfectly. :-)

        3. 1

          At the same time, why do you want to learn Emacs? If you are productive in IntelliJ, stick to it. You cannot grok the utility of Emacs unless you have been using it for a few months at least. I have been using it since 2005 and I still uncover features I was unaware of that are older than I am.

          Because it is FOSS, and because I’ll be able to have it work the way I want instead of being at the whim and mercy of JetBrains, as cool as they are. I don’t mind a learning curve to gain productivity, but so far it’s been more of a wall than a curve for me. Also, IntelliJ CE has missing features like CSS support.

          I also think there may be other features in Emacs which I have not even imagined yet, but would improve my productivity.

          I’m also enticed by Emacs’ console mode and universality.

          YMMV, but I prefer the way emacs opens files to the Intellij/code way since it doesn’t open a file-open dialog box - I use ivy and that gives me fuzzy search on file opening.

          I use Ctrl+Shift+N for opening files, which lets me type, e.g. “green css” to select ~/project/default/theme/green/style.css

          1. 1

            I also think there may be other features in Emacs which I have not even imagined yet, but would improve my productivity.

            Yes! There definitely is - welcome aboard. Org mode alone makes it worth it.

            Just don’t expect keybindings to work like they do elsewhere, since Emacs predates most of them and is significantly more powerful. Emacs has sub-maps: for instance, C-x is not the final shortcut - C-x C-s is. Similarly, when you use counsel, C-x p will still not execute anything - it will wait for the next keystroke. At this point p will switch project, f will fuzzy-find files, d will find a directory, and s will wait for a further keystroke to pick the search backend. And the best thing is most commands take a prefix C-u: for instance, C-f goes forward one character but C-u 30 C-f goes forward 30 characters.

            There is no need to learn this all in one shot - you can simply pick things up as you need them.

            I use Ctrl+Shift+N for opening files, which lets me type, e.g. “green css” to select ~/project/default/theme/green/style.css

            That only works within the current project - what if I want to make a quick edit to my ~/.bashrc? If you want to search for files within the current project, C-x p f does exactly that (including incremental filtering) if you use https://github.com/ericdanan/counsel-projectile, which wraps around the projectile package I mentioned before.

      2. 2

        not to mention it having GUI wrappers for Git.

        Emacs has the built-in vc commands, which provide a generic UI over various version control systems but can be cumbersome to use in some cases. For Git specifically, there’s Magit, which is often praised as a very flexible UI for working with Git.

      3. 1

        I’ve been using Emacs for, I don’t know, 20 or 25 years? I’ve never liked the default keybindings and that is the power of Emacs: you mold it to your likings.

        (For the longest time I had my own keybindings but since I was also familiar with Vim I’ve been using evil-mode for 8+ years now.)

        My suggestion for you: configure the keybindings to IntelliJ’s since that’s what you’re used to.

        1. 1

          They’re not really IntelliJ’s keyboard bindings, more like Windows-derived ones which I’ve grown used to since using VB3 also 20-25 years ago…

          I’m not even sure how to begin changing the bindings.

          1. 1

            I would just begin with the global-set-key command, although some modes can overwrite it.

            Once you’ve used Emacs for a little longer you can start looking into mode-specific keybindings (local-set-key, but also the :bind option in use-package) but I would skip that for now.
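
            For example, something like this in your init file covers the two bindings mentioned above (a sketch using built-in commands - rgrep and next-buffer are stand-ins you can later swap for fancier equivalents):

            ;; Ctrl+Shift+F: search for a term across files (built-in rgrep)
            (global-set-key (kbd "C-S-f") #'rgrep)
            ;; Ctrl+Tab: cycle through open buffers
            (global-set-key (kbd "C-<tab>") #'next-buffer)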

            1. 1

              looking into mode-specific keybindings (local-set-key, but also the :bind option in use-package)

              define-key would be the easier way to define mode-local keybindings, local-set-key would require invoking it in a hook.

    9. 1

      A couple of suggestions:

      1. It isn’t obvious that https://chauhankiran.github.io/w.html is the index for your blog, so I couldn’t see where Chapters 0..3 were. It would be good to have a link to the index (and it might be better to have a tag so that I can look at only your GTK book chapters).
      2. Screenshots of work-in-progress?
      1. 1

        Thanks for your feedback. I’ll surely make these updates.

    10. 14

      There is a reason for the proliferation of Electron apps. There is a huge ecosystem, and the time to ship is fairly low. There are tonnes of FOSS IDEs in Electron that you could take inspiration from as well. Don’t worry about the size of the binary - the intersection of people interested in IDEs/notebooks and people interested in minimal memory footprint is tiny.

      1. 37

        No, just.. no. The whole idea of making individual applications that each depend on their own copy of a fully featured web browser that, get this, will almost never be updated to patch future security issues is an extremely flawed and dangerous practice. You do not need an entire copy of chromium to edit text.

        1. 11

          you’re right that you don’t need it, but that analysis is only considering the user’s perspective, and is only considering it from a narrow frame of reference.

          For one thing: most Electron apps in most situations are being deployed to users who will run just a few applications at a time; less than ten. I agree that you don’t need to run a web browser to edit text. The reality is that the vast majority of users will only run one instance of VS Code (or Atom). The question is not whether or not you need it, it’s whether or not you can get away with it.

          For the majority of orgs, staffing is significantly simplified with Electron, because it has significant overlap with the web as a platform. You can’t seriously consider the merits of Electron without acknowledging how much Electron lowers the barrier to entry.

          With that said: I absolutely despise building Electron apps personally, and I loathe using them. It is, in my opinion, a terrible platform. It does, however, solve real problems that are not solved by the alternatives.

          My hope is that the proliferation of Electron will give Microsoft pause, and will encourage innovation in the desktop application development space. (I don’t think this is likely, but that’s a topic for another day.) It’s an absolute embarrassment that Slack takes about 3x as much memory to run as Blender, when the former is just a glorified IRC client and the latter is literally a world-building tool. But at the end of the day, Slack is taking 350mb of memory in an age where entry-level machines have 4 or 8gb of memory. For most users in most situations, the bloat just doesn’t actually matter. The irony is that the people most affected by this bloat are software people, who are the exact people that have the power to stop it.

          1. 14

            The irony is that the people most affected by this bloat are software people, who are the exact people that have the power to stop it.

            This is a pretty shallow analysis. The people most affected by this bloat are the people with the least capable hardware, which is not usually people in software engineering positions, and certainly not the people choosing to write Electron apps in the first place.

            1. 3

              I think that’s broadly true but I left it off because I’m having a very hard time imagining a user persona that describes this problem in a way where it really is a problem, and where there are realistic alternatives.

              A big sector of the low-end PC market now is Chromebooks (you can get a Chromebook with 4gb of memory for under a hundred dollars), but that’s a circular issue since Chromebooks can’t run Electron apps directly anyway, they have to run Chrome Apps, which are … themselves Chromium contexts. That user persona only increases the utility of Electron, inasmuch as that entire market is only capable of running the execution context that Electron is already using: the web. By targeting that execution context, you’re lowering the barrier to entry for serving that market since much of what you write for Electron will be portable to a Chrome App. The existence of Electron is probably a net positive for that market, even if, yes, as I said before, it’s a very wasteful foundation on which to build your software.

              The Raspberry Pi userbase is particularly notable here: Electron is probably a net harm to RPi 3 and Pi Zero users. But that’s a highly specialized market to begin with, and newer models of the RPi are fast enough that Electron’s bloat stops being as punitive. (When I say specialized here I don’t mean unimportant or rare, I mean that it’s notably different from other desktop environments both in terms of technical constraints and user needs.)

              It’s easy to say “Electron is bloated, therefore harmful to people with slow computers”, but as a decision-making tool, that conclusion is too blunt to be useful. Which users, on which hardware, in which situations, attempting to access which software?

              And besides, it’s not like Electron has cornered the market on writing bloated software. Adobe Photoshop is written in C++ and uses a cool 1gb of memory without a single document open. The reality is that Electron empowers beginner developers to create cross-platform desktop apps in a way that is absolutely dominating the space because it focuses on solving problems that actually exist, instead of problems that are only believed to exist. The path to getting people away from Electron is not to say “don’t use Electron because it’s bloated”, it’s for other tools to figure out what needs Electron is satisfying that are not satisfied by the alternatives.

              1. 4

                And besides, it’s not like Electron has cornered the market on writing bloated software. Adobe Photoshop is written in C++ and uses a cool 1gb of memory without a single document open. The reality is that Electron empowers beginner developers to create cross-platform desktop apps in a way that is absolutely dominating the space because it focuses on solving problems that actually exist, instead of problems that are only believed to exist. The path to getting people away from Electron is not to say “don’t use Electron because it’s bloated”, it’s for other tools to figure out what needs Electron is satisfying that are not satisfied by the alternatives.

                That’s not a fair comparison given how many plugins and features out of the box Photoshop has.

              2. 1

                The path to getting people away from Electron is not to say “don’t use Electron because it’s bloated”, it’s for other tools to figure out what needs Electron is satisfying that are not satisfied by the alternatives

                We need basically the Flash Player, without the legacy timeline or embedded VM. A cross-platform, high-performance scene graph with a small but complete API surface that developers can mate to the language of their choice, be it a VM like JS or Lua, or Python, or with D / Rust or C++ code.

                1. 2

                  the timeline and the AS3 VM are … kinda the core of Flash, so I’m not really sure what would be left. Without that stuff isn’t it basically just Cairo?

                  anyway, you know about Scaleform, right? Not clear from your answer if it’s already on your radar, but it was a licensed implementation of Flash, significantly more performant than Adobe’s implementation, that supported C++ interoperability, and that was in its later years owned and run by Autodesk. Using Scaleform to build the 2D UI for 3D games was a dominant trend in the games industry for over 15 years. Some people still use it today but it was cancelled years ago. https://en.wikipedia.org/wiki/Scaleform_GFx

                  1. 1

                    Not at all. Most (complex) software written in Flash over the last few years of its meaningful existence completely ignored the timeline, and consisted of only 2 frames, one being the preloader and the other being the application. The reason I think it should separate out the VM and provide only an API is to share the load / interest across people coming from different language communities who all want to show something on-screen without Electron.

                    Flash 2D was a lot more than Cairo, which is comparable to the flash.display.Graphics API used to draw each individual item on the stage. It’s a proper retained-mode scene graph with events and a pretty good text API built in. And Flex, which was built entirely in AS3 on top of the basic scene graph, was and still is the most well-thought-out, well-documented, and easy-to-use (both for beginners and advanced cases) UI framework I’ve ever had the pleasure of working with, and I’ve messed about with a bunch of them over the years. Adobe Corporate (and Jobs’ pique) screwed over a great team of talented people who built and maintained Flex, robbed us hackers of an excellent cross-platform… platform, and created billions of dollars of waste heat running multiple redundant entire copies of Chromium on desktops everywhere for years.

          2. 3

            My hope is that the proliferation of Electron will give Microsoft pause, and will encourage innovation in the desktop application development space. (I don’t think this is likely, but that’s a topic for another day.)

            Nope, they’re huge users of it.

            1. 1

              I mean, my first example was VS Code and I said I thought this result was highly unlikely so … I feel like I’ve already demonstrated an awareness of that fact and I’m not sure what you’re getting at.

              1. 2

                Eep, I read it but didn’t catch that line. Sorry.

      2. 3

        the intersection of people interested in IDEs/notebooks and people interested in minimal memory footprint is tiny.

        In my experience the more someone crafts code, the more they care about memory footprint- even if it’s in the sense of “I’ll have less memory for testing my application”.

        1. 1

          In my experience the more someone crafts code, the more they care about memory footprint- even if it’s in the sense of “I’ll have less memory for testing my application”.

          I am with you to some degree - but we are talking about notebook/REPL-style applications here. These aren’t traditional applications in the sense that they are never daemonized, are always foreground applications, and are tested by manually tweaking and observing (or at least that is how I use notebooks). Also, I should probably clarify - if the application actually ships and people feel it is slow, then the effort involved in porting over to Qt or something might be justified. Most of the time, in crowded spaces, getting things to ship is more important than quibbling about memory use.

    11. 1

      I created https://github.com/ScriptDevil/eltbus-theme several years ago on the same principles. https://i.imgur.com/bCyoytv.png is a screenshot. I have since switched back to the default emacs theme but some people may like it.

    12. 5

      I recently spent an hour root-causing and an hour face-palming after we “simplified” some code by replacing a struct with a single integer.

      #include <iostream>
      #include <vector>
      
      class C {}; // an empty type: not constructible from an int
      
      int main() {
        std::vector<C> m_cvals{100};   // old: 100 default-constructed elements
        std::vector<int> m_ivals{100}; // new: a single element with value 100
      
        // Something non-trivial but captured by this.
        std::cout << "m_cvals.capacity " << m_cvals.capacity() << "; m_ivals.capacity " << m_ivals.capacity() << std::endl;
      }
      
      $ ./inits
      m_cvals.capacity 100; m_ivals.capacity 1
      

      In retrospect, this is obvious: list-initialization strongly prefers an initializer_list constructor when the braced values are convertible to the element type (as 100 is to int), and only falls back to the count constructor when they aren’t (as with C). This is covered in C++ Guru of the Week #1. Still, when it hits, it takes ages to narrow down to a line.
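
      For what it’s worth, the unambiguous spelling is parentheses when you mean a count:

      #include <vector>
      
      int main() {
        std::vector<int> a(100); // count constructor: 100 zero-initialized ints
        std::vector<int> b{100}; // initializer_list constructor: one int, 100
      }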

    13. 3

      I like this, but I wonder if it is healthy for lobste.rs if everyone starts sharing TILs and devlogs.

    14. 2

      As the second slide shows… the number of papers on fuzzing is exploding.

      What drives me nuts is you try to use one of these new fancy fuzzers… and there is an explosion of dependencies and days of work to (maybe) get one of them working.

      Currently, there are only two I’d say are “production ready”, where you can just “apt install” and away you go… and they are AFL and libFuzzer.

      Sadly, although AFL is the basis for many next-gen fuzzers, it has gone unmaintained. (The last release was in 2017, and the author has taken to woodworking to soothe his nerves.)

      I wish more fuzzing researchers would work on making their tools “apt install and go” instead of adding one more conference paper to their trophy cabinet.

      Sadly conferences and academia have some very perverse incentives.

      1. 2

        You’re right about the incentives. On top of that, I don’t know that this paper really compared to state-of-the-art fuzzers. I mean, a few are, given they were made in the past year or so. I submitted those, too. I do remember submitting some that outperformed many on that list more recently. I think I read Trail of Bits was making use of the binary fuzzer. It would be interesting to see how this tool fares against the more recent ones that beat the competition.

        Personally, I’d also just drop the weaker ones from new comparisons, unless they caught stuff the better ones missed. Only include the truly top tools, with reproducible results via the same benchmark and good packaging. Otherwise, the comparisons are at least partly staged, given we know some of them are obsolete.

      2. 1

        AFL still does its job well - it is rock solid as a fuzzer and never crashes. I also have written multiple instrumenters to prioritize paths not often hit by AFL (incentivizing paths to such functions) using the LLVM-pass infra that it has (the afl-gcc afl-as based code injector is too messy for my liking).

        1. 2

          True, my only sorrow is there is a steady stream of fuzzing papers going by claiming to improve on afl…..

          …but oh my, what an immense pain to actually get any of them up and going….

          I wish they’d upstream their improvements (including updating the packaging).

        2. 1

          AFL still does its job well

          Depends on what one’s goal is. AFL will do its job well if the goal is to find some problems with a large expenditure of time and resources. It does far less well if you want to find the maximum number of problems with a small amount of time and resources. The latter is what the newer tools claim to be doing. They’re doing it across all or most benchmarks, too, depending on the tool. That means one or more of them should be the new default, replacing the obsolete AFL.

    15. 2

      I discovered the insane power of filters and macros in TiddlyWiki. I’m moving stuff over from my poorly synced org-mode directory to a self-hosted TiddlyWiki on my VPS.

      I wonder what took me so long. The last several times I used TiddlyWiki, I treated it pretty much like a set of pages that linked to other pages. I am planning to write some articles or an ebook about what one is missing when they think of TiddlyWiki as a simple personal wiki. The only equivalent I can think of is the gap between a vanilla Emacs setup and what a power user can make Emacs do.

    16. 11

      This is why I respect Mozilla as a company. They voluntarily delete data and genuinely feel bad that they had to make their users enable telemetry for the fix. They care about privacy and an open Internet, and IMHO they struck a great balance of ideology with pragmatism (DRM support is the only reason I have been able to keep my parents and wife on Firefox). I will continue using Firefox for as long as their core values are not compromised.

      1. 1

        Caring means a lot these days.

    17. 0

      No mention of Rust? Considering how many large tech companies are now using it in production, I’m surprised it wasn’t mentioned along-side Go as a popular approach. I suppose it’s somewhat fair, though, considering Rust’s async story is still incomplete. Once async/await is stabilized, I think we’ll see even more rapid adoption of Rust for network architecture.

      1. 10

        As a person who has been writing Rust recently: the agility of writing programs in Elixir/Erlang and, to a lesser extent, Go is something one cannot achieve in Rust. Rust makes you think explicitly about objects being copied or moved, the types of everything, and so on. This is a good thing - I think about design before I code in Rust. With Erlang, I take a more incremental approach: I just push stuff into a record and match on tags, and as the system matures, I incrementally move to a better design. They feel like two orthogonal approaches to programming, suited to completely different domains.

        1. 1

          What are you using Erlang for, if I may ask?

          1. 8

            I had an idea for a messenger system centred around throw-away IDs all associated with a fixed ID - think WhatsApp where you don’t share your phone number but instead generate a throw-away ID for each person you share your contact with. The idea was that people cannot spam you using other phone numbers. (The target audience was girls/LGBT people who share contacts with a person they consider cool in a social situation, but are harassed till they change their phone number/ID.) My app’s end-goal was that you simply “block” that throw-away ID and still get to retain your base account, which no one can directly message. I even thought of broadcast channels where you generate and share a throw-away ID with multiple people using word-based URLs like gfycat.

            I picked up Erlang because it was considered great for such applications and worked my way through Programming in Erlang. I gave up half-way through since I switched jobs - but I may get back to it at some point.

            1. 4

              Neat idea!
              I could also see that being useful for things like job hunting, or sharing a contact with businesses, and filtering it if you start getting spam (like with email aliases).

            2. 2

              Ah sounds like a cool project. I’ve had Programming In Erlang on my shelf for years but haven’t read it yet, hence my interest.

              Thanks for the reply!

        2. 1

          For what it’s worth this is why for quick hacks and spikes I reach for JS/Node before Elixir, despite having used both professionally.

          1. 1

            I used to do this but have since switched away from TypeScript/Node to Elixir itself since I find it much more flexible and that it has a quicker flash to bang.

        3. 1

          Give it some time. Yes, Rust enforces proper typing and ownership at compile time. But that doesn’t mean it slows down writing (non-throw-away) programs, especially if performance matters, and especially once you get used to working in it.

      2. 0

        Once async/await is stabilized, I think we’ll see even more rapid adoption of Rust for network architecture.

        Please also don’t forget about the possibility of having a new story to tell around channels and their semantics. I think the future looks pretty bright for Rust.

        As a side note, have you messed around with Actix at all?

        1. 2

          Yes, I’m using it to write a multiplayer game matchmaking server right now. It’s probably better than writing the same thing from scratch with mio, but it’s still a pain. The new Future trait and async/await will make it much, much more ergonomic.

          1. 1

            Looking forward to hearing more as things stabilize.

    18. 2

      I read the article and I couldn’t help but wonder whether this leads to fragmentation in the Racket ecosystem. Do people need to know all these DSLs that look nothing like the base language? How do these mini-languages interact with each other?

      1. 5

        The examples are DSLs for authoring books (pollen), specifying language grammars (brag), writing a text adventure game (txtadv), generating certain fractals (lindenmayer), writing JSON (jsonic - part of a language-writing tutorial, not of Racket itself), and a language for testing JSON-based HTTP APIs (riposte). So no, you don’t need to know all of them: you just need to know the ones you need. And the language is just the API, which you would otherwise need to know anyway.

        Interaction of mini languages is an interesting challenge, but the lindenmayer example shows how you can scope a language fragment.

      2. 2

        I’ll add to what Confusion said that learning a well-done DSL isn’t much different from learning a library API. There are names, parameters, and behaviors. You call them. Stuff happens. DSLs make the syntax better suited to the domain. They can also make code more concise by getting rid of library-related syntax. That’s not always a problem, but I recall C++ and Java having some verbosity there that DSLs usually don’t.

    19. 2

      When I saw jrnl posted the other time, I started journaling into a pass entry by merely appending date-prefixed lines to it (similar to the twtxt format ;). My current script with some notes is in this paste.sh.
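
      The script itself is behind the link, but the core idea fits in a few lines of shell - a hypothetical sketch (the “journal” entry name is illustrative):

      #!/bin/sh
      # append a date-prefixed line to an encrypted pass entry
      entry=journal
      { pass show "$entry"; printf '%s %s\n' "$(date -I)" "$*"; } \
        | pass insert --multiline --force "$entry" >/dev/null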

      1. 1

        That link just shows this with nothing else:

        This is an encrypted paste site. Simply type or paste code here and share the URL.

        1. 1

          https://paste.sh/6-8S6Rpv#HzzsAeUhOWsrksbzJFFDvFyR is what OP linked. Your browser probably visited the text of the link, which was simply paste.sh.

          1. 1

            I guess I had to have JavaScript enabled to view raw, unformatted text (lol)..