Threads for andreypopp

  1. 2

    For those who like LALR parser generators there’s Lemon, the one used in SQLite.

    1. 4

      I wonder if you could support mutability somehow?

      I’m partly imagining torrent websites hosted on BitTorrent (because it’s kinda meta), but it could be generally useful/interesting, perhaps.

      1. 4

        There’s a bittorrent protocol extension where you can distribute a public key that points to a mutable torrent, but I don’t know if it has been ported to webtorrent.
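
        Roughly (a toy sketch in Python, not a real DHT client and not the exact BEP 44/46 wire format): the DHT key is derived from the public key, and nodes accept a new value only if it carries a higher sequence number and a valid signature, so the key holder can keep repointing the entry at the torrent’s current infohash.

        import hashlib

        DHT = {}  # target -> stored mutable item (a stand-in for the real DHT)

        def target(pubkey: bytes) -> bytes:
            # BEP 44 derives the lookup key from the public key (plus an optional salt).
            return hashlib.sha1(pubkey).digest()

        def signature_ok(pubkey, seq, value, sig) -> bool:
            return True  # stand-in for the real ed25519 check over seq + value

        def put(pubkey, seq, value, sig):
            t = target(pubkey)
            old = DHT.get(t)
            if old and old["seq"] >= seq:
                return False                  # stale update: keep the newer item
            if not signature_ok(pubkey, seq, value, sig):
                return False                  # only the key holder can update
            DHT[t] = {"seq": seq, "value": value, "sig": sig}
            return True

        # The value points at the current infohash; a client following
        # magnet:?xs=urn:btpk:<pubkey> re-resolves it to pick up new versions.
        put(b"pubkey", 1, b"infohash-v1", b"sig1")
        put(b"pubkey", 2, b"infohash-v2", b"sig2")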

        1. 2

          The reference implementation for BEP 46 is done with webtorrent; I don’t know if/how it works in the browser though.

          1. 2

            As far as I understand, you can’t use the DHT in a web browser, as DHT nodes do not support WebRTC. The WebTorrent project includes a DHT library that works with Node.js (which is used by the desktop application).

      1. 5

        I’ve worked at the company which developed HTSQL.

        It was used by software engineers and by (data/business) analysts.

        In my experience HTSQL was very good, especially for data exploration / creating ad-hoc reports. It is easy to pick up and very concise.

        Now the core developers of HTSQL are working on FunSQL; I posted about it a few days ago.


        Besides that piece of tech they have Rex Deploy, an Ansible-style database migration system (https://github.com/prometheusresearch/baseline-codebase/blob/baseline/master/src/rex.deploy/README.rst). A few highlights:

        1. You can specify a table as a set of facts (you can even split the definition into multiple facts)
        2. Easy to specify column renames
        3. Possible to put some dictionary-style data into the database as part of a migration
        4. Possible to drop down to SQL for anything that isn’t supported out of the box by Rex Deploy.
        1. 3

          I see FunSQL.jl is already mentioned in the discussion, but I still want to highlight how it works, as I think FunSQL represents a very interesting point in the design space of “SQL replacements”.

          FunSQL is a Julia library which presents a combinator-based API to generate SQL. The query construction looks like:

          From(:person) |>
          Where(Fun.between(Get.year_of_birth, 1930, 1940)) |>
          Select(Get.person_id)
          

          I’ve ported the lib myself (with some deviations) to Python and to OCaml (I ended up using the latter) and built a concrete syntax on top of it, so in the end it looks very similar to PRQL, although I’ve kept the syntax conservative (closer to SQL). The same query as above in the concrete syntax:

          from person
          where year_of_birth >= 1930 and year_of_birth <= 1940
          select person_id
          

          (I’ll use this syntax going forward in this post)

          Now the feature I’ve wanted to highlight (and which I think makes FunSQL very useful) is how it treats group by and aggregate expressions. An example first:

          from users as u
          join (from comments group by user_id) as c on c.user_id = u.id
          select
            u.username,
            c.count() as comment_count,
            c.max(created_date) as comment_last_created_date
          

          Here the subrelation c, which is joined to u, is grouped by the user_id column. As you can see, c itself doesn’t specify any aggregates; all aggregates are located in the single select below.

          FunSQL treats grouped relations as a special kind of namespace from which you can either select columns you grouped by or refer to aggregate functions.

          In my mind, this feature is very powerful as it allows one to incrementally construct reusable query fragments and then use them to build final queries. Same query in two steps:

          let users_with_comments =
            from users as u
            join (from comments group by user_id) as c on c.user_id = u.id
          
          from users_with_comments
          select u.username, c.array_agg(text) as comments
          

          In other words: you can build a nested namespace from relations / groupings and then later decide what you want to select from this namespace.

          This works surprisingly well if you want to implement visual query builders (I have one for my OCaml port).
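
          A minimal Python sketch of the “grouped relation as a namespace” idea (hypothetical names, not the actual API of FunSQL or of my ports): the grouped relation only records its grouping keys, and aggregates are registered when a later select refers to them.

          class Grouped:
              """A relation grouped by some keys; aggregates are chosen later."""
              def __init__(self, relation, keys):
                  self.relation = relation
                  self.keys = keys
                  self.aggregates = {}              # alias -> (function, column)

              def agg(self, alias, fn, column=None):
                  # Referring to e.g. c.max(created_date) in a later select just
                  # registers an aggregate in this namespace.
                  self.aggregates[alias] = (fn, column)

          def to_sql(g):
              cols = list(g.keys) + [
                  f"{fn}({col or '*'}) AS {alias}"
                  for alias, (fn, col) in g.aggregates.items()
              ]
              return (f"SELECT {', '.join(cols)} FROM {g.relation} "
                      f"GROUP BY {', '.join(g.keys)}")

          # Build the grouped namespace first, decide what to select from it later.
          c = Grouped("comments", ["user_id"])
          c.agg("comment_count", "count")
          c.agg("comment_last_created_date", "max", "created_date")
          print(to_sql(c))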

          Check out the JuliaCon 2021 presentation about FunSQL if interested; the main author Kyrylo (who, by the way, is the co-creator of HTSQL, another query language mentioned in this discussion) talks about the motivation and design of FunSQL.

          1. 2

            Doing AoC in BQN!

            1. 6

              They’ve added virtual lines! Would love to see REPL plugins support this to show output inline. Also theorem prover plugins!

              1. 3

                What is a virtual line?

                1. 9

                  It is a line of text that is visible in the editor but not present in the actual text buffer. Previously nvim supported inline virtual text, used for example to show warnings or other information after a line of code. Now entire lines can be inserted virtually.
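
                  For a concrete picture, here is a rough sketch driving it from Python via pynvim (the socket path is just an assumption; the same nvim_buf_set_extmark call is what you would use from Lua):

                  import pynvim

                  # Attach to a running instance, e.g. one started with: nvim --listen /tmp/nvim.sock
                  nvim = pynvim.attach("socket", path="/tmp/nvim.sock")
                  buf = nvim.current.buffer
                  ns = nvim.api.create_namespace("demo")

                  # Show a virtual line under line 1 of the buffer; it is rendered
                  # by the editor but never becomes part of the buffer's text.
                  nvim.api.buf_set_extmark(buf, ns, 0, 0, {
                      "virt_lines": [[("-- output: 42", "Comment")]],
                  })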

                2. 2

                  I might be poking at this in the future – editor integrations for annotated code.

                1. 2

                  I wish I could buy JetBrains products packaged as LSP servers (or with extensions or maybe using some other protocol) so I can use them with Vim.

                  1. 1

                    Seems like they’re leveraging LSP, so this might not be too far fetched!

                    https://www.jetbrains.com/help/fleet/1.0/architecture-overview.html

                  1. 2

                      I’ve enjoyed how they render Coq proofs, with all the proof state extracted from Coq and available on hover.

                    1. 1

                      That is legitimately awesome. Does anyone have any idea what plugin / library enables that? And, if it works for other proof assistants like Isabelle? (I’m not a Coq user)

                        1. 1

                          Awesome, thank you. Wish it worked for Isabelle, but that’s great to have a tool to reference.

                          1. 1

                            Ahh, based on the README it seems like it has preliminary support for Lean 3, which is neat!

                          2. 2

                            You might also enjoy this presentation by the creator, and the paper that goes with it: Untangling mechanized proofs.

                            1. 1

                              Yes, that’s great content, thank you.

                              Proof assistants are amazing tools, and some of them do offer a way to print out the full proof that they end up automating. Showing intermediate proof states is definitely nice, but some tactics still take very large leaps (like tauto used in the paper). In those cases, I want to see insight into how the step was taken in between goals as well.

                        1. 5

                          Planning the next release of https://bupstash.io/ / https://github.com/andrewchambers/bupstash . I am way behind schedule but want to do a closed beta of managed repositories.

                          Also thinking a bit about a new programming language I want to work on for fun - A combination of standard ML drawing heavy inspiration from the https://github.com/janet-lang/janet runtime.

                          I also have some ideas kicking around for my peer to peer package tree - https://github.com/andrewchambers/p2pkgs .

                          So many things to do - clearly I need to avoid spending time on the less important things - I just have trouble reining in what my mind is currently obsessing over.

                          1. 2

                            Also thinking a bit about a new programming language I want to work on for fun - A combination of standard ML drawing heavy inspiration from the https://github.com/janet-lang/janet runtime.

                            Do you mean you want to reimplement Standard ML but on top of a runtime like Janet’s? Or is there something specific to Janet which can influence the SML language itself?

                            I’m myself contemplating a compile-to-LuaJIT ML-like language: the efficiency of LuaJIT and its convenient FFI + the ergonomics of ML (though I’d want to experiment with adding modular implicits to the language).

                            I also have some ideas kicking around for my peer to peer package tree - https://github.com/andrewchambers/p2pkgs .

                            Is this related to Hermes (sorry for lots of questions but you have so many interesting projects)? Are you still using/developing it?

                            Some time ago I was working on designing and implementing esy, which is a package manager + meta build system (invoking package-specific build systems in hermetic environments) for compiled languages (OCaml/Reason/C/C++/…). It looks like Nix but has an integrated SAT solver for solving deps; we rely on package.json metadata and npm as a registry of package sources (though we can install from git repos as well).

                            Personally, I think there’s a real opportunity to make a “lightweight” version of Nix/Guix which could be used widely, and Hermes seems to be aimed at exactly this spot.

                            1. 1

                              Do you mean you want to reimplement Standard ML but on top of a runtime like Janet’s? Or is there something specific to Janet which can influence the SML language itself?

                              Mainly the way janet has great C FFI, plus a few things like compile-time evaluation and the language’s compilation model. I also enjoy how janet can be distributed as a single amalgamated .c file, like sqlite3. My main criticism of janet is perhaps the lack of static types - and standard ML might be one of the simplest ‘real’ languages that incorporates a good type system, so I thought it might be a good place to start for ideas.

                              I’m myself contemplating a compile-to-LuaJIT ML-like language: the efficiency of LuaJIT and its convenient FFI + the ergonomics of ML (though I’d want to experiment with adding modular implicits to the language).

                              Yeah, the way it complements C is something I would love to capture. I am not familiar with modular implicits at all - but it sounds interesting!

                              Is this related to Hermes (sorry for lots of questions but you have so many interesting projects)? Are you still using/developing it?

                              Yes and no - p2pkgs is an experiment to answer the question: ‘what if we combined ideas from Nix with something like homebrew in a simple way?’ I think the answer is something quite compelling, but it still needs a lot of tweaking to get right. p2pkgs uses a traditional package model - so far less patching is needed to build packages - and it is conceptually easier to understand than nix/hermes, while providing a large portion (but not all) of the benefits. You could consider p2pkgs an exploratory search of ideas for ways to improve and simplify hermes. The optional p2p part was kind of an accident that seems to work so well in practice that I feel it is also important in its own way.

                              1. 1

                                while providing a large portion (but not all) of the benefits

                                Could you possibly elaborate on which benefits are carried over, and which are not? I’m really interested in your explorations in this area of what I see as “attempts to simplify Nix”, but in this particular case, to the extent I managed to understand the repository, it’s currently very unclear to me what it really brings over just using redo to build whatever packages? Most notably, the core benefits I see in Nix (vs. other/older package managers), seem to be “capturing complete input state” of a build (a.k.a. pure/deterministic build environment), “perfectly clean uninstalls”, and “deterministic dependencies” including the possibility of packages depending on different versions of helper package. Does p2pkgs have any/all of those? It’s ok if not, I understand that this is just a personal exploration! Just would like to try and understand what’s going on there :)

                                1. 2

                                  seem to be “capturing complete input state” of a build (a.k.a. pure/deterministic build environment)

                                  Yes it does: builds are performed in an isolated sandbox and use none of the host system.

                                  “perfectly clean uninstalls”, and “deterministic dependencies” including the possibility of packages depending on different versions of helper package.

                                  Packages are currently used via something I called a venv; this is more like nix shell, so it has clean uninstalls. Each venv can use different versions of packages, but within a single venv you cannot - this is one of the downsides.

                                  it’s currently very unclear to me what it really brings over just using redo to build whatever packages?

                                  It uses redo + isolated build sandboxes + hashing of the dependency tree in order to provide transparent build caching; this is not so far removed from NixOS, which is why I feel NixOS might be over-engineered.

                                  One thing p2pkgs does not have is atomic upgrades/rollback unless it is paired with something like docker.

                                  All that being said, I think I oversimplified it to the point where the UX is not as good as it should be, so I hope to shift it back a bit to look a bit more like the nix CLI - I think that will make things more clear.
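
                                  As a toy illustration of the “hash the dependency tree to get transparent build caching” part (illustrative only, not p2pkgs’ actual scheme): a package’s cache key is the hash of its own build script plus the keys of everything below it, so a change anywhere in the tree invalidates exactly the packages above it.

                                  import hashlib

                                  def pkg_hash(name, scripts, deps):
                                      """Identity of a package = hash of its build script + its deps' identities."""
                                      h = hashlib.sha256(scripts[name].encode())
                                      for dep in sorted(deps.get(name, [])):
                                          h.update(pkg_hash(dep, scripts, deps).encode())
                                      return h.hexdigest()

                                  scripts = {"musl": "...build musl...", "gcc": "...build gcc...", "vim": "...build vim..."}
                                  deps = {"gcc": ["musl"], "vim": ["gcc", "musl"]}
                                  print(pkg_hash("vim", scripts, deps))   # changes whenever musl's or gcc's script changes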

                                  1. 1

                                    Thanks! I was confused about how redo works (I had some wrong assumptions); now I’m starting to understand that the main entry point (or core logic) seems to be in the pkg/default.pkg.tar.gz.do file. I’ll try to look into it more, though at first glance it doesn’t seem super trivial to me yet.

                                    As to venv vs. NixOS, does “a linux user container” mean some extra abstraction layer?

                                    Also, I don’t really understand “container with top level directories substituted for those in the requested packages” too well: is it some kind of overlayed or merged filesystem, where binaries running in the container see some extra stuff over the “host’s” filesystem? If yes, where can I read more about the exact semantics? If not, then what does it mean?

                                    Back to “input completeness”: could you help me understand how/where exactly I can see/verify that e.g. a specific Linux kernel version was used to build a particular output? similarly, that a specific set of env variables was used? that a specific hash of a source tarball was used? (Or, clearly see that changing one of those will result in a different output?) Please note I don’t mean this as an attack; rather I’m still trying to understand better what I am looking at, and also hoping that the “simplicity” goal would maybe mean it’s indeed simple enough that I could inspect and audit those properties myself.

                                    1. 1

                                      As to venv vs. NixOS, does “a linux user container” mean some extra abstraction layer?

                                      Like NixOS, it uses containers to build packages; they are not needed to use packages, but they are helpful.

                                      Also, I don’t really understand “container with top level directories substituted for those in the requested packages” too well: is it some kind of overlayed or merged filesystem, where binaries running in the container see some extra stuff over the “host’s” filesystem? If yes, where can I read more about the exact semantics? If not, then what does it mean?

                                      The build inputs are basically put into a chroot with the host system /dev/ added with a bind mount - this is quite similar to nixos - You can see it in default.pkg.tar.do

                                      Back to “input completeness”: could you help me understand how/where exactly I can see/verify that e.g. a specific Linux kernel version was used to build a particular output?

                                      Nixpkgs does not control the build kernel, not sure why you seem to think it does. Regardless - you can run redo pkg/.pkghash to compute the identity of a given package, which is the hash of all the build inputs including build scripts - again, much like Nix. I suppose that to control the build kernel we could use qemu instead of bwrap to perform the build. To see the inputs for a build you can also inspect the .bclosure file, which is the build closure.

                                      similarly, that a specific set of env variables was used? that a specific hash of a source tarball was used?

                                      Env variables are cleared - this can be seen in the invocation of bwrap, which creates the container - much like nixpkgs. I get the impression you might be misunderstanding the trust model of NixOS - NixOS lets you run package builds yourself, but it still relies on signatures/https/trust for the binary package cache. You can’t go back from a given store path and work out the inputs - you can only go forward from an input to verify a store path.

                                      also hoping that the “simplicity” goal would maybe mean it’s indeed simple enough that I could inspect and audit those properties myself.

                                      The entire implementation is probably less than 700 lines of shell; I think you should be able to read them all - especially default.pkghash.do and default.pkg.tar.gz.do.

                                      1. 1

                                        Thank you for your patience and for bearing with me! I certainly might misunderstand some things about NixOS/Nixpkgs - I guess it’s part of the allure of p2pkgs that its simplicity may make those things easier to understand :) Part of my problem is also that I’m having trouble expressing some of the things I’m thinking about here in precise terms, so I’d be super grateful if you’d still have some more patience with me while I keep searching for better precision of expression! And sorry if my questions are still confused or not precise enough…

                                        Like NixOS, it uses containers to build packages; they are not needed to use packages, but they are helpful.

                                        Hm; so does it mean I can run a p2pkgs build output outside venv? In Nixpkgs, AFAIU, this typically requires patchelf to have been run & things like make-wrapper (or what’s the name, I seem to never be able to remember it correctly). (How) does p2pkgs solve/approach this? Or did I misunderstand your answer here?

                                        The build inputs are basically put into a chroot with the host system /dev/ added with a bind mount - this is quite similar to nixos - You can see it in default.pkg.tar.do

                                        What I was asking here about was the “Running packages in venv” section - that’s where the “container with top level directories substituted (…)” sentence is used in p2pkgs readme. In other words: I’m trying to understand how during runtime any “runtime filesystem dependencies” (shared libraries, etc.; IIRC that’d be buildInputs in nixpkgs parlance) are merged with “host filesystem”. I tried reading bwrap’s docs in their repo, but either I couldn’t find the ultimate reference manual, or they’re just heavily underdocumented as to precise details, or they operate on some implicit assumptions (vs. chroot? or what?) that I don’t have.

                                        In other words: IIUC (do I?), p2pkgs puts various FHS files in the final .pkg.tar.gz, which then get substituted in the chroot when run with venv (that’s the way buildInputs would be made available to the final build output binary, no?). For some directory $X present in .pkg.tar.gz, what would happen if I wanted to use the output binary (say, vim), run via venv, to read and write a file in $X on the host machine? How does the mechanism work that decides whether a read(3) sees bytes from $X/foo/bar packed in .pkg.tar.gz vs. $X/foo/baz on the host machine’s filesystem? Or, where would bytes passed to write(3) land? I didn’t manage to find an answer to such questions in the bwrap docs I’ve found so far.

                                        Do I still misunderstand something or miss some crucial information here?

                                        Nixpkgs does not control the build kernel, not sure why you seem to think it does. (…)

                                        Right. I now realize that actually in theory the Linux kernel ABI is stable, so I believe what I’m actually interested in here is libc. I now presume I can be sure of that, because the seed image contains gcc and musl (which I currently need to trust you on, yes?), is that so?

                                        Env variables are cleared (…)

                                        Ah, right: and then any explicitly set env vars result in build script changes, and then because it’s hashed for bclosure (or closure, don’t remember now), which is also included in (b?)closures of all dependees, the final (b?)closure depends on env vars. Cool, thanks!!

                                        1. 1

                                          Hm; so does it mean I can run a p2pkgs build output outside venv? In Nixpkgs, AFAIU, this typically requires patchelf to have been run & things like make-wrapper (or what’s the name, I seem to never be able to remember it correctly). (How) does p2pkgs solve/approach this? Or did I misunderstand your answer here?

                                          It replaces /bin /lib but keeps the rest of the host filesystem when you run the equivalent of a nix shell. This seems to work fine and lets you run programs against the host filesystem. This works because on modern Linux kernels you can create containers and do bind mounts without root.

                                          If we designed a package installer tool (and a distro?), it would also be possible to just install them like an Alpine Linux package.

                                          I now presume I can be sure of that, because the seed image contains gcc and musl (which I currently need to trust you on, yes?), is that so?

                                          You can rebuild the seed image using the package tree itself; the seed image is reproducible, so you can check that the output seed is the same as the input seed. You need to trust me initially though, before you produce your own seed.

                                          Ah, right: and then any explicitly set env vars result in build script changes, and then because it’s hashed for bclosure (or closure, don’t remember now), which is also included in (b?)closures of all dependees, the final (b?)closure depends on env vars. Cool, thanks!!

                                          That’s right :).

                              2. 1

                                Some time ago I was working on designing and implementing esy, which is a package manager + meta build system (invoking package-specific build systems in hermetic environments) for compiled languages (OCaml/Reason/C/C++/…). It looks like Nix but has an integrated SAT solver for solving deps; we rely on package.json metadata and npm as a registry of package sources (though we can install from git repos as well).

                                I feel like it should be possible to use the same solver ideas or something like go MVS in order to make a distributed package tree - this is another idea I really want to try to integrate in something simpler than Nix. I agree that it seems like a great thing and I definitely want it to be built.

                                edit: I will investigate esy more - it definitely has most of what I want - The big difference seems to be how p2pkgs simply overrides / using user containers and installs them using DESTDIR.
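
                                For reference, a toy sketch of the MVS idea (minimal version selection, as used by Go modules; hypothetical data, not tied to any package tree): every requirement is a minimum version, and the selected version of each module is simply the highest minimum requested anywhere in the reachable graph.

                                requirements = {                 # (module, version) -> its minimum requirements
                                    ("root", 0): [("a", 1), ("b", 1)],
                                    ("a", 1):    [("c", 2)],
                                    ("b", 1):    [("c", 3)],
                                    ("c", 2):    [],
                                    ("c", 3):    [],
                                }

                                def mvs(root):
                                    chosen = {}                  # module -> selected version
                                    stack = [root]
                                    while stack:
                                        mod, ver = stack.pop()
                                        if chosen.get(mod, -1) >= ver:
                                            continue             # already have this version or newer
                                        chosen[mod] = ver
                                        stack.extend(requirements[(mod, ver)])
                                    return chosen

                                print(mvs(("root", 0)))          # {'root': 0, 'a': 1, 'b': 1, 'c': 3}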

                                1. 1

                                  I feel like it should be possible to use the same solver ideas or something like go MVS in order to make a distributed package tree

                                  The depsolver is an interesting beast. I’m not satisfied with how it ended up in esy (though we had constraints to operate within, see below) — the feature I miss the most is the ability to create a separate “dependency graph” for packages which only expose executables (you don’t link them into some other apps) — dependencies from those packages shouldn’t impose constraints outside their own “dependency graphs”.

                                  Ideally there should be some “calculus of package dependencies” developed which could be used as an interface between depsolver and a “metabuildsystem”. That way the same depsolver could be used with nix/hermes/esy/… Not sure how doable it is though — people don’t like specifying dependencies properly but then they don’t like to have their builds broken either!

                                  edit: I will investigate esy more - it definitely has most of what I want - The big difference seems to be how p2pkgs simply overrides / using user containers and installs them using DESTDIR.

                                  Keep in mind that we had our own set of constraints/goals to meet:

                                  • esy is usable on Linux/macOS/Windows (for example we ship Cygwin on Windows, this is transparent to the users)
                                  • esy uses npm as a primary package registry (appeal to people who know how to publish things to npm, an open world approach to managing packages)
                                  • esy doesn’t require administrator access to be installed/used (the built artefacts are inside the home directory)
                                  • esy strives to be compatible with OCaml ecosystem thus the depsolver is compatible with opam constraints and esy can install packages from opam
                                  1. 1

                                    The depsolver is an interesting beast. I’m not satisfied with how it ended up in esy (though we had constraints to operate within, see below) — the feature I miss the most is the ability to create a separate “dependency graph” for packages which only expose executables (you don’t link them into some other apps) — dependencies from those packages shouldn’t impose constraints outside their own “dependency graphs”.

                                    This is much like in a general package tree - statically linked programs really don’t care - but some programs don’t support static linking, or provide dynamic libraries.

                                    Another challenge is that when you don’t have a monolithic repository of all packages, you have double the versioning problems to tackle - each version is really two: the package version and the packaged software version.

                                    My goals for a general package tree are currently:

                                    • Linux only (it worked for docker).
                                    • Doesn’t require administrator for building or using packages.
                                    • Allows ‘out of tree packages’ or combining multiple package trees.
                                    • Transparent global build caching from trusted sources (like nixos, I don’t think esy has this).
                            1. 1

                              Experimenting/learning about type inference algorithms. Last week I implemented Algorithm W for Hindley-Milner type inference.

                              Now I’m working on an HM(X)-inspired algorithm (it separates constraint generation from constraint solving) + adding type classes & polymorphic records to it.
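
                              As a rough illustration of that split (a toy Python sketch, not my actual implementation): walk the term once and emit equality constraints, then solve them separately by unification; type classes and polymorphic records would show up as additional constraint forms handled by the solver.

                              from itertools import count

                              fresh = (f"t{i}" for i in count())            # supply of fresh type variables

                              def is_var(t):
                                  return isinstance(t, str) and t.startswith("t")

                              def generate(expr, env, constraints):
                                  """Phase 1: return a type for expr, emitting constraints as a side effect."""
                                  kind = expr[0]
                                  if kind == "var":
                                      return env[expr[1]]
                                  if kind == "lit":                         # integer literal
                                      return "int"
                                  if kind == "lam":                         # ("lam", x, body)
                                      a = next(fresh)
                                      return ("->", a, generate(expr[2], {**env, expr[1]: a}, constraints))
                                  if kind == "app":                         # ("app", f, arg)
                                      tf = generate(expr[1], env, constraints)
                                      ta = generate(expr[2], env, constraints)
                                      tr = next(fresh)
                                      constraints.append((tf, ("->", ta, tr)))
                                      return tr

                              def walk(t, subst):
                                  while is_var(t) and t in subst:
                                      t = subst[t]
                                  return t

                              def unify(a, b, subst):
                                  """Phase 2: solve an equality constraint (no occurs check in this toy)."""
                                  a, b = walk(a, subst), walk(b, subst)
                                  if a == b:
                                      return
                                  if is_var(a):
                                      subst[a] = b
                                  elif is_var(b):
                                      subst[b] = a
                                  elif isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
                                      for x, y in zip(a, b):
                                          unify(x, y, subst)
                                  else:
                                      raise TypeError(f"cannot unify {a} and {b}")

                              constraints, subst = [], {}
                              ty = generate(("lam", "f", ("app", ("var", "f"), ("lit", 1))), {}, constraints)  # \f -> f 1
                              for lhs, rhs in constraints:
                                  unify(lhs, rhs, subst)
                              print(ty, subst)   # ('->', 't0', 't1') with t0 bound to ('->', 'int', 't1')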

                              1. 6

                                I can feel that… and I actually think there are two separate “barriers” (maybe more; I’m speaking from my experience of trying to read PLT papers):

                                1. Definitely learn the notation used; there’s the Crash Course on Notation in Programming Language Theory, which gives a good overview.

                                2. Get accustomed to thinking with inductive definitions — Logical Foundations’ chapter Inductively Defined Propositions is very good and also contains exercises in Coq (the evenness rules sketched below are the usual first example).
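
                                In inference-rule notation (my transcription of that chapter’s ev, with its two constructors ev_0 and ev_SS):

                                \[
                                  \frac{}{\mathsf{ev}\ 0}\;(\mathrm{ev\_0})
                                  \qquad
                                  \frac{\mathsf{ev}\ n}{\mathsf{ev}\ (S\,(S\,n))}\;(\mathrm{ev\_SS})
                                \]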

                                1. 1

                                  My classes started last week, so I’m getting back into an academic mindset. I’m mainly reading this weekend! More notable readings include:

                                  • Usable Security: History, Themes, and Challenges, a brief introduction to the field of usable privacy+security research.
                                  • First few chapters of the Red Book.
                                  • Browsing through Coq’s documentation to see if there’s anything valuable to a learner. In class, we are currently working through Logical Foundations, but the first few chapters feel very “monkey see monkey do.” I can complete the exercises, but I still lack a deep understanding of what’s going on. Has anyone been through this before?
                                  1. 2

                                    If you want a good understanding of what happens under the hood in a dependently typed language, I highly recommend “The Little Typer” by Friedman and whats-his-name.

                                    1. 2

                                      currently working through Logical Foundations, but the first few chapters feel very “monkey see monkey do.” I can complete the exercises, but I still lack a deep understanding of what’s going on. Has anyone been through this before?

                                      I’ve found that interactive proving made me try to apply tactics in an almost brute-force manner; then you obviously get stuck on non-trivial exercises. The solution I’ve found is to think about the problem informally first (something the book advises as well) and only then do the mechanisation part.

                                    1. 11

                                      I would try and reimplement Z3 in Rust. It would probably take 10 years and most of my hair, though.

                                      1. 4

                                        I remembered starring a SAT solver written in Rust on GitHub 3 or 4 years ago; it looked really promising. I was about to tell you to look into it, until I realized you were the author of it. :D

                                        I know that Z3 does much more than Minisat, but I’m wondering: is it worth it? (= All the extra Z3 features)

                                        1. 5

                                          There’s a bunch of other SAT solvers in Rust (and this one is mostly a port of MiniSat done by someone else, which I forked to add a few things; I can’t claim authorship).

                                          I know that Z3 does much more than Minisat, but I’m wondering: is it worth it? (= All the extra Z3 features)

                                          Yes! SMT solvers are more convenient to use than SAT solvers in many situations, and I think it’ll increasingly be the case in the future (e.g. bitvectors are often as powerful as their SAT encoding equivalent). In some cases, you have a clear encoding of your problem into SAT, in which case it might be better. This book has a lot of examples using both a SAT solver and Z3.

                                          Beyond that, SMT solvers are one order of magnitude more advanced than SAT solvers, I’d say. They’re full of features to handle theories (hence the name), and give users a nice high-level language to express their problem, rather than a list of clauses. SAT has its place (see for example Marijn Heule’s work on using clusters for parallel SAT solving to solve really hard math problems), but for most users I think SMT is the easiest to use, because of all these additional features. Amazon AWS has formed a large team of SMT experts and is using cvc5 for crypto or authentication stuff, not sure which exactly.
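
                                          To make the convenience gap concrete, here is a tiny example using Z3’s Python API (my own toy example, not from the book): ask for a 32-bit bitvector that is greater than 1000 and a power of two, something that would take a fair amount of hand-rolled CNF with a bare SAT solver.

                                          from z3 import BitVec, Solver, sat

                                          x = BitVec("x", 32)
                                          s = Solver()
                                          s.add(x > 1000)              # stated directly over the bitvector theory
                                          s.add(x & (x - 1) == 0)      # classic power-of-two test

                                          if s.check() == sat:
                                              print(s.model()[x])      # e.g. 1024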

                                        2. 2

                                          Not in OCaml?!

                                          1. 4

                                            No, Rust is better suited to this kind of very high performance tool :-). I do have an SMT solver in OCaml in the works, but you can’t really compete with the C/C++ crowd in terms of performance.

                                        1. 5

                                          It is exciting to see neovim gaining such novel and useful features as first-class Lua scriptability (you can even access the libuv event loop from Lua!), an LSP client and online syntax parsing (treesitter).

                                          Personally, while I’ve used neovim since the 0.1.x versions, I’m not eager to take advantage of those new 0.5.0 features. I want to wait some time before the dust settles. There are lots of new Lua plugins popping up for neovim (some brand new, some rewrites of vimscript-based plugins in Lua) and I want to wait and see which become more robust and mature.


                                          Regarding LSP, I still use ALE, which, I think, has some of the best UX:

                                          • Support for non-LSP linters and fixers (autoformatters mostly). This means I can use the same UI to see diagnostics from both LSP and non-LSP sources. This is really important for me.

                                          • A huge collection of integrations with LSP servers and linters. Sometimes I discover a linter/LSP server for a new file type I’m editing just by invoking :ALEInfo and seeing a list of suggested integrations.

                                          I think all of this is possible with neovim’s new LSP + some configuration on top (like using null-ls for non-LSP linters/formatters). But… it all involves some friction to configure it right, while ALE gives you all of that out of the box.

                                          1. 2

                                            +1 for ALE, it’s easy and seamless, and comes with an impressively wide language and tools support.

                                            1. 1

                                              Thanks for the recommendation. I tried neovim but gave up after half an hour of failing to configure its LSP support and after becoming annoyed that they change the undo file format but don’t change the extension so you can’t edit the same file with vim and neovim. I tried ALE and it took about 10 minutes to get it working nicely (well, spread over two days, poking it a bit while I waited for things to build). It does exactly what I want and it’s added a total of under 10 lines to my vimrc to have nice clang-format and clangd integration.

                                            1. 7

                                              See also https://terralang.org which is similar in how it uses Lua as a metalanguage for producing/manipulating terms of a C-like language.

                                              1. 2

                                                Splitting my Ruse Scheme monorepo into multiple repositories; since Chez Scheme is now part of both Debian and Ubuntu, it will be easier to eventually package those projects as standalone projects. Over the last few days I improved the performance of the JSON parser (see the graph below); and today I am reworking the networking library, possibly on top of libev? any thoughts regarding libev?

                                                chez-9.5.4/json       : ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 9.48
                                                cpython-3.9.5/simdjson: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 5.98
                                                pypy3-7.3.3/json      : ▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 3.26
                                                c++/simdjson          : ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 11.47
                                                

                                                ref: https://github.com/scheme-ruse/ruse-json/ note: run it with make benchmark

                                                1. 1

                                                  any thoughts regarding libev?

                                                  Why not libuv? It seems to be much more widespread now (due to its origin as nodejs’s underlying async I/O library).

                                                  1. 1

                                                    I am under the impression that libuv makes it hard to interop with POSIX threads.

                                                1. 1

                                                  Continuing reading (= doing the exercises) through the chapters of Software Foundations’ Logical Foundations (the first book in the series). I’m on the Inductively Defined Propositions chapter now. This is so much fun, really.

                                                  1. 1

                                                    Nice, I just finished going through LF recently as well and found it very rewarding. There’s even a jscoq version of the SF books so you can coq on the go :)

                                                  1. 2

                                                    I myself use display-popup bound to C-s, which executes an fzf-based switcher that lets me switch to or create new sessions:

                                                    unbind -n C-s; bind -n C-s display-popup -x1 -y1 -w40% -h50% -E 'zsh -ic "s"'
                                                    

                                                    I don’t know if it’s possible to add a session preview to the fzf-based switcher using fzf’s preview functionality (I don’t know if tmux exposes “previews” somehow externally) though…

                                                    EDIT: ah, I see the author already explored a similar approach… but still, my fzf switcher is a bit more elaborate (it allows creating sessions on the fly, attaching if outside of tmux, or switching if inside of tmux), which lets me use it as an entry point to tmux.

                                                    1. 2

                                                      This is great, thanks for sharing! Always happy to learn from others. And interested to see how different and how similar our solutions can be.

                                                    1. 4

                                                      Now that the ESP32-C3 is available, it’s much easier to use Rust on the ESP32 platform, as the C3 has a RISC-V CPU. Previously you had to compile an LLVM fork which has support for Xtensa (the CPU used in the other ESP32 modules).

                                                      By the way there’s an official Rust example from Espressif — https://github.com/espressif/rust-esp32-example

                                                      1. 5

                                                        This article talks a bit about how representing trees as trees is inefficient (lots of pointers). It reminds me of a good talk about representing ASTs as arrays, which allows parallelising transformations on them (think SIMD or even GPU).

                                                        “Programming Obesity: A Code Health Epidemic” https://www.youtube.com/watch?v=UDqx1afGtQc

                                                        In the talk, Aaron Hsu presents a Scheme compiler written in 17 (!) lines of APL which can compile code on the GPU.
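
                                                        A rough Python sketch of the representation (just the flat-array idea, nothing like Hsu’s APL): nodes live in parallel arrays, and a transformation becomes a bulk pass over those arrays.

                                                        # Expression (1 + 2) * 3, nodes in preorder, as parallel arrays:
                                                        kind   = ["mul", "add", "lit", "lit", "lit"]
                                                        parent = [  -1,     0,     1,     1,     0 ]
                                                        value  = [None,  None,     1,     2,     3 ]

                                                        def fold_adds(kind, parent, value):
                                                            """Bulk pass: constant-fold every "add" whose children are all literals."""
                                                            for i, k in enumerate(kind):
                                                                if k != "add":
                                                                    continue
                                                                children = [j for j, p in enumerate(parent) if p == i]
                                                                if children and all(kind[j] == "lit" for j in children):
                                                                    kind[i], value[i] = "lit", sum(value[j] for j in children)
                                                                    for j in children:
                                                                        kind[j] = "dead"            # folded children become dead nodes

                                                        fold_adds(kind, parent, value)
                                                        print(kind, value)   # ['mul', 'lit', 'dead', 'dead', 'lit'] [None, 3, 1, 2, 3]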