1. 1

    Could something like this be accomplished in Zig via comptime?

    1. 1

      In theory, you could do it with any static language if you write a verification-condition generator to integrate with Why3 or Boogie. They do the proving. A language with metaprogramming might need an initial pass that does compile-time evaluation. Metaprogramming and dynamic languages are difficult in general. Worst case, you can use subsets and/or annotations to aid the analyses.

      1. 2

        That reminds me of the different approaches to handling declarative dependencies in Nix (in my case that’s mostly Haskell libraries with version bounds):

        • One approach is to have our Nix function (e.g. buildHaskellPackage) implement a constraint solver, which reads in version bounds from each version of each dependency, and picks a mutually-compatible set of dependencies to build.
        • A more practical approach is to just shell-out to an existing solver (cabal in Haskell’s case) and parse its output.

        Whether such build-time analysis is performed “within” the language via macros, or externally as a separate step of the build process, the same solvers can be called and the end result is the same. (For checkers like this, there’s also nothing to parse: if a solution is found then we throw it away and carry on to the next step; if not, we exit with an error message.)
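        As a sketch of that shell-out pattern (all names here are illustrative; `cabal build --dry-run` is just a stand-in for “ask the solver”): run the solver, throw away its output, and keep only the yes/no answer.

        ```rust
        // Illustrative sketch of calling an external solver as a build-time check.
        use std::process::Command;

        // Returns true iff the solver process exits successfully.
        fn solver_finds_solution(cmd: &str, args: &[&str]) -> bool {
            Command::new(cmd)
                .args(args)
                .status()
                .map(|s| s.success())
                .unwrap_or(false)
        }

        fn main() {
            // A real build step might run something like
            // solver_finds_solution("cabal", &["build", "--dry-run"]) (flags illustrative).
            // Here `sh -c "exit 0"` stands in for a solver that found a solution:
            if solver_finds_solution("sh", &["-c", "exit 0"]) {
                println!("solution found; carrying on");
            } else {
                eprintln!("no mutually-compatible set of dependencies");
                std::process::exit(1);
            }
        }
        ```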

        I used to dislike the thought of performing I/O at compile time, but I’m seeing more and more compelling use cases: shelling out to competent solvers is one; “type providers” like F#’s are another (where types can be generated from some external source of truth, like a database schema, ensuring out-of-date code fails to build). One I’ve used recently was baking data into a binary, where a macro read it from a file (aborting on any error), parsed it, built a data structure with efficient lookups, and wrote that into the generated AST to be compiled. This reduced the overhead at runtime (this command was called many times) and removed the need for handling parse errors, permission denied, etc.
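        A minimal sketch of the same idea using Rust’s const evaluation (illustrative, not the macro described above): the lookup table is built by the compiler and baked into the binary, so the runtime never parses anything or handles I/O errors.

        ```rust
        // Illustrative sketch: precompute a lookup table at compile time,
        // so the binary ships with the data structure already built.
        const fn squares() -> [u32; 16] {
            let mut t = [0u32; 16];
            let mut i = 0;
            while i < 16 {
                t[i] = (i as u32) * (i as u32);
                i += 1;
            }
            t
        }

        // Evaluated by the compiler; no runtime parsing, no I/O error handling.
        static TABLE: [u32; 16] = squares();

        fn main() {
            println!("{}", TABLE[5]); // prints 25
        }
        ```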

        1. 2

          Yeah, integration with external tools can help in all kinds of ways. The simplest is static analysis to find code-level issues the compiler can’t find. I really like your idea of baking the data into a binary. It’s like the old idea of precomputing what you can, mixed with synthesis of efficient data structures. That’s pretty awesome.

          Actually, I’ve been collecting, and occasionally posting, stuff like that for formal methods. Two examples were letting someone specify a data structure in a functional way or modify/improve loops. Then, an external tool does a pass to make an equivalent, high-performance imperative implementation of either and dumps it out as code. Loop and data structure examples. Imperative/HOL’s technique, if generalizable, could probably be applied to languages such as Haskell and Rust.

    1. 3

      The only thing I wish is that the syntax were more C-like and less Rust-like. I also wish it didn’t depend on the Rust runtime. Other than that it’s nice, and I’ve been thinking about doing something similar for a while, although syntax-wise more in the C-with-ML direction rather than the Rust direction.

      1. 1

        Does it depend on the Rust runtime? The readme states explicitly that it just depends on having a C compiler for the target platform.

        1. 7

          The author should probably use sized types or static asserts for a structure that is supposed to have a concrete size.

            1. 8

              Starting Forth and Thinking Forth.

              1. 2

                Thinking Forth was one of only two books that fundamentally changed how I approach programming, I highly recommend it (and copies can be found on the Internet). The other book was Writing Solid Code (even if it was written by Microsoft).

                1. 5

                  LoRa on this device is super interesting. From what I tested on another SX1276 device, you can control things within ~0.5 km without internet in an urban area, or ~10 km if there’s line of sight. No gateway required.

                  1. 5

                    Can you run Secure Scuttlebutt on that?

                    1. 2

                      Can’t see any reason why not.

                      1. 1

                        From what I understand, LoRa is extremely low-bandwidth and SSB is kinda chatty (also from what I understand).

                        1. 2

                          I found someone [1] running … something … over LoRa (with the same LoRa chip as this computer, too). It looks like it interfaces with SSB somehow but I’m not sure how.

                        2. 1

                          LoRa has a very low data rate - from 100 bps to 27 kbps depending on spreading factor (i.e. faster rates give you shorter range) - and there are duty-cycle regulations (what percentage of the time you can occupy the network), which vary by region but are usually 1%. Worst case, your average throughput will be about a bit per second (over a 5 to 10 km range), which is about 10 kB per day. And that’s if you run your own private LoRa network; shared LoRaWAN networks often have fair-use restrictions going as low as 30 seconds of airtime per 24 hours. So I’d say it’s definitely not suited for real-time messaging.
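                          The arithmetic behind those figures, as a rough sketch (all numbers are the approximate ones from this comment, not from any spec):

                          ```rust
                          // Rough sketch of the duty-cycle throughput arithmetic (figures approximate).
                          fn bytes_per_day(data_rate_bps: f64, duty_cycle: f64) -> f64 {
                              // average rate allowed by the duty cycle, accumulated over 24 hours
                              data_rate_bps * duty_cycle * 60.0 * 60.0 * 24.0 / 8.0
                          }

                          fn main() {
                              // ~100 bps at the slowest spreading factor, 1% duty cycle:
                              println!("{} bytes/day", bytes_per_day(100.0, 0.01) as u64); // 10800, i.e. ~10 kB/day
                          }
                          ```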

                          The main intended use case for LoRa is transmitting sensor readings from sensors running on a small primary battery and intended to live for decades, so a lot of the trade-offs are made in favor of low energy. If you want to run SSB on a device with the power of a smartphone, you probably just want to use a commercial cellular network. If you want something like a disaster-proof, censorship-proof, community-owned network, LoRa is a pretty bad choice anyway, as it’s really easy to jam (a neat example of selective jamming in LoRaWAN is explained in this paper; it would be harder in a LoRa mesh, but non-selective DoS attacks against any LoRa network are trivial).

                      1. 1

                        I’m going to download this and look at it, but the 32 thread POWER9 under my desk is salivating at this. What are the system dependent portions I would need to write to port it to Power ISA?

                        1. 2

                          Look at forth/k-x86_64.s. You’d need to write those primitives for POWER9 and cross-build the boot image from a supported system.

                          1. 2

                            As jacareda said - the assembly-language kernel and some information about system calls in sys.f are needed. In fact, I have most of the stuff ported (but completely untested). Please contact me if you are still interested.

                          1. 23

                            I am against because: 4 of the 6 links you posted are from the creator. If we award tags based on that behavior, it sets a bad precedent.

                            I am for because: It would allow me to filter Zig out.

                            1. 19

                              I don’t think you want to filter Zig out, since you seem to post in every zig-related submission.

                            1. 4

                              Nice, an odd beast implemented in Forth. Lots of interesting stuff on that site; I loved the Acme clone at http://www.call-with-current-continuation.org/ma/ma.html

                              1. 1

                                The Gopher version linked on that page is a nice touch.

                              1. 2

                                The link doesn’t work for me. Is it a link to the paper? If so, https://www.cs.indiana.edu/~rrnewton/papers/2018-11-location-calc-draft.pdf seems to work.

                                1. 1

                                  That looks like it.

                                1. 14

                                  I don’t like the name. How about “Beginner’s All-purpose Symbolic Instruction Code”?

                                  1. 3

                                     This language is similar to BASIC and shares many of its features, but also has many differences. And I like BASIC.

                                    1. 2

                                      Please elaborate on the differences.

                                      1. 8

                                        Arrays use square brackets and start at 0, there are no GOTOs, and blocks end with “end” or with a dot. Variables are integers by default. Functions work differently …

                                        There is also a Python-like “for range”, which fits 0-based arrays better. The syntax is generally shorter.

                                    2. 0

                                      Clearly since you don’t like the name the author should change it!

                                      1. 6

                                        Clearly convincing the author to change the name wasn’t my intention. Maybe you didn’t get it; the OP did.

                                    1. 1

                                      Just a random idea, could a home on a USB stick be handled via unionfs mounts?

                                      1. 16

                                        The problem with USB sticks in that setup is that they’re really meant as “save a file here from time to time” devices. Using them like a hard disk wears them down unbelievably quickly, and they tend to have only one failure mode: catastrophic failure. (I’m working on a project where we do exactly this, and we found that out the hard way.)

                                        1. 3

                                          A lot of this can be fixed by using flash filesystems rather than something designed for disks. Built-in wear-leveling goes a long way toward keeping blocks from going bad.

                                          1. 1

                                            TBH we didn’t try proper flash filesystems at any point because USB stick wear patterns were so inconsistent. Some of them would take days, even weeks, of constant abuse until failure. Others failed in half an hour. I think in total we have burned through probably closer to 100k USB sticks at this point. Curiously, Kingston USB sticks were always the crappiest of them all, in endurance and also in read/write performance. Also, there are huge differences between manufacturing batches, so we cannot just trust some manufacturer, not even custom orders straight from the factories. At some point we came to the conclusion that getting non-IT people to use Vagrant with a very rudimentary management GUI (https://github.com/digabi/naksu) is safer than booting from USB media!

                                        2. 1

                                          Yeah, why not

                                        1. 2

                                          I’m not sure this is a good example of C programming… C requires lots of discipline, and leaving uninitialised fields in State doesn’t seem very disciplined.

                                          1. 4

                                            I asked for acme-style mouse chording some time ago but it was closed. Coupled with this and a plumber it could provide a nice experience.

                                            https://github.com/Microsoft/vscode/issues/5367

                                            1. 2

                                              It’s a shame we’ll never get chording, but customized mouse shortcuts might be a nice compromise if/when it gets worked on:

                                              https://github.com/Microsoft/vscode/issues/3130

                                            1. 14

                                              My problem with make is not that it has a bad design. It is not THAT bad when you look at things like CMake (oops, I did not put a troll disclaimer, sorry :P).

                                              But it only has very large implementations with a lot of non-POSIX extensions. So if you want a simple tool to build a simple project, you have to have a complex tool, with even more complexity than the project itself in many cases…

                                              So a simple tool (redo), available as 2 implementations in shell script and 1 implementation in Python, does a lot of good!

                                              There is also Plan 9 mk(1), which supports evaluating the output of a script as mk input (with the <| command syntax), which removes the need for a configure script (build ./linux.c on Linux, ./bsd.c on BSD…).

                                              But then again, while we are at re-designing things, let’s simply not limit ourselves to the shortcomings of existing software.

                                              The interesting part is that you can entirely build redo as a tiny tiny shell script (less than 4 kB) that you can then ship along with the project!

                                              There could then be a Makefile with only

                                              all:
                                                  ./redo
                                              

                                              So you would (1) have the simple build system you want, (2) have it portable, as it would be a simple, portable shell script, and (3) still have make build the whole project.

                                              You may make me switch to this… ;)

                                                1. 1

                                                  Nice! So 2 shell, 1 python and 1 C implementation.

                                                  1. 5

                                                    There is also an implementation in C++. That site also has a nice Introduction to redo.

                                                    I haven’t used any redo implementation myself, but I’ve been wondering how they would perform on large code bases. They all seem to spawn several processes for each file just to check whether it should be remade. The performance cost of that (not a particularly fast operation) might be prohibitive on larger projects. Does anyone happen to have experience with that?

                                                    1. 1

                                                      The performance cost of that (not a particularly fast operation) might be prohibitive on larger projects. Does anyone happen to have experience with that?

                                                      No experience, but from the article:

                                                      Dependencies are tracked in a persistent .redo database so that redo can check them later. If a file needs to be rebuilt, it re-executes the whatever.do script and regenerates the dependencies. If a file doesn’t need to be rebuilt, redo can calculate that just using its persistent .redo database, without re-running the script. And it can do that check just once right at the start of your project build.

                                                      Since building the dependencies is usually done as part of building a target, I think this probably isn’t even a significant problem on initial build (where the time is going to be dominated by actual building). OTOH I seem to recall that traditional make variants do some optimisation where they run commands directly, rather than passing them via a shell, if they can determine that they do not actually use shell built-ins (not 100% sure this is correct, memory is fallible etc) - the cost of just launching the shell might be significant if you have to do it a lot, I guess.
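                                                      The “check once from the database” idea described in that quote can be sketched as follows (purely illustrative; a real redo implementation records more than mtimes):

                                                      ```rust
                                                      // Toy sketch: decide staleness from recorded dependency metadata
                                                      // without re-running any .do script.
                                                      use std::collections::HashMap;

                                                      // recorded: dependency name -> mtime captured at the last build
                                                      // current:  dependency name -> mtime observed right now
                                                      fn needs_rebuild(recorded: &HashMap<&str, u64>, current: &HashMap<&str, u64>) -> bool {
                                                          recorded
                                                              .iter()
                                                              .any(|(dep, &mtime)| current.get(dep) != Some(&mtime))
                                                      }

                                                      fn main() {
                                                          let recorded = HashMap::from([("main.c", 100), ("util.h", 200)]);
                                                          let mut current = recorded.clone();
                                                          println!("{}", needs_rebuild(&recorded, &current)); // false: up to date
                                                          current.insert("util.h", 201); // util.h was touched since the last build
                                                          println!("{}", needs_rebuild(&recorded, &current)); // true: rebuild
                                                      }
                                                      ```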

                                                  2. 3

                                                    The biggest problem with Make (imo) is that it is almost impossible to write a large correct Makefile. It is too easy for a dependency to exist, but not be tracked by the Make rules, thus making stale artefacts a problem.

                                                    1. 1

                                                      I had given serious thought to using LD_PRELOAD hooks to detect all dependencies dynamically (and identify e.g. dependencies which hit the network), but never got around to trying it.

                                                      Anyone heard of anything trying that approach?

                                                    2. 2

                                                      Why this obsession with “simple tools for simple projects” though? Why not have one scalable tool that works great for any project?

                                                      (Yeah, CMake is not that tool. But Meson definitely is!)

                                                      1. 3

                                                        Because I wish all my projects to be kept simple. Then there is no need for a very powerful tool to build them.

                                                        On the other hand, if you already need a complex tool to do some job, having another simple tool sums up the complexity of both, as you will now have to understand and maintain both!

                                                        If we aim for the simplest tool that can cover all the situations we face, we will end up with different tools according to what we expect.

                                                        1. 3

                                                          Meson isn’t a simple tool; it requires the whole Python runtime just to run --help.

                                                          CMake is a lot more lightweight.

                                                          1. 4

                                                            Have you appreciated how huge CMake actually is? I know I had problems compiling it on an old machine, since it required something like a gigabyte of memory to build. A two-stage build that took its precious time.

                                                            CMake is not lightweight, and that’s not its strong suit. On the contrary, it’s good at having everything but the kitchen sink and being considerably flexible (unlike Meson, which has simplicity/rigidity as a goal).

                                                            1. 2

                                                              CMake is incredibly heavyweight.

                                                            2. 1

                                                              I would like to see how it would work out with different implementations and how “stable” Meson is as a language.

                                                              1. 1

                                                                Meson is nice, but sadly not suitable for every project. It has limitations that prevent some from using it, limitations neither redo nor autotools have. Such as putting generated files in a subdirectory (sounds simple, right?).

                                                            1. 3

                                                              I’m quite happy with my Obins Anne Pro (60%). I have been unable to program it with the official iOS software that simply hangs every time I try to use it. Fortunately, there’re some projects trying to replace both the firmware and the software (I still have to find the time to try them).

                                                              https://github.com/ah-/anne-key

                                                              https://github.com/fcoury/electron-anne-pro

                                                              https://github.com/msvisser/AnnePro-mac

                                                              https://github.com/kprinssu/anne-keyboard-windows