Threads for alex

    1. 1

      I’m not sure if this is the intended behaviour, but unless I explicitly set a style, it changes on every navigation: every time I click a link I get a flash of white before the new style loads, which is quite disruptive and hurts the eyes a bit. I’d suggest applying this change:

      function setStyleOrDefault(def) {
      -  setActiveStyle(loadStyle() || def);
      +  var style = loadStyle() || def;
      +  storeStyle(style);
      +  setActiveStyle(style);
      }
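
      For what it’s worth, here’s a minimal sketch of why persisting the style fixes the flicker (the helper names come from the diff above; the storage and the random default are faked for illustration):

```javascript
// Stand-ins for the real helpers: a fake storage object instead of
// localStorage, and a random default to model the per-load style change.
const storage = {};
const loadStyle = () => storage.style;    // undefined until something is stored
const storeStyle = (s) => { storage.style = s; };
const randomDefault = () => "style-" + Math.floor(Math.random() * 100);

// Patched version: persist the chosen style on first load, so later
// navigations reuse it instead of picking a new random default each time.
function setStyleOrDefault(def) {
  const style = loadStyle() || def;
  storeStyle(style);
  return style; // the real code would call setActiveStyle(style) here
}

const first = setStyleOrDefault(randomDefault());
const second = setStyleOrDefault(randomDefault());
console.log(first === second); // true: the style is now sticky across loads
```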
      1. 1

        I’ll try that out, thanks. I think this is browser-dependent; I had to fix the same issue when I switched browsers, and I remember the fix being pretty finicky. What browser are you using, btw?

        1. 1

          This is with Firefox on Linux, let me know if you’d like me to do any additional testing!

          1. 1

            Should be fixed now if you want to give it another go (make sure to hard refresh). Clicking between pages and switching the theme in the dropdown shouldn’t flicker anymore in Firefox.

            Side note: changing the theme on every page load (until you pick a theme) is intentional, but might not stay forever. I just got tired of changing my mind every day. :)

            1. 2

              That’s much better :) Thanks for being so responsive!

    2. 4

      I’m sorry to rant, but it’s a big pet peeve of mine that so many projects invent their own Lisp dialects.

      This could have easily been a Common Lisp or Scheme (or even Clojure) library, using macros to get identical syntax, all without reinventing the wheel for basic language syntax and functionality, with the added benefits of reusing existing tools, libraries, documentation, etc.

      The whole idea of creating a DSL in Lisp is that the DSL gets embedded in the Lisp, so the implementer doesn’t have to waste their time implementing a whole language, and can focus on the domain specific parts.

      A DSL is supposed to make things simpler by re-using tools and the user’s existing knowledge, but creating an entirely new language with all new tooling makes things more complicated.

      1. 4

        This is a project for fun, and I think designing a language is fun. I don’t have a better reason for you. :) My last OSS project left me pretty burnt out from trying to please everyone, so right now I’m just trying to build things for myself, and if other people like it, that’s great.

        FWIW, this project is very dependent on container tech and right now Go is the language for that. The language itself is a tiny Kernel-based dialect which does not take much effort to implement. If someone else wants to steal Bass’s ideas and write it in another language, they’re more than welcome to.

        1. 1

          Sorry, I’m not trying to crap on your project. Obviously you can implement it however you want.

          But I feel so many people miss the point of why Lisp is great for DSLs that sometimes I feel the need to point it out when (IMO) a project gets it wrong.

          1. 5

            No worries, I agree with the principle. I think what makes it hard in practice is when there are critical libraries/software in existing ecosystems that you need to integrate with. Someone could probably put together a similar DSL in Clojure, but they’d need to maintain bindings for Buildkit or invent their own stack. It just depends on your goals, I guess.

            Also: in Bass’s case, its entire purpose is to maintain a strict boundary between Bass and the host machine. There are lots of things you can do in other languages that Bass expressly forbids - like writing to the host filesystem or running arbitrary commands. If it were implemented as a library in a general-purpose language, that guarantee would be lost, and the Bass script ecosystem would no longer be pure.

    3. 1

      This looks cool – I don’t quite get the notation for the evaluation? Why does {} appear on the “right” but not in the expressions?

      And sometimes the {} are JSON syntax with quotes, but sometimes they aren’t?

      1. 2

        {} should always be a mapping {:a 1 :b 2} in the language, and it should only look like JSON when you’re reading the stdout part of an example. Maybe the docs UI was confusing? Not sure where you’re seeing it on the right. (A deep-link will appear if you hover over a paragraph, I can take a look.)

    4. 8

      I like this a lot! It feels like a fresh take in the Nix, Babushka, CF/Chef/Puppet/Ansible/Salt space.

      1. 11

        It may be a fresh take on Chef, Puppet, etc., but it isn’t a fresh take on Nix. Bass describes things to do, not the ultimate result. You can tell this is true because it starts with something and adds to it, instead of describing the system in its entirety.

        Another hint is you see commands like “clone” and “build”:

        (-> (git-checkout bass (git-ls-remote bass "main"))
            (go-build "./cmd/..."))

        and “update” and “install”:

          (from "ubuntu"
            ($ apt update)
            ($ apt -y install git))

        instead of a declaration that the set of installed packages is exactly some specific set.

        In short, it may bring fresh ideas and goodness to Chef and Puppet, but ultimately it will suffer many of the same problems of unreproducibility, unpredictability, and surprising ordering that lie at the root of those tools.

        1. 6

          Bass author here - this is a good observation.

          My goal is to meet devs where they are and make it easy to go further if/when they want. Bass prohibits a dependency on host machine state and requires you to bring your own dependencies, but it stops short of prescribing how to do that. Bass replaces one-off Bash scripts and Dockerfiles, but at the end of the day you’re still just running commands.

          I was lazy and just used apt, but the next user could use Nix, if that seems like a reasonable thing to do.

          A thunk represents a point in time where all the dependencies are declared together and ready to (run), but it’s in a different shape compared to Nix, and doesn’t have the same intrinsic guarantees - it’s up to the user to make sure the roots of their dependency graph are reproducible and stable. It gives you the tools, but it doesn’t enforce the practice.

          Under the hood this is all powered by Buildkit’s internal LLB data structure/API, which handles containerization, caching, mounting assets between commands, resolving the internal dependency graph to run things in parallel, etc. This was a very recent change so it’s not mentioned on the website yet but they deserve a big thanks. (edit: added)

        2. 2

          I assumed the declarative vs. imperative difference was a matter of API design, and that the thunks were content-derived (“hermetic, cacheable”). Your comment inspired me to look closer. I see now that thunks are serialised command sequences sent to Docker.

          Less interesting, to me.

          1. 1

            Hmm I’m not sure how that would work, but I’m curious about other ideas.

            The docs are pretty idealistic, but that is the intended use case. Really they’re hermetic and cacheable as long as you hold up your end of the bargain, which should be documented.

            Here’s the going theory - sorry for the wall of text:

            A thunk is just a record describing a command to run, but its inputs (exe, args, stdin, env, dir) may embed more thunks, which may embed more, and so on. A thunk’s image is either a reference to an image in a registry, or another thunk. The theory is that a thunk fully describes how to create the artifact/image it represents - not from scratch, but from whatever images are referenced at the bottom of the graph.
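
            To make the shape concrete, here’s a hypothetical sketch of that record in plain JavaScript (field names are illustrative, not Bass’s actual representation):

```javascript
// A thunk is a record describing a command; its image is either a
// registry reference or another thunk, so the graph bottoms out at
// whatever images are referenced at the bottom.
function thunk(image, cmd) {
  return { image, cmd };
}

// depth walks the image chain to show how thunks embed thunks.
function depth(t) {
  return t.image.thunk ? 1 + depth(t.image.thunk) : 1;
}

const base = thunk({ ref: "ubuntu:21.10" }, ["apt", "install", "git"]);
const build = thunk({ thunk: base }, ["go", "build", "./cmd/..."]);
console.log(depth(build)); // 2: build runs in the image produced by base
```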

            Thunks are built up functionally in code. Often a Bass script will just emit a thunk path to *stdout* without ever actually running it. Instead it’s lazily evaluated when you pipe it to bass -e, just as it would have been if it were passed to another thunk that needed it.

            When you call (run) or bass --export, the Buildkit runtime recursively translates the thunk to one big LLB definition and Solve()s it with the Buildkit client. Buildkit handles all the deduping and caching: if you pass the same thunk-path around multiple times, it’ll just keep generating the same LLB and do a no-op. That part’s content-derived at least!
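
            The dedup can be sketched like this - a toy model, not Buildkit’s actual API - where keying the translation on the thunk’s content means repeated thunks collapse to a single node:

```javascript
// Toy model of content-keyed translation: the same thunk content always
// maps to the same cached definition, so re-translating it is a no-op.
const llbCache = new Map();
function toLLB(t) {
  const key = JSON.stringify(t); // content-derived key (toy stand-in for a digest)
  if (!llbCache.has(key)) llbCache.set(key, { def: key });
  return llbCache.get(key);
}

const t = { image: { ref: "ubuntu:21.10" }, cmd: ["go", "build"] };
const again = { image: { ref: "ubuntu:21.10" }, cmd: ["go", "build"] };
console.log(toLLB(t) === toLLB(again)); // true: same content, same node
```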

            For a thunk to be hermetic, its image reference should be pinned to a tag (or digest if you’re really hardcore), and it should never use an unstable/unversioned input. Instead of running apt-get update && apt install, it should fetch a specific version, or use something like Nix. To use the latest version of something, it should (run) a separate thunk to determine the version first, collect it from the response, and pass the literal version value into the thunk that fetches it.
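
            The “resolve first, then pass the literal” pattern can be sketched like so (all names are hypothetical; in real Bass the resolution step would be its own thunk that you (run) and read the response from):

```javascript
// Step 1: resolve the moving target in a separate step...
function latestVersion() {
  return "1.2.3"; // stand-in for running a thunk that queries a release feed
}

// Step 2: ...then embed the literal result, so the fetching thunk's
// inputs are stable and cacheable.
function fetchRelease(version) {
  return {
    image: { ref: "ubuntu:21.10" },
    cmd: ["fetch", "app-" + version + ".tar.gz"],
  };
}

const pinned = fetchRelease(latestVersion());
console.log(pinned.cmd[1]); // app-1.2.3.tar.gz
```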

            So it’s not completely airtight, but I think it’s the best I can do with a CLI / OCI image-driven UX. In practice the range of reproducibility I need varies case by case - at the start of a project I’d rather aggressively roll forward, and later pin things down when I need to. The current approach gives me flexibility to improve things gradually, which is good enough for me. Definitely curious about ways to make it safer though.

            Cheers for the feedback.

            1. 1

              Could the thunk encode the image digest, even if the originating bass script command only specifies an image name?

              1. 2

                Following up: as of this PR, image refs are resolved to digests before being passed to a thunk. Docs have been updated to use it everywhere.

                1. 2

                  Very cool! 👏🏾 I can’t wait to play with it again once I’m back from the holidays.

              2. 1

                Yeah! I think that’d help a lot. Right now the runtime resolves it at (run) time and it never makes it back to Bass, but it’s doable - just need to figure out how it should look in code.

                Alternatively you could treat it like any other version and resolve the tag to a digest using another thunk like (from (image "ubuntu:21.10")), but that might be too noisy.

        3. 1

          It may be a fresh take on Chef, Puppet, etc. but it isn’t a fresh take on Nix. Bass is describing things to do, not the ultimate result.

          I think you might have a misconception about those tools if you think they represent what to do instead of declaring results. They’re mostly declarative, or even fully declarative if you restrict yourself to a subset of their functionality.

          1. 4

            Those tools are declaring the inputs to the Chef/Puppet/etc. “engine” which “converges” to a state. However, they don’t describe the end result. Chef’s “install a package” looks declarative:

            package %w(package1 package2)

            but that isn’t describing an end result; it is describing an action that will execute just once. You know this is true because in order to have an up-to-date package, you have to explicitly define a version or the :upgrade action:

            package %w(package1 package2) do
              action :upgrade
            end

            This is very important, because this is a scripting language with a declarative-feeling frontend. You can’t know exactly what you’ll have in the end. If you only had the package declaration and ran it on a new and an old server, you are almost guaranteed to end up with two different results.

            If you then want to go and uninstall it you cannot simply remove the “declaration” that you want it installed. No, you have to “declare” that you want it uninstalled:

            package %w(package1 package2) do
              action :remove
            end

            You could write this out in another “declarative” language, where you list the commands you’d declaratively like executed:

            apt-get install package1 package2
            apt-get install --only-upgrade package1 package2
            apt-get remove package1 package2

            With Nix, however, you don’t need to say “uninstall package1 package2”. Instead you change environment.systemPackages = [ vim package1 package2 ] to environment.systemPackages = [ vim ] and the next system build won’t have it. This is also true of users, services, firewall rules, etc.

            Furthermore, Nix’s declarative definitions are thorough enough that you can build any system on any other system, copy the system over, and it’ll work and be identical to what would have happened if you built it anywhere else.

            1. 1

              and the next system build won’t have it

              Yes, this is a good point. Here Nix is technically clearly better than those alternatives.

              The practical way to have this not be a problem with those more conservative tools is to work on a development server and only deploy to production when the work is done. Or carefully fix your mess if you do something like that on the production server. Or deploy the production server from scratch every time you change something (this is kinda what Nix does, isn’t it? Only with more clever strategies, with all the good and bad connotations of the word ‘clever’). Yes, I know how flaky or clunky that can be if there’s not enough discipline.

              1. 2

                I hesitate to call it more practical; I worked with all those options for a long time before finding Nix.