Threads for brnhy

  1. 2

    Unix ignoramus here, but functional programming enthusiast. I was made aware of NixOS a few months ago and really liked the idea of it. If I ever get around to firing up a *nix computer, I think I would like to try this. Has anyone ever installed it on a Raspberry Pi 4?

    1. 4

      I seem to have something about NixOS on RPi4 in my bookmarks: https://lobste.rs/s/pdkwcq/nixos_on_raspberry_pi_creating_custom_sd - though I haven’t tried that myself.

      Personally, I only installed Nix on my RPi4 running Raspbian. One thing I had to do to make it work was switch Raspbian to a 64-bit kernel image. There’s a flag for this (theoretically experimental, yet it worked flawlessly for me); if I’ve found the right one, you need to put arm_64bit=1 in the RPi4’s /boot/config.txt, then reboot.
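
      Concretely, the whole change is just this (assuming I’m remembering the right flag):

      # /boot/config.txt on the RPi4 (Raspbian)
      arm_64bit=1   # boot the 64-bit kernel; marked experimental, but worked fine for me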

      Please note that if you’ve never used Unix/Linux at all before, starting from NixOS might be a rather deep dive - say, akin to learning to drive by buying a battered used 4x4 and having it delivered into your backyard. It’s definitely doable, and if you like experimenting you’ll definitely have tons of fun and learn a lot, though don’t be surprised if many people say it’s neither the easiest nor the most typical way to do it. Depending on what your vision of fun is, you might want to consider going with Ubuntu/Raspbian first if you prefer smaller and safer steps (i.e. learning in a Chevy at a driving school) - or NixOS might be exactly the path to pick if you do like a crazy hacker adventure :D (Personally, somewhat recently I did a for-fun project of assembling Android .apk files by hand from scratch, byte by byte, so I can totally relate if you’d like to choose the crazy way ;P)

      1. 2

        I build SSH-enabled SD images for screen/keyboard-less installation via: https://github.com/Robertof/nixos-docker-sd-image-builder

        Someone helpfully provided step-by-step instructions for the builder here: https://gist.github.com/chrisanthropic/2e6d3645f20da8fd4c1f122113f89c06

        Things I tripped over initially:

        • Increase sdImage.firmwareSize from 128 to something like 1024 if your SD card has the space. If you don’t know what you’re doing you’ll probably want to nixos-rebuild switch a few times, which can consume more space than the default allows (see the snippet after this list).
        • Prefer to use the AWS ARM instance support via packer if possible - it’s a lot faster than using QEMU. My (very rough) cost estimate is somewhere between 20-40 cents (Euro).
        • If you’re using a Pi 4 then make sure you read the instructions thoroughly for Pi 4 specifics - mainly setting DISABLE_ZFS_IN_INSTALLER=y for build speed and NIXPKGS_BRANCH=master so it works and you don’t just get a very smol Christmas tree with blinking lights.
        • If you’re using packer you can directly add access_key and secret_key to the amazon-ebs source, like so:
        source "amazon-ebs" "nixos_sd_image_builder" {
          access_key = "<...>"
          secret_key = "<...>"
          # ...
        }
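
        The firmware-size bump from the first bullet is a one-liner in your image configuration (a sketch - if I recall correctly, the value is in megabytes per the sd-image module):

        # in your sd-image configuration .nix
        sdImage.firmwareSize = 1024;  # default is 128; extra headroom for rebuilds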
        
        1. 2

          A word of warning on the new 4b with https://github.com/Robertof/nixos-docker-sd-image-builder

          I bought the 8gig model in September; it wouldn’t boot at all. Bought a 4gig model, booted the same SD card I created and it worked fine, popped the card back in the 8 and then that booted. You should be OK now, as the firmware is more up to date and you can always install newer firmware, but the 8GB Pi 4 is a bit weird. And not just with NixOS - I’ve seen lots of reports of similar issues from other people on the Pi forums.

          Other than that caveat it works fine. One thing I have to try out is USB boot from the latest firmware, to ditch SD cards entirely for these little Pis.

          1. 1

            I’ve used it for two different 8GB Pi 4s in the past month - apart from forgetting to set NIXPKGS_BRANCH=master I’ve had no other problems.

        1. 14

          I guess by now it’s useless to complain about how confusing it is that OCaml has two (three?) “standard” package managers; the ecosystem around the language is kind of infamous for having at least two of everything. I trust the community will eventually settle on the one that works the best. At least it looks like esy is compatible with opam libraries (though the reverse is not true), so it might have a good chance against opam.

          Also this is kind of unrelated, but I’m really salty about ReasonML recommending JS’s camelCase over OCaml’s snake_case. This is one of the few rifts in the ecosystem that can’t really be fixed with time, and now every library that wants to play well with both OCaml and Reason/BS ecosystems will have to export an interface in snake_case and one in camelCase.
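
          Concretely, I’d expect libraries to grow aliasing shims like this (hypothetical names, purely to illustrate):

          (* the OCaml-facing API *)
          let run_command name = print_endline ("running " ^ name)

          (* camelCase alias re-exported for Reason/BuckleScript consumers *)
          let runCommand = run_command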

          1. 12

            I second the saltiness about ReasonML choosing JS’s camelCase. It seems like a minor syntactic thing to make the language more familiar to JS developers making the switch, but as someone who primarily writes Haskell for my day job, camelCase is just less readable, IMO. Something I’m constantly irritated that I even have to think about is casing acronyms consistently - which is avoided by snake_case or spinal-case - i.e. runAWSCommand or runAwsCommand, setHTMLElement vs setHtmlElement, versus run_aws_command, set_html_element, etc.

            1. 3

              The strangest thing for me is the “hey, there are two mostly compatible syntaxes for this language we call ReasonML” - but it’s mostly the same thing as BuckleScript, whose compiler we use anyway, except this, and this, and … oh, and by the way, it’s all OCaml inside. What?!

              1. 1

                “Oh and also the docs for all these things (which you need) are all in completely different places and formats”

              2. 2

                I think the ReasonML team wanted to match the conventions of JavaScript, where camel case is the norm.

                I can see the annoyance though… and I have to wonder, is ReasonML syntax much better than OCaml’s? Was it really worth the break?

                1. 6

                  It’s not “better.” Yes, there are some cases where they’ve patched up some syntactic oddities in OCaml, but it’s mostly just change for the sake of being near JS.

                  Is it worth it? Depends. ReasonML and its team believe that OCaml failed to catch on because of syntax. If you agree, then yes, it’s worth it. And based on the meteoric rise I’ve seen in ReasonML, they may be right. That said, I believe - with good company - that OCaml didn’t catch on because it had two of everything, had really wonky package managers (and again, two of them), and still lacks a good multithreading story. In that case, no, the syntax is just change for no reason, and the only reason ReasonML is successful is because Facebook is involved.

                  1. 2

                    I’m all for functional alternatives displacing JavaScript, but my main frustration with ReasonML is that any niceties you gain from using it are outweighed by the fact that it’s just one more layer on top of an already complex, crufty, and idiosyncratic dev environment. I think that’s what’s holding OCaml back as much as anything else.

                  2. 4

                    Some people seem to think that OCaml’s syntax is really ugly (I quite like it) and unreadable. I’m guessing they’re the same people who complain about lisps having too many parentheses.

                    ReasonML does fix a few pain points with OCaml’s syntax, mostly related to semicolons (here, here, here), and supports JSX, but it also introduces some confusion with function call, variant constructor and tuple syntax (here, here, here), so it’s not really a net win IMO.
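
                    To make the constructor/tuple confusion concrete (my own toy example, not from those links):

                    (* OCaml: application is juxtaposition *)
                    let add x y = x + y
                    let p = Some (1, 2)   (* a constructor applied to a tuple *)

                    and the Reason flavour of the same two lines:

                    /* Reason: everything is call-shaped */
                    let add = (x, y) => x + y;
                    let p = Some((1, 2)); /* extra parens needed to pass the tuple */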

                    I think ReasonML was more of a rebranding effort than a solution to actual problems, and honestly it’s not even that bad if you disregard the casing. Dune picks up ReasonML files completely transparently so you can have a project with some files in ReasonML syntax and the rest in OCaml syntax. The only net negative part is the casing.

                  3. 1

                    Esy and bsb are build orchestration tools, not package managers.

                    Esy is not OCaml-specific, it can e.g. include C++ projects as build dependencies. This is how Revery ( https://github.com/revery-ui/revery ) is being developed, for example. Esy also solves the problem of having to set up switches and pins for every project, with commensurate redundant rebuilds of everything. Instead, it maintains a warm build cache across all your projects.

                    Bsb specifically supports BuckleScript and lets it use npm packages. It effectively opens up the npm ecosystem to BuckleScript developers, something other OCaml tools don’t do (at least not yet).

                    Having ‘two of everything’ is usually a sign of a growing community, so it’s something I’m personally happy to see.

                    Re: casing, sure it’s a little annoying but if C/C++ developers can survive mixed-case codebases, hey, so can we.

                  1. 6

                    For a unified tool that’s got deep project management, artifact hosting, code review, etc., check out Phabricator. I haven’t used it seriously, but it seems to offer a lot of your asks. Gitlab is also getting close, with integrated CI tools and now “serverless” … something, large file storage, and more flexible issues.

                    As far as Github goes: why not Github for code, CI (using Actions), and artifacts, and a separate “tasks” tool for tracking projects, issues, notes, meetings, etc.? Jira, for instance, provides all the desired features listed for issue management. There’s a pretty good integration between Github and Jira which lets you create references between the two systems, like importing GH issues into Jira and closing Jira tickets with GH pull requests.

                    1. 5

                      The issue with Jira is that it’s kind of horrible. It loads incredibly slowly, with widgets that keep popping in and moving stuff around even after the main HTML has loaded, and its UI is confusing, overly complicated, and sluggish (even after everything has loaded) IMO.

                      1. 1

                        Jira is very customizable and frequently installed “on-prem” by organizations, so your experience with it depends a lot on how your org sets it up. At my most recent job Jira was everything you describe, but I’ve also used a hosted instance, with many features and fields removed by the org’s configuration, that felt lightweight and low mental burden.

                        My point stands however you feel about Jira: substitute Request Tracker or any other full-featured “agile” ticketing system above.

                      2. 5

                        I came here to vouch for Phabricator. I’ve used it, on and off, for almost three years and it’s a joy to work with. Using Github was dreadful after I started using Phabricator.

                        Gitlab is, I believe, a “safer” choice than Phabricator, in the sense that there’s a multimillion-dollar VC-backed company behind it, dozens or hundreds of employees, etc., while Phabricator is currently at a smaller scale. But philosophically speaking, Phabricator is truly open source while Gitlab isn’t, and that alone wins me over.

                        A new, more hard-core contender is sr.ht. Unfortunately it’s still too raw for me, but it’s worth keeping tabs on.

                        Anyway: I highly recommend Gitlab or Phabricator over Github. Either solves many (not all) of the issues listed in the article.

                        1. 3

                          FWIW GHC (Haskell) is moving from Phabricator to Gitlab

                          1. 1

                            I feel like there are two less-than-ideal options on that one. Phabricator is incredibly fast, but I think the complaints in that email are legitimate (development priorities are odd, and open-source friendliness definitely seems on the decline). GitLab does tons more and is more responsive, but it’s also just so damn slow. This morning, I checked out their about page again, and it reliably took nearly a second to render on my iPad. That’s just ridiculous.

                            I’m optimistic Phabricator will add burndown (or, even better, burn-up) tracking before we really need that kind of view, but if not, GitLab may be the only legit option.

                      1. 5

                        There is an informative series of articles from one of the Midori authors which covers object-capabilities as provided/defined/enforced by the kernel; see Midori: Objects as Secure Capabilities.

                        1. 2

                          Joe’s series is awesome. Highly recommended.

                        1. 26

                          No, this guy is wrong. Protocol buffers are endlessly pragmatic and many of the “bad decisions” he points out have concrete reasons.

                          For instance - he suggests all of the fields should be required. Required fields existed in at least proto2, and I assume proto1, but were discovered to be terrible for forwards compatibility. I agree with his footnote that there’s a debate, but one side of it decisively won. If a field is required in one release of your code, that code can never talk to a future release that stops setting the field - it just blows up. The most frequent internal advice I saw was “avoid required. required is forever.” As a result, most feedback encouraged everything to be optional or repeated, which was made official in proto3.
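
                          A sketch of the failure mode (a made-up message, proto2-style syntax):

                          message User {
                            required string name = 1;  // readers reject any message missing this,
                                                       // so no future release can ever drop it
                            optional string email = 2; // safe to add or remove across releases
                          }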

                          Second, here’s how he wants to implement repeated:

                          coproduct List<t> {
                            Unit empty = 0;
                            Pair<t, List<t>> cons = 1;
                          }
                          

                          This just reeks of complete ignorance of a couple of things:

                          1. How is this going to look for serialization/deserialization? Sure, we’ve embedded a list into a data structure, but what matters is being fast. Protocol buffers pragmatically describe useful data structures that are also very close to their native wire format. This is not that, though he says “the actual serialization logic is allowed to do something smarter than pushing linked-lists across the network—after all, implementations and semantics don’t need to align one-to-one.”
                          2. The protocol buffer implementation must be simple, straightforward, bug-free, and implemented in every language anyone wants to use. Static analysis to detect these patterns could work, but good luck maintaining that logic in every language of your lingua-franca language-interoperability system.
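
                          For contrast, the flat encoding protobuf actually uses looks roughly like this (my sketch, not from the article):

                          message IntList {
                            repeated int32 items = 1;  // wire format: tag+value pairs laid out
                                                       // back to back - no cons cells to chase
                          }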

                          Third, as an example of the designers of protobufs being amateurs, he says:

                          It’s impossible to differentiate a field that was missing in a protobuffer from one that was assigned to the default value.

                          *headdesk* proto2 definitely supported this functionality. It was stripped out in proto3 after literally decades of experience from thousands of engineers said that, on balance, the tradeoff wasn’t worth it. You can’t claim that a hard look at the tradeoffs is the work of amateurs.

                          Fourth:

                          With the one exception of routing software, nothing wants to inspect only some bits of a message and then forward it on unchanged.

                          This is the predominant programming pattern at Google, and in many other places too. Protocol buffers sound… perfectly designed for their use case!

                          What a frustrating read.

                          1. 4

                            Thanks for this critique; you’re right on. I do agree with one part though - you need to make application-specific non-proto data structures that often mirror the protos themselves, which isn’t exactly DRY.

                            Here’s an example that I’m struggling to find a “nice” solution for. A locally running application has a SQLite database, managed via an ORM, that it collects structured log entries into. Periodically, it bundles those log entries up into protos, removes them from the local database, and sends them (or an aggregated version of them) up to a collection server.

                            The data structures are exactly the same between the protos and the database, yet I need to define them twice.

                            1. 3

                              Hmm, yeah, that’s a tough one. One thing that the protobuf compiler supports though is extensible plugins (e.g., check out all of the stuff gogoproto adds as extensions to the compiler: https://github.com/gogo/protobuf/blob/master/extensions.md)

                              Perhaps the right thing in ORM situations at a certain scale (maybe you’re not at this scale yet) is to write a generator that generates the ORM models from the protobuf definitions?
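
                              As a rough illustration of the runtime flavour of that idea (a sketch, not a real protoc plugin - log_pb2 and the type map are my assumptions):

                              # derive a SQLAlchemy table from a generated proto message class,
                              # so the schema lives only in the .proto file
                              from google.protobuf.descriptor import FieldDescriptor
                              from sqlalchemy import Column, Integer, String, Table, MetaData

                              # partial mapping from protobuf scalar types to column types
                              TYPE_MAP = {
                                  FieldDescriptor.TYPE_INT32: Integer,
                                  FieldDescriptor.TYPE_INT64: Integer,
                                  FieldDescriptor.TYPE_STRING: String,
                              }

                              def table_for(message_class, metadata):
                                  """Build a Table mirroring a proto message (scalar fields only)."""
                                  columns = [
                                      Column(f.name, TYPE_MAP[f.type])
                                      for f in message_class.DESCRIPTOR.fields
                                      if f.type in TYPE_MAP
                                  ]
                                  return Table(message_class.DESCRIPTOR.name, metadata, *columns)

                              # usage, assuming log_pb2.LogEntry is a generated message:
                              # table = table_for(log_pb2.LogEntry, MetaData())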

                              1. 2

                                Yeah, that would seem like the right solution in this case. In any case, what I described isn’t even a problem with the proto way of thinking, it’s just a tooling issue.

                            2. 4

                              Nice critique, better than I could have enunciated. I worked with the author at a company that chose protocol buffers, and I assume the article is directed in part at those of us (myself included) who chose to use protocol buffers as our wire serialization format. It was the most viable option given the constraints and the problems a prior system was exhibiting. That said, the author has predilections, and they show in the article’s tone and focus.

                              1. 1

                                Were you replacing XML?

                              2. 3

                                This is the best critique of this rant I’ve read, and you didn’t even go into the author’s attitude. Kudos and thank you.

                              1. 2

                                Also check out Akka.NET. I don’t think they’re directly comparable, but they both smell like Erlang to me…

                                1. 3

                                  The published technical report on Orleans makes direct comparisons against Akka and Erlang:

                                  Actor platforms such as Erlang and Akka are a step forward in simplifying distributed system programming. However, they still burden developers with many distributed system complexities because of the relatively low level of provided abstractions and system services.

                                1. 8

                                  Always a joy to read Conor’s writing :)

                                  Note that Epigram has been dead for a while; Idris is its spiritual successor (I believe it actually evolved from an attempt to build an Epigram compiler). Idris is explicitly aiming to be a “real” programming language; Agda is very similar, but is more often used from the mathematical/logical side of Curry-Howard, rather than the programming side.

                                  Neither Idris nor Agda has the 2D syntax of Epigram, but they both have powerful Emacs modes which can fill in pieces of code (Haskell’s “typed holes” are the same idea, but (as this paper demonstrates) Haskell’s types are less informative).
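
                                  For a taste of the Haskell version: leave an underscore where code should go and GHC reports the type it expects there - roughly like this (from memory, so treat the message format as approximate):

                                  f :: Int -> String
                                  f x = _ (x + 1)
                                  -- GHC: Found hole: _ :: Int -> String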

                                  1. 10

                                    Indeed, I suppose it’s that Idris evolved from what was intended to be the back end of Epigram. It certainly owes a lot to Conor McBride and James McKinna’s work on Epigram. I don’t know if “real programming language” is exactly the right way to put it, though, so much as being the language I wanted to have to explore the software development potential of dependent types. Maybe “real” will come one day :).

                                    1. 1

                                      Will we see a longer-form post/paper or something that isn’t merely Twitter teasers about Blodwen anytime soon? :)

                                      1. 6

                                        Hopefully! I know I need to get on with writing things so that more people can join in properly. I have plenty of time to work on it for a change in the next few months, so that’s nice…

                                        1. 1

                                          Do you have a writeup about it? I’m wondering why you’re replacing Idris, which is somewhat established already - I mean, that’s probably the reason you’re replacing it, but I still wonder what concretely necessitated a whole new language instead of a 2.0.

                                          1. 6

                                            It isn’t a whole new language, it’s a reimplementation in Idris with some changes that experience suggests will be a good idea. So it’s an evolution of Idris 1. I’ll call it Idris 2 at some point, if it’s successful. It’s promising so far - code type checks significantly faster than in Idris 1, and compiled code runs a bit faster too.

                                            Also, I’ve tried to keep the core language (which is internally called ‘TTImp’ for ‘type theory with implicits’) and the surface language cleanly separated. This is because I occasionally have ideas for alternative surface languages (e.g. taking effects seriously, or typestate, or maybe even an imperative language using linear types internally) and it’ll be much easier to try this if I don’t have to reimplement a dependent type checker every time. I don’t know if I’ll ever get around to trying this sort of thing, but maybe someone else will…

                                            I started this because the Idris implementation has a number of annoying problems (I’ll go into this some other time…) that can only be fixed with some pretty serious reengineering of the core. So I thought, rather than reengineer the core, it would be more fun to see (a) if it was good enough to implement itself, and (b) if dependent types would help in any way.

                                            The answer to (a) turned out to be “not really, but at least we can make it good enough” and to (b) very much so, especially when it comes to name manipulation in the core language, which is tricky to get right but much much easier if you have a type system telling you what to do.

                                            I don’t have any writeup on any of this yet. It’ll happen eventually. (It has to, really - firstly because nobody ever made anything worthwhile on their own so a writeup is important for getting people involved, and secondly because it’s kind of what my job is :))

                                            1. 1

                                              I’m so excited by all of this, can’t wait to see what comes out of it, and it can’t come soon enough :D Idris totally reinvigorated my love for programming, tbh.

                                        2. 2

                                          I just follow along with the commits. Edwinb is usually pretty good with his commit messages, so you can kind of get a story of the development from that! :)

                                          1. 1

                                            I’ve got to admit it’s very weird reading a reply by someone with your identical name/spelling, thanks!

                                          2. 1

                                            What’s Blodwen?

                                            1. 2

                                              An implementation of Idris in Idris: https://github.com/edwinb/Blodwen/

                                              Has a neat implementation of Quantitative Type Theory that I’m hoping to integrate in my own programming language!

                                              1. 1

                                                Nice! What’s your language? Btw your second link is broken

                                                1. 3

                                                  Fixed! This is mine: https://github.com/pikelet-lang/pikelet - scratching my itch of Rust not being enough like Idris, and Idris being not designed with low level systems programming in mind. Probably won’t amount to much (it’s rather ambitious), but it’s been fun playing around, learning how dependent type checkers work! I still need to learn more about what Epigram and Idris do, but it takes passes of deepening to really get a handle on all the stuff they learned. I’m probably making a bunch of mistakes that I don’t know about yet!

                                                  1. 1

                                                    Nice logo. Looks harmless and fun.

                                                    1. 1

                                                      Nice. I’m starting to realize how I wasn’t the only one to have thought “wouldn’t it be nice to have a purely functional systems language with cool types”:D

                                                      What I wanted to make was very similar to Idris, but I would’ve put way more focus on lower-level stuff. Honestly, my way of combining it was likely misguided as I was a total rookie back then (still am, but comparatively, I at least know how much I didn’t know…)

                                                      1. 1

                                                        Oh cool! I’m sure there are others with a similar itch to scratch - it’s just coming into the realm of the adjacent possible. Pikelet is just my attempt at pushing that process forwards.

                                                        We’ve been growing a nice community over at https://gitter.im/pikelet-lang/Lobby - you are most welcome to drop by if you like!

                                            2. 1

                                              Thanks for putting this in context, that’s really useful.

                                              Also: sounds like I’m missing a (200x) in the title, if you know the correct year.

                                            1. 10

                                              There are numerous packages on Hackage that provide Big-O annotations - for example, see the Data.Map.Strict module (persistent maps).
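
                                              For instance, the annotations in Data.Map.Strict’s docs look roughly like this (paraphrased from memory):

                                              -- | /O(log n)/. Insert a new key and value in the map.
                                              insert :: Ord k => k -> a -> Map k a -> Map k a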

                                              I find it useful both as a rough baseline for my own code and as a signal that the author has thought through some of the higher-level API.

                                              Unfortunately it’s not completely pervasive, and Haskell documentation tends to vary wildly in quality, but a lot of the more popular base-level libraries do have these annotations.

                                              1. 2

                                                This is very well written and is worth reading for the section about prior work alone. I would have liked a closer look at LambdaPi since I am not familiar with it.

                                                1. 3
                                                  1. Henk: A Typed Intermediate Language by Erik Meijer and Simon Peyton Jones contains a tutorial covering the Lambda Cube and can serve as a brief introduction to Pure Type Systems.
                                                  2. Lambda Pi: A Tutorial Implementation of a Dependently Typed Lambda Calculus by Andres Löh, Conor McBride and Wouter Swierstra is a worked introduction to implementing a dependently-typed lambda calculus, and has the resulting implementation available in Haskell which can be useful to follow along.

                                                  I’d recommend reading those in order. The Henk paper in particular is great, and despite being from 1997 I haven’t personally found any better introduction to the topic.

                                                  (That is, I recommend reading Henk before the actual link regarding LambdaPi or the reorganisation.)

                                                1. 10

                                                  But in most large projects you will just write a script to generate these files.

                                                  When I read statements like this, I think of the underpants gnomes business plan wherein one just glosses over the “magic” phase.

                                                  1. 7

                                                    It’s not unusual at all. The de facto architecture of build systems is now to separate them into high-level and low-level layers.

                                                    This started with autotools and Makefiles. Autoconf and automake are separate high-level languages (implemented as m4 macros, blech), which generate shell scripts and Makefiles. GNU Make does the actual execution.

                                                    This architecture has become more codified with Ninja as the low-level layer and GN (Chromium) or CMake (LLVM) as the high-level, portable layer.

                                                    So he’s basically saying that Fac is a low-level build system: it doesn’t have Turing-complete features. Low-level languages like Ninja may lack Turing-complete features, but high-level build languages need them. Android is moving away from 200K lines of Makefiles that included the GNU Make Standard Library - a build system bigger than most programs people write. The high-level language needs to be Turing complete, which isn’t a small undertaking. CMake admitted that their language is kinda horrible.

                                                    1. 3

                                                      I don’t dispute the separation. But the wording (“just write a script”) belies the amount of work needed to make it into a full build system that you can actually use.

                                                      1. 3

                                                        I realise I’m probably being overly pedantic, but every time I see arguments about the need (or not) for a language to be Turing complete, I wonder if people would be better served by offering a more precise specification of the feature(s) they believe this will enable. For example, the blanket statement of whether something is Turing complete only communicates the ability to simulate a Turing machine; there are very many valid programs that can be written without that particular feature.

                                                        Agda et al. are an example of something that is not Turing complete per se (partiality monad aside), yet would not be mistaken for being in the same power-to-weight category as a Makefile.

                                                        1. 2

                                                          Yeah in this context “Turing complete” is perhaps not very precise or doesn’t capture the whole issue. In particular people often conflate two things: whether you can do arbitrary computation and whether you have side effects.

                                                          Both are design issues in a build system. A side effect would be reading the source files to discover dependencies from .h files, rather than the dependency graph being purely a function of what’s in the Makefile.

                                                          For the case of builds, my assertion is that you need arbitrary computation: memory, loops, and conditionals. And although it doesn’t affect whether it’s Turing complete or not, you need abstraction power like functions and so forth.
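
                                                          For a concrete taste of that “logic” in a high-level layer, a CMake-flavoured sketch (the file and module names are made up):

                                                          # conditionals for portability
                                                          if(UNIX)
                                                            set(PLATFORM_SRCS posix_io.c)
                                                          else()
                                                            set(PLATFORM_SRCS win32_io.c)
                                                          endif()

                                                          # loops + abstraction for code reuse
                                                          foreach(mod core net ui)
                                                            add_library(${mod} STATIC ${mod}.c)
                                                          endforeach()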

                                                          If there are better terms to have this discussion with, I would be interested in hearing them. But my overall point stands: low level build systems deal with fine-grained dependency graphs and parallel execution. High level build systems have “logic” for portability and code reuse. Build systems are real code now.

                                                    1. 42

                                                      My initial unfounded reaction is I’d rather not see such threads here if only because it has the potential to attract attention from tech recruitment circles.

                                                      1. 8

                                                        Ditto. Rather not.

                                                        Edit: Having a #jobs channel has created some problems for a Slack community I’m in. Really attracts people that ignore or don’t care about group norms. Will play dumb when you call them on it.

                                                        1. 1

                                                          I kind of like watching them slink away with increasingly frail excuses for being a recruiter with no interest in the community.

                                                          1. 2

                                                            I don’t enjoy humiliating people.

                                                            1. 2

                                                              Hm, I haven’t seen any humiliation. Usually just a question about how their offering is relevant to the community and an unrelated, dodging answer offered. Maybe I’ve just missed the ones where things got more aggressive.