Threads for ngp

  1. 11

    I didn’t lose anything, I’ve only gained. The issues with video calls can be solved by having fewer of them, especially since most meetings are useless anyway.

    Social isolation? My job is to read and write code, and for that I want to be in flow. I see people after work, at the pub, like $DEITY intended.

    1. 10

      My job is to read and write code

      I firmly disagree with this sentiment for most SWEs. Part of a software engineer’s job is to read + write code. Well over half of a good engineer’s job is social, either interacting with team members or stakeholders.

      1. 6

        Nowhere near half of my time is interacting with anyone, and 99% of the time that I do it’s better to do it asynchronously via slack/email than getting interrupted in a noisy office.

        My job is to read and write code. Sometimes I have to interact with other people to do so.

        1. 6

          Then you are in a minority

    1. 6

      I just picked up a (nearly) maxed out M2 MacBook Air. Rust absolutely flies on this thing. I don’t need a ton of IO, or sustained performance, so I opted for the cheaper and lighter model.

      1. 2

        Completely agree! The only thing I didn’t max out on mine was storage (512 is enough for my uses) and the laptop has been amazing! My first Apple laptop in a very long time. Although the extra performance of the Pro laptops was tempting, I really love how thin and light the M2 air is. Haven’t noticed any issues without a fan yet!

        1. 3

          I upgraded to 1TB because I also do photography. 512GB is definitely enough for the vast majority of dev work

      1. 2

        The MBA M2 is developing a reputation for running hot. https://arstechnica.com/gadgets/2022/07/the-new-macbook-air-runs-so-hot-that-it-affects-performance-it-isnt-the-first-time/

        My MBP M2 seems to work great and I’ve heard nothing but good things about the M1 machines.

        1. 5

          To be clear, the thermal throttling folks have found shows up when you put it under sustained “pro”-type workloads, and even then it’s faster than the M1 and has phenomenal battery life. Unless you’re simultaneously taxing all the CPU and GPU cores for sustained periods in your daily usage, you’re unlikely to ever even hit those.

          1. 5

            Can confirm. Have a M2 Air and have not had any issues with Rust development

            1. 3

              Yep. My current PC laptop (Asus Zephyrus G14) doesn’t just run hot when it starts using the GPU. It crashes with a hard reboot. I’ll take throttling over that every single time.

            2. 2

              Shrug. The MacBook Air M1 also runs hot and starts throttling. I had a MacBook Air M1 for a while (now Pro 14”) and could get the CPU to heat up to 95 degrees and start dropping from 3.2GHz to 2.3 GHz very quickly (e.g. by building PyTorch).

              But it doesn’t matter. The machine is still perfectly fast even when it throttles. The MacBook Air trades off a bit of sustained performance for being ultra-thin and fanless. Unless you are doing long builds, you won’t notice, because it won’t throttle with short computation bursts (e.g. your language server doing its work). If you prefer sustained performance, get a Pro 13” or even better, a Pro 14” or 16”. Those models take the other side of the trade-off - they’re much thicker and actively cooled, but throttle less with sustained workloads.

              The whole M2 throttling saga is just clickbait from a YouTuber and some news sites. Where were they when the M1 came out?

            1. 29

              Nah. I don’t use dockerfiles. I don’t use nix. I’ll wait another 5 years for the sediment to settle and let everyone else dig through the shit

              1. 22

                But how do you even get anything done without using Kubernetes, Nomad, 3 different service discovery mechanisms, thousands of lines of YAML, and at least two different key/value stores?

                1. 16

                  I am incredibly thankful for the existence of Docker.

                  I have less-than-fond memories of trying to set up local instances of web stacks over the past 20 years, and the pile of different techniques that range from Vagrant boxes (which didn’t ever build properly) to just trying to install all the things – web server, DB server, caching server, all of it – on a single laptop and coordinate running them.

                  Now? Docker + docker-compose, and it works. The meme is that Docker takes “works on my machine” and lets you deploy that to production, but the reality is that it goes the opposite direction: it lets you write a spec for what’s in production and also have it locally with minimal effort.

                  Things like k8s I can take or leave and would probably leave if given the choice, but Docker I will very much keep, thank you.

                  1. 4

                    Containers are absolutely here to stay but I think the parent comment is mostly complaining about the vast ocean of orchestration solutions, which have created whole new categories of complexity.

                    1. 1

                      Oh I like Docker, or more specifically the idea of containers. My issue is more that on one side you have containers, and on the other side you have this incredibly complex stack of services you probably don’t need. And yet people tend to lean towards that side, because some blog post told them you really can’t have containers without at least a dozen other services.

                    2. 2

                      apt, mostly, and some ansible. :-P

                    3. 16

                        Docker and Nix solve a very particular problem that many, but not all, developers experience. If you don’t experience that problem then awesome! Count yourself lucky. But some amount of us need to use multiple versions of both languages and libraries in our various activities. Some languages provide a way to do this within the language and for specific C libraries. But almost none of them solve it in a cross-language polyglot environment. For that you will need Nix, Docker, or something in the Bazel/Pants build family.

                      But as you astutely note those are not pain free. You should really only use them if the pain of not using them is worse than the pain of using them.

                      1. 3

                        Can confirm. Have a Pants managed monorepo. It’s painful.

                    1. 18

                        The post by the author (who I believe is a lobste.rs member) ends on a sad note. I don’t think just the publicity caused this crash in enthusiasm - I’m guessing the internet was the internet and people were unkind to them, which I can totally see killing enthusiasm for an endeavor, especially if the spotlight was shone too early.

                      To the author - I hope, once this 15min of hell has passed, your motivation comes back, and you keep working on it, since there must have been interesting problems in that space you wanted to solve.

                      1. 21

                        Generally I’d agree with this sentiment.

                        But the author is known for being rather obnoxious and rude towards other projects he disagrees with, and was even banned from lobsters for this reason. So in this case I don’t feel too bad.

                        1. 28

                          He’s also made significant effort - and improvement! - on those fronts.

                            I have first-hand experience of interacting with him on IRC, as a paying customer with questions about his products. I wish all vendors were as approachable, polite, and direct as he is.

                            Re. the note on his ban - I too find myself disappointed in the world (of software) at times, as do many of my friends and colleagues. I note though that few people take the step of launching their own commercial products as a means of improving it.

                          1. 8

                            commercial products

                            commercial and ethical products

                            They might be opinionated, but they are still free software. That’s really not typical nowadays.

                            I have noticed some introspection, e.g. https://drewdevault.com/2022/07/09/Fediverse-toxicity.html.

                            I too have issues dealing with my frustration and textual interactions don’t make it any easier. Without easily accessible peers to discuss things with, it falls to the online community to help people cultivate their opinions.

                            I am thankful that many people here have the patience.

                            1. 5

                              They might be opinionated, but they are still free software. That’s really not typical nowadays.

                              Agreed; and that’s a large part of the reason I made the switch to sourcehut from GitLab.

                          2. 12

                            In this case you are the one being obnoxious and rude. You don’t know the guy, don’t spread rumors and hate.

                            1. 12

                                I agree that it’s time lobsters moved on from this and stopped bringing up DeVault’s past mistakes.

                                However, this isn’t a “rumor” or “hate”. They were simply stating a well-known fact about Drew’s aggressiveness and rudeness, one which I’ve also experienced and seen others experience. (To be fair, I’ve noticed his behavior has improved a lot over the past 12 months.)

                              Jeez, I really look forward to the day when lobsters can discuss Drew’s work before dragging up shit from 1 year ago.

                              1. 15

                                  I think it certainly is hate. These comments seem a lot like targeted harassment to me. Most of the commenters don’t seem to have first-hand experience with what they are talking about. They also appear whenever Drew does something good, which just detracts from everything.

                            2. 11

                              The reasons were not made public and it’s bad form to attack someone who can’t respond.

                              1. 7

                                Ah, I am no longer as active on lobste.rs as I used to be and I missed that Drew got banned. I just searched through his history but didn’t find the smoking gun that got him banned. Anyhoo, sad all around.

                                1. 16

                                  There’s some context in this thread, though it doesn’t provide an exact reason.

                                  I had a long response to his Wayland rant because I think the generalizations in that post were simply insulting at best and it drove me crazy.

                                  He is a clever engineer, but he has a tendency to invite controversy and alienate people for no reason. After that rant of his, I lost any desire to ever engage with him again or use his products if I can help it, which may be extreme, but after numerous similar exchanges I think it’s unfortunately necessary.

                                  1. 12

                                    Yeah, I’m surprised and somewhat sad. He’s difficult and abrasive sometimes, but I respect his engineering.

                                    1. 3

                                      im so tired of this sentiment

                                      1. 17

                                        im so tired of this sentiment

                                        Saying you’re tired of another person’s take without giving any reason is a pretty vacuous and unnecessary comment. The button for minimizing threads is there for a reason.

                                        1. 18

                                          I’m also tired of the sentiment that allows someone to be shitty just because they’re good at solving a problem.

                                          1. 2

                                            Unfortunately (?) you can’t disallow someone from being shitty.

                                            1. 7

                                                One can certainly exclude them from a group of friends that you care for.

                                          2. 2

                                              This comment is inappropriate. I am sure that the tone and attitude here is not a fit for the community we are aiming for on lobsters.

                                          3. 2

                                            The opposite leads to bad engineering decisions.

                                            1. 12

                                              Health care and related fields have a concept of the quality-adjusted life year, which is used to measure impacts of various treatments, or policies, by assigning a value to both the quantity and quality of life. There are grounds for critiquing the way the concept is used in those fields, but the idea probably ports well to our own field where we could introduce the concept of the quality-adjusted code unit. Let’s call it QALC to mirror QALY for life-years.

                                              The gist of the argument here is that while there are some people who produce an above-average number of QALCs, if they are sufficiently “abrasive” they may well end up driving away other people who would also have produced some number of QALCs. So suppose that a is the number of QALCs produced by such a person, and l is the number lost by their driving away of other people. The argument, then, is that in many cases l > a or, more simply, that the person’s behavior causes a net loss overall, even when taking quality (or “good engineering” or whatever synonym you prefer) into account.

                                              My own anecdotal experience of involvement in various open-source projects is that we often drastically overestimate the “abrasive” person’s QALCs and underestimate the QALCs of those who are driven away, making it almost always a net loss to tolerate such behavior.

                                              1. 5

                                                I’m 100% OK with bad engineering decisions (within reason) if it means my life is more pleasant. If hanging out with brilliant assholes makes your life more pleasant, then by all means, go for it!

                                                1. 2

                                                  It took me 20 minutes to pay for something on my iPhone today because the app wouldn’t let me scroll down to the “submit” button, and the website wouldn’t either until I looked up how to hide the toolbar on Safari. That doesn’t make my life more pleasant.

                                                  Besides, you aren’t forced to hang out with people just because they are allowed to post.

                                                  1. 2

                                                    By allowing them to post you allow them to hang out in your and the other users’ brains.

                                                2. 6

                                                  there is no tradeoff

                                                  we don’t have to accept abusive or toxic people in our communities

                                                  1. 5

                                                      I think this mindset is what has led to the success of the Rust project in such a short span of time. It turns out that having a diverse community of respectful individuals invites more of them and leads to better problem solving.

                                                  2. 3

                                                    Are you implying that only difficult and abrasive engineers do good work? Because I have personal experience of the opposite, not to speak of numerous historical accounts.

                                                    1. 2

                                                      No.

                                        1. 7

                                          I’ve been watching the Python community build out pypi of late. I sincerely don’t know how you make a community both welcoming to new contributors and first time module authors and yet safe from this kind of attack.

                                          I’m not sure it’s a solvable problem.

                                          1. 4

                                            Idk, I feel like Linux distros have been doing a pretty good job for decades. It seems it’s far harder to compromise GPG keys than it is to compromise a GH/PyPi/etc. login. The real problem is there’s no low-barrier-to-entry way to get mass adoption of a package system using GPG, because the ergonomics are awful.

                                            1. 9

                                              They have! But they’ve done so with a tremendous trade-off in terms of time to release. If that works for your use case, fantastic! Rock on with your bad self! But there are other use cases where getting the very latest code really IS important.

                                              The distro model also relies on the rarefied fairy dust that is the spare time, blood sweat and tears of distro / package maintainers, and thus doesn’t scale well at all.

                                              1. 5

                                                I think a big part of that time trade-off comes from the fact that distro maintainers do a lot more than build and publish packages: they test that everything builds together, doesn’t break distro functions, etc. IMO the real issue with weakly secured package repositories is that it’s a big burden to get package developers to just sign their packages. The ideal package repository for me does the following:

                                                • packages must be cryptographically signed by one of the authors
                                                • signatures are validated by package managers at download/install time
                                                • new versions of an existing package must be signed by a key in the same signature chain(s) as the last published version except in the following scenarios
                                                  • explicit handoff of ownership via a token signed by the previous key that contains the signature of the root of the new chain; subsequent packages can be signed by either key unless the token includes a revoke-signature-rights flag that prevents the previous key from being used
                                                  • to support lost keys, the repository administrators can sign the same type of token mentioned above after a verification step (such as verifying ownership over the email attached to the GPG key, signed tag on relevant git repo, etc.)
                                                • packages are namespaced with repo username or group by default. This supports forks and forces an acknowledgement of the owner(s) of a package onto the user. Most git hosts work this way anyways

                                                The only real barrier to doing something like this is adoption due to overhead of creating and maintaining signing keys on the publisher’s end. Part of the reason npm/pypi/etc. are so ubiquitous is there’s basically zero barrier to entry, which is not what I want my software to rely on.
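
                                                To make the chain-of-trust rule concrete, here’s a minimal Python sketch of the install-time check described above. The key names, data structures, and HMAC “signatures” are toy stand-ins I made up for illustration (a real repository would use PGP/Ed25519 and asymmetric keys); only the chain logic is the point.

                                                ```python
                                                import hashlib
                                                import hmac
                                                from dataclasses import dataclass

                                                # Toy stand-in for real asymmetric signatures: here a "key" is a shared secret
                                                # and a "signature" is an HMAC. The chain-of-trust check works the same way.
                                                def sign(key: bytes, payload: bytes) -> str:
                                                    return hmac.new(key, payload, hashlib.sha256).hexdigest()

                                                @dataclass
                                                class Release:
                                                    version: str
                                                    payload: bytes
                                                    signature: str
                                                    signer: str  # identifier of the key that signed this release

                                                def check_release(release: Release, approved_keys: dict[str, bytes]) -> None:
                                                    """Install-time check: a release must be signed by a key already
                                                    approved for this package (its signature chain)."""
                                                    if release.signer not in approved_keys:
                                                        raise ValueError(f"{release.version}: signed by unapproved key {release.signer!r}")
                                                    expected = sign(approved_keys[release.signer], release.payload)
                                                    if not hmac.compare_digest(expected, release.signature):
                                                        raise ValueError(f"{release.version}: signature does not verify")

                                                # The first upload establishes the chain; later uploads must use an approved key.
                                                alice_key = b"alice-secret"
                                                approved = {"alice": alice_key}
                                                tarball = b"left-pad-2.0.tar.gz contents"
                                                check_release(Release("2.0", tarball, sign(alice_key, tarball), "alice"), approved)  # ok
                                                try:
                                                    check_release(Release("2.1", tarball, sign(b"mallory-secret", tarball), "mallory"), approved)
                                                except ValueError as err:
                                                    print("rejected:", err)
                                                ```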

                                                1. 8

                                                  Now factor several other variables into your ideal:

                                                  • Most packaging systems are built by volunteer help on volunteer time
                                                  • They need to operate at crazy bananas pants scale. Pypi had 400K packages at last count I saw.
                                                  • People have legitimate needs for development purposes of being able to get the VERY latest code when they want/need it.

                                                  I think all of what you’re saying here is spot on, I just don’t know how you actually make it real given the above. You’re comparing to the Linux distro model, where the entire universe of packages is in the 30-60K range according to the web page I just saw.

                                                  1. 2

                                                    Most packaging systems are built by volunteer help on volunteer time

                                                    True, but there are more complex, ambitious projects (like Matrix) that are also built by volunteers. Hell, you could probably build a sustainable business model by selling access to such a repository in a b2b fashion.

                                                    They need to operate at crazy bananas pants scale. Pypi had 400K packages at last count I saw

                                                    I mean, yeah? It’s still read-heavy, which is easier to scale out than write-heavy systems.

                                                    People have legitimate needs for development purposes of being able to get the VERY latest code when they want/need it

                                                    This requirement isn’t really mutually exclusive with my ideas above. If you’re saying you need to operate on the latest unpublished code, you should just clone master of the code itself and go from there. I’m not saying you have a group of volunteers (or employees) comb through published packages and sign them themselves; I’m saying you force signatures of any package uploaded to the repo from the person who wrote the code and is publishing it. The obvious problem with that is adoption, because who wants to go through the BS process of setting up GPG/PGP keys? It’s a pain.

                                                    1. 6

                                                      I hardly think it’s fair to say Matrix is developed by volunteers…

                                                      1. 1

                                                        Who is it developed by then?

                                                        1. 3

                                                          New Vector Limited

                                              2. 8

                                                …GPG because the ergonomics are awful.

                                                I’ve had probably a dozen keys over the years, many of which were created improperly (e.g. no expiration) because I was literally just doing it to satisfy some system that demanded a key.

                                                So, on top of the bad ergonomics around GPG in general, you also have the laziness / apathy / resentment of developers who didn’t actually want to create a key and view it as an annoyance to contend with. Like, how long do we think it would take before people started committing their private keys to avoid losing them or having to deal with weird signature chains to grant access to collaborators?

                                                1. 3

                                                  PyPI already supports PGP-signing your packages, and has supported this for many years. Which should be a big hint as to its effectiveness.

                                                  1. 1

                                                    Not just supporting PGP/GPG-signatures, enforcing signatures. And yeah, that ecosystem sucks.

                                                    1. 5

                                                      Tell me how you’d usefully enforce in an anyone-can-publish package repository like PyPI. Remember that distros only manage it because they have a very small team of trusted package publishers who act as the gatekeepers to the whole thing, and so there’s only a small number of keys and identities to worry about.

                                                      In an anyone-can-publish package repository it’s simply not feasible to try to verify keys for every package publisher, especially since packages can have multiple people with publish permissions and the membership of that group can change over time. All you’d be able to say is “signed with a key that was listed as one of the approved keys for this package”, which then gets you back to square one because an account takeover would let you alter the list of approved keys (and requiring that changes to approved keys be signed by prior approved keys also doesn’t work because at the scale of PyPI the number of lost/expired/etc. keys that will need to do a recovery workflow would be enough to still allow the basic attack vector that worked here — take over an expired domain and do a recovery workflow).

                                                      1. 1

                                                        packages can have multiple people with publish permissions and the membership of that group can change over time

                                                          Yes, I didn’t go into detail because it’s a lobsters comment, not a white paper, but the idea is that only a revoke/removal of a key from the approved keylist of a package can be done without a signed grant from a previously supplied key. What this means is that the first person to upload a version of a package will sign it, and then that key will have to be used to add any additionally allowed keys via a signed token grant. Allowed keys are explicitly not tied directly to group membership (except maybe an auto-revoke being triggered by a member being removed from a group), or really accounts at all. Handling the recovery workflow is the hardest part to get right. In the case of an expired key, supplying a payload from the email attached to the key and account (it should probably also be enforced that key emails and account emails match) signed with the expired key is significantly better than simply sending a magic link with a temporary URL. For supporting lost keys, I can’t think of a way to support this safely without basically just making a new “package lineage” that has a new namespaced account or something. Either way, the accounts would still only be as secure as the security practices of the users on the publishing end, so there’s only so much you can do.
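
                                                          A rough Python sketch of what that signed token grant could look like, continuing the toy HMAC model from the earlier sketch (the names, fields, and flag are hypothetical, not any real repository’s format):

                                                          ```python
                                                          import hashlib
                                                          import hmac
                                                          from dataclasses import dataclass

                                                          def sign(key: bytes, payload: bytes) -> str:
                                                              # Toy HMAC stand-in for a real PGP/Ed25519 signature.
                                                              return hmac.new(key, payload, hashlib.sha256).hexdigest()

                                                          @dataclass
                                                          class HandoffToken:
                                                              package: str
                                                              new_key_id: str       # key being granted publish rights
                                                              revoke_old_key: bool  # the optional "revoke signature rights" flag
                                                              signed_by: str        # must already be on the package's approved list
                                                              signature: str

                                                          def apply_handoff(token: HandoffToken, approved: dict[str, bytes],
                                                                            new_key_secret: bytes) -> dict[str, bytes]:
                                                              """Only a key already in the chain may grant rights to a new key."""
                                                              if token.signed_by not in approved:
                                                                  raise ValueError("handoff not signed by an approved key")
                                                              payload = f"{token.package}:{token.new_key_id}:{token.revoke_old_key}".encode()
                                                              if not hmac.compare_digest(sign(approved[token.signed_by], payload), token.signature):
                                                                  raise ValueError("handoff signature does not verify")
                                                              updated = dict(approved)
                                                              updated[token.new_key_id] = new_key_secret
                                                              if token.revoke_old_key:
                                                                  del updated[token.signed_by]
                                                              return updated

                                                          alice = b"alice-secret"
                                                          grant = "left-pad:bob:False".encode()
                                                          token = HandoffToken("left-pad", "bob", False, "alice", sign(alice, grant))
                                                          print(sorted(apply_handoff(token, {"alice": alice}, b"bob-secret")))  # ['alice', 'bob']
                                                          ```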

                                                2. 2

                                                  I don’t understand why we stick to flat namespaces, or rather, it implies separate authentication. What’s wrong with the Go way of doing things? Why can’t we go directly to GitHub (and friends) for our dependencies, instead of having pypi / npm / cargo inbetween?

                                                  1. 3

                                                    I guess the only problem that solves is typosquatting? Because maintainer account compromise and repojacking will still get you malicious code.

                                                    1. 1

                                                      This topic brings out strong opinions on all fronts :) See @ngp’s eloquent statement of the exact and total opposite opinion that we should have MORE in between, not less.

                                                    2. 2

                                                      An open community is not defined by a single central register of packages where all dependencies are pulled from by default by just adding some sort of identifier in your project.

                                                      It is a solvable problem and it has been solved. We just broke it relatively recently with this horrible idea of pulling a tree of dependencies with hundreds of nodes, whenever we want to left pad a string representation of an integer.

                                                        The solution is: don’t import arbitrary dependencies dozens at a time just because there is a simple way to do it. It was never a good idea. Not that package managers are a bad idea per se. It’s the way they’re [ab]used. The means to do it can perfectly well be there, just use them reasonably.

                                                        Pearl’s CPAN was probably the first instance of these central package repositories. But it always posed itself as a convenience with no authoritative instance. Multiple mirrors existed with different sets of packages available. It was just always an easier way to download code, not a hijacker of a programming language’s import routine.

                                                      1. 1

                                                        Guessing you mean “perl” but point taken.

                                                    1. 3

                                                        As someone who’s been digging into building a database, this is a bunch of helpful resources! The UChicago chidb one seems particularly interesting if you’re interested in how relational databases work.

                                                      1. 28

                                                          I think Hare is the gemini of programming languages. It’s trying to return to a simpler time by removing most of the complexity, i.e. most of the modern features; it tries to build a community of people who enjoy that (people who like C, maybe Go, plan9, etc.); it doesn’t aim at being popular; and it probably won’t be, because all its competitors are more appealing to “modern” tastes. Having union types is the pinch of modernity, just like gemini’s use of TLS in an otherwise decidedly-1980s tech stack.

                                                        1. 7

                                                            I would argue its error handling is quite modern too. IMO it’s quite a bit better than Go’s (primarily because of the tagged union types).

                                                        1. 3

                                                            I’ve been digging into DataScript (and Datomic) recently and there are some very unique ideas that both have for representing data and relationships. I strongly encourage people to read into these tools; they’re very interesting!

                                                          I have a mostly incomplete experimentation of a similar datastore using Python and SQLite, primarily as a means of determining if building a Datomic-clone on top of SQL stores (such as PostgreSQL) is feasible.
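
                                                            For what it’s worth, the core of that experiment can be sketched in a few lines of Python + SQLite: a single table of entity/attribute/value/transaction facts, which is roughly the “datom” idea. This is a toy illustration of the shape of it, not Datomic’s actual storage format or my real code:

                                                            ```python
                                                            import sqlite3

                                                            conn = sqlite3.connect(":memory:")
                                                            conn.execute("CREATE TABLE datoms (e INTEGER, a TEXT, v TEXT, tx INTEGER)")

                                                            def transact(facts, tx):
                                                                # Each fact is an (entity, attribute, value) triple; tx records when it was asserted.
                                                                conn.executemany("INSERT INTO datoms VALUES (?, ?, ?, ?)",
                                                                                 [(e, a, str(v), tx) for e, a, v in facts])

                                                            def entity(e):
                                                                # Latest assertion wins per attribute, crudely mimicking the "current" view of the db.
                                                                rows = conn.execute("SELECT a, v FROM datoms WHERE e = ? ORDER BY tx", (e,)).fetchall()
                                                                return dict(rows)

                                                            transact([(1, "person/name", "Ada"), (1, "person/email", "ada@example.com")], tx=1)
                                                            transact([(1, "person/email", "ada@newhost.example")], tx=2)
                                                            print(entity(1))  # {'person/name': 'Ada', 'person/email': 'ada@newhost.example'}
                                                            ```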

                                                          1. 3

                                                              Datomic itself can use e.g. Postgres as its “data service”. It’s delegating transactions and persistence to existing databases.

                                                            https://docs.datomic.com/on-prem/overview/storage.html#sql-database

                                                            1. 1

                                                              Oh, I had no idea! I was under the impression they built their own storage layer.

                                                          1. 6

                                                              Regarding the issue with the “shape of data”, you may want to have a look at Clojure Spec. But if you ask me, you can skip it and just take the incredible library malli. Not only is it a fantastic schema engine where types are arbitrary predicates instead of a narrow selection of hard-coded categories like “string”, with much better composability than in e.g. JSON Schema; it is also programmatically extensible and has some great features like generating data based on a schema, conversion from and to JSON Schema, function schemas and so on.

                                                            1. 4

                                                                Thank you! I will have to check out malli. I’m not sure specs are exactly the solution that I’m looking for. In many languages, it’s pretty easy to determine the exact return type as it’s declared as part of a function definition. In Lisps, this is not the case, and Clojure is no different in this regard. Functions in Lisps don’t explicitly define a return (via a statement) or return type, which can also change depending on logic.

                                                                The tradeoffs for this are interesting, because it means the code is less cluttered and potentially easier to just read, but understanding the interface to a function (i.e. how to use it, and what it’s used for) effectively requires either good docs or reading and understanding the function itself. Other languages, such as heavily type-hinted Python, Java, C, Go, etc. all have functions explicitly return a value of a defined shape, so just glancing at a function and having a rough understanding of what it does is generally easier.

                                                                Whether that’s a good thing or not I think is up for debate: there is value in forcing the user of an interface to actually understand the function, but it may slow progress down unnecessarily in cases. This also could be interpreted as a general problem dynamically typed languages have, but I think the implicit return construct makes dealing with it a little worse.

                                                              1. 4

                                                                  In my experience as a Scheme (and back in the day, Ruby) programmer, it’s not so much the implicit return that makes it difficult to figure out the types, but the way everything is super generic. There’s the sequence abstraction which works for lists, vectors, lazy sequences and even maps and sometimes strings. So if you’re reading a method’s code that only uses these abstract methods, you have no idea what goes in and only a vague idea of what comes out of that method. With Scheme, for example, you’d see (string-ref x 1) and immediately know that argument x must be a string. With Clojure, you’d see (nth x 1) and be none the wiser. Of course, it allows for code that’s more generic so you could use the same code with different types of inputs, but in many cases that genericness isn’t important and actively hinders your understanding.
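
                                                                  The same point is easy to show in Python terms, for what it’s worth (a made-up toy example, not from Clojure):

                                                                  ```python
                                                                  def second_field(x):
                                                                      # Works on lists, tuples, strings, and anything else indexable, so reading
                                                                      # this line tells you nothing about what x actually is -- same as (nth x 1).
                                                                      return x[1]

                                                                  def shout(x):
                                                                      # .upper() narrows it down: x is almost certainly a string -- like (string-ref x 1).
                                                                      return x.upper()

                                                                  print(second_field([10, 20, 30]))  # 20
                                                                  print(second_field("abc"))         # 'b'
                                                                  print(second_field((1, 2)))        # 2
                                                                  print(shout("hello"))              # 'HELLO'
                                                                  ```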

                                                                Couple this with the practice of using maps everywhere (which can have any ad-hoc set of attributes), it gets pretty murky what’s going on at any point in the code. When you’re reading a method, you have to know what the map looks like, but you don’t know unless you trace it back (or put in a printf). Compare this with user-defined records (which Clojure has but isn’t typically used as much), where the slots are all known in advance and are always present, it’s much easier to read code that operates on them, because whenever a value is extracted, you can derive the type simply from the fact that an accessor method is called.

                                                                Malli or spec are a good way to introduce “sync points” in your code at which maps are checked for their exact contents, but I’ve found that more useful for constraint checking at the input/output boundaries. Doesn’t help that much while you’re writing the main code that actually operates on your types. Especially with Malli, I’ve had to remove some internal checks due to performance issues when using validate.

                                                                1. 2

                                                                  I totally see where you’re coming from. Personally I came from Java when I discovered Clojure and I also sorely missed the type system.

                                                                  Clojure is much more about abstracting behavior and it actually matters a lot less what exactly the shape of the data is in a certain context as long as you know you have the guarantees you need in your current context. It’s not considered good style to write operations that only work with a super specific data structure. It’s actually the same in Java where this can be done using interfaces.. with the drawback of being limited to one interface at a time. Actually there is something like an inverse of drawbacks of dynamic typing in the static typing world too, and that is the global scope of type names. If you limit yourself to a narrow set of global types, you usually pass around way too much data and/or behavior. The more specific you get, the harder it becomes to properly name things, because you need to differentiate everything from everything and you end up with a zoo of poorly named stuff. Dynamic typing on the other hand allows you to be terse and contextual with the drawback of having to be quite disciplined about making the context understandable.

                                                                    What I love about malli is that you can actually defer some pretty tedious-to-model logic that would otherwise bloat your code base to the schema engine. Let’s say you have a medical questionnaire where the gender is asked for, and if the gender is “female”, then the data should contain the answer to the question “pregnant yes/no”. And you need to validate the data in the back-end and update the database. Malli allows you to write a schema in which contextual dependencies between single data points can be captured. Being able to leave that to the schema allows you to write much better code that doesn’t need to know such details. Maybe the actual logic left is then just to apply a JSON merge patch, completely independent of the specifics of the data at hand.

                                                              1. 8

                                                                Author here, I don’t really blog much, but I thought I’d share some brief thoughts about clojure as I’ve been learning it lately. Feedback/complaints/thoughts/etc. welcome

                                                                1. 2

                                                                  Thanks for the write up! I wouldn’t mind reading more on this topic. I’m a Clojure neophyte myself, but I can definitely see the issue with understanding what shape of data is being sent to a given function

                                                                  1. 2

                                                                      It really just emphasizes the need for good docs. There’s a type-hinting mechanism via the ^ metadata construct, but I’ve found it to be… lacking, and this is coming from someone who’s very used to Python and its type-hinting bandage solution to this problem.

                                                                    What kind of write-ups would you like to see on the topic? I’m by no means an adept author, but I’ll take a stab as I have time and motivation :)

                                                                    1. 2

                                                                      Some things that come to mind:

                                                                      • It would be interesting if you kept exploring the data shape issue; docs help I suppose, but did you try something like spec or malli? Does using records help? (Personally I’ve found them kind of anemic when it comes to describing data) Also, I’m curious about real-world use cases, i.e. situations where this becomes an issue. How much of a difference does lack of fixed data shapes make when coding “in anger”?
                                                                      • Rich Hickey often talks about the benefits of using maps for everything; by not inventing new data structures you can use the same functions for records and regular associative maps. My question is: Is it worth the tradeoff of not having a clear idea of the data shape when you’re coding? I can see how it’s useful for serialization but I would be interested in seeing some more compelling examples of when this would be useful.
                                                                      • A follow-up post to this one in a year or so would be really interesting, i.e. to identify what issues were learning hurdles (if any) and which ones are chronic for Clojure programmers
                                                                  2. 2

                                                                    What do you mean by “keyword types”? Do you mean :foo keywords?

                                                                    1. 1

                                                                        Yeah, a keyword is a type in Clojure (though not every case of a keyword is its own type; they’re just identifiers in the same way that variables are).

                                                                      1. 3

                                                                        To be clear, I know what a Clojure keyword is. I care a lot about helping Clojure grow as a language and it’s fun to see folks new to the language talk about their experiences; I think my experience in Clojure has blinded me to the idea you’re pointing at and I want to understand it better.

                                                                        In the post you said, “They’re like fancy namespaced enumerated types”. and here you’ve said, “not every case of a keyword is it’s own type”. What do you mean by “type”? To me, “type” is about the data type, so from that perspective all keywords are the same type: keyword. And while they’re interned (making them super fast for equality), they can be generated on the fly and aren’t bound to any pre-generated list, which is what I associate “enumerated” with.

                                                                        1. 2

                                                                          Edit: oh I see what you’re talking about. I updated the post to be correct :)

                                                                  1. 11

                                                                    Company: Tailscale

                                                                    Company site: https://tailscale.com

                                                                      Position(s): Software Engineer: Control Plane; Brand Design Lead; Marketing Associate; Software Engineer: Front-end; Software Engineer: Networking; Software Engineer: Security

                                                                    Location: Remote, US and Canada

                                                                    Description: Tailscale is the secure network that just works. A peer to peer VPN built on WireGuard that punches through firewalls and helps teams and professionals secure access to the things most important to them. Lots of backend work, VPN engine stuff and linker crimes.

                                                                    Tech stack: Go, SQLite, WireGuard, net/http

                                                                    Compensation: I don’t have compensation numbers, but full benefits (medical/dental/vision) and one month of vacation per year (with a mandatory minimum of 2 weeks per year or you get vacation scheduled for you)

                                                                    Contact: Please go to the careers page and apply on the individual listings or email me at xe at tailscale dot com if you have any questions about what it’s like to work there.

                                                                    1. 1

                                                                        Oh man, I’d love to work for Tailscale. Or really, I’d love to apprentice under Avery and David Crawshaw. Call me a mild fan of their work :)

                                                                      1. 2

                                                                        Apply! It’s easily the best place I’ve ever worked.

                                                                        1. 1

                                                                          I feel like my iptables is a little too rusty and my nftables experience is… near zero, but I’ll think about it.

                                                                    1. 2

                                                                      Great article, this makes me want to go out and buy a PineWatch! Too bad they’re out of stock :(

                                                                      1. 1

                                                                          The async stuff in C++11 seems like a wrong direction to me because its futures are blocking. That makes them easier to understand but introduces lots of opportunities for deadlocks and/or an ever-growing number of threads.

                                                                        Meanwhile since 2011 the world seems to be settling on the truly-async model of completion callbacks and the “await” operator. Which has its own issues but seems generally superior, especially on less-powerful hardware.

                                                                        I haven’t had the opportunity to upgrade to C++20 yet, but I’m looking forward to trying out the coroutine functionality. (Meanwhile I’ve implemented parts of it myself, similarly to Folly or Cap’n Proto.)

                                                                        1. 1

                                                                            Can you expand on what you mean by the completion callbacks and await operators? Afaik it’s not common to use callbacks with Python’s asyncio or JavaScript’s promise + await syntax.

                                                                          1. 1

                                                                            Callbacks and await are different ways of doing the same thing. The latter is mostly syntactic sugar over the former.

                                                                            1. 1

                                                                              To expand on this a bit, I think the idea is that when you await, you can think of everything after the await as being like the callback you would otherwise be passing to the asynchronous operation. It’s a way to express “when you’re done, do this next.”

                                                                              1. 2

                                                                                Yes, basically. In more detail, the function is a coroutine: every “await” is a point where it yields after registering a callback on the promise, and the callback is what resumes it again.
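
                                                                                  A tiny Python asyncio illustration of that equivalence, since asyncio came up above (toy code I made up, not from the thread): with the callback you register “what happens next” explicitly, while with await everything after the await plays that role.

                                                                                  ```python
                                                                                  import asyncio

                                                                                  def fetch_with_callback(loop, on_done):
                                                                                      # Callback style: explicitly register "what to do next".
                                                                                      fut = loop.create_future()
                                                                                      fut.add_done_callback(lambda f: on_done(f.result()))
                                                                                      loop.call_later(0.1, fut.set_result, "payload")

                                                                                  async def fetch_with_await():
                                                                                      # await style: everything after the await acts as the callback.
                                                                                      await asyncio.sleep(0.1)
                                                                                      return "payload"

                                                                                  async def main():
                                                                                      loop = asyncio.get_running_loop()
                                                                                      fetch_with_callback(loop, lambda r: print("callback got", r))
                                                                                      print("await got", await fetch_with_await())  # suspends here, resumes when ready
                                                                                      await asyncio.sleep(0.2)  # give the callback a chance to fire

                                                                                  asyncio.run(main())
                                                                                  ```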

                                                                        1. 24

                                                                          I realize that Docker (or, I suppose, “containers”) provides fewer guarantees and comes with a host of problems, but as a practical matter, it has delivered on (more or less) the same promise, but in a way that is accessible and easy to understand.

                                                                          The lab I work in uses Docker for basically everything (even HPC cluster jobs) because scientific software is notoriously finicky and containers make life so, so much simpler. Could we use Nix? Sure, but it takes 5 minutes to teach someone how to use Docker well enough to be productive, while it would take… I don’t even know how long with Nix.

                                                                          In fact, I don’t even know how to use Nix well enough to teach someone else, and I actually built a Nix package a few years back! The language was so inscrutable and poorly-documented that I basically just copy-pasted from other packages until it worked, then decided I never wanted to do that again.

                                                                          1. 6

                                                                            pkgs.dockerTools.buildImage might be a good way to get started with Nix then. In my (admittedly very limited) experience the Nix code ends up being easier to understand than the equivalent Dockerfile. Probably because as long as you keep to simple Nix expressions there are far fewer gotchas. No need to clean up after installing packages, no need for multi-stage builds, no need to worry about putting “volatile” statements later to minimise build times, and much less shell scripting.

                                                                            1. 5

                                                                              I wonder how feasible it would be to write, say, a python library that lets you take advantage of Nix without having to do the full buy-in of the special language.

                                                                              I have similar gripes about Bazel, with “almost-Python” and documentation that really doesn’t want you to do hacky things (despite hacky things being necessary for taking an existing project and bundling it!).

                                                                              Docker is a ball of mud generator, but it is also a ball-of-mud capturer. Nix demands you to throw your mud into a centrifuge while you still need the ball to roll around. Whoever can figure out how to give all the powers of both of these will be my savior

                                                                              1. 10

                                                                                FWIW:

                                                                                • @ac is occasionally experimenting with trying to capture the core idea of Nix in a simpler form;
                                                                                • the authors of Nix are actually also experimenting with a “simpler” language;
                                                                                • not strictly related, but CUE is a configuration language I find very interesting, and I sometimes wonder if it could be used to build some Nix-like system on, and if that could make things more attractive and usable (if still not as easy as docker to learn; but at least giving a static typing system in exchange).
                                                                                1. 5

                                                                                  FWIW I wonder if https://earthly.dev/ is doing this … It’s a container based distributed build system, which in my mind is sort of the middleground between Docker and Nix.

                                                                                  I mentioned it here in the “curse of NixOS thread” https://lobste.rs/s/psfsfo/curse_nixos#c_cqc27k

                                                                                  I haven’t used it but I’d be interested in a comparison / evaluation

                                                                                  We need a “meta-build” system that solves the reproducibility/parallelism/distribution/incrementality problem for everything at once without requiring O(N) rewrites of upstream build systems. I don’t really have any doubt that this will be based on containers.


                                                                                  I also collected my past comments on this subject pointing to some ideas on composable Unix-y mechanisms to solve this problem:

                                                                                  https://oilshell.zulipchat.com/#narrow/stream/266575-blog-ideas/topic/Idea.3A.20A.20Middleground.20between.20Docker.20and.20Nix (login with Github)

                                                                                  • Allow both horizontal layers (like Docker) and vertical slices (like Nix)
                                                                                    • A hashing HTTP proxy for package managers that don’t have lockfiles and hashes (which is a lot of them); a rough sketch follows after this list.
                                                                                  • Storage on top of git to separate metadata from data (layers and big blobs)
                                                                                    • following the “git ops” philosophy but you need something for big files like layers; I have tried git annex
                                                                                  • A remote process execution abstraction on top of containers, with data dependencies (this is sort of where a distributed shell comes in, i.e. you want to take some shell command, package it up with dependencies, execute it somewhere else, and name the output without necessarily retrieving it)
                                                                                  • which leads to a coarse grained dependency graph (not fine-grained like Bazel)
                                                                                    • haven’t figured this part out, but it appears that the static dependency graph and up-front evaluation is a fairly big constraint that makes it harder to write package definitions; a more dynamic notion of dependencies might be useful (but also introduces problems I’m sure)
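
                                                                                    To make the hashing-proxy bullet a bit more concrete, here is a toy Python sketch of the idea (hypothetical manifest filename and behavior, plain HTTP only, no caching): it records a sha256 for every artifact it fetches and refuses to serve one whose contents have changed since it was first seen.

                                                                                    ```python
                                                                                    import hashlib
                                                                                    import json
                                                                                    import urllib.request
                                                                                    from http.server import BaseHTTPRequestHandler, HTTPServer

                                                                                    MANIFEST = "hashes.json"  # hypothetical lockfile-like record the proxy maintains

                                                                                    class HashingProxy(BaseHTTPRequestHandler):
                                                                                        def do_GET(self):
                                                                                            # For a plain-HTTP proxy the client sends the absolute URL as the path.
                                                                                            # (HTTPS would need CONNECT handling; omitted in this toy.)
                                                                                            url = self.path
                                                                                            body = urllib.request.urlopen(url).read()
                                                                                            digest = hashlib.sha256(body).hexdigest()

                                                                                            try:
                                                                                                with open(MANIFEST) as f:
                                                                                                    known = json.load(f)
                                                                                            except FileNotFoundError:
                                                                                                known = {}

                                                                                            if url in known and known[url] != digest:
                                                                                                self.send_error(502, "hash mismatch for previously seen artifact")
                                                                                                return

                                                                                            known[url] = digest
                                                                                            with open(MANIFEST, "w") as f:
                                                                                                json.dump(known, f, indent=2)

                                                                                            self.send_response(200)
                                                                                            self.send_header("Content-Length", str(len(body)))
                                                                                            self.end_headers()
                                                                                            self.wfile.write(body)

                                                                                    if __name__ == "__main__":
                                                                                        HTTPServer(("127.0.0.1", 8888), HashingProxy).serve_forever()
                                                                                    ```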

                                                                                  But now that I list it out, this is a huge multi-year project, even though it is trying to reuse a lot of stuff (git, container runtimes like podman and bubblewrap, HTTP) and combine it in a Unix-style toolkit … The scope is not that surprising since it’s tantamount to developing a distributed OS :-/ (i.e. task execution and storage management on distributed hardware; building any application)

                                                                                  I’ll also add that this system is “decentralized in the sense of git, BitTorrent, and the web” – i.e. it’s trivial to set up your own disconnected instances. Docker, Nix, and Bazel don’t feel that way; you kind of have buy into a big ecosystem


                                                                                  But I’d like to hear from anyone who is interested. @akavel I did chat with @ac about this a few years ago … I think my main feedback is that you have to go further to “boil the ocean”.

                                                                                  That is, it is not enough to come up with a “clean model” … (which Nix and Bazel already provide). You have to provide a bunch of mechanisms that will help people write correct and clean package definitions for a wide array of diverse software (and config language that’s better than a bunch of macros and dynamic parsing on top of shell or Python, which I think Oil is incidentally :-) ). And “kick start” that ecosystem with software that people want to run. (Docker Hub sort of does this; I think there is a lot of room for improvement)

                                                                                  And you have to avoid the “rewriting upstream” problem which I mentioned in a comment above. For example, that is why I mentioned the HTTP proxy to help making existing package managers more reproducible. A good test case I always bring up is R code, since it has a very deep and functional ecosystem that you don’t want to rewrite

                                                                                  1. 1

                                                                                    Nix already (mostly) doesn’t require rewriting the world of everyone’s build systems (in the way that Bazel does require). In fact it requires substantially less rewriting than systems like Bazel or Python wheels or any other non-containerized package manager AFAIK.

                                                                                    It’s not clear to me that there needs to be any fundamental change in philosophy to make the Nix idea widely usable - it seems the ideas you listed are ultimately things which Nix already does. You might like reading the original Nix thesis from 2006: https://nixos.org/~eelco/pubs/phd-thesis.pdf

                                                                                    The need, I think, is more just UX and polish.

                                                                                    1. 1

                                                                                      It’s true that Nix requires / encourages fewer rewrites, because it’s more coarse grained while Bazel is fine-grained. But in the linked threads multiple users talk about the duplication of Cargo in the Rust ecosystem. So there still is some.

                                                                                      And for every package you do have to do all the --prefix stuff because of the unconventional /nix/store layout. This leads to the RPATH hacks too. However I tried to find this in the .nix derivations and it seems hidden. So I feel like there is a lot of magic and that’s where people get tripped up, compared to shell where everything is explicit.

                                                                                      (And I appreciate the huge benefit that /nix/store gives you – containers will help but not eliminate that problem, since most packages aren’t relocatable.)


                                                                                      Echoing some previous threads, I don’t really have any problem with Nix having its own language per se. I’m surprised that people continue to complain about the syntax – as others pointed out, the language is just lazy “JSON (tuples) with functions”. It looks pretty conventional to me.

                                                                                      I think the more substantive complaint is lack of documentation about common library functions. And maybe the lazy evaluation rules.

                                                                                      The main issue I see is the leaky abstraction. At the end of the day Nix is just running a bunch of shell commands, so I think it’s more natural to build a system around shell. The package defs still need a bunch of inline shell to fix up various issues, just like in every other Linux distro.

                                                                                      FWIW I think you can make a staged execution model in shell like Make / Bazel, etc.:

                                                                                      http://www.oilshell.org/blog/2021/04/build-ci-comments.html#language-design-staged-execution-models

                                                                                      Performance is apparently a problem, and I think it’s more natural to debug performance issues in a model like that than a lazy functional language.

                                                                                      (Yes I read the thesis over 10 years ago and was surprised how similar it was to Bazel!)

                                                                                    2. 1

                                                                                      I’d love to read through what you linked and wrote above and to try to process it as it deserves, but unfortunately I’m extremely busy recently. That’s also a big part of the reason why practically all my side projects (including the attempts at a Nix wrapper for the Oil build env) are on a definite hiatus now, and I don’t expect this to change much in the conceivable future if I’m still employed where I want to be (tradeoffs…). But for sure still generally interested in all of that, so I’m scanning lobste.rs regularly and trying to stay more or less on top of what’s written here, to the limited extent that my brain still has some minimal capacity to consume!

                                                                                      1. 2

                                                                                        Yeah I feel sort of the same way… I made an attempt to write a package manager ~8 years ago, and got an appreciation for how hard a problem it is! It still feels out of reach, since just the shell and the language are a huge project. (Obviously I am envisioning something based around a better shell)

                                                                                        So I put those ideas out there in hopes that someone else has 5 or 10 person-years to make it happen :)

                                                                                    3. 3

                                                                                      I’d really like that. I played with nix for a bit and I feel like it’s a great experiment where we learned some approaches that could be moved from r&d to a real product. Getting rid of a lot of the flexibility and putting some real boundaries in place would really help.

                                                                                      I’d love it if we could transform the “here’s a trace through a stack of meta functions, none of which you wrote - deal with it” situation into “this is an expectation of this well supported pattern, you broke it in this place”. Basically make RuntimeException(haha_fail) into DerivationNeedsToBeWithinSystem(package_name). I know there’s dhall, but that doesn’t go far enough.

                                                                                      Alternatively nix could get LOTS of asserts with “to get past this point, these contracts need to be satisfied”.

                                                                                      1. 2

                                                                                        That’s our approach with anysnake2 (link omitted, I don’t want to shill my pet project here). End-users write a TOML file defining a Python and R ecosystem date plus the list of packages, and it creates the necessary flake.nix for a container or a dev shell.

                                                                                        Works well, except for days where the ecosystem is broken for the packages you care about. But that’s full reproducibility for you.

                                                                                      2. 5

                                                                                        Container images are not automatically reproducible; builders must have special support for reproducibility, and Docker does not, but nixpkgs’ dockerTools does. As a practical matter, I have been bitten by the non-reproducibility of Docker containers.

                                                                                        1. 2

                                                                                          I also work in HPC and most of our core services are Dockerized (even our hard RT service!). There’s been some attempts to use Nix, and one of our guys is a GUIX advocate, but I don’t really see it gaining any traction due to the complexity, lack of good documentation, and general usability woes. It’s hard enough getting physicists to learn basic docker commands…

                                                                                          1. 1

                                                                                            Nix / Guix are quite orthogonal to Docker. Actually, you can use them to generate a Docker image.

                                                                                            In any case, I’m glad HPCs are at least moving to Docker. I’ve used some fairly large European HPCs, and the policy was basically that you should compile whatever you needed! I had some colleagues that would spend a good fortnight just to get things running. A regular company would panic at the amount of employee work, but academia on the other hand…

                                                                                            1. 1

                                                                                              Sure, you can use Nix/Guix in Docker too. The point is that the usability of those platforms is awful and I wouldn’t encourage any company to use them in production as they are now.

                                                                                              1. 1

                                                                                                I meant not only inside Docker, but also to generate Docker images.

                                                                                                I think Nix is much more usable than what is generally discussed, thanks to the new nix command line, which is not often advertised. Obviously, it’s not an easy language or toolset, just like Haskell is not easy either.

                                                                                                But simple use cases are remarkably straightforward once you grasp the basic principles. I find some tasks much easier than on Arch or Alpine, which are my go-to distributions (and pretty barebones).

                                                                                        1. 18

                                                                                          While it doesn’t exist to my knowledge, taking something that’s basically just the “good” subset of HTML5 and ECMAScript 12 and writing a browser around it would be cool. Or maybe drop ECMAScript and take a really small subset of it to handle binding to DOM and have WASM for everything else.

                                                                                          Regardless, finger is old but still interesting.

                                                                                          1. 8

                                                                                            While it doesn’t exist to my knowledge, taking something that’s basically just the “good” subset of HTML5 and ECMAScript 12 and writing a browser around it would be cool. Or maybe drop ECMAScript and take a really small subset of it to handle binding to DOM and have WASM for everything else.

                                                                                            Well, that is beyond my pay grade rsrsrs ;-)

                                                                                            Regardless, finger is old but still interesting.

                                                                                            I’m gonna try to implement finger. Thanks!

                                                                                            1. 1

                                                                                              Finger is probably the oldest of the “single command, single transaction” protocols, and some are mutually interoperable. Gopher clients can speak finger (to port 79), and many gopher servers would make cogent replies to finger queries if redirected to port 70. I would see URLs like gopher://finger.server:79/user turn up now and then. Naturally Gopher clients don’t have the /w token, but that’s an implementation detail.
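
                                                                                              The whole protocol is small enough that a client fits in a dozen lines; here’s a rough Python sketch (host name made up, error handling omitted):

                                                                                                import socket

                                                                                                def finger(user, host, verbose=False, port=79):
                                                                                                    """One TCP connection, one request line, read until EOF (RFC 1288 style)."""
                                                                                                    query = (("/W " if verbose else "") + user + "\r\n").encode("ascii")
                                                                                                    with socket.create_connection((host, port), timeout=10) as sock:
                                                                                                        sock.sendall(query)
                                                                                                        chunks = []
                                                                                                        while True:
                                                                                                            chunk = sock.recv(4096)
                                                                                                            if not chunk:
                                                                                                                break
                                                                                                            chunks.append(chunk)
                                                                                                    return b"".join(chunks).decode("utf-8", errors="replace")

                                                                                                print(finger("user", "finger.example.org"))  # hypothetical server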

                                                                                            2. 4

                                                                                              There’s not a whole ton you’d need to add to HTML 2.0; it already specifies use of unicode and CSS. It omits scripting, but IMO that’s a strength, not a weakness.

                                                                                              1. 1

                                                                                                There are things that could be nice, like object embedding (don’t redo the ActiveX/applet hellscape, but use it as an extensible means to deal with e.g. multimedia), ARIA, and semantic elements (added in 4 and 5 mostly).

                                                                                              2. 2

                                                                                                Or maybe drop ECMAScript and take a really small subset of it to handle binding to DOM and have WASM for everything else.

                                                                                                I know it’s not as straightforward as it sounds, but I really wish we could move to a future where JS (or some equivalent that was amenable to this arrangement) was implemented with WASM. Maybe it’s a blessed implementation that exists in the browser, but it compiles down to a public API that other languages could target.

                                                                                                1. 3

                                                                                                  I’d love to do the same with the DOM. There’s now a decent JavaScript PDF implementation, which significantly reduces the need for browsers to have native implementations and ends up being more secure (JavaScript is already sandboxed). With canvas, it’s probably possible to move all of the HTML rendering into JavaScript as well.

                                                                                                  1. 3

                                                                                                    With canvas, it’s probably possible to move all of the HTML rendering into JavaScript as well.

                                                                                                    You have to watch out for a11y, but Google Docs basically does this; there’s a bunch of divs with aria roles and display: none for the benefit of screen readers, and the actual rendering is done with canvas.

                                                                                                    1. 1

                                                                                                      This is a non-starter for things unlike PDF: you can’t do layout in canvas, because it won’t let you measure text.

                                                                                                      1. 3

                                                                                                        I guess that depends on how much you want to move into the sandbox. If you want to depend on a native implementation of the font renderer and layout engine, that’s true. If you want to use a WAsm version of Harfbuzz / FreeType, then you’re only going to be throwing lists of beziers at the canvas.

                                                                                                          1. 2
                                                                                                            • It uses global state, like all canvas APIs
                                                                                                            • Almost every field in the results object is optional, and they’re mostly not implemented, at least last time I checked.
                                                                                                            • It’s very basic and still requires you to do a whole bunch of work yourself if you want any information that’s useful for layout, and shitloads more work if you want the kind of information that’s required for editable text.

                                                                                                            Compare it to a well-designed text API from a company that built its empire on rendering text: https://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/text/engine/package-detail.html

                                                                                                    2. 1

                                                                                                      I wish something like Jessie or these other ECMAScript subsets would actually get standardized, as I see a lot of potential in a simple ES subset as an embeddable language.

                                                                                                    1. 3

                                                                                                      I would love to learn a Scheme but man is it daunting.

                                                                                                      1. 8

                                                                                                        R7RS is simultaneously much larger and smaller than R5RS (which is the version I’m most familiar with). The addition of a module system made things more, well, modular, but might increase perceived complexity.

                                                                                                        Scheme is nice, though. To me, Common Lisp is big in a lot of the wrong places and too small in a lot of the others, and has one too many namespaces. Scheme has the right number of namespaces, hygienic macros, and is consistently small.

                                                                                                        If you wanna do Lisp, Scheme might be a good choice. The multiplicity of implementations can be an issue, but there are some really high-quality ones out there. I haven’t played with it in years but I remember Chicken being especially beautiful (and I still remember the paper describing how Chicken does thunks and garbage collection, though it’s probably long out of date by now).

                                                                                                        1. 9

                                                                                                          paper describing how Chicken does thunks and garbage collection

                                                                                                          https://www.more-magic.net/posts/internals-gc.html ?

                                                                                                          1. 1

                                                                                                            Yep, that’s it.

                                                                                                            1. 7

                                                                                                              I wrote that, and it’s not out of date. The core algorithm hasn’t changed (and probably won’t). Besides, the article abstracts away some technical details that don’t matter to the algorithm and has therefore aged well. We’ve since changed the way procedure calls compile to C in CHICKEN itself, but that doesn’t matter to the algorithm.

                                                                                                              1. 1

                                                                                                                It’s a beautiful piece of work.

                                                                                                          2. 2

                                                                                                            Is there an implementation you would recommend?

                                                                                                            1. 4

                                                                                                              Depending on what you’re trying to do, I’d go with Chicken (for application development) or Guile (for scripting). Guile is the one I’m most familiar with, but honestly it’s been a decade since I’ve written any Scheme.

                                                                                                              1. 2

                                                                                                                Thank you!

                                                                                                            2. 2

                                                                                                              To me, Common Lisp is […] too small in a lot of [places]

                                                                                                              Curious, where do you find it lacking? A couple of things are outright missing (like threads), but that is a case of ‘nothing at all’, not ‘too small’.

                                                                                                              and has one too many namespaces. Scheme has the right number of namespaces

                                                                                                              That is … an interesting take, considering CL has a multiplicity of namespaces and most schemes I know of have just one :)

                                                                                                              (S7 scheme is the other lisp-n, sort of.)

                                                                                                              Personally, my take is that CL has the right number of namespaces for CL, but that that is probably the wrong number of namespaces for not-cl.

                                                                                                              1. 3

                                                                                                                Curious, where do you find it lacking? A couple of things are outright missing (like threads), but that is a case of ‘nothing at all’, not ‘too small’.

                                                                                                                CL has a lot of standardized functionality that would be considered niche in other languages (even if it’s very useful), while completely missing things like threads and networking (I realize it’s a product of its time). It’s “too small” in that it doesn’t specify some of the things it needs in order to be competitive in the modern world (I realize there are plenty of workarounds, other language specs are similarly small, etc). I just remember the last time I seriously considered using CL (and this was many years ago), some of the things we needed were not available for the CL implementation we had chosen (clisp, IIRC). I’m sure there was a way around it, but we were either unaware of it or considered it not worth the effort.

                                                                                                                That is … an interesting take, considering CL has a multiplicity of namespaces and most schemes I know of have just one :)

                                                                                                                I was making a joke about the whole Lisp-1 vs Lisp-2 debate. If we’re going to have first-class functions, they should have the same namespace as first-class data! :)

                                                                                                              2. 1

                                                                                                                These days, I’m becoming even less sure about having separate namespaces for terms and types…

                                                                                                              3. 5

                                                                                                                Just dive into Racket. It’s great fun!

                                                                                                                1. 2

                                                                                                                  What about it do you find daunting?

                                                                                                                  Sure, all the parentheses are kind of weird compared to other commonly used PLs, but you can get used to it, and paren matching in modern editors helps.

                                                                                                                  The only strange bit is how much of the language is based around the linked list. Like Forth is based around the stack, and Lua is based around the table.

                                                                                                                  Otherwise, it is a garbage-collected language with functions, arguments, return values and such. Macros, which you don’t need to worry about when starting out, can be more useful than, for example, the C preprocessor.

                                                                                                                  1. 2

                                                                                                                    I find the syntax of Lisps to be especially weird, and I’m not even talking about the () stuff, but things like assignment and even just understanding what type something is.

                                                                                                                1. 3

                                                                                                                  I think you could use importlib instead of import and predeclare the module variable with the appropriate type hinting. I don’t know if that’s preferable; the HAS_MARKDOWN flag is reminiscent of C “include once” macros. I haven’t tried this, but it might “look” nicer.
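
                                                                                                                  Roughly what I have in mind (untested sketch; “markdown” and HAS_MARKDOWN are just the names from the example under discussion):

                                                                                                                    import importlib
                                                                                                                    from types import ModuleType
                                                                                                                    from typing import Optional

                                                                                                                    # Predeclare the variable with a type hint, then fill it in dynamically.
                                                                                                                    markdown: Optional[ModuleType] = None
                                                                                                                    try:
                                                                                                                        markdown = importlib.import_module("markdown")
                                                                                                                    except ImportError:
                                                                                                                        pass

                                                                                                                    HAS_MARKDOWN = markdown is not None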

                                                                                                                  1. 2

                                                                                                                    Unfortunately this prevents Mypy from loading the type hints for the imported module.

                                                                                                                    1. 1

                                                                                                                      Ah, darn.

                                                                                                                  1. 4

                                                                                                                    I really like Crystal, mostly. It’s an interesting language with a lot of nice things in it. So many nice things that it’s actually too much IMO. There are so many language constructs that it’s difficult to wrap your head around, and I find myself quickly forgetting entire things even exist.

                                                                                                                    1. 4

                                                                                                                      I’ve written a few thousand lines of Crystal for fun, and non-profit. It does feel like Ruby most of the time, and gets the job done nicely. I think that if you come from this language, then Crystal won’t feel so large.

                                                                                                                      Still, coming from years of Ruby, I’ve enjoyed languages that shift further from its paradigms.

                                                                                                                      1. 2

                                                                                                                        I’ve never seriously used Ruby, but when I did toy with it I didn’t like it much so that explains a lot.

                                                                                                                    1. 4

                                                                                                                      I cannot stress enough how important type hinting is to large-scale Python applications. If you use it properly you can have strong guarantees that your code will not have issues like this.
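
                                                                                                                      A hedged illustration (the names are made up, not from any real codebase): with signatures like these, a checker such as mypy flags the bad call before anything runs.

                                                                                                                        from typing import Optional

                                                                                                                        def lookup_channel(channel_id: int) -> Optional[str]:
                                                                                                                            return None  # stand-in for a lookup that can miss

                                                                                                                        def apply_pulse(channel: str) -> str:
                                                                                                                            return "pulse on " + channel

                                                                                                                        name = lookup_channel(3)
                                                                                                                        apply_pulse(name)  # mypy: incompatible type "Optional[str]"; at runtime this is a TypeError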

                                                                                                                      Source: building a quantum computer OS/control system in python.