Threads for LeahNeukirchen

  1. 2

    I use Emacs, but without many frills. I think having “intelligent” indentation is the only language-sensitive feature that I really depend on, as it catches many syntax errors immediately.

    1. 3

      Participating at the enigame.de puzzle hunt!

      1. 1

        Kinda curious the pages aren’t served from Git?

        1. 2

          You can upload the pages from anywhere with either an HTTP request or the hut CLI program. If you want it to be automatically uploaded from your Git repository, you “just” add a build task for it.
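
          For reference, roughly like this (a sketch from memory; the domain is a placeholder and the exact flags may differ, so check hut(1) and the srht.site docs):

          # package the built site and publish it with the hut CLI
          tar -czf site.tar.gz -C public .
          hut pages publish -d example.srht.site site.tar.gz

          # or with a plain HTTP request against the pages API
          curl --oauth2-bearer "$token" -Fcontent=@site.tar.gz https://pages.sr.ht/publish/example.srht.site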

        1. 2

          Let me reiterate: Certain HP laptops have three power buttons that launch different OSes.

          Half my life ago, the last time I was dual-booting Linux and Windows, I kept thinking about having different “Windows” and “Linux” buttons instead of a single power button. I was far too inexperienced to know if that was even possible, never mind how.

          Turns out instead of building my own desktop I should have just bought a cheap HP netbook.

            1. 2

              I had a machine with a “button” to boot a different OS… well it was the floppy disk with LILO.

              1. 1

                I recall a tool that spat out floppy disk boot sectors that would chain-boot a specific partition on an IDE disk. The intended use case was precisely this: you could set up your computer to boot Windows from the hard disk and boot from A: before C:, so if the floppy was in the drive you’d boot Linux / *BSD, and if it wasn’t you’d boot Windows. When a Windows update overwrote your boot sector, you didn’t lose the ability to boot your other OS.

            1. 12

              There’s a lot of good stuff here, effects, transactional memory, a concurrency story without colored functions (don’t be fooled by the documentation mentioning async over and over, it’s different stuff), a completely novel way of dealing with failure, planet-scale repl (not yet available), etc.

              But I feel like they had a list of features and a specific syntax that they wanted to give to game developers, and they then had to shoehorn “the good bits” (the semantics) into these arbitrary features. Some things are truly baffling, like it has both significant indentation and {}, at the programmer’s choice. Wat.

              Also, I think the documentation is not great. It says “reference” in the title, but as a reference it’s not very thorough; in fact it’s more like a tutorial, yet the sequencing isn’t pedagogical enough to work as a tutorial either. In general the concepts are explained at a very basic level, but then it sometimes throws very technical concepts at you out of the blue, like type polarity. Also, a lot of the links are 404s and they hijacked middle click. Ugh. They also publish memes in the documentation. In fact, all the images in the documentation are kinda pointless.

              I suppose I’m not the target user here, but I really wish there was a way to play with the language outside the Unreal editor. Surely this has to be possible, if only for their own development team’s sanity. (Edit: after watching the video linked at the bottom of my post, apparently this is coming; they are going to release it as open source!)

              In any case, I am really excited about this new language, quirks and all, and I am really looking forward to the next papers the team will publish. The first paper was really great!

              Edit: there’s a quick video presentation: https://youtu.be/teTroOAGZjM?t=22512

              1. 5

                More thoughts.

                Notice how the naming convention is for values to be Capitalized, while types and functions are not. My prediction is that this is because types in Verse will be just (failable?) functions that validate values.

                I said failable above, but they might be some other kind of function. They have a total effect type (functions that are guaranteed to terminate), which could be safely called by the type checker.

                In fact I think that class and other constructs are some sort of compile-time macro. I suspect some syntactic idiosyncrasies are about making the language homoiconic.

                Verse has effects, but not algebraic effects. The set of effects is finite and fixed, and there’s no way to define arbitrary handlers. All effect processing happens in language constructs like if and for (which might be macros, so maybe there will still be some user-level support for handlers after all). Because it does not have a full-blown effect system, concurrency and monadic composition had to be baked into the language; with a proper effect system these features could have been implemented in a library instead.

                Notice how effects are negative types. They are a promise about what a function will not do, not an indication about what a function might do. This is the opposite of other languages with effects.

                defer is not needed if you have linear types. I suspect they considered linear types too difficult for the average programmer, so they went with what they consider a good-enough solution.

                The type system is probably very novel; I suspect it is the most interesting part of Verse, but so far it has not been explained or talked about at all. I suspect it might have some form of dependent types.

                1. 1

                  A curious thing I found in the API reference: https://dev.epicgames.com/documentation/en-us/uefn/verse-api/versedotorg/verse

                  As far as I can tell, specifiers like <abstract> and <transacts> are all real types. Weird, huh? Almost as if they planned arbitrarily defined effects from the start; they just don’t document them.

                  1. 1

                    Very interesting. Maybe there are arbitrary effects after all. Seeing them documented this way kind of reminds me of Haskell type classes. Maybe what you put in the function specifier are instances of a specific type class, and the compiler somehow calls into it at compile time to verify your code.

                2. 2

                  like it has both significant indentation and {}, at the programmer’s choice. Wat.

                  Isn’t that what Haskell has?

                  1. 3

                    Yes. Also wat.

                  2. 2

                    Some things are truly baffling, like it has both significant indentation and {}, at the programmer’s choice. Wat.

                    Not surprising, since Haskell has the same feature, and Verse is designed by some of the original designers of Haskell.

                    1. 2

                      And they added commenting by indentation!

                  1. 2

                    Seems like a case of laziness. Writing boilerplate dispatch to a composed class is annoying, but it’s a time-limited activity. Even with something like 35 methods, there’s no way it could take more than one afternoon, and then you’re done. I think the problem is that the friction puts people off, even if in the long run you would get the time back.

                    1. 15

                      Or alternatively, the problem is that this language encourages inheritance by elevating it to a central place in the design, whereas in reality inheritance is very rarely useful, and the places where it’s justified are almost never about inheriting from one class to another. At the risk of oversimplifying: bad language design.

                      1. 4

                        You’re entirely right. In E, the extends keyword provides composition by default; an extended object is given a wrapper which dispatches to new methods before old methods.

                        def parent { to oldMethod() { … } }
                        def child extends parent { to newMethod() { … } }
                        

                        Cooperative inheritance is possible, but requires an explicit recursive definition:

                        def makeParent(child) { return def parent { … } }
                        def child extends makeParent(child) { … }
                        
                      2. 9

                        “laziness” isn’t really a good way to analyze these sorts of things. When programmers are being “lazy” they’re just doing what their tools guide them to do. We can and should modify these tools so that the “lazy” solution is correct, rather than try to fix programmers.

                        1. 6

                          Such a shame I can vote this up only once. If you hold the hammer by the handle and swing the head, you’re being lazy: you’re using the tool in the way that minimises your effort. If your tool is easier to use incorrectly than correctly, then you should redesign the tool.

                          For programming languages, by the time that they have an ecosystem that exposes these problems, it’s often too late to fix the language, but newer languages can learn from the experience.

                        2. 3

                          and every time the interface changes you need to adjust all dispatchers…

                          1. 2

                            That’s the fragile base class problem. Any addition to the base class can ruin your day. At least with composition, you can limit the damage to compile time.

                          2. 3

                            Also, it’s not like you need to override all 35 methods to conform to the map interface; it’s enough to implement the couple your code actually uses.

                            Rust doesn’t even have interfaces for things like Collection or Map, and it creates zero problems. Most code does not care about abstract interfaces.

                            1. 2

                              From the article: “…and less work to maintain”.

                            1. 7

                              This is not an original puzzle, sadly. It’s a version of a recreational mathematical game which, at least in the English-speaking world, appears to be called “Deleting Sheep”. An example of that game is described here.

                              I can’t point you at a specific website that has this specific game but I remember a lot of variations from when I was a child. I definitely remember this one from a very stupid test back in third grade.

                              1. 3

                                This game (via HN) is pretty much the same, from 2019.

                                Also not that much fun, I think, and I usually like games like this.

                                1. 1

                                  Russian cosmism, but the only person we bring back is Martin Gardner as a forgetful AI.

                                1. 1

                                  If I’ve understood their readme correctly, the agent is completely useless without the cloud service, regardless of being open source?

                                  Has anyone forked this to transmit data to something non-proprietary?

                                  1. 7

                                    Netdata is actually doing itself a disservice by advertising their totally optional cloud integration service so hard that you’d think it can’t work without it or provides no value.

                                    From my recent research, if you just install netdata on your node you get the single-node dashboard - and yes, the cloud notification is annoying and can’t be turned off. You can then set up netdata installations on nodes to stream their data, so you can visit node A and also access the dashboards of nodes B and C. You can also disable data storage on streaming nodes if you want.

                                    The cloud option is free and simply streams the data towards the cloud; all data stays on the machine. What the cloud gives you is unified settings and alarms, as well as a dashboard unifying multiple nodes. This whole cloud thing seems to be free for now (maybe not with this release?), with them wanting to add paid features later on.

                                    After my recent research my conclusion is still that netdata is just the best thing for monitoring nodes. You get all data across all components, you get alerts that can be sent as Matrix messages, and you get streaming between nodes with data retention, such that the highest data resolution is kept for 24h and then it gets less detailed, so your storage isn’t overwhelmed. I tried grafana+prometheus+prometheus node exporter first, but netdata just gives you a complete package, a working ZFS integration and alerts out of the box, while I’ve had a hard time getting prometheus/grafana to actually give me information about single-core bottlenecking, as it just summarized the CPU usage: 10/10 cores at 10% looks the same as 1/10 at 100% utilization. You maybe do want to disable their anonymous analytics on installation.

                                    I think the cloud option makes sense if you want some go-to thing to watch all your nodes, have a unified dashboard, and get a way to follow alerts directly to the source by linking to their cloud stuff. I think they should provide a way to just say “no, disable this, don’t mention it, thanks”. It also probably makes sense to be able to outsource monitoring data, so you at least don’t need an extra machine for that on top. I really hope netdata without cloud and netdata with cloud can co-exist in the future and still provide enough funding for the company.

                                    Edit: fixed typos after the machine died and I salvaged this comment.

                                    1. 4

                                      The data is available locally in a dashboard. You can also stream to other agents or to other observability platforms. I use it without any cloud service for my whole home lab setup. I use the per-device dashboards and a shared Grafana UI as well.

                                      In fact, I think even with the cloud setup all the data still exists on your own infrastructure.

                                      1. 3

                                        You can also scrape Netdata with Prometheus. It has more features by default than node_exporter.
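
                                        If it helps, the agent exposes its metrics in Prometheus text format over HTTP, so you can see what a scrape job would collect before wiring anything up (host and port below are the defaults and may differ on your setup):

                                        # dump everything Netdata would hand to a Prometheus scrape
                                        curl 'http://localhost:19999/api/v1/allmetrics?format=prometheus'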

                                        1. 1

                                          Do you have links on how I’d set that up? I’m especially interested in the Grafana dashboard you’re probably using. My experience with finding good node-exporter dashboards, apart from the default one, wasn’t very good.

                                            1. 1

                                              This misses the crucial part: Grafana dashboards that provide the same value.

                                      1. 1

                                        I wonder if the thesis itself is typeset in the language?

                                        It does look like standard (La)TeX, but that could be a requirement from the institution.

                                        1. 14

                                          Hey, I’m the author of the thesis. As already pointed out in another comment, it is completely typeset with Typst. Typst also doesn’t use (La)TeX in the backend; I just used a LaTeX-like font to fit the typical thesis style. Luckily there was no requirement from the institution.

                                          1. 2

                                            Well, you fooled me. :)

                                            I hate that font, but it certainly makes it look authentically LaTeX.

                                            1. 1

                                              I only had a chance to skim the thesis, so sorry if I missed something, but it looks as if this reproduces the LaTeX mistake of combining markup and formatting instructions. Do you have conventions for separating them? When I write LaTeX (I’ve written four books in LaTeX, among other things), I have to be very careful to not actually write LaTeX, but instead to write LaTeX-syntax semantic markup and then a separate preamble that defines how to translate this into formatted output. If I don’t do that, then exporting to something like HTML is very hard. I didn’t do this for my first book and the ePub version was a complete mess. I did for later ones and was able to write a different tool that just converted most of my semantic markup into class= attributes on HTML elements or, in a few cases, used them as input for some processing (e.g. headings got added to a ToC, references to code files got parsed with libclang and pretty-printed).

                                              This is one of my favourite things about SILE: your input is in some text markup language to describe the text and a programming language (Lua) to describe transforms on the text.

                                              1. 1

                                                You can combine markup and formatting instructions, but you can also write semantic markup in Typst. Typst’s styling system works with the document’s structure: A show rule for an element defines the transformation from structure into output (or other structure that is transformed recursively). Since the show rule can execute arbitrary code, you have lots of flexibility here. You can even apply other rules to the content that shall be transformed, so e.g. your transformation for figures could redefine how emphasized text is transformed within them.

                                                At the moment, you can only use show rules with built-in structural elements, so user-defined functions don’t work with it, but this is something we will support in the future. And since PDF is the only supported format for now, in the end your show rules will transform the document to a fixed layout. However, more export formats (primarily HTML) are on our radar and we could then provide different sets of primitives for these other formats. This way, you could have a second set of show rules that define how to export the structure to XML.

                                                1. 2

                                                  Thanks. I’ll look forward to seeing what it looks like with HTML output. It’s a very different problem, but a language that can generate both TeX-quality PDFs and clean semantic HTML would be very attractive.

                                            2. 6

                                              Page 8:

                                              This thesis is written in Typst and showcases the current compiler’s capabilities.

                                              Got me fooled too, tho. So this is a testament to the output quality already. (It doesn’t do ligatures there tho.)

                                              1. 3

                                                I was wondering the same, and then wondered why there aren’t more of these sorts of systems that just output TeX as a backend language. I suppose part of the goal is not just the language, but to reinvent the ickier parts of the TeX ecosystem.

                                              1. 3

                                                Almost: :x is like :wq only when the buffer is modified, and like :q otherwise (so the mtime is not updated).

                                                1. 3

                                                  I’d argue that Smalltalk/Ruby/Clojure(!)/etc. got this better: You have true, false, nil/null, and the empty list.

                                                  Some scheme code uses false as a sentinel value, which means you have trouble passing “optional” bools around. And the empty list is often not a suitable sentinel value, as it’s a value in its own right. You end up with n different sentinel values, no convenient if, and the need to map them around all the time.

                                                  1. 2

                                                    Some scheme code uses false as a sentinel value, which means you have trouble passing “optional” bools around.

                                                    I’ve been working with SRFI 189: Maybe and Either: optional container types lately and appreciating it.

                                                  1. 2

                                                    I would like a fast binary diff with insertion/deletion detection that works on multiple-GB files. There are some tools that can do that (ECMerge and Beyond Compare), but they are not open-source and have a GUI.

                                                    A port of the cwm window manager to Wayland.

                                                    1. 4

                                                      How does this change affect general memory usage?

                                                      1. 2

                                                        I haven’t profiled it much in that direction, so below is theoretical and not verified. Anecdotally, I’ve had multiple days of uptime with this change where memory stayed within reasonable boundaries.

                                                        It mainly impacts memory usage on the margins, in that the size of an empty block will be larger so it adds a larger constant overhead that doesn’t scale with memory usage. That overhead is pretty negligible though since it’s less than 64KB.

                                                      1. 1

                                                            The hello world example compiles to a 200-300k binary after stripping; what am I doing wrong?

                                                        edit: Seems like -OReleaseSmall is the key. Neat!

                                                        1. 16

                                                          After 5 years of using Mercurial I’m now at a new job using git and I want to murder myself. It’s so awful. And I used to work on git tooling two jobs ago so I’m not new to it.

                                                          I’m constantly performing unsafe operations. Rewriting history is somehow both unsafe and extremely painful. Maintaining small, stacked PR branches is nearly impossible without tooling like git-branchless.

                                                            I’m convinced that anyone who says “git is not the absolute worst thing ever” has not invested enough time into learning better systems, even closely related ones like Mercurial.

                                                          Everyone using git is so distracted by their accomplishment of learning how to survive git’s UI and by reading blog posts explaining clean history and squashing and all this irrelevant philosophy that they forgot to examine if any of it was necessary.

                                                          1. 11

                                                            Do you know of any good write-ups that explain to git users, in a constructive way, why none of it is necessary? I used SVN up until ~2010 when I switched to git, and my experience using git is far better than it ever was with SVN. I’ve never used mercurial. Any articles I can find that attempt to tell folks about better alternatives usually devolve (like your comment) into some git-bashing piece. Usually if you want to convince someone that they are doing the wrong thing, it’s not helpful to spend a lot of time telling them they are doing the wrong thing.

                                                            Everyone using git is so distracted by their accomplishment of learning how to survive git’s UI and by reading blog posts explaining clean history and squashing and all this irrelevant philosophy that they forgot to examine if any of it was necessary.

                                                            I don’t think it’s fair to say “everyone”, it sounds like you’re now using git after all :P

                                                            1. 6

                                                                I’ve only seen rants that give specific examples of how insane the command UI is without giving practical examples of how you’d end up using those commands, and rants that give concrete examples without showing alternatives. I agree with them, but they don’t illustrate the problems to git users very well.

                                                              I’m sure a good rant is out there but I can’t find it. Perhaps I need to write it instead of red-in-the-face ranting to lobsters and my friends :p

                                                              1. 5

                                                                Write it, I’d read it! :D

                                                                1. 2

                                                                  Perhaps I need to write it instead of red-in-the-face ranting to lobsters and my friends :p

                                                                  I’ll be checking your user page so I don’t miss it :D

                                                              2. 5

                                                                Maybe jj/Jujutsu (mentioned in the article) is what you need instead of the actual git client. I personally find interactive rebase far more intuitive than branchless/jj commands…

                                                                1. 5

                                                                  It’s really not just a question of intuitiveness, though. For example, how do you split an ancestor commit into two? An interactive rebase where you edit the commit, reset head, commit in parts, and then continue? What do you do with the original commit message? Store it temporarily in a text file before you reset? That’s madness. And the git add -p interface is embarrassing compared to hg split.
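
                                                                    Spelled out, the dance looks something like this (a sketch, with abc123 standing in for the commit to split):

                                                                    git rebase -i abc123^              # mark abc123 as "edit" in the todo list
                                                                    git log -1 --format=%B > /tmp/msg  # stash the original commit message
                                                                    git reset HEAD^                    # un-commit, keep the changes in the worktree
                                                                    git add -p && git commit           # commit the first piece
                                                                    git commit -a -F /tmp/msg          # commit the rest, reusing the old message
                                                                    git rebase --continue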

                                                                  I don’t mind interactive rebase but why are there no abstractions on top of it, and why is it so hard to use non-destructively?

                                                                  And thanks for the pointer, I’ll bump checking out jujutsu higher on my todo list.

                                                                2. 3

                                                                  I’m constantly performing unsafe operations. Rewriting history is somehow both unsafe and extremely painful.

                                                                  I’ve literally destroyed hours or days of work by rewriting history in git so that it would be “clean”.

                                                                  Everyone using git is so distracted by their accomplishment of learning how to survive git’s UI and by reading blog posts explaining clean history and squashing and all this irrelevant philosophy that they forgot to examine if any of it was necessary.

                                                                  It is as though “git log” et al encourage a certain kind of pointless navel-gazing.

                                                                  1. 5

                                                                    I’ve literally destroyed hours or days of work by rewriting history in git so that it would be “clean”.

                                                                    Before doing complex operations, I run git tag x and can reset with git reset --hard x any time. (Using the reflog after the fact is also possible, but having a temporary tag is nicer to use.)

                                                                    1. 3

                                                                      I do the same but with git branch -c backup, and then git branch -d backup when I’m successfully done rebasing. I also often git push backup so I have redundancy outside my current working copy.

                                                                      And to the grandparent post: I find git log invaluable in understanding the history of code, and leave good commit messages as a kindness to future maintainers (including myself) because meaningless commit messages have cost me so much extra time in trying to understand why a given piece of code changed the way it did. The point of all of that is not to enable navel-gazing but to communicate the intent of a change clearly for someone who lacks the context you have in your head when making the change.

                                                                      1. 2

                                                                        This command will show you all the things your branch has ever been, so if a rebase goes wrong you can easily see what you might need to reset to. (Replace <branch> with your branch name.)

                                                                        BRANCH=<branch>; \
                                                                        PAGER="less -S" \
                                                                        git log --graph \
                                                                                --decorate \
                                                                                --pretty=format:"%C(auto)%h %<(7,trunc)%C(auto)%ae%Creset%C(auto)%d %s [%ar]%Creset" \
                                                                                $(git reflog $BRANCH | cut '-d ' -f1)
                                                                        
                                                                      2. 3

                                                                        I’ve done incorrect history edits too, but I don’t think I’ve ever done one that I couldn’t immediately undo using the reflog.
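
                                                                        For the record, the recovery is usually just this (mybranch being whatever branch got mangled, run while it is checked out):

                                                                        git reflog mybranch            # find the entry from before the bad rewrite, e.g. mybranch@{1}
                                                                        git reset --hard mybranch@{1}  # put the branch back there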

                                                                    1. 18

                                                                      1 int is enough if it’s a big int.

                                                                      1. 26

                                                                        Accessing RAM is just slicing into one really big int.

                                                                        1. 1

                                                                          I’m gonna be that person: someone please do it.

                                                                          1. 4

                                                                            -- is POSIX, so every conforming implementation of rm should have it.
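
                                                                              The classic use is deleting a file whose name starts with a dash, e.g.:

                                                                              rm -- -rf    # removes the file literally named "-rf"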

                                                                            1. 2

                                                                              As someone who is working on a hobby Scheme, I will definitely be looking at the source for some features mine doesn’t have yet.

                                                                              1. 4

                                                                                The steps taken in https://github.com/mflatt/zuo/commits/master are very educational.

                                                                              1. 11

                                                                                Hardly the smallest when things like https://github.com/jcalvinowens/asmhttpd or https://github.com/nemasu/asmttpd exist.

                                                                                Heck, even my own hittpd is only 124k statically-linked with musl and I didn’t optimize for size.

                                                                                1. 4

                                                                                  It would be interesting comparing these servers (thttpd, asmhttpd, asmttpd, hittpd, etc) along a few dimensions (latency, throughput, etc). I might try this over the weekend if I get a chance.

                                                                                  1. 1

                                                                                    I’d love to see how they compare! I’m betting the one written in C gets a speed boost, but it might depend on the optimization

                                                                                1. 10

                                                                                  More generally: any libc function may call malloc. If this matters to you, then you should look at the libc internals and audit any function that you care about. Folks that ship a libc need to think about this in a few places. For example, FreeBSD libc uses jemalloc, which uses locks to protect some data structures, but the locks call malloc and so have their own bootstrapping path.
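
                                                                                      A crude way to spot-check this at runtime on a glibc system (it only catches the paths your test run actually exercises, so reading the libc source is still the real audit; myprog is a placeholder):

                                                                                      # break on malloc and print the backtrace of the first hit; the trace shows
                                                                                      # whether a libc routine, rather than your own code, pulled in the allocator
                                                                                      gdb -q -batch -ex 'break malloc' -ex run -ex bt --args ./myprog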

                                                                                  1. 6

                                                                                    The specification of a lot of functions doesn’t have a suitable failure mode, so no, they can’t really call malloc (and require it to succeed) without being non-conformant.

                                                                                    1. 5

                                                                                      That’s not true, async-signal-safety is a thing.