Threads for xguerin

  1. 11

    Helix, with almost default config, just some custom keybindings:

    [editor]
    true-color = true
    color-modes = true
    idle-timeout = 75
    
    [editor.indent-guides]
    render = true
    
    [editor.cursor-shape]
    insert = "bar"
    normal = "block"
    select = "underline"
    
    [keys.normal]
    g = { a = "code_action" }
    0 = "goto_line_start"
    "$" = "goto_line_end"
    S = "surround_add"
    
    [keys.select]
    0 = "goto_line_start"
    "$" = "goto_line_end"
    
    [keys.insert]
    j = { k = "normal_mode" }
    
    1. 6

      But a confession: I use IntelliJ for work projects :P

      1. 2

        which terminal emulators do you use? On various computers/platforms?

        1. 4

          On macOS, I use Kitty, mainly because of its split and tab functions.

          I also use the same Helix config on GitHub Codespaces. It works really well.

          1. 1

            I find it works really well in Windows Terminal.

          2. 2

            Yep pretty much like me and for the opposite reason of @eBPF using Neovim above: because it really does not need any plugins for my use cases.

            [editor]
            bufferline = "multiple"
            color-modes = true
            cursorline = true
            
            [editor.cursor-shape]
            insert = "bar"
            
            [keys.normal]
            "S-left" = ":bp"
            "S-right" = ":bn"
            
            1. 2

              Same config, give or take a line 🤘

            1. 9

              Transparent superpage support was added in FreeBSD 7.0 (2008) and has been enabled by default since then, without the performance issues that the Linux version seems to have had. I am not sure why the Linux version has had so many problems, but it looks as if they’re not demoting pages back after promoting them. For example, as I recall, if you fork in FreeBSD and then take a CoW fault in a page, the vm layer will instruct the pmap to fragment the page in the child and then copy a single 4 KiB page, so you don’t end up copying the whole 2 MiB. There’s also support for defragmentation via the pager, which can help recombine pages later, though I don’t think there’s anything if memory is not swapped out.

              1. 3

                Huge page page faults are absolute hogs. The fault handler emits TLB shootdowns for every single 4k page in the huge page region, which takes about 200us per fault for a 2MB HP.

                Because of this, THP are usually bad news for latency-critical applications, as this behavior will cause absurd tail latencies. It’s even worse when using them with allocators that do not natively support them. Even those that supposedly do (e.g. jemalloc) show iffy tail behavior.

                From my experience, the best use case for HP is ring buffers (either SW/SW or SW/HW) where the capacity is known in advance and the pages can be pre-faulted. But that’s a very tailored situation that doesn’t broadly apply.

                1. 3

                  Huge page page faults are absolute hogs. The fault handler emits TLB shootdowns for every single 4k page in the huge page region, which takes about 200us per fault for a 2MB HP.

                  I’m not sure I understand why you need to do all of the shootdowns, but at least in FreeBSD situations that need to shoot down more than one page are batched. The set of addresses is written to a shared buffer and then the pmap calls smp_rendezvous on the set of cores that is using this pmap and does an INVLPG on each one. Hyper-V also has a hypercall that does the same sort of batched shootdown. I’m not sure how this changes on AMD Milan with the broadcast TLB shootdown.

                  FreeBSD’s superpage support on x86 takes advantage of the fact that all AMD, Intel, and Centaur CPUs handle conflicts in the TLB, even if the architecture says that they don’t. This means that promotion does not have to do shootdowns, it just installs the new page table entries. As I recall (and I’m probably mixing up Intel and AMD here), on Intel architectures the two entries coexist in different TLBs, on AMD the newer one evicts the older, and on Centaur the cores detect the conflict, invalidate both and walk the page table again.

                  1. 2

                    I did not dig any deeper and my root-cause analysis could be wrong. Here is a relevant ftrace if you are curious: https://gist.github.com/xguerin/c9d97ef50701bd247a219191cb37ec8a. Total latency is 271us. Largest cost centers are: 1/ get_page_from_freelist takes a whopping 120us; 2/ clear_huge_page takes another 135us (admittedly 2/ is not strictly required as part of the overall operation).

              1. 1

                Desk. I use the iPad with the Planck as a mobile workstation.

                1. 16

                  GATs are pretty huge right? I feel like I’ve seen “we could do X if we had GATs” all over the place.

                  1. 9

                    It will allow us to specialize our callbacks with their owning type and therefore rely on static dispatch instead of dynamic dispatch.

                    1. 4

                      That’s pretty sick, in what context if you can share?

                      1. 1

                        Callback evaluation for asynchronous I/O through a bespoke I/O runtime. Think something like Socket<Delegate> where Delegate is your callback-handling trait, in places where you have a HttpProtocol that specializes on Socket and needs to self register as the Delegate. Impossible without HKT, but GATs enable this with a little trait tomfoolery.
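A loose sketch of that shape (all names here are hypothetical, not the actual runtime): with a GAT, the runtime trait can expose a socket type that is itself generic over the delegate it carries, so every callback call site is monomorphized and statically dispatched.

```rust
// Hypothetical sketch of the Socket<Delegate> pattern; none of these
// names come from a real library. The GAT is `type Socket<D>`: an
// associated type that is itself generic over the delegate.
trait Delegate {
    fn on_data(&mut self, bytes: &[u8]);
}

trait Runtime {
    // Generic associated type: stable since Rust 1.65.
    type Socket<D: Delegate>;
    fn open<D: Delegate>(&mut self, delegate: D) -> Self::Socket<D>;
}

// A toy delegate standing in for something like HttpProtocol.
struct HttpProtocol {
    received: usize,
}

impl Delegate for HttpProtocol {
    fn on_data(&mut self, bytes: &[u8]) {
        self.received += bytes.len(); // statically dispatched call site
    }
}

// A toy runtime whose sockets just loop data back into the delegate.
struct LoopbackRuntime;

struct LoopbackSocket<D: Delegate> {
    delegate: D,
}

impl<D: Delegate> LoopbackSocket<D> {
    fn feed(&mut self, bytes: &[u8]) {
        self.delegate.on_data(bytes);
    }
}

impl Runtime for LoopbackRuntime {
    type Socket<D: Delegate> = LoopbackSocket<D>;
    fn open<D: Delegate>(&mut self, delegate: D) -> LoopbackSocket<D> {
        LoopbackSocket { delegate }
    }
}

fn demo() -> usize {
    let mut rt = LoopbackRuntime;
    let mut sock = rt.open(HttpProtocol { received: 0 });
    sock.feed(b"GET / HTTP/1.1");
    sock.delegate.received
}

fn main() {
    assert_eq!(demo(), 14);
}
```

Without the GAT, the runtime would have to erase the delegate behind something like a Box<dyn Delegate>, paying a vtable indirection on every callback.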

                    2. 7

                      The funny thing is at my last job I needed GATs to do something tricky. Now for the life of me I can’t remember the details, but it’s a pretty big deal to have associated types that are easily generic. Just the lending iterator alone allows things that are rather simple in scripting languages but restricted in earlier versions of Rust.
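The lending iterator mentioned above is the canonical case: each item borrows from the iterator itself, which requires an associated type generic over a lifetime, i.e. a GAT. A minimal sketch (stable since Rust 1.65):

```rust
// Minimal lending-iterator sketch: Item borrows from the iterator,
// so the associated type must be generic over a lifetime (a GAT).
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Overlapping *mutable* windows over a slice: impossible to express
// with the regular Iterator trait, straightforward with a GAT.
struct WindowsMut<'t, T> {
    slice: &'t mut [T],
    start: usize,
    size: usize,
}

impl<'t, T> LendingIterator for WindowsMut<'t, T> {
    type Item<'a> = &'a mut [T] where Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        let end = self.start + self.size;
        if end > self.slice.len() {
            return None;
        }
        let out = &mut self.slice[self.start..end];
        self.start += 1;
        Some(out)
    }
}

fn demo() -> Vec<i32> {
    let mut data = [1, 2, 3, 4];
    let mut it = WindowsMut { slice: &mut data, start: 0, size: 2 };
    while let Some(w) = it.next() {
        w[0] += 1; // mutate through each overlapping window
    }
    data.to_vec()
}

fn main() {
    assert_eq!(demo(), vec![2, 3, 4, 4]);
}
```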

                      1. 7

                        I need GATs so the Presto client library can go stable instead of nightly only.

                        1. 3

                          This is the client I’m talking about: https://github.com/nooberfsh/prusto

                    1. 5

                      I find building something not only alone but for one’s own benefit to be an incredible source of creative freedom. Free from external constraints, one can really explore designs and architectures that fit one’s requirements and expectations. I find myself doing that a lot when healing from bouts of burnout from time to time, and it has always been of tremendous help.

                      1. 1

                        No need to make the rules optional. It’s an incentive to not understand them in the first place. Just remember that the rules are made for Man, not Man for the rules.

                        1. 3

                          A terse RISC is all we need. Everything else can reduce to it through composition.

                          1. 2

                            Or… don’t perform a generative stabilization loop in your destructor. This action is sufficiently meaningful to deserve its own method.
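Assuming a Rust-like setting (the same argument applies to C++ destructors), a sketch of what that separation could look like, with hypothetical names: the meaningful, fallible work gets its own method, and the destructor stays trivial.

```rust
// Hypothetical sketch: the expensive or fallible "stabilization"
// work gets an explicit method, and Drop only verifies that the
// caller actually did it.
struct Index {
    stabilized: bool,
}

impl Index {
    fn new() -> Self {
        Index { stabilized: false }
    }

    // Meaningful, potentially fallible work lives here, not in Drop.
    fn stabilize(&mut self) -> Result<(), String> {
        // ... run the stabilization loop ...
        self.stabilized = true;
        Ok(())
    }
}

impl Drop for Index {
    fn drop(&mut self) {
        // Drop stays cheap and infallible; it can only flag misuse.
        debug_assert!(self.stabilized, "Index dropped without stabilize()");
    }
}

fn demo() -> bool {
    let mut idx = Index::new();
    idx.stabilize().expect("stabilization failed");
    idx.stabilized
}

fn main() {
    assert!(demo());
}
```

Beyond latency, the split also gives the caller a place to handle errors, which a destructor cannot surface.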

                            1. 6

                                [[tangential]] what’s with all the swearing nowadays? Has it become impossible to drive a technical point home without swearing? I’m maybe becoming one hell of a grizzly bear but I find that to be a major turn-off.

                              1. 6

                                There’s no swearing here by typical standards. There is a Bowdlerized expletive in the title, but that word has explicitly been neutered. The author committed a crime against words in order to avoid swearing in front of you.

                                1. 3

                                  If you declare yourself a fan of hobos, swearing is de rigueur.

                                1. 6

                                    If someone comes to you and asks you to make things sloppier, your answer should be no.

                                  Words to live by.

                                  1. 5

                                    I am recurrently trying to build a minimalist LISP. I use it as a platform to try a few things like continuation-passing style, GC vs ARC, async vs MT, etc. It’s also a place of my own where I can be as anal about the code as I want to, some kind of engineering-oriented mental bachelor pad.

                                    1. 12

                                        Discussing the future of operating systems is fine, but not while ignoring the past. There were at least 20 years of OS research before Linux, with a lot of interesting ideas wildly different from the “everything is a file” world view.

                                        For instance, z/OS. It has been powering large mainframes for 40 years. Every user runs in their own VM (z/VM) where they have access to a full OS. Everything is not a file but a database. It’s the grand-daddy of cloud OSes.

                                      Symbolic operating systems, like those on Lisp machines, also are a lot of fun to read about.

                                      1. 4

                                        That’s VM/CMS (CMS, Conversational Monitor System, being the per-user VM). z/OS is a batch processing system historically, still used for that role.

                                        1. 1

                                            I read somewhere that VM/CMS was one of the few hotbeds of “hacker culture” outside of UNIX.

                                      1. 4

                                          Server-side, I would bet that in this day and age the POWER fleet is probably the largest after x86/ARM. Z might not weigh much in terms of raw population count, but it’s powering a lot of large, critical systems. In the deep-embedded world STM chips are pretty widespread.

                                        1. 1

                                          deep-embedded world STM chips

                                          This is ARM (Cortex-M) though.

                                          1. 1

                                              The larger STM32, yes. Not the 8-bit/16-bit micro-controllers. That being said, the generations I was used to are now marked “legacy” (ST7/ST10), so I might very well be wrong.

                                            1. 4

                                              I get the impression new designs aren’t using the 8 or especially 16-bit MCUs anymore. Cortex-M0 (and soon RISC-V) ate their lunch. Of course, they’ll be around forever, as embedded designs tend to be.

                                        1. 2

                                          Racket:

                                          (for/fold ([p #f]
                                                     [cnt 0]
                                                     [counts null]
                                                     #:result (cdr (reverse (cons `(,p ,cnt) counts))))
                                                    ([c (in-string "aaaabbbcca")])
                                            (if (equal? p c)
                                                (values c (add1 cnt) counts)
                                                (values c 1 (cons `(,p ,cnt) counts))))
                                          
                                          1. 1

                                              Also using fold, with mnm.l:

                                            (def challenge (IN)
                                                (foldr (\ (C ACC)
                                                         (let ((((V . N) . TL) . ACC))
                                                           (if (= C V)
                                                             (cons (cons C (+ N 1)) TL)
                                                             (cons (cons C 1) ACC))))
                                                  IN NIL))
                                            
                                            1. 1

                                              Racket’s group-by is wonderful but I usually want to group consecutive equal items into clumps as they arise rather than a single monolithic group.

                                              (define (group xs)
                                                (match xs
                                                  [(cons x _)
                                                   (define-values (ys zs) (splitf-at xs (curry equal? x)))
                                                   (cons ys (group zs))]
                                                  [_ null]))
                                              
                                              (define (encode xs)
                                                (for/list ([x xs])
                                                  (list (first x) (length x))))
                                              
                                              (encode (group (string->list "aaaabbbcca")))
                                              
                                            1. 1

                                              I never understood the value proposition of ReasonML. Was it really to make ML more palatable to a corpus of engineers broken to imperative-style languages (and more specifically JS)?

                                              1. 3

                                                  The slogan I heard was “React as a language”, which makes sense.

                                                React is a framework that encourages use of a functional style for your apps. And React is often written in dynamically typed JS. So it makes sense to write React-style apps in a statically typed functional programming language.

                                                1. 1

                                                  I see, thanks. The fact that the two were intertwined was lost on me.

                                              1. 9

                                                OCaml is arguably small (it’s definitely not easy to find a job for it), but I’ve been using it for web development, hardware development, and even to play around some graph algebra. I love ML in general, but I am particularly fond of OCaml for the above-average quality of the ecosystem (and PPX extensions!).

                                                1. 8

                                                  At 10 years in, I find myself agreeing with everything. It’s sufficiently rare an occasion to warrant this shamelessly me-too-ing comment.

                                                  1. 3

                                                      There is no point in using std::vector if the size remains constant. There is std::array for that, which has a (mostly) compile-time interface that is comparable to primitive C arrays.

                                                    1. 1

                                                        Two things to note though are that you have to know the size at compile time, and std::array stores its elements in place (e.g. on the stack when declared locally), so depending on size and environment that may be prohibitive.

                                                      I think the problem is the intersection between needing a large fixed size array and not being able to pay the overhead of storing the one extra pointer in a std::vector is fairly small.

                                                    1. 3

                                                      The year of the mainframe, as a computing model if not as a technology: modern personal computers as dumb terminals, large batched jobs, large data sets stored remotely.

                                                      1. 7

                                                          It’s the same architecture as the Infiniband verbs (WR+CQ, a primer), one of the (if not the) best asynchronous I/O interfaces I’ve ever used. It definitely looks like a promising piece of work.
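For readers unfamiliar with the model, here is a toy sketch of the work-request/completion-queue shape, with made-up names (this is not the ibverbs API): submission and completion are decoupled queues, so posting work never blocks and completions are reaped asynchronously.

```rust
// Toy model of the work-request / completion-queue pattern. This is
// not the ibverbs API; names and types are made up for illustration.
use std::collections::VecDeque;

struct WorkRequest {
    id: u64,
    payload: Vec<u8>,
}

struct Completion {
    id: u64,
    bytes: usize,
}

struct QueuePair {
    submissions: VecDeque<WorkRequest>,
    completions: VecDeque<Completion>,
}

impl QueuePair {
    fn new() -> Self {
        QueuePair { submissions: VecDeque::new(), completions: VecDeque::new() }
    }

    // Posting never blocks: the request just lands on the queue.
    fn post(&mut self, wr: WorkRequest) {
        self.submissions.push_back(wr);
    }

    // In real hardware the NIC drains the submission queue; we fake it.
    fn process(&mut self) {
        while let Some(wr) = self.submissions.pop_front() {
            self.completions.push_back(Completion { id: wr.id, bytes: wr.payload.len() });
        }
    }

    // Completion reaping is fully decoupled from submission.
    fn poll(&mut self) -> Option<Completion> {
        self.completions.pop_front()
    }
}

fn demo() -> Vec<(u64, usize)> {
    let mut qp = QueuePair::new();
    qp.post(WorkRequest { id: 1, payload: b"hello".to_vec() });
    qp.post(WorkRequest { id: 2, payload: b"world!".to_vec() });
    qp.process();
    let mut out = Vec::new();
    while let Some(c) = qp.poll() {
        out.push((c.id, c.bytes));
    }
    out
}

fn main() {
    assert_eq!(demo(), vec![(1, 5), (2, 6)]);
}
```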