Threads for sleibrock

  1. 1

    Regardless of arguments about whether or not the Go runtime should be CFS-aware, I guess the one subject that comes to my mind instantly is: how many Go devs are writing applications expecting the Go runtime to be contextually aware of the CPU limit, and are thus deploying less efficient solutions to the cloud?

    My understanding is that Go shouldn’t be such a complex language to master; that’s why the syntax is easy to learn. But maintaining Go applications seems more complex in practice, when people publish write-ups about tuning the Go GC, or do things like this, checking whether the runtime acknowledges the host’s CPU limits.

    I guess I’d file this under a leaky abstraction of some sort.

    1. 1

      GOMAXPROCS is very often used to decide the upper bound on the number of worker goroutines to spawn. That way each goroutine is guaranteed a full CPU core.

      1. 1

        How does the code that spawns GOMAXPROCS goroutines ensure that each of them is guaranteed a core? Can other goroutines not be spawned elsewhere, which would compete for OS threads within the scheduler?

        1. 5

          I don’t think “each goroutine is guaranteed a full CPU core” is accurate at all, but GOMAXPROCS is still (probably) the right number of workers. Dealing efficiently with (many) more goroutines than there are threads is the point of the scheduler — to some extent it’s the point of the language. Having more than GOMAXPROCS goroutines is a normal condition, and you should expect them to get scheduled fairly, and not to contend with each other significantly unless you’re actually using nearly all of the available CPU (which usually isn’t the case).

          But if you have fewer than GOMAXPROCS goroutines in your worker pool, then if a bunch of work comes in and nothing else is going on, you’re leaving performance on the table by not bursting onto all of the available cores immediately.

          And if you follow that logic, then you can see why, although setting GOMAXPROCS equal to your quota share is a somewhat reasonable thing to do, it’s not the only reasonable thing to do, or even the obviously correct thing to do. Setting GOMAXPROCS equal to your quota share means that every Go thread can use 100% CPU all the time and you won’t get throttled by the OS scheduler. But if you set GOMAXPROCS to the number of actual cores, you can potentially use all of those cores at once… as long as you do it in a burst that’s short enough that you don’t get throttled.

          When you’re dealing with a “web” sort of workload (requests arrive randomly, you do a tiny bit of work to parse them, and then you quickly block on a database server or an API or something), it’s very much possible that you will get faster response time numbers by not lowering GOMAXPROCS, or by setting it to some intermediate value. The best answer really does depend on the nature of your code, what other loads it’s sharing the machine with, and whether you’re trying to optimize for latency or throughput.

          1. 1

            Dealing efficiently with (many) more goroutines than there are threads is the point of the scheduler — to some extent it’s the point of the language.

            So why even use GOMAXPROCS as the size for a worker pool?

            1. 2

              A lot of times, you don’t need a worker pool to begin with, you can just do stuff. If an HTTP server uses a goroutine to service each request, and each of those spawns off a dozen goroutines to do this and that, no big deal, even up to high client counts. But somewhere around a million goroutines you may start to notice some scheduling degradation, or feel the pinch of having that many stacks in memory. So if you were doing something like scraping the entire web, you wouldn’t want to write

              func crawlURL(url string) {
                  page := downloadURL(url)
                  links := getLinks(page)
                  for _, link := range links {
                      go crawlURL(link)
                  }
              }
              

              because sooner or later it would explode. Instead you would do the downloading in a worker pool sized to the number of parallel requests you want to make (which has nothing to do with GOMAXPROCS; constraints like that come from elsewhere), and the parsing in another pool (which might be GOMAXPROCS-sized), and feed them using channels.

    1. 8

      It is inevitable, and Zig does not offer anywhere near the protections that Rust currently has. The Zig/Rust memory errors blog post thing springs to mind as it was revitalized a few months back, and in its current state Zig doesn’t do anything fancy with regards to bounds checks or lifetimes.

      The open source world will start to embrace Rust slowly over time as its ecosystem expands, but that story isn’t exactly new; we’ve seen the trend before with Python, with Ruby, with PHP, and with JavaScript. Each has had its glow-up, with new packages/libraries developed and updated each year. Rust will get there, as more dedicated people cluster together to create cool new things to use within Rust. And hey, the more critical software that uses Rust for safety purposes, the better.

      Personally, I just don’t really like using Rust. There are simple things, then there are straight up annoying things: boxing every type, putting Arc everywhere, writing really long and cumbersome match statements, writing weird-looking function signatures full of trait bounds (which you can even combine with +). I’ve been using Rust on and off for a few years now, following its progress, and it never feels fun for me. Because of the nature of lifetimes and the borrow checker, it’s hard to write classical computer science data structures in it without reaching for things like Pin to keep linked list nodes in place. There’s only so many times you can find it cute to have the compiler fight you over something that mentally seems okay to you.

      I started using Zig a while back and I find it a breath of fresh air and, in some sense, relieving. I’m not overwhelmed with fancy trait bounds, the standard library really doesn’t fight you, and it’s easier to tell when you allocate memory in Zig because you need to pass an allocator around in your code to do so. I even find it fun, in some sense, to take old C/C++ programs and convert them over to Zig. Interop with C/C++ is easier because Zig compiles C/C++ and can drop in as a C/C++ compiler for old projects.

      I wouldn’t expect the world to start suddenly using Zig over Rust, given what Rust can provide large projects at scale. I enjoy that Zig is a small community of people working together on something cool. Rust’s governance is a little too large and feels like that of yet another C/C++ standards committee, while Zig has Andrew overseeing things, and you’re more likely to bump into him on IRC and have chill conversations with him. I like that a lot about the Zig space right now. A lot of time was spent on Zig getting it to the self-hosted compiler, and with that out of the way we can all look forward to what comes next in the Zig era.

      1. 3

        I’m a rust newbie and some of this rings true, at least so far. I finished all the rustlings exercises so was feeling pretty confident until I hit a brick wall trying to read & modify two elements of a vector within the same code block. After struggling for a while and doing a lot of internet searches I finally posted on stackoverflow where I was directed to a duplicate question and learned about the slice::split_at_mut function. I would not say this experience has left me feeling energized. Here I am doing complex index arithmetic and the thing that gives me the most trouble - and that the compiler finds most alarming - is just accessing two different elements within the same scope? I haven’t even thought about writing a data structure with it.

      1. 1

        Very cool stuff, and another good showcase for the power of Zig :)

        Porting it to macOS shouldn’t prove too troublesome if the author fiddles with the build.zig file for a bit. Zig can cross-compile easily provided it has all the Ruby source files available. However, I am the worst build.zig writer in the world and I tend to rely on the CLI compiler flags more than I should.

        1. 12

          WebAssembly in more places.

          AI fears.

          I hope zig starts beating rust as a systems language this year.

          1. 10

            I don’t think Rust and Zig really compete, do they? Zig is a “by hand” memory management language.

            1. 3

              I don’t understand under what circumstances I would choose Zig if I had already learned Rust, so I see them as competitors.

              1. 2

                Zig’s comptime metaprogramming is very competitive with Rust’s const eval and macros, but feels simpler to write to me. I think once Zig hits 1.0 (sometime in 2025?) and there is a larger ecosystem for it, it will be more compelling for people starting new projects to use it. I know I’d be using it more if there was a good selection of math / statistics packages available. @andrewrk has had a few ideas for memory safety that have yet to be implemented / tried out, and I think if there are real safety options it will give Zig a huge opportunity to grab up market share.

                1. 2

                  This is probably adding on the competition bit: I know Rust and I am looking at Zig. I doubt my ability to write good, fast code in a language that’s as huge as Rust. I also feel that “knowing” Rust isn’t something you do passively, it’s basically a part-time job. It’s not one that I find particularly rewarding, as language design is neither a hobby of mine, nor something I’m professionally interested in, and it takes up a lot of time that I would much rather spend writing useful programs.

                  1. 2

                    On the other hand I doubt my ability to write correct, non-leaky code in a language as hands-off as Zig… For something small and simple, sure, but anything with decent complexity is bound to end in memory management mistakes I think

                    1. 2

                      Oh, zig is fantastic at telling you if you leak addresses. The equivalent of valgrind is baked into the tooling.

                    2. 1

                      I also feel that “knowing” Rust isn’t something you do passively, it’s basically a part-time job.

                      Funny enough I’ve had this same feeling about C++. Conversely, keeping up with C#, Java, and even C doesn’t feel so mentally taxing.

                      1. 3

                        Oh, yeah, I wanted to say “just like C++” but I thought that was going to be a little too inflammatory, and I have C++ PTSD from my last big C++ project. It’s driving me nuts. You would expect a language that has so much more expressive power than C, and can encode much safer idioms, to have less churn than C, not more.

                        IMHO this is mostly a failure of the C++ committee though. The complexity of the language and standard library, and the way it was (mis)managed, has spawned a huge, self-feeding machine of evangelists, consultants, language enthusiasts and experts, and a very unpleasant language feature hustle culture. I’ve seen a lot of good, clean, smart, super efficient C++ code, and most of it appears to have been made possible by a) a thorough knowledge of compiler idioms and b) ignoring this machine. Unfortunately, the latter is hard to do unless it’s organizationally enforced.

                2. 6

                  Nah, Zig needs to at least do 1.0 (well, as an alternative, Rust can do 2.0) to start to dream about outcompeting Rust :P

                  1. 3

                    +1 for Zig!

                    1. 2

                      Zig user here, doing all my WebAssembly projects with Zig. It’s quite easy and fun!

                      1. 1

                        WebAssembly in more places.

                        Came here to post exactly this!

                        I think it won’t quite break through to mainstream mainstream, but I think the reasons and needs will slowly start to become more apparent. In a world where Google Chrome is the dominant operating system, more stuff is running “at the edge”, and the unit of computation is becoming smaller and simpler on the surface, I think WASM and WASI have a strong competitive head start on solving some of these problems.

                        It won’t quite be the “write once run anywhere” of the JVM era, but I reckon it has a pretty fair shot at getting close enough to be useful!

                      1. 8

                        Good writeup, but I wouldn’t recommend using this approach at work.

                        Fwiw, I did a bunch of stuff like this a few years into my ruby career, and saw others do them too, and after working on a medium sized ruby team with normal dev turnover for 4 years or so concluded that refinements, as well as everything else that changes the core language, should be entirely avoided. The small enhancement in readability is not worth the extra cognitive load on newer devs, and the increased chance of someone making a mistake.

                        Better to just embrace the core, boring, vanilla language.

                        1. 6

                          I am inclined to agree, mostly because while it’s cute, it doesn’t really stand to support errors in a meaningful capacity (i.e. the “railroad” style of error handling). Unless you really warp and change what kind of data your functions are taking in/returning, you’re only scratching the surface of what Haskell and F# can do. Ruby, I feel, is more suited to method chaining with something like [1,2,3].map{|x| x+1 }.filter{|x| x.even? }, which is probably a better way of expressing complex business logic.

                        1. 1

                          Bought mine last night, first time getting to go!

                          1. 7

                            I’m especially curious about existing implementations anyone uses here on Lobsters. My Lobsters-based POSSEs have been manual so far; the only automated POSSE-ing I do is via a shell script that calls toot to post to the Fediverse.

                            1. 3

                              I use brid.gy a lot

                              1. 3

                                I am using my own[1] to cross-post to mastodon[2], pleroma[3], pinboard[4] and twitter[5] accounts. The last via https://crossposter.masto.donte.com.br/.

                                Currently rewriting it to be an ActivityPub instance on its own[6].

                                [1] http://mro.name/shaarligo
                                [2] https://digitalcourage.social/@mro
                                [3] https://pleroma.tilde.zone/@mro
                                [4] https://pinboard.in/u:mro/
                                [5] https://twitter.com/mrohrmoser
                                [6] https://seppo.app/en/

                                1. 2

                                  I use this for my blog. I post to my blog and then have it poke another service that spreads out notifications. I wrote about it in my most recent talk.

                                  1. 1

                                    I personally self-host an espial instance (https://github.com/jonschoning/espial). Then I also self-host a node-red instance (https://nodered.org) where I created 4 workflows:

                                    • blog2twitter
                                    • espialBookmark2twitter
                                    • espialNopte2twitter
                                    • espialBookmark2pinboard

                                    You get the idea: the workflows look for RSS feeds from my blog, my public espial bookmarks and my public espial notes. On any new item, it is published to twitter (and the bookmarks are copied to pinboard).

                                    More details:

                                    https://her.esy.fun/posts/0004-how-i-internet/index.html

                                    1. 1

                                      I did some manual POSSE to Lobsters in the past for comments, and when I’ve posted blog posts I’ve made sure to record the syndication / cross-post to Lobsters. But I would prefer to move to more of a backfeed approach, where I can write posts in the Lobsters UI and have them automagically sync back to my site later.

                                      1. 2
                                        1. 2

                                          The problem with PESOS is that it prevents original-post-discovery, undermining the “own your content” goal of the IndieWeb. POSSE-over-PESOS also fits nicely into the logic behind POSSE before sending Webmentions.

                                          Backfeeding does make sense for aggregating responses, though. It’d be cool if Lobsters gave replies permalinks (rather than anchor-links) so we could enable that functionality…maybe I’ll file a ticket.

                                      2. 1

                                        I can’t say it would be truly automatic, but if I had to spend a lot of time copying and pasting text across multiple sites, that’s when I’d start using some level of copy/paste and browser tab automation. Nothing fancier than echo $text | xclip -sel clipboard and using a firefox $url invocation to spawn each site’s URL to make a new post on. This is what I did for a job with not-so-great automation aspects.

                                      1. 3

                                        I wouldn’t say UML is dead, but maybe not a sexy topic in itself. Mermaid is proof that graphs/diagrams, UML or not, are still pretty handy for visualizing large-scale projects. If I lived in an ideal world, I would love to use Luna/Enso for practical projects, but I am a bit biased as I do enjoy PureData still.

                                        I’m still working backwards converting Racket code into diagrams for fun over here.

                                        1. 1

                                          I’ve not heard of either of those tools, but they both look very interesting, thanks!

                                        1. 3

                                          It does make sense to use it in some cases, but using it for cheeky one-liners and further code golf seems a bit too much and adds a lot of visual noise to things like list comprehensions. Most of its original use cases in its PEP made it sound a lot like an anaphoric if statement from Lisp to me, but beyond that I think adding more noise to the code snippets is a bit too much.

                                          1. 2

                                            This looks like a logic error:

                                            if (mask & (WGPUColorWriteMask_Alpha|WGPUColorWriteMask_Blue)) {
                                                // alpha and blue are set..
                                            }
                                            

                                            Shouldn’t it be this?

                                            if ((mask & WGPUColorWriteMask_Alpha) && (mask & WGPUColorWriteMask_Blue)) {
                                                // alpha and blue are set..
                                            }
                                            
                                            1. 2

                                              No, because that requires that both flags are set. The equivalent condition to s & (A|B), which works no differently, would be (s & A) || (s & B).

                                              Notice the commonality in how the bitmasks are joined: always with some form of the disjunctive (“or”) operator, either bitwise | or boolean ||. In any case, the bitmask A or B must be applied to the operand s using the & bitwise “and” operator: s & A, s & B, s & (A|B).

                                              The equivalent operation to your “correction” (s & A) && (s & B), using the technique of joining the bitmasks first, would be (s & (A|B)) == (A|B) (note the extra parentheses: in C, == binds tighter than &). This checks that all of the bits are set, rather than that any of the bits are set.

                                              Edit: I got confused 😅 You are right: the original code tests whether either alpha or blue is set. My initial comment above would have been applicable if the commented-out text had read, “// alpha OR blue is set..”. I think that’s as good a case as any for “tagged” bit-fields over “untagged” bit-masks.

                                              Note for any lurkers who have read this far and are rather confused: You may want to read up on how bitmasks are used.

                                              1. 2

                                                Which makes either the comment or the code wrong. The comment says “and” not “or”. @smaddox was matching the code to the comment.

                                              2. 2

                                                As others have pointed out, the comment is a bit misleading. But if you want to check if both are set, this would work:

                                                if ((mask & (WGPUColorWriteMask_Alpha | WGPUColorWriteMask_Blue)) == (WGPUColorWriteMask_Alpha | WGPUColorWriteMask_Blue))
                                                {
                                                    // alpha and blue are set ...
                                                }
                                                
                                                1. 1

                                                  Wouldn’t the | operator join the two bit masks together to create a new intermediate with both bits set? It’s a matter of A == (B|C) versus (A==B) && (A==C) at this point.

                                                  1. 3

                                                    It does, but you get “or” instead of “and”. If either bit is set, the result is not zero.

                                                    1. 2

                                                      Correction: (A == B) && (A == C) is always false (0) when B != C, due to the transitive property of strict equality. You probably meant (A & B) && (A & C). See my other comment.

                                                  1. 6

                                                    I remember when this came out (first public release seems to be from 2010—now I feel old). I think that docco introduced me (and probably lots of other people) to literate programming. Later I read C Interfaces and Implementations, which is a great book-length example of literate programming.

                                                    There were a lot of versions of docco in other languages. They’re fun to consider in $YOUR_FAVORITE_LANG. (Sadly almost all of the demo pages are now 404. The moment has passed.)

                                                    Looking at the earliest history of docco, I’m surprised to see that it was originally in Ruby. I think I only picked up on it after the rewrite to Coffeescript. Jeremy Ashkenas seems to have had (nearly) unlimited energy and creativity around that time.

                                                    1. 3

                                                      Jeremy Ashkenas seems to have had (nearly) unlimited energy and creativity around that time.

                                                      I’m so glad I’m not the only one who remembers his prolific contributions. What a practitioner!

                                                      1. 3

                                                        Docco inspired me to make one for Racket, but I sort of left it in an unfinished state. I might try to go back and do more work on it now since I’m a bit better with Racket now than I was… 5 years ago? Feels like ages ago now.

                                                      1. 4

                                                        Hopefully working on a Racket-based library for creating bots for ssh-chat.

                                                        1. 3

                                                          One tangential thing that made me pause after reading the update at the end of the article: In a world where we can’t even know for a fact how much memory something actually uses, with measurements differing by a factor of 3x depending on the tool used, why do we care about memory usage at all? The numbers feel superficial to me anyways.

                                                          What matters in the end is perceived performance and responsiveness during a specific workload. And that one’s very subjective too of course (which would also explain the other observation in the article about xfce being considered lightweight compared to Gnome)

                                                          1. 6

                                                            I can’t say it’s a big deal for everyone, but I work in an office with a lot of older hardware. The computer I use every day only has 8GB of RAM, but system performance starts to drop greatly when it reaches around ~7GB of RAM in usage. When I first came to my current job, heavy browser usage led to a lot of blue screens on Windows 10.

                                                            I have been using Linux on this same computer, and minimizing the RAM usage helps my performance greatly. I still see struggles when I get to that 7GB threshold, but it’s usually because I am pushing the computer to the limits with GIMP, and I try to actively close out memory-heavy things when it starts to struggle. Because of i3-wm, I am able to do my job a lot better with less computer worries.

                                                            1. 4

                                                              Not a general-purpose solution, but: if you haven’t tried out zram-swap you might give it a look. Obviously using swap memory isn’t preferable to just keeping everything in RAM, but letting programs page out into compressed swap should still be faster than killing + re-launching them by a good margin.

                                                              Where swap gets you into trouble is when you’re continually context-switching and paging entire processes in and out continuously, trying to keep an unsustainable workload going. If you can avoid that death spiral, a bit of paging here and there really shouldn’t be that big a deal.

                                                              Also: browsers. They’re gonna eat substantially more RAM than GIMP in most day-to-day usage. If you can run any dedicated apps (email client, doc browser, native IRC/XMPP/Matrix client vs. all the browser or Electron apps) that could save you several GBs of working memory. (Even if they use a browser engine under the hood for e.g. rich text rendering, they won’t be loading MBs of background worker JS code, assets, etc. just to idle on a background tab.)

                                                              1. 3

                                                                Another suggestion in the exact opposite direction: earlyoom (https://github.com/rfjakob/earlyoom), which starts killing programs before you start hitting swap. Saved my bacon a few times. You can also configure it to preferentially kill certain processes.

                                                                Re browsers, I have made good experiences with OneTab, which lets you, at the click of a button, convert all your open tabs into links in a list. Satisfies my hoarder instinct without consuming RAM.

                                                          1. 2

                                                            Hopefully developing my personal static site generator in Racket some more.

                                                            1. 11

                                                              So this is not a dig at them, but what are they trying to achieve? Mozilla overall has had recent issues with both user retention and funding. I’m not sure I understand why they’re pushing for an entirely new thing (which I assume cost them some money to acquire K-9) rather than improving the core product situation?

                                                              Guesses: a) those projects are so separate in funding that it’s not an issue at all, or b) they’re thinking of an enterprise client with a paid version?

                                                              1. 9

                                                                These things are indeed separate in funding. Thunderbird is under a whole different entity than, say, Firefox.

                                                                1. 2

                                                                  Aren’t they both funded by the Mozilla Foundation? How are they separate?

                                                                    1. 7

                                                                      @caleb wrote:

                                                                      Aren’t they both funded by the Mozilla Foundation? How are they separate?

                                                                      Your link’s first sentence:

                                                                      As of today, the Thunderbird project will be operating from a new wholly owned subsidiary of the Mozilla Foundation […]

                                                                      I’m confused…

                                                                      1. 2

                                                                        Seems pretty clear by the usage of the word “subsidiary”:

                                                                        Subsidiaries are separate, distinct legal entities for the purposes of taxation, regulation and liability. For this reason, they differ from divisions, which are businesses fully integrated within the main company, and not legally or otherwise distinct from it.[8] In other words, a subsidiary can sue and be sued separately from its parent and its obligations will not normally be the obligations of its parent.

                                                                        The parent and the subsidiary do not necessarily have to operate in the same locations or operate the same businesses. Not only is it possible that they could conceivably be competitors in the marketplace, but such arrangements happen frequently at the end of a hostile takeover or voluntary merger. Also, because a parent company and a subsidiary are separate entities, it is entirely possible for one of them to be involved in legal proceedings, bankruptcy, tax delinquency, indictment or under investigation while the other is not.

                                                                2. 5

                                                                  They’re going to need to work on a lot of things, including a lot of stability improvements as well as better/more standard support for policies and autoconfig/SSO for Thunderbird to really be useful in enterprise.

                                                                  Frankly, Thunderbird is the only real desktop app that I know of that competes with Outlook, and it’s kind of terrible… there really is a market here, and I don’t think that working on an Android client is what they need.

                                                                  1. 2

                                                                    Gnome Evolution works better than Thunderbird in an enterprise. For Thunderbird, IIUC, you need a paid add-on to be able to connect to Office365 Outlook mailboxes (in the past there used to be an EWS plugin that worked with on-prem Outlook, but it doesn’t seem to work with O365), whereas Evolution supports OAuth out of the box.

                                                                    1. 4

                                                                      Thunderbird supports IMAP/SMTP OAuth2 out of the box, which O365 offers if your org has it enabled. What it lacks (and where Evolution has the advantage) is Exchange support.

                                                                      If your org has IMAP/SMTP/ActiveSync enabled, then you can even do calendaring and global address completion using TbSync, which I rely on heavily for CalDAV/CardDAV support anyway (though I hear Thunderbird is looking to make these two an OOB experience as well).

                                                                  2. 3

                                                                    I can’t say for certain, but I think maybe they’re looking to provide a similar desktop experience on mobile. I use Firefox and Thunderbird for work, and it is a curious thing to note that Thunderbird never got any kind of Android version. Mozilla has already released base Firefox and Firefox Focus as Android applications, so it would be cool to see Thunderbird exist in the (F)OSS Android ecosystem.

                                                                    I have been a K-9 user for a number of years, but I do think its UI could use a bit of an update. I have been using it since Android 5.0 and it has had basically the same interface since the initial Material release. This could be an exciting time for K-9 to get a new coat of paint. I will love K-9 Mail even if this doesn’t pan out well.

                                                                    1. 5

                                                                      K-9 mail is almost perfect the way it currently is on Android (at least when it comes to connecting to personal mailboxes). I can’t speak about how well it’d work in an enterprise because I keep work stuff off my phone on purpose.

                                                                      1. 4

                                                                        The biggest functional shortcoming with K-9 is no support for OAuth2 logins, such as GMail and Office365. You can currently use K-9 Mail with an app-specific password in GMail, but Google will be taking that ability away soon. I also have some minor issues with notifications; my home IMAP server supports IDLE, but I still often see notifications being significantly delayed.

                                                                        In terms of interface, there was a Material cleanup a while ago, and the settings got less complicated and cluttered, so it’s very usable and reasonably presentable. But it does look increasingly out of date (though that’s admittedly both subjective and an endless treadmill).

                                                                        1. 3

                                                                          OAuth2 support was merged a few days ago: https://github.com/thundernest/k-9/pull/6082

                                                                          1. 1

                                                                            Ah, yeah, I saw elsewhere that it’s the only priority for the next release.

                                                                  1. 2

                                                                    Eshell was always frustrating, so I tend to use ansi-term when inside Emacs, but even that has issues because of how wonky the terminal is with evil-mode for me (difficulty escaping insert mode, or difficulty coming back into focus in insert mode).

                                                                    Very cool patch and I hope others get use out of it.

                                                                    1. 10

                                                                      Try https://github.com/akermu/emacs-libvterm, it’s much better than ansi-term.

                                                                      1. 4

                                                                        Makes sense, the interop with evil mode is one of the reasons I use eshell over a different terminal emulator.

                                                                        This change also speeds up ansi-term because that also uses a PTY to communicate with emacs.

                                                                        I probably should’ve emphasized that more in the blog post originally, since it’s really improving subprocess output handling across all of emacs.

                                                                      1. 8

                                                                        My skip-level manager at work expressed concern last week that I’m not taking enough vacation time, so now I’m trying (and not succeeding) to figure out some sort of vacation plan.

                                                                        Before January, I had never taken a vacation before in my entire work career. Instead, I just did long weekends for tech/furry conventions here and there. In January I took a week off because of burnout from conducting too many interviews as an introvert. I also took a few days in March to visit my grandma before she died.

                                                                        I’m very bad at this.

                                                                        1. 4

                                                                          Doesn’t help that current epidemiological events make it hard to justify anything other than sitting at home.

                                                                          1. 3

                                                                            “I’m in this picture and I don’t like it.”

                                                                            1. 2

                                                                              We travelled to a wedding weekend before last which, while fun, has led to a few confirmed cases (and more probables that are refusing to test). I’m going a bit stircrazy.

                                                                              1. 1

                                                                                Extremely true.

                                                                                Maybe I’ll spend a week in VRChat.

                                                                                1. 2

                                                                                  If you do that lemme know so I can join at some point and we can talk memes

                                                                              2. 1

                                                                                Take more long weekends, but not for conventions, just for funsies

                                                                                1. 1

                                                                                  Are there any places you would like to visit during a vacation period? Or maybe you can take a break from work to simply relax and catch up on household duties? I’m not very travel oriented but a week break from work to simply relax and hang out with local friends is a perfectly ideal vacation for me.

                                                                                  1. 3

                                                                                    I’m not a traveler, but staycations tend to lead to me working on side projects since my home lab is right there.

                                                                                    1. 4

                                                                                      If travel is not your thing, consider staying at a hotel nearby to avoid “falling into” the home lab. Bring a suitcase of books (or a kindle) and enjoy not doing any chores for a week.

                                                                                      1. 1

                                                                                        Is there perhaps a (potential) side project that would require you to do some research somewhere else? That might be a reason to travel. Maybe the travel itself can be (part of) a side project. For instance, if you like building hardware you could build a gps tracker. And then travel to the north pole to see if it works.

                                                                                        In the past I had great fun traveling to conferences in other countries. Book a few extra days and discover the city, etc.

                                                                                    2. 1

                                                                                      Just wondering what “skip-level” means?

                                                                                      1. 2

                                                                                        Manager’s manager

                                                                                        1. 1

                                                                                          I use first-line manager, second-line manager, etc. Your first-line manager is your direct manager, your second-line manager is their manager, and so on. I picked this up at IBM.

                                                                                          1. 1

                                                                                            Skip-level (or just “skip”) is a term I picked up from an Amazonian a few years back.

                                                                                    1. 6
                                                                                      • Haskell, for one, does not have maps in base either.
                                                                                      • Query-string parameter collections are multimaps. There are at least five sensible ways to implement them for this particular use case.
                                                                                      1. 13

                                                                                        Haskell doesn’t have hashmaps in base, but the tools that Haskell gives you to implement maps are so powerful that it makes them easy. With Hare you have to draw the rest of the owl. Some people do like drawing the rest of the owl, though.

                                                                                        1. 2

                                                                                          :shrugs: I cannot imagine using a language without generics. Even in C one frequently ends up emulating vtables or passing comparators manually and so on. But the arguments in the article were lacking.

                                                                                        2. 3
                                                                                          lookup :: Eq a => a -> [(a, b)] -> Maybe b
                                                                                          

                                                                                          Is in the Prelude I think, and works well until you need performance. I mostly use Haskell for research code (read: not performance intensive) and you’d be surprised how far list-based maps take you.

                                                                                          1. 2

                                                                                            Is that like an alist (association list) in Lisp?

                                                                                            1. 3

                                                                                              Yes, it looks like it.

                                                                                              1. 2

                                                                                                Yes, the tuple (a, b) represents a key paired with a value (like Lisp’s '(a . b)), and the [ ] brackets denote a cons list. The alist is linear, so lookups, removals, and appends are still very much O(n).

                                                                                                Data.Map implements a map interface backed by a balanced binary tree (not a hash map, but it fills the same role), and is simple to implement on your own (if you like writing trees in Haskell, that is).

                                                                                          1. 5

                                                                                            Nice to see another contender in the build-script arena, and very happy it’s part of Racket. I use Racket every day for work and love seeing what new developments there are.

                                                                                            I don’t have enough experience in building Racket from scratch or cross-platform compiling, but it seems like a lot of work went into making Zuo very minimal and easy to embed anywhere. Even toying around with it a bit, it doesn’t come with an interactive interpreter, and it seems to have the basic primitives needed to build out some applications (and it seems like the only numbers it supports are 64-bit signed integers).

                                                                                            I think it’s interesting enough to build some very tiny scripts with and explore, but curious to see if it simplifies the build process for projects. I might try incorporating a Zuo file into a project and see how it goes.

                                                                                            1. 4

                                                                                              Would love to know what you use Racket at work for. I knock off small utility scripts for my own use at $DAYJOB but nothing beyond that.

                                                                                              1. 3

                                                                                                Oops, I guess my reply never went through yesterday. I’m a solo monkey coder at my job and we don’t have a lot of automated tooling in places where we could, so I had the freedom to use whatever language I want. I picked Racket because I wanted something that works well out of the box, has a wide standard library, makes it easy to write functional code, and even lets me create miniature DSLs where it makes sense.

                                                                                                I deal a lot with spreadsheet/CSV processing, so I use some custom Racket code to define and read sheets of different dimensions and to generate printable HTML reports for the upper-management folk.

                                                                                            1. 5

                                                                                              Most of my time is now spent using Racket in places where I could use a shell script. It’s easier to write a Racket program that invokes other programs and work with their error codes and re-direct their output to the right places. Truly a joy for me, personally, as I do like writing Lisp.

                                                                                              1. 5

                                                                                                Could you provide a few idiomatic examples of replacements for typical shell-script pipelines featuring grep, seq, sort, etc.?

                                                                                                1. 3

                                                                                                  For the most part, the Racket standard library covers those kinds of jobs without needing sub-processes.

                                                                                                  • For grep we have regexp objects, used with regexp-match or regexp-match? to match across strings or to filter lists.
                                                                                                  • seq can be mimicked with the range function, combined with iteration forms like for.
                                                                                                  • sort is done with the appropriately named Racket function sort, supplying the comparison function and input list.

                                                                                                  If you want to invoke programs as sub-processes, then the output of a subprocess call can only be sent to a file-stream port like stdout or a plain file. Invoking multiple sub-processes one after another and continuously passing their outputs to one another involves a little bit of trickery, which might be a bit complex to cover in a comment, but it is doable. The gist is to write tasks using the Racket standard library, then reach for subprocess when you need something not covered by it.

                                                                                                  ; display all files in pwd
                                                                                                  (for-each displayln (map path->string (directory-list)))
                                                                                                  
                                                                                                  ; display all files sorted
                                                                                                  (for-each displayln
                                                                                                    (sort (map path->string (directory-list)) string<?))
                                                                                                  
                                                                                                  ; regexp match over a list of sorted files
                                                                                                  (for-each displayln
                                                                                                    (filter (λ (fname) (regexp-match? #rx".*png" fname))
                                                                                                             (sort (map path->string (directory-list)) string<?)))
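
The pipe trickery mentioned above can be sketched roughly like this: passing #f for a process’s stdout makes subprocess hand back a pipe port, which can then be fed directly to the next process as its stdin. This is only a sketch; the helper name pipe-two is my own, not part of the standard library, and it assumes both executables are on the PATH.

```racket
#lang racket/base

;; Hypothetical helper: run bin1, feed its stdout into bin2's stdin,
;; and let bin2 write to the current output port.
(define (pipe-two bin1 args1 bin2 args2)
  ;; #f for stdout asks subprocess to create a pipe port we can read from.
  (define-values (p1 p1-out p1-in p1-err)
    (apply subprocess #f #f 'stdout
           (find-executable-path bin1) args1))
  (close-output-port p1-in) ; nothing to feed the first process
  ;; The pipe port from p1 becomes p2's stdin.
  (define-values (p2 p2-out p2-in p2-err)
    (apply subprocess (current-output-port) p1-out 'stdout
           (find-executable-path bin2) args2))
  (subprocess-wait p1)
  (subprocess-wait p2)
  (close-input-port p1-out)
  (subprocess-status p2)) ; exit code of the downstream process

;; Roughly equivalent to `seq 1 5 | sort -r` in a shell:
(pipe-two "seq" '("1" "5") "sort" '("-r"))
```

The same pattern extends to longer chains: each stage’s pipe port becomes the next stage’s stdin argument.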
                                                                                                  
                                                                                                  1. 2

                                                                                                    As posted in a sibling message, it’s much easier to use built-in functions than to shell out and call another program. Personally, I find Racket more convenient for writing scripts that need to work in parallel. For example, a script gets the load average from several machines in parallel over ssh.

                                                                                                    https://gist.github.com/6c7ab225610bc50a3bb4be35f8e46f18

                                                                                                  2. 1

                                                                                                    Would also love to see examples.

                                                                                                    1. 2

                                                                                                      Best way I can quickly sum it up is clever use of the function subprocess in Racket.

                                                                                                      (define (start-and-run bin . args)
                                                                                                        (define-values (s i o e)
                                                                                                          (apply subprocess
                                                                                                            `(,(current-output-port) ,(current-input-port) stdout
                                                                                                          ,(find-executable-path bin)
                                                                                                              ,@args)))
                                                                                                        (subprocess-wait s))
                                                                                                      
                                                                                                      (start-and-run "seq" "1" "10")
                                                                                                      

                                                                                                      This outputs the seq command to stdout, and allows for arbitrary commands so you can do zero-arg sub-processes or however many you need/like. The current-output-port and current-input-port calls are parameters that you can adjust by using a parameterize block to control the input/output from the exterior.

                                                                                                      The output port must be a file-stream port; it cannot be an output string like with call-with-output-string. So output either goes straight to stdout, or you can use call-with-output-file to control the current-output-port parameter and store the output wherever you please.
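
For instance, here is a minimal sketch of storing a command’s output in a file; call-with-output-file hands us exactly the kind of file-stream port that subprocess requires. The helper name run-to-file is my own, and it assumes seq is on the PATH.

```racket
#lang racket/base

;; Hypothetical helper: run bin with args, writing its stdout (and stderr)
;; into the file at path. Returns the process's exit code.
(define (run-to-file path bin . args)
  (call-with-output-file path
    #:exists 'replace
    (lambda (out)
      (define-values (s proc-out proc-in proc-err)
        (apply subprocess out #f 'stdout
               (find-executable-path bin) args))
      (close-output-port proc-in) ; nothing to feed the process
      (subprocess-wait s)
      (subprocess-status s))))

;; Writes the numbers 1 through 5, one per line, into /tmp/seq-output.txt
(run-to-file "/tmp/seq-output.txt" "seq" "1" "5")
```

You could equally wrap an existing helper in a parameterize block that rebinds current-output-port to a file port; either way, the destination has to be a file-stream port.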