Threads for mullr

    1. 5

      I think this is a great idea, but I am anticipating folks explaining why it isn’t.

      1. 22

        The main argument against is that even if you assume good intentions, it won’t be as close to production as a hosted CI (e.g. database version, OS type and version, etc.).

        Lots of developers develop on macOS and deploy on Linux, and there are tons of subtle differences between the two systems, such as filesystem case sensitivity and default sort ordering, just to name two.

        To me the point of CI isn’t to ensure devs ran the test suite before merging. It’s to provide an environment that will catch as many things as possible that a local run wouldn’t be able to catch.

        1. 6

          To me the point of CI isn’t to ensure devs ran the test suite before merging.

          I’m basically repeating my other comment but I’m amped up about how much I dislike this idea, probably because it would tank my productivity, and this was too good an example to pass up: the point of CI isn’t (just) to ensure I ran the test suite before merging - although that’s part of it, because what if I forgot? The bigger point, though, is to run the test suite so that I don’t have to.

          I have a very, very low threshold for what’s acceptably fast for a test suite. Probably 5-10 seconds or less. If it’s slower than that, I’m simply not going to run the entire thing locally, basically ever. I’m gonna run the tests I care about, and then I’m going to push my changes and let CI either trigger auto-merge, or tell me if there’s other tests I should have cared about (oops!). In the meantime, I’m fully context switched away not even thinking about that PR, because the work is being done for me.

          1. 4

            You’re definitely correct here but I think there are plenty of applications where you can like… just trust the intersection between app and os/arch is gonna work.

            But now that I think about it, this is such a GH-bound project and like… any such app small enough in scope or value for this to be worth using can just use the free Actions minutes. Doubt they’d go over.

            1. 6

              any such app small enough in scope or value for this to be worth using can just use the free Actions minutes.

              Yes, that’s the biggest thing that doesn’t make sense to me.

              I get the argument that hosted runners are quite weak compared to many developer machines, but if your test suite is small enough to be run on a single machine, it can probably run about as fast if you parallelize your CI just a tiny bit.

            2. 2

              I wonder if those differences are diminished if everything runs on Docker

              1. 5

                With a fully containerized dev environment, yes, that pretty much abolishes the divergence in software configuration.

                But there are more concerns than just that. Does your app rely on caches? Dependencies? Were they in a clean state?

                I know it’s a bit of an extreme example, but I spend a lot of time using bundle open and editing my gems to debug stuff, and it’s not rare that I forget to run gem pristine after an investigation.

                This can lead me to have tests that pass on my machine but will never work elsewhere. There are millions of scenarios like this one.

                1. 3

                  I was once rejected from a job (partly) because the Dockerfile I wrote for my code assignment didn’t build on the assessor’s Apple Silicon Mac. I had developed and tested on my x86-64 Linux device. Considering how much server software is built with the same pair of configurations just with the roles switched around, I’d say they aren’t diminished enough.

                  1. 1

                    Was just about to point this out. I’ve seen a lot of bugs in aarch64 Linux software that don’t exist in x86-64 Linux software. You can run a container built for a non-native architecture through Docker’s compatibility layer, but it’s a pretty noticeable performance hit.

              2. 13

                One of the things I like about having CI is that it forces you to declare your dev environment programmatically. It means you avoid the famous “works on my machine” issue, because if tests work on your machine but not in CI, something is missing.

                There are of course ways to avoid this issue, maybe if they enforced that all dev tests also run in a controlled environment (either via Docker or maybe something like testcontainers), but it needs more discipline.

                1. 2

                  This is by far the biggest plus side to CI. Missing external dependencies have bitten me before, but without CI, they’d bite me during deploy, rather than as a failed CI run. I’ve also run into issues specifically with native dependencies on Node, where it’d fetch the correct native dependency on my local machine, but fail to fetch it on CI, which likely means it would’ve failed in prod.

                2. 4

                  Here’s one: if you forget to check in a file, this won’t catch it.

                  1. 3

                    It checks if the repo is not dirty, so it shouldn’t.

                    1. 1

                      This is something “local CI” can check for. I’ve wanted this, so I added it to my build server tool (which normally runs on a remote machine) called ding. I’ll run something like “ding build make build”, where “ding build” is the CI command and “make build” is what it runs. It clones the current git repo into a temporary directory and runs “make build” in it, sandboxed with bubblewrap.

                      The point still stands that you can forget to run the local CI.

                    2. 1

                      What’s to stop me from lying and making the gh api calls manually?

                    3. 2

                      Las Vegas, Nevada and Las Vegas, New Mexico are not the same place. :)

                      1. 1

                        Ditto for Redmond, OR and Redmond, WA

                      2. 1

                        Keeb.io Sinc: https://keeb.io/collections/sinc

                        What I really want is a standard tenkeyless layout, just split. Bonus points for some additional thumb keys. Nobody seems to make such a thing, though.

                        1. 1

                          I only knew the old image-based skins (where the layout was entirely fixed). Given Winamp 5 is from 2018, I’m actually kinda surprised it’s not more web-based (even if not using an actual browser engine).

                          1. 2

                            The original release of v5 was in 2003.

                            1. 1

                              Oh, then I misread that, and it makes sense of course.

                          2. 16

                            I sense some over-optimization going on here. Perhaps slightly OT, but then, not having to write unsafe code at all is certainly one solution to the issues raised here, right?

                            No allocations under steady-state use. Allocations are a source of contention, failure, and overhead, especially when using slow system allocators.

                            In a program where the items being queued are files, and the processing involves generating ML embeddings, memory allocation in the channel is going to be lost in the noise.

                            And the slowness of allocators is pretty OS dependent — from what I’ve heard here, Linux (or glibc) has a particularly slow malloc implementation.

                            The channel cannot know how many tasks are blocked, so we can’t use a fixed-size array. Using a Vec means we allocate memory every time we queue a waker.

                            Unless I’m missing something, this is untrue. A Vec only allocates memory when it has to grow its capacity. Once the queue warms up, there should be no more allocations.
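
                            A quick way to convince yourself of that (plain std, nothing channel-specific):

                            fn main() {
                                let mut queue: Vec<u64> = Vec::new();
                                for item in 0..1_000 {
                                    queue.push(item); // allocates only while capacity grows
                                }
                                let warmed = queue.capacity();
                                queue.clear(); // length drops to zero, capacity is retained
                                assert_eq!(queue.capacity(), warmed); // reuse without reallocating
                            }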

                            1. 8

                              Unless I’m missing something, this is untrue. A Vec only allocates memory when it has to grow its capacity. Once the queue warms up, there should be no more allocations.

                              That’s my understanding as well. It should happen very quickly for most systems, and remain stable for the lifetime of the channel. That fact removes the need for all the other complexity.

                              1. 8

                                It’s also pretty easy to use jemalloc in Rust, just a few lines of boilerplate and you’re all set.
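
                                For reference, it’s roughly this, using the tikv-jemallocator crate (version number approximate):

                                // Cargo.toml: tikv-jemallocator = "0.6" (or whatever is current)
                                use tikv_jemallocator::Jemalloc;

                                #[global_allocator]
                                static GLOBAL: Jemalloc = Jemalloc;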

                                1. 9

                                  We also provide Rust bindings in snmalloc. Rust tends to encourage the kind of thing that we optimise for (producer-consumer things with allocation in one thread and deallocation in another are a pathological case for thread-caching allocators like jemalloc and are very fast in message-passing allocators like snmalloc).

                                  1. 2

                                    We got big gains switching to jemalloc at $JOB, I’ll have to give snmalloc a try when I have some spare time. I’m not sure how much cross-thread message passing we do in practice. I think many of our messages go between Tokio tasks on the same thread, but it’s hard to know without measuring.

                                    1. 3

                                      For single-threaded apps, we typically see a 1-3% speedup relative to jemalloc. For things with high inter-thread communication, we’ve seen 20-100%.

                                    2. 1

                                      Just tried to use snmalloc in Rust. Let me describe two problems. I say all this to help you make snmalloc better.

                                      • First: my first instinct was to cargo add snmalloc. Unfortunately, the crate https://crates.io/crates/snmalloc seems to be empty, so I suggest reaching out to its owner and asking them to transfer the name.
                                      • Second (more important): building a Rust project with an snmalloc dependency needs a C++ compiler (as opposed to a C compiler). This makes building slightly harder. I have no problem installing a C++ compiler, but I don’t want to complicate the build instructions for users of my code. Rust has a great advantage: nearly all projects can be built with just cargo build. Yet if a project depends on snmalloc, you have to apt-get install g++ (or similar) first. Also, snmalloc doesn’t give me any significant benefits over jemalloc or mimalloc, so I will just use jemalloc or mimalloc for my projects. A simpler build is more important than hypothetical speed advantages I don’t care about in my projects. Yes, jemalloc and mimalloc depend on a C compiler. So what? The very popular crate reqwest (and many other popular crates) depends on a C compiler in its default settings, too. So most Rust developers already have a C compiler installed, but not necessarily a C++ compiler.

                                      So my personal conclusion is this: I will use the default (system) allocator in all my Rust projects. If it performs badly, I will switch to jemalloc or mimalloc, but never to snmalloc, because it doesn’t give any real advantage over jemalloc or mimalloc while complicating the build, which is important to me (because I care about my users). If, say, snmalloc made my whole program (which consists of more than just allocations) two times faster than jemalloc or mimalloc, then that would possibly be enough reason to switch.

                                      None of this is meant to insult you. I’m just trying to deliver constructive criticism so you can make snmalloc better.

                                      1. 1

                                        If you actually want to improve it, the right place to post this is the issue tracker. The Rust integration was contributed by someone who uses it, so they’ll see the friction other people hit and maybe fix it.

                                        We obviously can’t do anything about the C++ requirement, but given that every platform supported by Rust has a default C compiler that is also a C++ compiler, I have no idea why that would be a problem.

                                  2. 5

                                    Yeah, the original version of the crate did not have any unsafe code. But I previously worked on projects where every malloc() hurt, even with jemalloc, and sometimes you have to use the system allocator. (Certainly library crates cannot dictate choice of allocator to downstream consumers.)

                                    You’re slightly misinterpreting the use case: some of the items being queued are inodes, likely served from kernel cache, and straight-line performance matters sometimes. (Of course, the second you have to hash a large file, then yes, that’s going to dominate.)

                                    I do have benchmarks, but I did not think to include them. Someone on Reddit asked the same question, so I’ll see about publishing something.

                                    On amortized allocation and Vec: if you sit down and try, it’s actually hard to amortize the allocation cost in this particular use case. At the very least, you have to introduce a second mutex acquisition to store the backbuffer somewhere. More details here: https://news.ycombinator.com/item?id=41947924
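
                                    Roughly the shape I mean, as a sketch with made-up names: swap the waker list out under the lock, wake outside it, and park the still-allocated buffer behind that second mutex.

                                    use std::mem;
                                    use std::sync::Mutex;
                                    use std::task::Waker;

                                    struct WakeList {
                                        wakers: Mutex<Vec<Waker>>,
                                        spare: Mutex<Vec<Waker>>, // the extra mutex acquisition
                                    }

                                    impl WakeList {
                                        fn wake_all(&self) {
                                            let spare = mem::take(&mut *self.spare.lock().unwrap());
                                            // swap in the empty but pre-allocated spare buffer
                                            let mut pending =
                                                mem::replace(&mut *self.wakers.lock().unwrap(), spare);
                                            for waker in pending.drain(..) {
                                                waker.wake(); // no lock held while waking
                                            }
                                            // park the allocation so the next wake_all reuses it
                                            *self.spare.lock().unwrap() = pending;
                                        }
                                    }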

                                    That said, yes, I could have ignored the small allocation cost, but it was worth the exercise to learn the rules, and if Tokio and LILOS can do it, so can I!

                                    1. 5

                                      What is there to amortize? A channel will quickly grow to its steady-state size and usually stay there. And that size is much, much less than the number of items that go through it. So you might have a handful of allocations before the Vec grows to 100 items or whatever, then no more. If that concerns you, you can initialize the Vec with that size.

                                      1. 4

                                        You just described amortization. It would amortize to 0 allocations per task sleep.

                                        Either way, I described why it can’t easily work that way in this use case without an extra lock or some global lock-free pool. (Which I did implement! https://docs.rs/wakerpool/latest/wakerpool/)

                                    2. 4

                                      And the slowness of allocators is pretty OS dependent — from what I’ve heard here, Linux (or glibc) has a particularly slow malloc implementation.

                                      And it’s pretty easy to switch to jemalloc, mimalloc, snmalloc, or probably any number of other allocators, if the concern is that the OS-supplied allocator will be too slow.

                                    3. 37

                                      I have found that nearly everything I have ever studied has, in some way, been of use in my work. Most recently, I have been very glad that I got an amateur radio license when I was in middle school. Being conversant in basic RF and electronics has been very useful. Not something I would expect for a software engineering job, but I’m glad I did it.

                                      To me, studying fundamentals is less a question of “will I use it”, and more a question of “if I learn it, what new doors will it open for me?”

                                      1. 15

                                        That can even go to extreme values of “studied”. About a decade ago a buddy of mine wanted to go SCUBA diving in Honduras and invited me along. I’d never been diving before so did my Open Water and Adv. Open Water training down there. Had a blast!

                                        A couple of months later I’m starting an engagement with a company that did some pretty specialized high accuracy tank volume measurement. Nominally I was there to try to help with software performance problems and not the sensors but I asked them if they could give me a run down of how the sensors worked, just for context while working on the software. One of their engineers offers to draw it out for me but warns me that we’re going to be getting into gas constant and pressures and things pretty quickly. He draws it out, we’re having a good discussion, and at the end he asks “so… does it make sense?”

                                        “I think so. I think this actually operates pretty similarly to a SCUBA regulator? Is that right?”

                                        “Uhhhh… how does a SCUBA regulator work?”

                                        So now it’s my turn drawing and explaining. He sits there and stares at me for a few seconds when I’m done. “Yup… exactly like a SCUBA regulator. And man I wish I had known about those and how they worked when I started here…”

                                        1. 7

                                          I subscribe to this approach as well. For whatever reason, I’ve always loved trivia and other seemingly useless knowledge, and I have gotten into weird interests, reading, and collecting, and otherwise have just pursued knowledge for knowledge’s sake. I follow the learning, because learning begets understanding which begets humility and empathy. It’s a surefire way to avoid Dunning-Kruger, as the more you learn about anything, the more you can be at first surprised, but later expectant of, things always being more complicated than they seem at first glance.

                                          I’d argue that humility about complexity is a top skill in our field. My approach with projects, which oftentimes takes convincing with clients, is to get something on the board, however flawed and simplistic, so that we can get to the real meat of the questions that we don’t yet know we should be asking. The tendency is to plan for every what-if scenario we might encounter, yet with just a bit more understanding, we can start to realize that there are scenarios we can’t even imagine yet.

                                        2. 12

                                          To me, studying fundamentals is less a question of “will I use it”, and more a question of “if I learn it, what new doors will it open for me?”

                                          Such a lovely way to put it. Wholeheartedly agree.

                                        3. 1

                                          I have always been curious why Cisco developed Chez in the first place and how they used it. Asking around, I’ve never got a clear answer.

                                          1. 3

                                            The history of Chez Scheme itself is very thoroughly documented: https://legacy.cs.indiana.edu/~dyb/pubs/hocs.pdf

                                            If other portions of the Internet are to be believed, Dybvig joined Cisco in 2012, and they acquired Chez as part of the deal. The specific reasons for this transaction are unclear, but some plausible sounding comments suggest that Cisco was using Chez Scheme inside some routers, and they wanted more control over their destiny or to reduce their ongoing license fees.

                                          2. 9

                                            I’ve been using Zed on Linux for several weeks, and I’m very happy with it. Its keybinding system is flexible enough to accommodate my very idiosyncratic muscle memory. And it’s SO FAST. It works very well with rust-analyzer, in addition to giving a pretty good experience on random other stuff (json, yaml, python) without having to do much of anything.

                                            1. 7

                                              I am still looking for a formalism that allows me to write

                                              Expr = 'number' 
                                                  | '(' Expr ')'
                                                  | Expr ('+' | '*') Expr
                                              
                                              '*' binds tighter than '+'
                                              

                                              And then tells me if my precedence declarations don’t make sense.

                                              I would say that CFG straight out doesn’t work for infix expressions:

                                              • unconstrained CFG is not useful, because who knows whether it is unambiguous
                                              • if you refactor a natural CFG into LL or LR form, it loses most of its readability and declarativeness!
                                              1. 7

                                                (As I am sure you are aware…) The classic textbook answer is LALR with operator declarations, but the tooling tends to suck: the table construction algorithm isn’t good at explaining where shift/reduce and reduce/reduce conflicts come from, and being bottom up the recognizer isn’t good at using context to help with syntax errors.

                                                The best practical advice I have seen is to use an LALR parser generator to debug the grammar and to cross-check a production top-down parser. (differential testing ftw!) Dunno if there is tooling that can use the same source for both.

                                                1. 9

                                                  My answer to this is usually to separate the parse tree and the AST. The parse tree captures the structure, so if you write a + b * c, you get the same shape of parse as if you write a * b + c, but then you apply precedence as a tree transform.

                                                  My better answer to this is: don’t. Operator precedence is a common source of bugs. Pick two random operators from the 20+ in the C operator precedence table and ask a random C programmer what their precedence is and they will probably be wrong. This gets even worse with operator overloading where the ‘natural’ precedence for mathematics or the domain of the overloads may contradict the C rules. In Verona, it is a syntax error to chain different operators. You can do a + b + c and that’s fine, but a + b * c is a syntax error and requires parentheses. I’ve been using that as a style guide rule for almost 20 years and people often complain briefly when they start writing them but then rapidly discover how much lower the cognitive load is when reading someone else’s code and the precedence is explicit with brackets.
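
                                                  For the tree-transform version, the second pass can be an ordinary precedence fold over the uniform parse; a rough sketch with a hypothetical node type:

                                                  #[derive(Debug)]
                                                  enum Node {
                                                      Num(i64),
                                                      Bin(char, Box<Node>, Box<Node>),
                                                  }

                                                  fn prec(op: char) -> u8 {
                                                      match op {
                                                          '*' | '/' => 2,
                                                          _ => 1, // '+' and '-'
                                                      }
                                                  }

                                                  // fold a flat `operand (op operand)*` parse into a tree, reducing
                                                  // whenever the stacked operator binds at least as tightly (left-assoc)
                                                  fn apply_precedence(first: Node, rest: Vec<(char, Node)>) -> Node {
                                                      fn reduce(operands: &mut Vec<Node>, op: char) {
                                                          let r = operands.pop().unwrap();
                                                          let l = operands.pop().unwrap();
                                                          operands.push(Node::Bin(op, Box::new(l), Box::new(r)));
                                                      }
                                                      let mut operands = vec![first];
                                                      let mut ops: Vec<char> = Vec::new();
                                                      for (op, rhs) in rest {
                                                          while ops.last().map_or(false, |&top| prec(top) >= prec(op)) {
                                                              let top = ops.pop().unwrap();
                                                              reduce(&mut operands, top);
                                                          }
                                                          ops.push(op);
                                                          operands.push(rhs);
                                                      }
                                                      while let Some(op) = ops.pop() {
                                                          reduce(&mut operands, op);
                                                      }
                                                      operands.pop().unwrap()
                                                  }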

                                                  1. 6

                                                    I first saw this with Pony. I wish it was more popular, I think I will be enforcing it in the lil lang I’m working on.

                                                2. 4

                                                  A grammar-like construct combined with some sort of precedence does seem like a kind of sweet spot. I have a nom parser that uses a Pratt parser for local operator precedence (with https://github.com/rust-bakery/nom/pull/1362), and it works pretty well.

                                                  I wonder how this kind of hybrid would impact the performance of GLL / GLR. It would certainly reduce the number of required stack frames when parsing expressions. It’s not clear if that would have a substantial effect though, in the context of a larger parser.
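
                                                  The Pratt core is small enough to sketch without nom; binding powers are what make '*' bind tighter than '+' here (token and AST types are made up):

                                                  #[derive(Debug)]
                                                  enum Expr {
                                                      Num(i64),
                                                      Bin(char, Box<Expr>, Box<Expr>),
                                                  }

                                                  // left/right binding powers; higher binds tighter
                                                  fn infix_binding_power(op: char) -> Option<(u8, u8)> {
                                                      match op {
                                                          '+' | '-' => Some((1, 2)),
                                                          '*' | '/' => Some((3, 4)),
                                                          _ => None,
                                                      }
                                                  }

                                                  // parse_expr(&mut "1+2*3".chars().peekable(), 0) gives 1 + (2 * 3)
                                                  fn parse_expr<I>(tokens: &mut std::iter::Peekable<I>, min_bp: u8) -> Expr
                                                  where
                                                      I: Iterator<Item = char>,
                                                  {
                                                      // single-digit numbers only, for brevity
                                                      let mut lhs = match tokens.next() {
                                                          Some(c @ '0'..='9') => Expr::Num(i64::from(c as u8 - b'0')),
                                                          t => panic!("unexpected token: {:?}", t),
                                                      };
                                                      while let Some(&op) = tokens.peek() {
                                                          let Some((l_bp, r_bp)) = infix_binding_power(op) else { break };
                                                          if l_bp < min_bp {
                                                              break;
                                                          }
                                                          tokens.next();
                                                          let rhs = parse_expr(tokens, r_bp);
                                                          lhs = Expr::Bin(op, Box::new(lhs), Box::new(rhs));
                                                      }
                                                      lhs
                                                  }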

                                                  1. 3

                                                    I think it’s more of an issue with C precedence, which has like 17 levels (and POSIX shell copies those rules exactly)

                                                    In that case using the grammar to express precedence is very verbose

                                                    Python has fewer levels of precedence (and YSH copies those rules) … It’s not exactly pretty, but you probably only write it once, and then you don’t need any special features in the grammar. I guess technically it’s a little less efficient, but it’s probably not slow relative to other things

                                                    https://docs.python.org/3/reference/grammar.html

                                                    # Bitwise operators
                                                    # -----------------
                                                    
                                                    bitwise_or:
                                                        | bitwise_or '|' bitwise_xor 
                                                        | bitwise_xor
                                                    
                                                    bitwise_xor:
                                                        | bitwise_xor '^' bitwise_and 
                                                        | bitwise_and
                                                    
                                                    bitwise_and:
                                                        | bitwise_and '&' shift_expr 
                                                        | shift_expr
                                                    
                                                    shift_expr:
                                                        | shift_expr '<<' sum 
                                                        | shift_expr '>>' sum 
                                                        | sum
                                                    
                                                    # Arithmetic operators
                                                    # --------------------
                                                    
                                                    sum:
                                                        | sum '+' term 
                                                        | sum '-' term 
                                                        | term
                                                    
                                                    term:
                                                        | term '*' factor 
                                                        | term '/' factor 
                                                        | term '//' factor 
                                                        | term '%' factor 
                                                        | term '@' factor 
                                                        | factor
                                                    
                                                    1. 3

                                                      Not quite a formalism, but pyparsing (a PEG parser) will generate the right things for you in practice with the infix_notation() helper, which defines exactly the thing you’re after: operator, left/right associativity, and precedence. It’s a nice solution.

                                                      1. 1

                                                        I wrote something like that, based on operator precedence grammars: https://lib.rs/crates/panfix

                                                        Calculator example that shows precedence: https://github.com/justinpombrio/panfix/blob/2a6d5f75e0aabfb69725e9621bf400fd727d5768/examples/calc.rs

                                                        Json example that shows it’s not limited to operators: https://github.com/justinpombrio/panfix/blob/2a6d5f75e0aabfb69725e9621bf400fd727d5768/examples/json.rs

                                                        It’s an open question how expressive this approach is, as it’s based on neither CFGs nor PEGs.

                                                      2. 4

                                                        I didn’t know about this until recently, but this must be a reference to https://www-cs-faculty.stanford.edu/~knuth/316.html

                                                        1. 2

                                                          Yup. There’s also this one that’s based on a series of lectures about writing 3:16: https://www-cs-faculty.stanford.edu/~knuth/things.html

                                                          1. 2

                                                            Yep.

                                                            My three main hobbies are computer programming, studying western religion (with a strong focus on Second Temple Judaism and early Christianity though I am neither Jewish nor Christian), and language learning.

                                                            Knuth’s “3:16” and his “Things a Computer Scientist Rarely Talks About” both occupy an interesting intersection of my hobbies and I have had copies of both for many years.

                                                            1. 6

                                                              I don’t want to get too excited, but this is the product I’ve wanted for the longest time. I’m really curious about the technical details here - it almost sounds like they’ve created a custom container runtime that simulates an OS. Other than that, I can’t see how they could ensure determinism in any meaningful way. Or maybe it’s using hermit?

                                                              Anyway, if this was coming from anyone other than the FoundationDB team, I’d be very very skeptical. But this sounds like the real deal.

                                                              1. 9

                                                                I hadn’t read about FoundationDB’s testing process before now, but “We never got tested by Jepsen because Kyle thought our tests were better” is one hell of an endorsement.

                                                                1. 3

                                                                  We thought about this and decided to just go all out and write a hypervisor which emulates a deterministic computer. Consequently, we can force anything inside it to be deterministic.

                                                                  It appears that they also have language level integrations that give them the ability to affect thread/task scheduling as well, at least in some cases. I imagine they could modify the scheduler in the machine hosting the test workload to do the same, for systems which use OS threads.

                                                                  1. 2

                                                                    This is an incredible idea.

                                                                2. 5

                                                                  I highly recommend https://github.com/KDAB/hotspot for this purpose. It gives you a flamegraph, plus all the usual profiler gui data diving tools. It’s fast and reliable and easy to use.

                                                                  1. 12

                                                                    We fork rust crates pretty regularly at work, to fix bugs or add features we need. Our general practice is to stay on upstream as much as possible, submitting an upstream patch immediately. That actually works out about half the time. So we have a small handful of forks.

                                                                    We generally only need to touch these things in order to keep dependencies up to date. This is thanks to Rust having a good compatibility policy, for the most part. Most crates do a pretty good job of keeping api breakage to a minimum as well.

                                                                    …except for one. Not going to name it, but one of our key dependencies seems to take delight in substantially breaking the api at every release. We’ve had that one forked for maybe a year, since they don’t seem to be in a hurry to take the feature patch we submitted. And frankly it’s been less work to maintain our fork than to try to keep up with the real library. We just update its deps when we need to and go about our business. We don’t keep it up to date with upstream, since we don’t really have a reason.

                                                                    1. 4

                                                                      This is how it’s gone for me with forking crates to add patches at work, but the last fork I was maintaining has caught up to my work, so we’re back on vanilla crates.io now, except for the purely internal stuff on the private crate registry.

                                                                      I don’t think forking is a big deal if it’s something small that you expect to be able to drop on the maintainers as a PR and have upstreamed eventually. If it’s a bigger change on a big project you may want to get more involved in the project in public so that you’re not laboring in private on things that may not be upstreamable. That’s pretty rare though, I think programmers are excessively afraid of fixing any problems in their libraries and tend toward the extreme of treating open source libraries & frameworks like frozen black box products.

                                                                    2. 37

                                                                      I’ve noticed a bunch of ad-like stuff creeping into YouTube premium as well. Callouts to join premium services, menus to buy random shit, stuff like that. I can’t turn any of it off. It sure feels like they’re on the “monetize the userbase” plan.

                                                                      1. 19

                                                                        This is one of the reasons I stopped paying for YouTube Premium. It was great that videos started playing right away, but it seems that every other ad surface still existed and still had ads, they were just for YouTube features.

                                                                        Nebula is so nice in comparison. I never feel like I’m being sold anything. I can just show up, watch what I want, and leave.

                                                                        1. 10

                                                                          I basically only use YouTube (with premium subscription) through their iOS and Apple TV/“tvOS” apps, and I don’t know what kind of frequency you’re getting. Maybe once a month if I open the app on my phone I see a “want to beta-test this feature?” thing and dismiss it and move on, but I wouldn’t call that an “ad”.

                                                                          1. 6

                                                                            I use it on my iPad nearly exclusively. Things I see that I consider to be ads:

                                                                            • Some videos have an ad at the upper right-hand corner of the screen, inviting me to purchase the entire season of whatever it is, or to subscribe to some streaming service.
                                                                            • On the “Home” tab, many videos now have a “Products” menu, where I can see whatever they’re hawking in the video and immediately purchase it. I’m not sure why I would want to do that BEFORE watching the video. It also disrupts the layout of that page. This is the one that really upsets me, because it’s hardest to ignore. My only choice for making it go away seems to be completely blocking the issuing channel. (“Don’t recommend channel”)
                                                                            • Sometimes, when you watch a video with “Products”, they will now show up in the right-hand recommendations list. It seems to be the same list, just expanded.
                                                                            • Sometimes, when I launch the app, I get a full-screen dialog telling me about some promotion they’re doing.

                                                                            These would all be considered extremely minor for a free service. But since I’m paying for the thing, they feel like betrayals.

                                                                            1. 1

                                                                              I watch YouTube with “annotations” turned off for all videos, so I don’t see whatever links the video’s creator has superimposed on it. The “products” stuff I thought was an outgrowth of the mandatory labeling of sponsored content. But I don’t really mind it.

                                                                          2. 2

                                                                            Nebula is so nice in comparison. I never feel like I’m being sold anything. I can just show up, watch what I want, and leave.

                                                                            For now. Don’t forget that youtube started off this way too.

                                                                            1. 6

                                                                              For now. Don’t forget that youtube started off this way too.

                                                                              There’s no free tier with Nebula, at all, so every view is paid for by the users. Seems to work well for them. (Got the lifetime plan there, just for fun)

                                                                              1. 3

                                                                                … just like youtube premium used to be ;)

                                                                        2. 8

                                                                          IME, for server-style applications at least, it’s difficult to avoid async. All the libraries you want to use seem to depend on it. We went way the hell out of our way to segregate the async libraries we just had to bring in, and use plain threads plus channels for everything else, but in the end we found that we were spending a bunch of complexity on just gluing the two worlds together. So we just converted everything over to async, and think back fondly to when we used to have useful stack traces.

                                                                          1. 1

                                                                            Now you’ve got me thinking of some automated way to generate a wrapper for a given async library, which presents a conventional synchronous interface. It is not clear to me if that is possible.

                                                                            I’m also a threads and channels kind of guy, not that I’ve had to deal with that lately.

                                                                            1. 2

                                                                              wrapper for a given async library, which presents a conventional synchronous interface.

                                                                              Interesting thought! There are things like pollster, which could help you with that. Maybe even an attribute macro could work:

                                                                              #[synchronous]
                                                                              pub async fn foo() {
                                                                                // ...
                                                                              }
                                                                              

                                                                              I don’t know of any library that does that yet. If there is a reason for not trying that, I’m not aware of it right now.
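
                                                                            As a sanity check of the idea, here’s the expansion written by hand, using pollster’s real block_on (fetch_data is made up; note pollster just parks the thread, so this suits futures that don’t need a runtime’s IO or timer drivers):

                                                                            // the async API we want to wrap (hypothetical)
                                                                            pub async fn fetch_data(id: u32) -> String {
                                                                                format!("data for {id}")
                                                                            }

                                                                            // what #[synchronous] might generate: a blocking twin
                                                                            pub fn fetch_data_blocking(id: u32) -> String {
                                                                                pollster::block_on(fetch_data(id))
                                                                            }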

                                                                              1. 1

                                                                                Can an attribute macro rename the original function? Supposing you have function foo() as in your example, can the macro rename that to be async_foo() and declare a new pub fn foo() { ... } which will have the synchronous function signature?

                                                                                To accommodate the various async runtimes, maybe the attribute macro would also accept arguments such as tokio and async_std to indicate which one is in use.

                                                                              2. 1

                                                                                Now you’ve got me thinking of some automated way to generate a wrapper for a given async library, which presents a conventional synchronous interface. It is not clear to me if that is possible.

                                                                                Details aside, isn’t this what runtimes like Tokio essentially provide?

                                                                                1. 1

                                                                                  Details aside, isn’t this what runtimes like Tokio essentially provide?

                                                                                  No. Libraries like tokio provide a framework and async task executor to enable async programming.

                                                                                  I’m talking about the API an async library presents. In those cases, the code using the library needs to be async-aware. Take, for example, the creation of a new server object in Tide:

                                                                                  https://docs.rs/tide/latest/tide/struct.Server.html

                                                                                  The server’s support for the HTTP get operation is an async function (a very simple one in this case).

                                                                                  The more I research, the more it seems clear to me that automatically generating wrappers around async libraries will be very difficult at best.

                                                                                  1. 1

                                                                                    Tokio doesn’t provide a conventional synchronous interface over async code?

                                                                                    1. 2

                                                                                    It depends what you mean by “conventional”: it has a block_on method, so regular synchronous code can await a future. You might even argue that’s a convention. But I’d usually read “sync wrapper for async library” as meaning that the sync interface presented is isomorphic to the async library API, albeit minus futures.
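
                                                                                    To make the distinction concrete, the block_on convention looks like this: a sync facade that owns a runtime (fetch_widget stands in for an async library call):

                                                                                    use tokio::runtime::Runtime;

                                                                                    async fn fetch_widget(id: u32) -> String {
                                                                                        format!("widget {id}") // stand-in for real async work
                                                                                    }

                                                                                    struct SyncApi {
                                                                                        rt: Runtime,
                                                                                    }

                                                                                    impl SyncApi {
                                                                                        fn new() -> std::io::Result<Self> {
                                                                                            Ok(Self { rt: Runtime::new()? })
                                                                                        }

                                                                                        // the "isomorphic, minus futures" shape: same signature, no .await
                                                                                        fn fetch_widget(&self, id: u32) -> String {
                                                                                            self.rt.block_on(fetch_widget(id))
                                                                                        }
                                                                                    }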

                                                                            2. 1

                                                                              On a VPS:

                                                                              • FreshRSS
                                                                              • FoundryVTT for running my Pathfinder 2e game
                                                                              • AudioBookshelf for podcasts… which I just installed and am having no end of trouble with. It doesn’t seem to do the most basic job of a podcast client, which is to get new podcasts when they are released. I’m going to replace this.

                                                                              Inside the home network:

                                                                              • JellyFin on my big box
                                                                              • VaultWarden on a raspi 3
                                                                              • Navidrome on the raspi 3, for music only. Synced up using syncthing.
                                                                              1. 2

                                                                                I use ~/devel for stuff I’m working on or contributing to. I use ~/src for other people’s source code, like stuff I’m building from source or code I have on hand for reference.

                                                                                1. 8

                                                                                  I highly recommend Axum: the api is easy to understand, and the code you write is easy to maintain. I ported a warp codebase to Axum, and I’m glad I did it. The warp code was notorious within our team; nobody got in and out of that module in less than a week, no matter how trivial the job.

                                                                                  The biggest sticking point was the fanciness of Warp’s types. It’s kind of OK if everything is in a single function and you can rely solely on type inference. But once things get large enough to decompose, then you need to figure out how to write down those types and function signatures. We found this to be extremely difficult.

                                                                                  Axum, on the other hand, is designed to be easily decomposable into smaller parts. Refactoring is simple and straightforward.
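
                                                                                   A minimal sketch of what I mean by decomposable, against recent axum (routes and handlers made up):

                                                                                   use axum::{routing::get, Router};

                                                                                   async fn health() -> &'static str {
                                                                                       "ok"
                                                                                   }

                                                                                   // sub-routers are plain functions returning Router, so splitting a
                                                                                   // large app is just moving route definitions between functions
                                                                                   fn user_routes() -> Router {
                                                                                       Router::new().route("/me", get(|| async { "you" }))
                                                                                   }

                                                                                   fn app() -> Router {
                                                                                       Router::new()
                                                                                           .route("/health", get(health))
                                                                                           .nest("/users", user_routes())
                                                                                   }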

                                                                                  1. 6

                                                                                     At this point in Rust’s lifetime, I personally would be much more interested in how likely a framework is to still be maintained five years down the road than in its particular APIs. But on that metric too, I think Axum scores much higher than the alternatives.

                                                                                  2. 3

                                                                                    This is awesome, and seeing how simple the config script is makes me want to pick this up for my own stuff.

                                                                                     Seeing ways to build easy tools like this quickly (one of my favorites is just piping lines into pick to instantly get a mini-GUI) is the real CLI superpower.

                                                                                    1. 1

                                                                                       pick referring to https://github.com/mptre/pick, right?

                                                                                      Any other keyboard-friendly option picker tools out there? The ones I know of:

                                                                                      1. 2

                                                                                        Hey, this is a kind of tool I didn’t know about. Neat!

                                                                                        A few others:

                                                                                        1. 1

                                                                                           Nice. From what I can see, fzf is still the best for advanced use. The various match styles, the ability to combine them within the same filter, and the keybinding and UI customization options make the difference for me.