1. 2

    I find the website font astonishingly hard to read.

    Edit: Fixed. Awesome.

    1. 4

      I’ve switched to a very minimalistic blog theme, which might be better now.

      1. 3

        It looks more legible to me!

        1. 2

          On your main site (libreserver.org), my mouse cursor doesn’t change to a pointer when I hover over links.

        2. 1

          And Reader View wouldn’t kick in for me, either. Annoying.

        1. 7

          Maintaining “automated” Mac machines is one of the most deeply unpleasant experiences of my career, one I wouldn’t wish on my worst enemy, but […] we do need to build and test it along with our other targets. So maybe cross compiling to Macs will be in a future post.

          I share the feeling, and I’m looking forward to reading the post on cross compiling to macOS!

          Too bad the author doesn’t offer RSS/Atom, and I don’t see any obvious way of subscribing to the content.

          1. 3

            Yes! I got a Mac mini just for that, so I could hook it into GitLab as a CI runner. It’s annoying to set up properly. It requires attention and manual restarts once in a while. Unlike Linux builds, it doesn’t clean up the environment after running, which makes it not very secure. It’s really cumbersome!

            1. 1

              Have you tried osxcross? I’m using that in CI to create macOS builds on Linux. Although, I figure it’s perhaps not complete enough to do anything complicated (I’m only using it for a C library).

          1. 1

            A weird choice by Apple, I think, to handle images differently. I wonder what this means for the future, with new technology, and whether it’ll really start falling behind.

            1. 12

              It’s not that weird; it makes it easier for them to implement the codecs in one dylib that’s shared by all applications (saving RAM), which can use whatever hardware-specific tricks they have on various devices to codec the bits without exposing those implementation details to the world.

              1. 7

                Indeed. Image codecs are an attack surface (a few jailbreaks came courtesy of the TIFF decoder), so it’s better to have fewer, better-tested copies.

                1. 1

                  And applications can use it, too. In TenFourFox we used OS X’s built-in AltiVec-accelerated JPEG decoder to get faster JPEGs “for free.”

                2. 6

                  From a user perspective I think it would be weirder if they didn’t do this – “oh, you can view this image of type X in Safari but not Preview.app, because the decoder is statically linked into the former; but Preview.app can render type Y quickly because it leverages the core OS codec dylib, while Safari doesn’t include a decoder for that one, or only has some ultra-slow, battery-eating software decoder someone contributed to WebKit”.

                  Doing it in one shared set of libraries for everything means that support is consistent, it’s easier to audit the attack surface across the board (which Apple already struggles with, so I’d hardly encourage them to increase that surface), and optimizations only need to happen in exactly one place to leverage GPU features or custom IC blocks on their mobile SoCs. For a mobile browser you really want as few pure-software decoders as you can get away with, for battery life reasons (more so for video than stills, but formats like HEIF are starting to be reasonably heavyweight to decode without hardware support on image-heavy pages).

                1. 1

                  I’ve recently implemented support for Tomb in prs (like pass-tomb) in an attempt to minimize metadata leakage by Tomb’ing the underlying password store.

                  It’s a fun tool! It makes working with an encrypted image from the CLI fairly easy. Though I find some commands and output a little weird/hard to work with. It’s essentially a bash tool that dumps output to std{out,err} so it can be tricky to parse and automate with.

                  1. 1

                    I’ve recently implemented support for Tomb in prs (like pass-tomb) in an attempt to minimize metadata leakage by Tomb’ing the underlying password store.

                    I suppose it’s good to mitigate it, but it’s always been a bit of a head-scratcher to me that a password manager with that metadata leakage baked into its design has apparently become so popular in the first place.

                  1. 7

                    Vacations were supposed to start today. But at 5pm Friday we received the call that we’ll have to quarantine because one of my kids’ classmates got COVID. Hence: trapped at home with 3 kids under 7 for the next two weeks. I guess I’ll be trying to keep their cabin fever in check. Maybe I’ll get to catch up on my book backlog a bit. Or do some woodworking.

                    1. 1

                      That’s shit! Had something similar a few weeks back. I really hope you’ll be able to take a vacation after this. Good luck.

                    1. 2

                      I wanted to link Google’s library for handling phone numbers (libphonenumber), which is quite good. But then I discovered this document is already part of it. Cool!

                      This is one of those things you never want to roll yourself, such as handling time or crypto.
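
                      As a small sketch of what the library buys you, here’s how Google’s Python port, `phonenumbers`, parses, validates, and normalizes a number (the specific number is just an example):

                      ```python
                      import phonenumbers

                      # Parse a loosely formatted national number; the region hint
                      # tells the library how to interpret the missing country code.
                      num = phonenumbers.parse("(650) 253-0000", "US")

                      print(phonenumbers.is_valid_number(num))
                      print(phonenumbers.format_number(num, phonenumbers.PhoneNumberFormat.E164))
                      ```

                      The library also handles formatting differences, carrier prefixes, and per-region validation rules – exactly the long tail you don’t want to reimplement.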

                      1. 20

                        Even if they didn’t make sense as form elements, it sucks when you have a RESTy API and don’t want to involve JS on the frontend. I’d love to make a link or form’s method DELETE, but instead, I have to implement something dumb like GET /object/3/delete or something.

                        1. 9

                          I have to implement something dumb like GET /object/3/delete or something.

                          Please remember that GET is defined as a safe and idempotent method. A GET that deletes an object violates this.

                          This isn’t a theoretical concern, either. I’ve been professionally burnt by misbehaving middleboxes replaying traffic, and by web spiders and browsers prefetching endpoints that mutate state… so make your non-read-only endpoints accept form POSTs instead.

                          1. 6

                            I understand what you mean, and I agree it would be helpful to have other methods supported, but using a GET is absolutely the wrong workaround. A DELETE or PUT request should be handled by most layers (browsers, proxies/caches, app servers) the same way a POST is - the semantic difference is not relevant for those layers.

                            Edit to add: the only valid reason I can see for a GET response on a ‘/delete’ endpoint like that is to show a prompt confirming the deletion (i.e. when not relying on JS, or when relying on simple XHR and very little front-end logic to do the confirmation). E.g. GET /foo/bar/delete shows a confirmation, which is a form that submits POST /foo/bar/delete.

                            1. 5

                              One way to get around this is adding a hidden method field to your form and having your backend look for it. This is what Ruby on Rails does, for example.

                              It’s kinda silly. I don’t understand the objections to just adding it to the form element.
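
                              The backend half of that trick can be a tiny framework-agnostic middleware. A minimal sketch as WSGI middleware (the class name and allowed set are my own; Rails and Laravel ship their own built-in versions of this):

                              ```python
                              import io
                              from urllib.parse import parse_qs

                              class MethodOverride:
                                  """Rewrite a POST into the method named by a hidden `_method`
                                  form field, so plain HTML forms can express DELETE/PUT/PATCH."""

                                  ALLOWED = {"DELETE", "PUT", "PATCH"}

                                  def __init__(self, app):
                                      self.app = app

                                  def __call__(self, environ, start_response):
                                      if environ.get("REQUEST_METHOD") == "POST":
                                          length = int(environ.get("CONTENT_LENGTH") or 0)
                                          body = environ["wsgi.input"].read(length)
                                          # Re-expose the body so the wrapped app can still read it.
                                          environ["wsgi.input"] = io.BytesIO(body)
                                          fields = parse_qs(body.decode("utf-8", "replace"))
                                          method = fields.get("_method", [""])[0].upper()
                                          if method in self.ALLOWED:
                                              environ["REQUEST_METHOD"] = method
                                      return self.app(environ, start_response)
                              ```

                              Crucially, the override only ever upgrades a POST, so safe GETs stay safe.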

                              1. 6

                                Best explanation I’ve seen is “browser vendors are why we can’t have nice things”; same as client TLS certs.

                                1. 1

                                  There is no literal objection. It just hasn’t been figured out yet. I just submitted a top-level comment explaining the issues that would need to be resolved. With Rails, the website is obviously opting in by implementing this convention explicitly in both frontend and backend. But Rails only covers the same-origin case, AFAIU.

                                  1. 2

                                    Those weren’t the cited reasons, though, which were mostly fairly dismissive. There is a specification, which addresses CORS, but I can’t readily find why it wasn’t implemented.

                                2. 2

                                  For precisely that purpose, I always use the Rails UJS library.

                                  I use it without Rails. It’s a minimal amount of JavaScript that will save you a lot of time, as it does exactly what you mentioned in your comment.

                                  1. 2

                                    Frameworks like Laravel use a ‘magic’ _method field, which you can use to override the request method.

                                    <input type="hidden" name="_method" value="DELETE" />

                                    It’s not ideal, but it makes it straightforward to rewrite the request in middleware, without having to mess with other routes.

                                    1. 1

                                      If what you want is the convenience of calling the right method without the pain of dealing with it in JavaScript - as opposed to JS being a hard no - then I couldn’t recommend the htmx library enough. I’ve been using it for a few weeks in a new project, and its ability to get rid of boilerplate JS code is unsurpassed.

                                    1. 2

                                      When I switched to an IMAP/SMTP provider, I was seriously surprised by how well and how fast it worked!

                                      As you said, no prioritizing bullshit - I need to read all my mail anyway. Hundreds of clients to choose from. And even a form of instant push notifications works on mobile.

                                      1. 2

                                        What a fantastic list!

                                        I always find it very cumbersome to figure out which Rust version features were stabilized in. You usually end up on their respective GitHub issue pages, but those usually don’t clearly state what version something was stabilized for. Does the author have any tips regarding this?

                                        1. 2

                                          Not exactly what you’re asking for, but there is The Unstable Book, which lists all the features along with their tracking issues. Beyond that I tend to rely on the This Week in Rust posts or Rust podcasts that list the standout features.

                                        1. 2

                                          Should have gone to scout camp this week, which took a lot of preparation. I really hoped to go, to take a week off from regular things. But sadly, due to COVID, I’m the only one having to stay home at the last minute. :(

                                          So, I’ll probably be spending the week learning a new programming language, library or technique. Does anybody have a cool suggestion?

                                          1. 1

                                            What languages do you normally use and have played with?

                                          1. 7

                                            It took me a while to get used to HJKL, but now it’s second nature to me.

                                             I actually quite like J/K for down/up. I navigate up and down quite a lot, and having these next to each other (instead of on top of each other) feels really nice to me.

                                            1. 3

                                              Cool! I’ve used ASCII art to explain various puzzle concepts in a blog post recently: https://timvisee.com/blog/solving-aoc-2020-in-under-a-second/#day-6-custom-customs

                                              1. 1

                                                I’ve used pass/prs to pipe real mystic secrets to your target. You might consider those lightweight enough.

                                                When putting secrets directly in commands, you may prefix the command with a space to at least keep it out of your shell’s history file.

                                                1. 3

                                                  Less magic, but some might also like Bip Buffers.

                                                  They support differently sized chunks without splitting them (splitting can be costly), and allow reading and writing at the same time with simple atomics.
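
                                                  For illustration, a stripped-down sketch of the idea (a hypothetical class of my own, single-threaded for brevity; a real SPSC version would publish the three indices with atomic operations):

                                                  ```python
                                                  class BipBuffer:
                                                      """Simplified bip buffer: data lives in up to two
                                                      contiguous regions, so chunks are never split."""

                                                      def __init__(self, size):
                                                          self.buf = bytearray(size)
                                                          self.size = size
                                                          self.head = 0           # next byte to read
                                                          self.tail = 0           # next byte to write
                                                          self.watermark = size   # end of data before the wrap

                                                      def write(self, chunk):
                                                          """Write `chunk` contiguously; False if it won't fit."""
                                                          n = len(chunk)
                                                          if self.tail >= self.head:
                                                              if self.size - self.tail >= n:  # room at the end
                                                                  self.buf[self.tail:self.tail + n] = chunk
                                                                  self.tail += n
                                                                  return True
                                                              if self.head > n:               # wrap: region two
                                                                  self.watermark = self.tail
                                                                  self.buf[0:n] = chunk
                                                                  self.tail = n
                                                                  return True
                                                              return False
                                                          if self.head - self.tail > n:       # behind the reader
                                                              self.buf[self.tail:self.tail + n] = chunk
                                                              self.tail += n
                                                              return True
                                                          return False

                                                      def read(self, n):
                                                          """Read up to n contiguous bytes, never across the wrap."""
                                                          if self.head == self.watermark and self.tail < self.head:
                                                              self.head = 0                   # skip the wasted gap
                                                              self.watermark = self.size
                                                          end = self.tail if self.tail >= self.head else self.watermark
                                                          n = min(n, end - self.head)
                                                          chunk = bytes(self.buf[self.head:self.head + n])
                                                          self.head += n
                                                          return chunk
                                                  ```

                                                  The watermark is the trick: when a chunk doesn’t fit at the end, the writer wraps to the front and records where the valid data in region one stops, so the reader knows to jump over the gap.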

                                                  1. 3

                                                    This is sad: even though the writer asked many people for their opinions, he dismisses them based on his own very small sample size.

                                                    As for me, I can’t rely on a Windows machine for long-running or performance-sensitive tasks, for many reasons. Every once in a while I discover I’ve shot myself in the foot again because I used Windows for a job. That’s what I call unstable. I don’t think this has improved in recent years, but I don’t have hard numbers.

                                                    I don’t have this problem, ever, with any Linux distribution I’ve tried.

                                                    1. 2

                                                      This is sad: even though the writer asked many people for their opinions, he dismisses them based on his own very small sample size.

                                                      If you read the title, you’ll notice it’s just his thoughts, his opinions. Why does he need a larger sample size? He’s not trying to do a thorough study.

                                                      I don’t have this problem, ever, with any Linux distribution I’ve tried.

                                                      That’s wonderful! I’m glad you’ve found a happy place. You should also write an article, about your personal experience.

                                                    1. 1

                                                      This is funny, because the text in the progress bar isn’t properly aligned on my machine.

                                                      1. 5

                                                        I use + signs in my addresses. I come across so many websites that don’t allow it, it’s insane. There are also websites that don’t allow their name in an address, so something+aliexpress@domain won’t work.

                                                        1. 1

                                                          What about your own domain with a catch all?

                                                          That’s what I’ve been doing for years. Works perfectly!

                                                          1. 2

                                                            Just want to plug prs here: it’s pass but with many annoyances fixed, and it’s compatible with your pass store.

                                                            1. 12

                                                              It’s nice to bring some nuance to the discussion: some languages and ecosystems have it worse than others.

                                                              To add some more nuance, here’s a tradeoff about the “throw it in an executor” solution that I rarely see discussed. How many threads do you create?

                                                              Well, first, you can either make it bounded or unbounded. Unbounded seems obviously problematic, because the whole point of async code is to avoid the heaviness of one thread per task, and you may end up hitting that worst case.

                                                              But bounded has a less obvious issue in that it effectively becomes a semaphore. Imagine having two sync tasks, A and B, where the result of B ends up unblocking A (think mutex), and a thread pool of size 1. If you attempt to throw both on the thread pool and A ends up scheduled while B doesn’t, you get a deadlock.

                                                              You don’t even need dependencies between tasks. If you have an async task that dispatches a sync task that dispatches an async task that dispatches a sync task, and your thread pool doesn’t have enough room, you can hit it again. Switching between the worlds still comes with edge cases.

                                                              This may seem rare and it probably is, especially for threadpools of any appreciable size, but I’ve hit it in production before (on Twisted Python). It was a relief when I stopped having to think about these issues entirely.
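
                                                              The semaphore behavior is easy to reproduce with any bounded pool. A minimal sketch using Python’s `concurrent.futures` for illustration (a timeout stands in for the real deadlock, which would block forever):

                                                              ```python
                                                              import concurrent.futures as cf

                                                              pool = cf.ThreadPoolExecutor(max_workers=1)

                                                              def parent():
                                                                  # The child can never run: the pool's only worker is busy
                                                                  # running `parent`, which blocks waiting on the child.
                                                                  child = pool.submit(lambda: "done")
                                                                  return child.result(timeout=1)

                                                              try:
                                                                  pool.submit(parent).result()
                                                                  print("finished")
                                                              except cf.TimeoutError:
                                                                  print("deadlocked: the child never got a worker")
                                                              ```

                                                              With `max_workers=2` the same code completes, which is exactly why the bug hides until the pool fills up under load.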

                                                              1. 3

                                                                Imagine having two sync tasks, A and B, where the result of B ends up unblocking A (think mutex)

                                                                Isn’t this an antipattern for async in general? Typically you’d either a) make sure to release the mutex before yielding, or b) change the interaction to “B notifies A”, right?

                                                                1. 4

                                                                  Changing the interaction to “B notifies A” doesn’t fix anything because presumably A waits until it is notified, taking up a threadpool slot, making it so that B can never notify A. Additionally, it’s not always obvious when one sync task depends on another, especially if you allow your sync tasks to block on the result of an async task. In my experience, that sort of thing happens when you have to bolt the two worlds together.

                                                                  1. 2

                                                                    It’s a general problem. It can happen whenever you have a threadpool, no matter whether it’s sync or async.

                                                                  2. 3

                                                                    But bounded has a less obvious issue in that it effectively becomes a semaphore. Imagine having two sync tasks, A and B, where the result of B ends up unblocking A (think mutex), and a thread pool of size 1. If you attempt to throw both on the thread pool and A ends up scheduled while B doesn’t, you get a deadlock.

                                                                    I’ve never designed a system like this, or worked on one designed like this. I’ve never had one task depend on the value of another task while both were scheduled simultaneously. As long as your tasks spawn dependent tasks, and transitively at least one of those dependent tasks doesn’t have to wait on another task, we can ensure that the entire chain of tasks will finish. That said, running out of threads in a thread pool is a real problem that plagues lots of thread-based applications. There are multiple strategies here. Sometimes we try to acquire a thread from the pool with a deadline, retrying a few times and eventually failing the computation if we just cannot grab one. Other times we just spawn a new thread, but this can lead to scheduler thrashing if we end up spawning too many threads. Another common solution is to create multiple thread pools and allocate different pools to different workloads, so that you can make large pools for long-running threads and smaller pools for short-running tasks.

                                                                    Thread-based work scheduling can, imo, be just as complicated as async scheduling. The biggest difference is that async scheduling makes you pay the cost in code complexity (through function coloring, concurrency runtimes, etc.) while thread-based scheduling makes you pay the cost in operational and architectural complexity (by deciding how many thread pools to have, which tasks should run on which pools, how large each pool should be, how long to wait before retrying to grab a thread from the pool, etc.). While shifting the complexity to operations and architecture might seem to move the work up to operators or some dedicated operationalizing phase, in practice the context lost by lifting decisions to that level can make tradeoffs for pools and tasks non-obvious, making it harder to make good decisions. Also, as workloads change over time, new thread pools may need to be created, and these new pools necessitate rebalancing of the others, which causes a lot of churn. Async has none of these drawbacks (though, to be clear, it has its own unique ones).

                                                                    1. 8

                                                                      I’ve never designed a system like this, or worked on one designed like this. I’ve never had one task depend on the value of another task while both were scheduled simultaneously.

                                                                      Here’s perhaps a not-unreasonable scenario: imagine a cache with an API to retrieve the value for a key if it exists, and otherwise compute, store, and return it. The cache exposes an async API, and the callback it runs to compute a value ends up dispatching a sync task to a thread pool (maybe it’s a database query using a sync library). We want the cache to be accessible from multiple threads, so it is wrapped in a sync mutex.

                                                                      Now imagine that an async task tries to use the cache, which is backed by a thread pool of size 1. The task dispatches a thread which acquires the sync mutex and calls to get some value (waiting on the returned future); assuming the value doesn’t exist, the cache blocks forever because it cannot dispatch the task to produce it. The size of 1 isn’t special: this can happen with any bounded-size thread pool under enough concurrent load.

                                                                      One may object to the sync mutex, but you can have the same issue if the cache is recursive, in the sense that producing one value may depend on the cache populating other values. I don’t think that’s very far-fetched either. Alternatively, the cache may be a library used as a component of a sync object that is expected to be used concurrently, and that is the part that contains the mutex.

                                                                      In my experience, the problem is surprisingly easy to introduce accidentally when you have a code base that frequently mixes async and sync code dispatching to each other. Once I started really looking for it, I found many places where it could have happened in the (admittedly very wacky) code base.

                                                                      1. 3

                                                                        Fair enough, that is a situation that can arise. In those situations I would probably reach for either adding an expiry to my threaded tasks, or separating thread pools for DB or cache threads from general application threads. (Perhaps an R/W lock would help over a regular mutex, but I realize that’s orthogonal to the problem at hand and probably a pedagogical simplification.) The reality is that mixing sync and async code can be pretty fraught if you’re not careful.

                                                                    2. 2

                                                                      I have seen similar scenarios without a user-visible mutex: you get deadlocks if a thread on a bounded thread pool waits for another task scheduled on the same pool.

                                                                      Of course, there are remedies, e.g. never scheduling subtasks on the same thread pool. Timeouts help, but they still lead to abysmal behavior under load, because your threads just idle around until the timeout triggers.

                                                                      1. 1

                                                                        Note that you can also run async Rust functions with zero (extra) threads, by polling them on your current thread. A thread pool is not a requirement.
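
                                                                        The Python analogue, for illustration: `asyncio.run` drives a coroutine to completion on the calling thread, with no pool involved (much like a `block_on` in Rust executors).

                                                                        ```python
                                                                        import asyncio

                                                                        async def fetch():
                                                                            await asyncio.sleep(0)  # yield once, then resume
                                                                            return "hello"

                                                                        # The event loop polls the coroutine on this
                                                                        # thread until it completes.
                                                                        print(asyncio.run(fetch()))
                                                                        ```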

                                                                        1. 3

                                                                          Isn’t that equivalent to either a threadpool of size 1 or going back to epoll style event loops? If it’s the former, you haven’t gained anything, and if it’s the latter, you’ve thrown out the benefits of the async keyword.

                                                                          1. 3

                                                                            Async has always been syntactic sugar for epoll-style event loops. The number of threads has nothing to do with it; e.g. tokio can switch between single- and multi-threaded execution, but so can nginx.

                                                                            Async gives you higher-level composability of futures, and the ease of writing imperative-like code to build state machines.