1. 5

    I use + signs in my addresses. I come across so many websites that don’t allow it, it’s insane. There are also websites that don’t allow their name in an address, so something+aliexpress@domain won’t work.

    1. 1

      What about your own domain with a catch all?

      It’s what I’ve been doing for years. Works perfectly!

      1. 2

        Just want to plug prs here, it is pass but with many annoyances fixed. Compatible with your pass store.

        1. 12

          It’s nice to bring some nuance to the discussion: some languages and ecosystems have it worse than others.

          To add some more nuance, here’s a tradeoff about the “throw it in an executor” solution that I rarely see discussed. How many threads do you create?

          Well, first, you can either have it be bounded or unbounded. Unbounded seems obviously problematic because the whole point of async code is to avoid the heaviness of one thread per task, and you may end up hitting that worst case.

          But bounded has a less obvious issue in that it effectively becomes a semaphore. Imagine having two sync tasks, A and B, where the result of B ends up unblocking A (think mutex), and a thread pool of size 1. If you attempt to throw both on the thread pool, and A ends up scheduled and B doesn’t, you get a deadlock.
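
          To make that concrete, here’s a minimal sketch in Python (hypothetical; plain concurrent.futures, not any particular framework): A occupies the pool’s only thread and then blocks on B’s result, so B can never be scheduled.

          from concurrent.futures import ThreadPoolExecutor

          pool = ThreadPoolExecutor(max_workers=1)

          def task_b():
              return "unblocks A"

          def task_a():
              # A holds the pool's only worker thread, then waits for B.
              # B sits in the queue forever, so this blocks forever.
              return pool.submit(task_b).result()

          print(pool.submit(task_a).result())  # deadlock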

          You don’t even need dependencies between tasks, either. If you have an async task that dispatches a sync task that dispatches an async task that dispatches a sync task, and your threadpool doesn’t have enough room, you can hit it again. Switching between the worlds still comes with edge cases.

          This may seem rare and it probably is, especially for threadpools of any appreciable size, but I’ve hit it in production before (on Twisted Python). It was a relief when I stopped having to think about these issues entirely.

          1. 3

            Imagine having two sync tasks, A and B, where the result of B ends up unblocking A (think mutex)

            Isn’t this an antipattern for async in general? Typically you’d either a) make sure to release the mutex before yielding, or b) change the interaction to “B notifies A”, right?

            1. 4

              Changing the interaction to “B notifies A” doesn’t fix anything because presumably A waits until it is notified, taking up a threadpool slot, making it so that B can never notify A. Additionally, it’s not always obvious when one sync task depends on another, especially if you allow your sync tasks to block on the result of an async task. In my experience, that sort of thing happens when you have to bolt the two worlds together.

              1. 2

                It’s a general problem. It can happen whenever you have a threadpool, no matter whether it’s sync or async.

              2. 3

                But bounded has a less obvious issue in that it effectively becomes a semaphore. Imagine having two sync tasks, A and B, where the result of B ends up unblocking A (think mutex), and a thread pool of size 1. If you attempt to throw both on the thread pool, and A ends up scheduled and B doesn’t, you get a deadlock.

                I’ve never designed a system like this or worked on a system designed like this before. I’ve never had one task depend on the value of another task while both tasks were scheduled simultaneously. As long as your tasks spawn dependent tasks and, transitively, the chain eventually reaches a task that does not have to wait on another task, we can ensure that the entire chain of tasks will finish.

                That said, running out of threads in a thread pool is a real problem that plagues lots of thread-based applications. There are multiple strategies here. Sometimes we try to acquire a thread from the pool with a deadline, retrying a few times and eventually failing the computation if we just cannot grab a thread. Other times we just spawn a new thread, but this can lead to scheduler thrashing if we end up spawning too many threads. Another common solution is to create multiple thread pools and allocate different pools to different workloads, so that you can make large pools for long-running threads and smaller pools for short-running tasks.
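
                As a rough illustration of the deadline-and-retry strategy, here’s a minimal Python sketch (the names and numbers are hypothetical): a semaphore guards the pool so a full pool fails fast instead of queueing forever.

                import threading
                from concurrent.futures import ThreadPoolExecutor

                POOL_SIZE = 4
                pool = ThreadPoolExecutor(max_workers=POOL_SIZE)
                slots = threading.BoundedSemaphore(POOL_SIZE)

                def submit_with_deadline(fn, *args, timeout=0.5, retries=3):
                    # Try a few times to grab a free slot, then give up.
                    for _ in range(retries):
                        if slots.acquire(timeout=timeout):
                            fut = pool.submit(fn, *args)
                            fut.add_done_callback(lambda _f: slots.release())
                            return fut
                    raise RuntimeError("thread pool exhausted; failing the computation")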

                Thread-based work scheduling can, imo, be just as complicated as async scheduling. The biggest difference is that async scheduling makes you pay the cost in code complexity (through function coloring, concurrency runtimes, etc) while thread-based scheduling makes you pay the cost in operational and architectural complexity (by deciding how many thread pools to have, which tasks should run on which pools, how large each pool should be, how long we should wait before retrying to grab a thread from the pool, etc, etc).

                While shifting the complexity to operational and architectural complexity might seem to shift the work up to operators or some dedicated operationalizing phase, in practice the context lost by lifting decisions up to this level can make tradeoffs for pools and tasks non-obvious, making it harder to make good decisions. Also, as workloads change over time, new thread pools may need to be created, and these new pools necessitate rebalancing of other pools, which requires a lot of churn. Async has none of these drawbacks (though, to be clear, it has its own unique drawbacks).

                1. 8

                  I’ve never designed a system like this or worked on a system designed like this before. I’ve never had one task depend on the value of another task while both tasks were scheduled simultaneously.

                  Here’s perhaps a not-unreasonable scenario: imagine a cache with an API to retrieve some value for a key if it exists and otherwise compute, store, and return it. The cache exports an async API and the callback it runs to compute the value ends up dispatching a sync task to a threadpool (maybe it’s a database query using a sync library). We want the cache to be able to be accessed from multiple threads, so it is wrapped in a sync mutex.

                  Now imagine that an async task tries to use the cache that is backed by a threadpool of size 1. The task dispatches a thread which acquires the sync mutex, calls to get some value (waiting, however, on the returned future), and, assuming it doesn’t exist, the cache blocks forever because it cannot dispatch the task to produce the value. The size of 1 isn’t special: this can happen with any bounded-size thread pool under enough concurrent load.
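
                  Here’s a rough Python sketch of that scenario (hypothetical; asyncio plus a concurrent.futures pool standing in for the sync library and threadpool), which hangs exactly as described:

                  import asyncio
                  import threading
                  from concurrent.futures import ThreadPoolExecutor

                  pool = ThreadPoolExecutor(max_workers=1)  # size 1 for demonstration
                  lock = threading.Lock()
                  cache = {}

                  def query_db(key):
                      return "value for " + key  # stands in for a sync database call

                  def get_or_compute(key):
                      # Sync cache API; runs on the pool's only worker thread.
                      with lock:
                          if key not in cache:
                              # Dispatches the "DB query" to the same pool and blocks on
                              # it, but the only thread is busy running this very
                              # function, so the future never completes.
                              cache[key] = pool.submit(query_db, key).result()
                          return cache[key]

                  async def main():
                      loop = asyncio.get_running_loop()
                      print(await loop.run_in_executor(pool, get_or_compute, "k"))

                  asyncio.run(main())  # hangs forever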

                  One may object to the sync mutex, but you can have the same issue if the cache is recursive in the sense that producing a value may depend on the cache populating other values. I don’t think that’s very far fetched either. Alternatively, the cache may be a library used as a component of a sync object that is expected to be used concurrently and that is the part that contains the mutex.

                  In my experience, the problem is surprisingly easy to accidentally introduce when you have a code base that frequently mixes async and sync code dispatching to each other. Once I started really looking for it, I found many places where it could have happened in the (admittedly very wacky) code base.

                  1. 3

                    Fair enough, that is a situation that can arise. In those situations I would probably reach for either adding an expiry to my threaded tasks or separating thread pools for DB or cache threads from general application threads. (Perhaps an R/W lock would help over a regular mutex, but I realize that’s orthogonal to the problem at hand here and probably a pedagogical simplification.) The reality is that mixing sync and async code can be pretty fraught if you’re not careful.

                2. 2

                  I have seen similar scenarios without a user-visible mutex: you get deadlocks if a thread on a bounded thread pool waits for another task scheduled on the same thread pool.

                  Of course, there are remedies, e.g. never schedule subtasks on the same thread pool. Timeouts help but still lead to abysmal behavior under load because your threads just idle around until the timeout triggers.

                  1. 1

                    Note that you can also run async Rust functions with zero (extra) threads, by polling them on your current thread. A threadpool is not a requirement.

                    1. 3

                      Isn’t that equivalent to either a threadpool of size 1 or going back to epoll style event loops? If it’s the former, you haven’t gained anything, and if it’s the latter, you’ve thrown out the benefits of the async keyword.

                      1. 3

                        Async has always been syntax sugar for epoll-style event loops. The number of threads has nothing to do with it, e.g. tokio can switch between single and multi-threaded execution, but so can nginx.

                        Async gives you higher-level composability of futures, and the ease of writing imperative-like code to build state machines.

                  1. 19
                    [ $USER != "root" ] && echo You must be root && exit 1
                    

                    I’ve always felt a bit uneasy about this one. I mean, what if echo fails? :-)

                    So I usually do

                    [ $USER != "root" ] && { echo You must be root; exit 1; }
                    

                    instead… just to be safe.

                    1. 10

                      Indeed, echo can fail. Redirecting stdout to /dev/full is probably the easiest way to make this happen but a named pipe can be used if more control is required. The sentence from the article “The echo command always exists with 0” is untrue (in addition to containing a typo).

                      1. 3

                        Don’t you need set +e; before echo, just to be extra safe?

                        1. 3

                          I had to look that up. set +e disables the -e option:

                                    -e      Exit immediately if a simple command (see SHELL  GRAMMAR
                                            above) exits with a non-zero status
                          

                          That’s not enabled by default, though, and I personally don’t use it.

                          1. 1

                            Or &&true at the end, if it’s okay for this command to fail. EDIT: see replies

                            It’s as much of a kludge as any other, and I’m not sure how to save the return value of a command here, but bash -ec 'false && true; echo $?' will return 0 and not exit from failure. EDIT: it echoes 1 (saving the return value), see replies for why.

                            1. 2

                              You probably mean || true. But yeah, that works!

                              1. 1

                                I did mean || true, but in the process of questioning what was going on I learned that && true appears to also prevent exit from -e and save the return value!

                                E.G.,

                                #!/bin/bash -e
                                f() {
                                  return 3
                                }
                                f && true; echo $?
                                

                                Echoes 3. I used a function and return to prove it isn’t simply a generic 1 from failure (as false would provide). Adding -x will also show you more of what’s going on.

                          2. 2

                            I personally use the following formatting, which flips the logic, uses a builtin, and prints to stderr.

                            [ "${USER}" == "root" ] || { printf "%s\n" "User must be 'root'" 1>&2; exit 1; }

                            When I start doing a larger amount of checks, I wrap the command group within a function, which turns into the following, and can optionally set the exit code.

                            die() { printf "%s\n" "${1}" 1>&2; exit ${2:-1}; }
                            ...
                            [ "${USER}" == "root" ] || die "User must be 'root'"
                            
                            1. 2

                              I also always print to standard error, but I’m pretty sure most shells have echo as a built-in. The form I usually use is

                              err() { echo "$1" 1>&2; exit 1; }
                              
                          1. 2

                            Pretty cool!

                            I started to rewrite some of my solutions (Python 3) in Java, also with an eye for performance. I think for most of the problems it’s easy: the Python one already runs in <30 ms incl. startup, so one basically only needs to bring down the 5-10 “slow” ones; using a compiled language will take care of the rest. But I am not sure I’ll muster the motivation to a) rewrite ALL ~45 solutions I have, b) finish the remaining ~4, and c) then speed them up…

                            1. 1

                              Nice! It might be a fun thing to try next year. Writing quicker solutions in Java from the start I mean. Did you use any cool tricks to speed things up in Python?

                              While I was working on it, I did all in sequence, and didn’t want to revise solutions later. I didn’t know in advance whether a puzzle would be slow. That really motivated me to try and make everything as fast as possible from the beginning.

                              1. 2

                                Smart as I am, I changed values in a file called unoptimized.txt as I went to speed it up…

                                But overall I resolved to not spend all my spare time on it this year, so I mostly went with “if it’s correct, it’s fine” and then, whenever I had a few minutes spare, I looked at the “slow” ones and gave ’em a hard look at how to improve execution time. But I wouldn’t say I really optimized it.

                            1. 12

                              This article leaves aside the most useful special parameter for elegant conditionals: the “most recent argument”, $_.

                              Example of use: test -f /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh && source $_ || echo "zsh-syntax-highlighting not installed" >&2

                              1. 6

                                Even

                                test -f /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh \
                                && source "$_" \
                                || echo "${_##*/} is not installed" >&2
                                

                                😎

                                1. 3

                                  TIL. Thank you for sharing, I never knew that one.

                                  1. 2

                                    Brilliant! Wasn’t aware. Added it in the post.

                                  1. 2

                                    I’m also using an Ergodox, an Infinity to be exact. I love the split, the orthogonal layout, and thumb clusters.

                                    I also have a Ducky One. I like it, but prefer the Ergodox for all-day typing.

                                    Other keyboards I have are not noteworthy.

                                    1. 21

                                      In Neovim/Vim this is not hard at all:

                                      							*:ea* *:earlier*
                                      :earlier {count}	Go to older text state {count} times.
                                      :earlier {N}s		Go to older text state about {N} seconds before.
                                      :earlier {N}m		Go to older text state about {N} minutes before.
                                      :earlier {N}h		Go to older text state about {N} hours before.
                                      :earlier {N}d		Go to older text state about {N} days before.
                                      
                                      :earlier {N}f		Go to older text state {N} file writes before.
                                      			When changes were made since the last write
                                      			":earlier 1f" will revert the text to the state when
                                      			it was written.  Otherwise it will go to the write
                                      			before that.
                                      			When at the state of the first file write, or when
                                      			the file was not written, ":earlier 1f" will go to
                                      			before the first change.
                                      
                                      

                                      There is also a :later command.

                                      1. 9

                                        Yes, Vim’s :earlier/:later commands are nice; and so is Vim’s undo tree (especially with a graphical interface). But in both cases, you still have to flip back and forth between the historical version and your working code. Splitting the buffer doesn’t help, because :earlier affects all splits where that file (buffer) is open.

                                        The author’s Yestercode is a further improvement: it lets you pull up earlier states of your file next to the window where you’re editing it, so looking something up does not break the edit flow.

                                        1. 2

                                          This is basically undo, so you won’t be able to go forward again after (accidentally) changing history. But yes, it is a nice feature, I use it as well!

                                          1. 4

                                            Vim has branching undo though, so you can go back (forward?) to the original version, just not with :later.

                                        1. 6

                                          But, don’t make things huge and complex because of SOLID. Also see YAGNI.

                                          1. 6

                                            Thank you to the people building Rust. It is the best language I’ve ever used.

                                            1. 2

                                               Mostly hosting on VPSs in the cloud though. Would love to host email myself, but I find blacklisting and overall deliverability issues too much of a hassle.

                                              1. 5

                                                prs

                                                 It is basically pass, but with fast/smart git sync, easy multi-machine/recipient support and many annoyances fixed. I don’t recommend it for your significant other though, but you might like it.

                                                1. 8

                                                  Last week I shared forgo: a light-weight 4kb React alternative which encourages using plain JS and DOM APIs. https://forgojs.org

                                                  Plan for this week:

                                                  • Add more tests
                                                  • Add support for fragments
                                                  1. 2

                                                    Awesome, thanks for sharing. Will definitely keep an eye on this.

                                                  1. 4

                                                     Finally submitted my thesis. Now it is finally time to relax (meaning working on my own projects)!

                                                     I’ll probably do some work on prs, a CLI password store. Or put some time into finishing the last days of AoC (with which I attempt to achieve <1 sec).

                                                    1. 4

                                                      It’s more like a CSS snippet than a framework

                                                      1. 8

                                                        Considering what it tries to achieve that’s a good thing.

                                                        1. 3

                                                           Calling it a framework wouldn’t be fair; calling it a CSS library is fine. Frameworks are vast and provide everything.

                                                        2. 6

                                                          I’m totally fine diluting the meaning of framework like this.

                                                          1. 0

                                                             Calling it a framework wouldn’t be fair; calling it a CSS library is fine. Frameworks are vast and provide everything.

                                                            1. 3

                                                              attempt to provide everything.

                                                              1. 1

                                                                 Except that it doesn’t attempt that; it’s a beautifier.

                                                                1. 1

                                                                  I was saying that Frameworks attempt to provide everything, not that this specific thing is a framework or does attempt to provide everything.

                                                              2. 1

                                                                The definition of a framework according to the Cambridge dictionary is:

                                                                a supporting structure around which something can be built

                                                                I think this project satisfies that definition. Yes, software frameworks like Bootstrap are goliath, but that doesn’t mean something small like Simple.css can’t be a framework.

                                                            2. 4

                                                              The linked page doesn’t actually describe Simple.css as a framework, it describes it as a “classless CSS template.” I’ve made a suggestion to change the title of the link here to reflect that.

                                                              1. 1

                                                                 In that case that’s fair; promoting it as a framework wouldn’t be.

                                                            1. 10

                                                               Nice overview of how the email flow works. I don’t agree with some things, though.

                                                              The only reason that merge button was used on Github was because Github can’t seem to mark the pull request as merged if it isn’t done by the button.

                                                              No, it does. I merge locally all the time, and GitHub instantly marks a PR as merged when I push. In fact, I primarily host repos at GitLab and keep a mirror on GitHub. I accept PRs on GitHub (the mirror) as well to keep things easy for contributors. I manually merge these locally, and push the updated branch to GitLab. GitLab in turn syncs the GitHub mirror, and the PR on GitHub is marked as merged in a matter of seconds.

                                                              …we have to mess with the commits before merging so we’re always force-pushing to the git fork of the author of the pull request (which they have to enable on the merge request, if they don’t we first have to tell them to enable that checkbox)

                                                               Yes, of course you have to mess with them. But after doing that, don’t even bother to push to the contributor’s branch. Just merge it into the target branch yourself and push. Both GitLab and GitHub will instantly mark the PR as merged. It is the contributor’s job to keep their branch up to date, and they don’t even have to for you to be able to do your job.

                                                              I understand that you like the email workflow, which is great. But I don’t agree with some arguments for it that are made here.

                                                              Thanks for sharing though!

                                                              1. 7

                                                                No, it does. I merge locally all the time, and GitHub instantly marks a PR as merged when I push.

                                                                In the article they talk about wanting to rebase first. If you do that locally, GitHub has no way to know that the rebased commits you pushed originally came from the PR, so it can’t close them automatically. It does work when you push outside GitHub without rebasing tho.

                                                                1. 2

                                                                  IIRC, can’t you rebase, (force) push to the PR branch, then merge and push and it’ll close? More work in that case but not impossible. Just if you rebase locally then push to ma(ster|in) then github has no easy way to know the pr is merged without doing fuzzy matching of commit/pr contents which would be a crazy thing to implement in my opinion.

                                                                  1. 3

                                                                    Typically the branch is on someone else’s fork, not yours.

                                                                    1. 2

                                                                       In GitHub, you can push to another’s branch if they have made it a PR in your project. Not sure if force push works, never tried. But I still feel it’s a hassle: you need to set up a new remote in git.

                                                                      In Gitlab, apparently, you have to ask the branch owner to set some checkbox in Gitlab so that you can push to the branch.

                                                                      1. 3

                                                                        In Gitlab, apparently, you have to ask the branch owner to set some checkbox in Gitlab so that you can push to the branch.

                                                                        That is the case in GitHub as well. (Allow edit from maintainers). It is enabled by default so I’ve never had to ask someone to enable it. Maybe it is not enabled by default on GitLab?

                                                                        1. 1

                                                                          I can confirm that it is disabled by default on GitLab.

                                                              1. 14

                                                                 I come across this website every once in a while, usually when working on something gaming related. It has fantastic content, explaining things really well. Go take a look!

                                                                1. 5

                                                                  It’s a treasure trove of great information. I stumbled upon that site when I was working on the 24th problem for the advent of code 2020.

                                                                  1. 4

                                                                    I discovered this site during AoC 2018, which was heavily “game” focused with lots of pathing.

                                                                1. 4

                                                                  This isn’t a very interesting or useful link: project status and a git commit summary. This is better.

                                                                  https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.10-Released

                                                                    1. 1

                                                                       At the same time it is just the relevant content, without the megabytes of ads that Phoronix has.

                                                                    1. 1

                                                                      I built prs recently. It supports what OP desires, and is basically a pass client with fast/smart git sync, easy multi-machine/recipient support and with many annoyances fixed.

                                                                      I believe prs’s workflow to be much more convenient, quicker and overall better with its CLI and things like the GTK quick copy widget.

                                                                      https://github.com/timvisee/prs