1. 2

    I do something similar with normal comments at the beginning of the script and usually not as help message but with a separate command called “twoman” because adding option parsing to most scripts doesn’t really make sense.

    https://xn--1xa.duncano.de/twoman.html

    1. 1

      Permit me to connect your comment to @adventureloop’s

    1. 2

      This is not a supervisor in the daemontools, runit and s6 sense.

      It doesn’t actually use signals to notice when a child process dies, and it is prone to the same issues as PID files (sending SIGTERM to the wrong process on PID reuse).

        1. 1

          How do people find these bugs?

          I don’t think it’s just code review. Maybe fuzzing?

          1. 5

            Maybe they have a specialized fuzzer for finding issues with subprocess execution (see the other two vulnerabilities they’ve found, linked at the end of this comment), but I think it’s more likely that they are doing targeted reviews of this kind of bug class.

            From my experience, I would start by searching for code paths that execute programs with user input and then work backwards to see if the user input is validated at all. If not, you already have your bug; otherwise you might have to do a full review of how the input is validated (which they probably did in this case, and that’s how they found the logic error).

            https://www.qualys.com/2019/12/11/cve-2019-19726/local-privilege-escalation-openbsd-dynamic-loader.txt https://www.qualys.com/2019/12/04/cve-2019-19521/authentication-vulnerabilities-openbsd.txt

            1. 3

              Agree. For my own code, I’d just grep for system and exec* calls and review ‘em all. If I were in the business of reviewing other code for this, I think I’d probably write a taint checker to help me look. It feels like LLVM could get you really close to that, these days. It looks like there might be at least the scaffolding for that there already. Last time I had to do it as a one-off on someone else’s code, I modified the codebase I was working with to taint certain variables, then highlight whether any of a group of functions acted on them.

          1. 1

            There was a FreeBSD security advisory the same day. If this repository is the source https://github.com/freenas/os/tree/freenas/11.3-stable, then it looks like the patches are not included.

            1. 3

              Why did you choose to pass in pledges as a null-terminated string? Did you consider adding a length field? What about encoding options as flags in a variable or two, eliminating the parsing step?

              1. 1

                It is using the OpenBSD API. tame(2), the pledge predecessor, used flags. I can’t find a reference on why it was changed to strings.

                1. 1

                  strings are easier to change without breaking code or recompiling.

                  1. 3

                    Can you elaborate on this?

                    1. 1

                      A stringly typed system can ignore values it doesn’t understand, while if you have bitfields and change what one means or need to expand the number of bits, you have changed your API and need to recompile.

                      This is as I understand it.

                      1. 1

                        Ok, so let’s look at a couple of changes one might want to make, and how different APIs handle them. The primary ones are adding, merging, and splitting pledge categories.

                        First, adding a new pledge category: in both cases no code needs to be recompiled. This type of change could occur when adding new syscalls to the kernel. In each case, old pledge calls will still be valid. Old kernels can also ignore new pledge categories, for both strings and bitfields.

                        When merging two (or more) pledge categories, no code needs to be recompiled. However, the bitfield case handles the change more elegantly. Consider merging the pledge categories foo and bar into foobar. With strings, new kernels will need to recognize both older categories in addition to the new (merged) category. Old kernels encountering foobar will kill the process when syscalls from either foo or bar are made. This effectively breaks the API, making merges into a new pledge category in this manner difficult. There is no way to have one set of code work on both new and old kernels using just the merged category. Therefore, the only way to do merges is to have foo imply bar and vice versa. However, this may result in new code which breaks on old kernels because it uses syscalls from bar but only has foo in its pledge string.

                        In the bitfield case, merging new categories is much easier. If the two older categories were PLEDGE_FOO = 1 and PLEDGE_BAR = 2, then a new definition PLEDGE_FOOBAR = 3 can be added. New code can use this symbol. Bugs where new code only sets PLEDGE_BAR or PLEDGE_FOO can occur, but many compilers have the ability to deprecate enums. Now, using the old values for FOO and BAR gives a warning at compile-time. Alternatively, the header writer could just define PLEDGE_BAR and PLEDGE_FOO to both be 3, in addition to defining PLEDGE_FOOBAR. (though this would not affect behaviour, since the kernel would still imply bar from foo and vice versa).

                        Last, let’s look at the case of splitting a pledge category into two new categories (for more fine-grained control). As before, no code needs to be recompiled. To illustrate, consider splitting the pledge category foobar into foo and bar. In order to not break new code running on old kernels, the old category must be included with pledge calls in addition to the new ones. E.g. to maintain compatibility, the new API for pledging just bar would be to pledge both foobar and bar, and for new kernels to then disable foo since it was not also pledged. With strings, each caller must do this manually, and there is a chance of breakage on old kernels if just bar is passed. However, with bitfields, if PLEDGE_FOOBAR = 1 and bits 1 and 2 are unused, one could define PLEDGE_BAR = 3 and PLEDGE_FOO = 5. This prevents incompatibility, while allowing transparent use of the new API. If desired, the old PLEDGE_FOOBAR could be marked as deprecated.

                        Of course, all these changes rely on having spare bits left. Since there are only 18 categories in use, 64 bits provide more than enough room for expansion. The ability to transparently add merges in software is helpful as well. For example, if foo, bar, and baz are commonly pledged together, a constant for foobarbaz could easily be defined in software, with no change to the kernel. With a string API, such a change would be breaking. For these reasons, in addition to not having to parse (potentially unterminated) user-generated strings, I find the design of this API puzzling.

                        1. 1

                          You are ignoring the case where you might want to add more than bits. Maybe instead of just ‘stdio bar’ you want to add extensions that aren’t simple flags; say, for example, you want “~stdio” to mean that children are prevented from inheriting new permissions other than stdio.

                          1. 1

                            then do pledge(PLEDGE_ALL, PLEDGE_STDIO).

                            1. 1

                              I think you missed my point because my example was bad.

                              1. 1

                                Perhaps, but either way, null-terminated strings in a syscall are bad design imo. The composability of a bitfield representation is a real advantage, especially when combined with the C preprocessor. I really can’t think of a case where strings would be more extensible, except if you wanted to add more than 64ish pledges.

              1. 6

                I find it curious that the Blink team at Google takes this action in order to prevent various other teams at Google from doing harmful user-agent sniffing to block browsers they don’t like. Google certainly isn’t the only one, but they’re some of the biggest user-agent-sniffing abusers.

                FWIW, I think it’s a good step; nobody needs to know I’m on Ubuntu Linux using X11 on an x86_64 CPU running Firefox 74 with Gecko 20100101. At most, the Firefox/74 part is relevant, but even that has limited value.

                1. 14

                  They still want to know that. The mail contains a link to the proposed “user agent client hints” RFC, which splits the user agent into multiple more standardized headers the server has to request, making “user-agent sniffing” more effective.

                  1. 4

                    Oh. That’s sad. I read through a bit of the RFC now, and yeah, I don’t see why corporations wouldn’t just ask for everything and have slightly more reliable fingerprinting while still blocking browsers they don’t like. I don’t see how the proposed replacement isn’t also “an abundant source of compatibility issues … resulting in browsers lying about themselves … and sites (including Google properties) being broken in some browsers for no good reason”.

                    What possible use case could a website have for knowing whether I’m on ARM or RISC-V or x86 or x86_64 other than fingerprinting? How is it responsible to let the server ask for the exact model of device you’re using?

                    The spec even contains wording like “To set the Sec-CH-Platform header for a request, given a request (r), user agents MUST: […] Let value be a Structured Header object whose value is the user agent’s platform brand and version”, so there’s not even any space for a browser to offer an anti-fingerprinting setting and still claim to be compliant.

                    1. 4

                      What possible use case could a website have for knowing whether I’m on ARM or RISC-V or x86 or x86_64 other than fingerprinting?

                      Software download links.

                      How is it responsible to let the server ask for the exact model of device you’re using?

                      … Okay, I’ve got nothing. At least the W3C has the presence of mind to ask the same question. This is literally “Issue 1” in the spec.

                      1. 3

                        Okay, I’ve got nothing.

                        I have a use case for it. I’ve a server which users run on an intranet (typically either just an access point, or a mobile phone hotspot), with web browsers running on random personal tablets/mobile devices. Given that the users are generally not technical, they’d probably be able to identify a connected device as “iPad” versus “Samsung S10” if I can show that in the web app (or at least ask around to figure out whose device it is), but will not be able to do much with e.g. an IP address.

                        Obviously pretty niche. I have more secure solutions planned for this; however, I’d like to keep the low barrier to entry that knowing the hardware type from the user agent provides, in addition to those.

                      2. 2

                        What possible use case could a website have for knowing whether I’m on ARM or RISC-V or x86 or x86_64 other than fingerprinting?

                        Benchmarking and profiling. If your site performance starts tanking on one kind of processor on phones in the Philippines, you probably want to know that to see what you can do about it.

                        Additionally, you can build a website with a certain performance budget when you know what your market minimally has. See the Steam Hardware and Software Survey for an example of this in the desktop videogame world.

                        Finally, if you generally know what kinds of devices your customers are using, you can buy a bunch of those for your QA lab to make sure users are getting good real-world performance.

                    2. 7

                      Gecko 20100101

                      Amusingly, this date is a static string — it is already frozen for compatibility reasons.

                      1. 2

                        Any site that offers you/administrators a “login history” view benefits from somewhat accurate information. Knowing the CPU type or window system probably doesn’t help much, but knowing it’s Firefox on Ubuntu combined with a location lookup from your IP is certainly a reasonable description to identify if it’s you or someone else using the account.

                        1. 2

                          There are times I’d certainly like sites to know I’m using a minority browser or a minority platform, though. Yes, there are downsides because of the risk of fingerprinting, but it’s good to remind sites that people like me exist.

                          1. 1

                            Though the audience here will play the world’s tiniest violin for those affected, the technical impact aspect may be of interest.

                            Version numbering is a useful low-hanging-fruit method in the ad-tech industry for catching fraud. A lot of bad actors either use just old browsers[1] or skew browser usage ratios; though of course most ‘fraud’ detection methods are naive and just assume anything older than two major releases is fraud, ignoring details such as LTS releases.

                            [1] Persuade the user to install a ‘useful’ tool and it sits as a background task burning ads, or acts as a replacement for the user’s regular browser (never updated).

                            1. 5

                              “Just speak Chinese.” Source: I’ve tried DOAS.

                              1. 3

                                If only Chinese had such nice man pages…

                              2. 1

                                What if you’re not a BSD user?

                                1. 1

                                  doas is portable; I use it on Red Hat, CentOS and Oracle Linux systems, and Ubuntu should also not be a problem.

                                    1. 2

                                      Nothing is perfect, and doas is quite young compared to sudo (about 15 years difference).

                              1. 8

                                This is especially nasty on Linux, where inotify(7) events for write(2) and truncate(2)/ftruncate(2) both result in IN_MODIFY. To make it worse, open(2) with O_TRUNC doesn’t result in IN_MODIFY, only IN_OPEN events, and there is no way to distinguish between O_RDONLY, O_WRONLY and/or O_RDWR.

                                By the time tail receives and handles the events and uses stat(2) to try to detect truncation, the file could already have grown back to its previous size or larger, and there is no way to tell whether the file was truncated at all.

                                1. 3

                                  Because [Podman] doesn’t need a daemon, and uses user namespacing to simulate root in the container, there’s no need to attach to a socket with root privileges, which was a long-standing concern with Docker.

                                  Wait, Docker didn’t use user namespacing? I thought that was the whole point of Linux containers.

                                  1. 7

                                    There are two different things called user namespaces: CLONE_NEWUSER, which creates a namespace that doesn’t share user and group IDs with the parent namespace, and the kernel configuration option CONFIG_USER_NS, which allows unprivileged users to create new namespaces.

                                    Docker and the tools from the article both use user namespaces as in CLONE_NEWUSER.

                                    Docker by default runs as a privileged user and can create namespaces without CONFIG_USER_NS. I’m not sure if you can run Docker as an unprivileged user because of other features, but technically it should be able to create namespaces without root if CONFIG_USER_NS is enabled.

                                    The tools described in the article just create a namespace and then exec into the init process of the container. Because they are not daemons and don’t do much more than that, they can run unprivileged if CONFIG_USER_NS is enabled.

                                    Edit: Another thing worth mentioning, in my opinion: UID and GID maps (which are required if you want more than one UID/GID in the container) can only be written by root, and tools like podman use two setuid binaries from shadow (newuidmap(1) and newgidmap(1)) to do that.

                                    1. 1

                                      It can, but for a long time it was off by default. Not sure if that’s still true.

                                    1. 3

                                      as always, feel free to submit feedback, criticism or issues!

                                      1. 3

                                        Just some nitpicking on dependencies:

                                        • When depending on a Git repository (as you do with your colored dependency), it is good practice to point to a particular commit or tag using the rev or tag parameter instead of the branch, as the branch’s HEAD can change but a commit or tag can only point to one specific state of the repository.
                                        • When publishing a binary (executable) crate, it is good practice to publish the Cargo.lock along with the crate. You can find the reasoning on why you should publish this file in Cargo’s FAQ
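For instance, a pinned Git dependency might look like this in Cargo.toml (the repository path and rev hash here are placeholders, not real values):

```toml
[dependencies]
# Pin to one exact commit (or use `tag = "..."`) so the build is
# reproducible even if the branch moves:
colored = { git = "https://github.com/user/colored", rev = "0123abc" }
```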

                                        I will try it later though! I’ve always complained that some prompt frameworks use scripting languages like Python or Ruby that have slow startup times, so this project seems interesting and a cool way to customize my ugly and boring prompt.

                                        1. 1

                                          You kind of cover this, but the Cargo.lock would capture the commit that the Git dependency was at when the lock file was generated. So if the Cargo.lock were checked in, everyone would build against the same commit.

                                        2. 2

                                          I already implemented a similar tool some months ago, rusty-prompt; maybe you can get some inspiration out of it.

                                          1. 1

                                            sure! thanks for sharing!

                                          2. 1

                                            My bashes (both the one that comes with macOS and the latest 5.0.7 from brew) seem to cache PS1 somehow, making pista break quite a lot.

                                            ➜  ~ /usr/local/bin/bash
                                            bash-5.0$ PS1=$(pista)
                                            ~
                                            $ cd git
                                            ~
                                            $ PS1=$(pista)
                                            ~/git
                                            $ cd nomad
                                            ~/git
                                            $ PS1=$(pista)
                                            ~/g/nomad master ·
                                            $
                                            
                                            1. 2

                                              Try PS1='$(pista)'. What’s happening is that pista is getting called once, when you set PS1, and then never again. The single quotes force PS1 to literally contain the expansion, which then gets expanded (and thereby calls pista) each time the prompt is printed.

                                              1. 2

                                                Ohhh, no :( Of course. I feel like I should step far away from the computer now.

                                                1. 3

                                                  looks like the installation instructions were faulty!

                                                  1. 1

                                                    Oh, whew, thanks for that. Now I feel slightly less stupid :)

                                              2. 1

                                                Can’t seem to replicate this, but it looks like a PROMPT_COMMAND thing.

                                              3. 1

                                                @hostname is nice to have if $SSH_CONNECTION is set.

                                                1. 4

                                                  i have plans to extend pista into a library, so you could build your own prompts with pista’s functions. maybe ill add a hostname function :^)

                                              1. 2

                                                See Firejail, if you haven’t already. It’s a sandbox for untrusted applications.

                                                With it, you can put firejail --private /usr/bin/firefox "$@" in an executable in your PATH, to spawn a safer amnesiac session when needed. Firefox + firejail without the --private flag is also practically indistinguishable from running without firejail.

                                                1. 5

                                                  Everyone interested in firejail for “untrusted” software/security reasons should read this oss-sec thread.

                                                  1. 1

                                                    Can you guide us there?

                                                    1. 3

                                                      firejail, for something that is supposed to improve security, has had quite a few CVEs:

                                                      https://www.cvedetails.com/vulnerability-list/vendor_id-16191/Firejail-Project.html

                                                      Some of the CVEs are easy to exploit and have a high impact.

                                                      1. 2

                                                        But that’s still better than nothing, as long as you understand it’s not perfectly ‘secure’, right? I think the problem would be that folks may not understand it has some major CVEs and expect it to be a complete solution (when it is not).

                                                        1. 3

                                                          It really is not. Firefox itself already sandboxes its processes; it’s not perfect and there are things to improve, but there are people reviewing it and looking for bugs. Firejail is a setuid binary, which already gains more privileges than Firefox would ever have. Firejail has introduced privilege escalations and other security issues, which led to root just by having firejail installed and accessible to a user.

                                                          1. 1

                                                            Indeed. On the typical single-user desktop giving applications full access to the user account is just as bad as giving them root access because a root escalation does not provide significant benefits to an attacker.

                                                            Firejail will not block an attacker that is both skilled and motivated but at least it effectively contains a spammy or nosy application.

                                                            Any better solution?

                                                            1. 1

                                                              Indeed. On the typical single-user desktop giving applications full access to the user account is just as bad as giving them root access.

                                                              This is funny, because firejail is setuid and runs as root until it drops privileges again (some CVEs result in root for any user who can access the firejail binary). Firefox does sandboxing itself, using the same or similar techniques, but would never gain root privileges.

                                                              Firejail is way too complex, and the design doesn’t really look like it was built with security in mind. It does way too many things in one big setuid binary, which elevates privileges to root from a normal user; this is not how it should be designed. The perfect solution would be something designed with least privilege in mind that does things like dbus or Xorg forwarding/proxying in a completely separate low-privilege process.

                                                              There are things like bubblewrap, but they are not as easy to use for desktop applications because they are not designed around them. You can still make it work by bind-mounting the Xorg socket into the namespace, or by letting the application connect to a separate server like Xephyr so the sandboxed application doesn’t have access to all other windows. Other things like dbus would also have to be handled manually.

                                                          2. 1

                                                            Yeah, anything will have vulnerabilities, but what are the odds of anything around that targets users on Firefox + Firejail? And then the odds of you actually getting hit by it?

                                                            1. 1

                                                              Looks good, but my current main OS is Windows and I can’t seem to find a Windows version in the repo. Is there one?

                                                        2. 3

                                                          Firejail doesn’t have the best security record, and it only works on Linux.

                                                        1. 18

                                                          I’d really like to know why they list requirements that fall squarely within musl libc’s core design goals, yet post this like it’s a novel suggestion. Perhaps they have reasons for skipping musl, but it seems lazy or contemptuous not to at least mention why they would prefer to avoid existing glibc alternatives.

                                                          1. 10

                                                            The only thing that comes to mind is that Google doesn’t own/control musl, so Google’s proposed changes might not be accepted by musl. With their own libc, Google can introduce things that other libc implementations would never merge.

                                                            1. 7

                                                              This is easy to say about any project but I found this post originally via twitter from the musl author: https://twitter.com/RichFelker/status/1143292587576635402

                                                              There has likely been no discussion of what might be accepted. If the merge problem is really the issue, it probably doesn’t belong in LLVM either. Good riddance to throw-it-over-the-fence-style OSS, if you ask me. Google can keep it to themselves if they’re incapable of this kind of conversation as a corporation (not trying to take offense at developers who may be stuck between two hard places as employees).

                                                              1. 6

                                                                This is easy to say about any project

                                                                Well, yeah, because it’s true a lot of the time. Happens all the time and it’s totally understandable. It really is not even remotely a stretch to imagine that the goals of MUSL wouldn’t align with the goals of Google.

                                                                I once wondered whether I should try to contribute a faster version of memchr to MUSL, but just looking at the tickets on that project made me immediately reconsider. Which isn’t to say MUSL is bad, but it’s to say that MUSL clearly has a specific set of goals in mind, and they do not always line up with everyone else’s goals.

                                                                1. 7

                                                                  There is actually an interesting mailing list thread with Googlers on the musl mailing list from a few years back, where they considered including musl in Chromium, which fell through in the end.

                                                                  TL;DR: The lawyers/legal team had a problem with some files/headers that are in the public domain and requested a re-license of those files.

                                                                  https://www.openwall.com/lists/musl/2016/03/15/1

                                                                  1. 6

                                                                    Interesting. I’ve been on the bad end of a bunch of Googlers and their licensing concerns too. Not a pleasant experience.

                                                                2. 3

                                                                  This is easy to say about any project

                                                                  Well, yeah, but it’s not every day a major company decides to go off and do their own implementation (or fork) of (insert thing here with some widely available OSS implementations), and Google has a history of doing this (BoringSSL immediately comes to mind).

                                                              2. 7

                                                                Rich Felker (of musl) posted a follow-up in the thread, taking the viewpoint that: 1) LLVM shouldn’t build its own from-scratch libc, and preferably 2) shouldn’t ship a libc at all, whether a new one or musl or otherwise.

                                                                1. 3

                                                                  isn’t musl linux-only?

                                                                  1. 2

                                                                    Not technically, but it seems to have been designed with Linux in mind, and using it with other kernels can require a lot of effort.

                                                                1. 2

                                                                  Without knowing much about the implementation, the bidirectional wormholes look nice on the surface.

                                                                  1. 2

                                                                    I think a new take on bind and maybe union mounts would have been cooler.

                                                                    Symlinks already make things complicated, but now you have two types, and who knows whether there is a need for something like lstat(2) or additional flags for open(2) like O_NOFOLLOW.

                                                                    Edit: That firmlinks only work for volume groups is limiting compared to bind mounts. But I guess this is why they don’t really need extra syscalls or flags to deal with them: they are only intended to map to the “Data” volumes in volume groups.

                                                                  1. 9

                                                                    When I switched from DuckDuckGo bangs to Firefox keyword searches, I found %S by coincidence for search terms without URL escaping. I couldn’t find this in any documentation. This allows me to just add !archive in front of the URL to get redirected to the web archive:

                                                                    https://web.archive.org/web/*/%S
                                                                    
                                                                    1. 1

                                                                      That’s really really useful! Thank you!

                                                                    1. 15

                                                                      After the recent announcement of the F5 purchase of NGINX we decided to move back to Lighttpd.

                                                                      Would be interesting to know why, instead of just a blog post that is basically an annotated lighttpd configuration.

                                                                      1. 6

                                                                        If history has taught us anything, the timeline will go a little something like this. New cool features will only be available in the commercial version, because $$. The license will change, because $$. Dead project.

                                                                        And it’s indeed an annotated lighttpd configuration, as this is roughly a replication of the nginx config we were using and… the documentation of lighttpd isn’t that great. :/

                                                                        1. 9

                                                                          The lighttpd documentation sucks. Or at least it did three years ago when https://raymii.org ran on it. Nginx is better, but still missing comprehensive examples. Apache is best on the documentation front.

                                                                          I wouldn’t move my entire site to another webserver anytime soon (it runs nginx), but for new deployments I regularly just use Apache. 2.4 is much, much faster and does everything you want, and it being open source and not bound to a corporation helps.

                                                                          1. 1

                                                                            Whatever works for you. We used to run all our websites on lighttpd, before the project stalled. So it seemed a good idea to move back before nginx frustration kicked in. :)

                                                                            1. 3

                                                                              I’m a bit confused. You’re worried about nginx development stalling or going dead in the future, so you switched to one that’s already stalled in the past? Seems like the same problem.

                                                                              Also, I thought Nginx was open source. If it is, people wanting to improve it can contribute to and/or fork it. If not, the problem wouldn’t be the company.

                                                                              1. 2

                                                                                The project is no longer stalled, and if it stalls again I’m going to move, again. Which open-source project did well after the parent company got acquired?

                                                                                1. 3

                                                                                  I agree with you that there’s some risk after a big acquisition. I didn’t know lighttpd was active again. That’s cool.

                                                                                  1. 2

                                                                                    If it was still as dead as it was a couple of years ago I would have continued my search. :)

                                                                                    1. 1

                                                                                      Well, thanks for the tip. I was collecting lightweight servers and services in C to use for tests with analysis and testing tools later. Lwan was the main one for web. Lighttpd seems like a decent, higher-feature server. I read Nginx was a C++ app. That means I have less tooling to use on it unless I build a C++-to-C compiler. That’s… not happening… ;)

                                                                                      1. 3

                                                                                        nginx is 97% C with no C++ so you’re good.

                                                                                        1. 1

                                                                                          Thanks for the correction. What’s the other 3%?

                                                                                          1. 2

                                                                                            Mostly Vim script with a tiny bit of ‘other’ (according to GitHub, so who knows how accurate that is).

                                                                                            1. 1

                                                                                              Alright. I’ll probably run tools on both then.

                                                                                              1. 2

                                                                                                Nginx was “heavily influenced” by Apache 1.x; a lot of the same architecture, like memory pools etc. FYI.

                                                                                  2. 2

                                                                                    SuSE has been going strong, and has been acquired a few times.

                                                                                    1. 1

                                                                                      SuSE is not really an open-source project though, but a distributor.

                                                                                      1. 3

                                                                                        They do have plenty of open-source projects of their own, though. Like OBS, used by plenty outside of SuSE too.

                                                                            2. 5

                                                                              It’s a web proxy with a few other features, in at least 99% of all cases.

                                                                              What cool new features are people using?

                                                                              Like, reading a few books on the topic suggested to me that despite the neat things Nginx can do we only use a couple workhorses in our daily lives as webshits:

                                                                              • Virtual hosts
                                                                              • Static asset hosting
                                                                              • Caching
                                                                              • SSL/Let’s Encrypt
                                                                              • Load balancing for upstream servers
                                                                              • Route rewriting and redirecting
                                                                              • Throttling/blacklisting/whitelisting
                                                                              • Websocket stuff

                                                                              Like, sure, you can do streaming media, weird auth integration, mail, direct database access, and other stuff, but the vast majority of devs are using a default install or some Docker image. The bread-and-butter features? Those aren’t going away.

                                                                              If the concern is that goofy new features like QUIC or HTTP/3 or whatever will only be available under a commercial license… maaaaaybe we should stop encouraging churn in protocols that work well enough?

                                                                              It just seems like much ado about nothing to me.

                                                                              1. 6

                                                                                maaaaaybe we should stop encouraging churn in protocols that work well enough?

                                                                                They don’t work well enough on mobile networks. In particular, QUIC’s main advantage over TCP is it directly addresses the issues caused by TCP’s congestion-avoidance algorithm on links with rapidly fluctuating capacities. I share your concern that things seem like they’re changing faster than they were before, but it’s not because engineers are bored and have nothing better to do.

                                                                              2. 4

                                                                                New cool features will only be available in the commercial version, because $$.

                                                                                Isn’t that already the case with nginx?

                                                                            1. 5

                                                                              Am I the only one who uses set -g alternate-screen off in tmux to keep the output of programs like vim on the screen, which allows me to peek back by just scrolling up?

                                                                              1. 9

                                                                                And http://libdill.org/, from the same authors, but without sticking to “go-style”.

                                                                                1. 4

                                                                                  Also libdill has support for structured concurrency, whereas libmill doesn’t.

                                                                                1. 1

                                                                                  I wonder: if shebang lines have a maximum size of 128 bytes, why do those Nix scripts have more than that? Won’t they always be truncated, assuming a Linux version < 5.0-rc1? What’s the point of having those long lines if they will never be used for whatever purpose was intended in the first place?

                                                                                  1. 3

                                                                                    perl reads and parses the shebang line itself again when executed, and doesn’t use the command line passed by the kernel if it sees that it’s truncated.

                                                                                    https://lobste.rs/s/zmxyhk/case_supersized_shebang#c_dfkskv
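                                                                                    The idea can be illustrated with a minimal sketch (in Python, not Perl’s actual code): instead of trusting the possibly-truncated argument string the kernel built from the first 128 bytes, the interpreter re-reads the #! line from the script file itself.

```python
def read_shebang(script_path):
    """Return the interpreter and arguments parsed from the script's own
    #! line, ignoring the (possibly truncated) string the kernel passed.
    Illustration only: a real interpreter would compare this against the
    kernel-supplied argv before deciding to override it."""
    with open(script_path) as f:
        line = f.readline().rstrip("\n")
    if not line.startswith("#!"):
        return None
    # Split the full shebang line into interpreter + arguments; the kernel,
    # by contrast, only sees the first BINPRM_BUF_SIZE (128) bytes.
    return line[2:].strip().split()
```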

                                                                                    1. 1

                                                                                      Thanks for answering. I assume such behaviour only exists inside the perl interpreter; is it different in other versions of perl (say version 1, for example)? Are there other dynamic-language interpreters that do this? It seems a little weird as a hack, and thus non-standard and incorrect, but I may be the one who’s incorrect, because I don’t know much else. By the way, thanks for your work on Void.

                                                                                  1. 2

                                                                                      Is someone aware of an efficient shuffling algorithm that is biased, for something like shuffling playlists? I basically want to put tracks into nested buckets (artist → album), shuffle each bucket, and then spread the tracks out: tracks from the same artist should be distributed as far apart as possible, and likewise for tracks from the same album.

                                                                                    There are implementations, but I would like to know if there is something better.

                                                                                      Spotify used to do a Fisher–Yates shuffle but switched to an algorithm like this.

                                                                                      http://keyj.emphy.de/balanced-shuffle/
                                                                                      https://labs.spotify.com/2014/02/28/how-to-shuffle-songs/
                                                                                      https://cjohansen.no/a-better-playlist-shuffle-with-golang/
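                                                                                      The approach those links describe can be sketched roughly like this (a hypothetical balanced_shuffle, not Spotify’s actual implementation): shuffle each bucket, then give the i-th of a bucket’s n tracks a fractional position near i/n with a random per-bucket phase and a little jitter, so buckets interleave instead of clumping.

```python
import random
from collections import defaultdict

def balanced_shuffle(tracks, key=lambda t: t[0]):
    """Bucket tracks by key (e.g. artist), shuffle each bucket, then
    spread each bucket's tracks evenly over [0, 1) and sort by position."""
    buckets = defaultdict(list)
    for t in tracks:
        buckets[key(t)].append(t)
    positioned = []
    for group in buckets.values():
        random.shuffle(group)
        n = len(group)
        offset = random.random() / n          # random phase per bucket
        for i, t in enumerate(group):
            jitter = random.uniform(-0.2, 0.2) / n
            # (position, insertion counter, track); the counter breaks ties
            positioned.append((offset + i / n + jitter, len(positioned), t))
    positioned.sort(key=lambda p: (p[0], p[1]))
    return [t for _, _, t in positioned]
```

For artist-then-album spreading, the same function could be applied recursively: first within each artist’s bucket keyed by album, then across artists.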

                                                                                    1. 2

                                                                                        Spontaneous thought: Fisher–Yates picks the next element uniformly from the range [i, n] – why couldn’t you pick using another distribution, defined by similarity to the previously locked song? With dynamic programming you should be able to get a reasonably efficient and optimal solution.

                                                                                      1. 2

                                                                                          I think that’s pretty much the state of the art. You can get better results in some cases by starting with a larger target array, say 4× the number of inputs. This reduces the number of collisions.

                                                                                          I didn’t fully follow the last example, but I think it’s possible to do better than trying to pile the first track into index 0 every time.