1. 2

    I appreciate most of the arguments, but the counter-point around security misses the point. For distributions, it is far easier to apply a patch to a single package; rebuilding or not is not really the difficulty. If many applications bundle/pin specific versions, distributions need to patch each version. Some of those versions may be very old, and the patch may be more difficult to apply. This is a lot of work. Distributions cannot just bump the dependency, as that goes against the stability promise and introduces bugs and changes. Distributions have to support what they ship for around 5 years (because many users use distributions for this exact purpose), while developers usually like to support things for a few months.

    Unfortunately, neither side wants to move an inch. When packaging for Debian, I would appreciate being able to bundle dependencies instead of packaging every single dependency, but there would need to be some way to guarantee we are not just multiplying the amount of work we have to provide in the future. However, this is not new: even with C, many devs do not like distributions freezing their software for 5 years.

    1. 11

      The real “issue” from the distro perspective is that they’re now trying to package ecosystems that work completely differently than the stuff they’re used to packaging, and specifically ecosystems where the build process is tied tightly to the language’s own tooling, rather than the distro’s tooling.

      This is why people keep talking about distros being stuck on twenty-years-ago’s way of building software. Or, really, stuck on C’s way of building software. C doesn’t come with a compiler, or a build configuration tool, or a standard way to specify dependencies and make sure they’re present and available either during build or at runtime. C is more or less just a spec for what the code ought to do when it’s run. So distros, and everybody else doing development in C, have come up with their own implementations for all of that, and grown used to that way of doing things.

      More recently-developed languages, though, treat a compiler, build tool, and dependency/packaging support as basic requirements, and integrate tightly with their standard tooling. Which then means that the distro’s existing and allegedly language-agnostic tooling doesn’t work, or at least doesn’t work as well, and may not have been as language-agnostic as they hoped.

      Which is why so many of the arguments in these threads have been red herrings. It’s not that “what dependencies does this have” is some mysterious unanswerable question in Rust, it’s that the answer to the question is available in a toolchain that isn’t the one the distro wants to use. It’s not that “rebuild the stuff that had the vulnerable dependency” is some nightmare of tracking down impossible-to-know information and hoping you caught and patched everything, it’s that it’s meant to be done using a toolchain and a build approach that isn’t the one the distro wants to use.

      And there’s not really a distro-friendly thing the upstream developers can do, because each distro has its own separate preferred way of doing this stuff, so that’s basically pushing the combinatorial nightmare upstream and saying “take the information you already provide in your language’s standard toolchain, and also provide and maintain one additional copy of it for each distro, in that distro’s preferred format”. The only solution is for the distros to evolve their tooling to be able to handle these languages, because the build approach used in Rust, Go, etc. isn’t going away anytime soon, and in fact is likely to become more popular over time.

      1. 5

        The only solution is for the distros to evolve their tooling to be able to handle these languages

        The nixpkgs community has been doing this a lot. Their response to the existence of other build tools has been to write things like bundix, cabal2nix and cargo2nix. IIRC people (used to) use cabal2nix to make the whole of Hackage usable in nixpkgs?

        From the outside it looks like the nix community’s culture emphasizes a strategy of enforcing policy by making automations whose outputs follow it.

        1. 4

          Or, really, stuck on C’s way of building software.

          I think it’s at least slightly more nuanced than that. Most Linux distributions, in particular, have been handling Perl modules since their earliest days. Debian/Ubuntu use them fairly extensively even in base system software. Perl has its own language ecosystem for building modules, distributing them in CPAN, etc., yet distros have generally been able to bundle Perl modules and their dependencies into their own package system. End users are of course free to use Perl’s own CPAN tooling, but if you apt-get install something on Debian that uses Perl, it doesn’t go that route, and instead pulls in various libxxx-perl packages. I don’t know enough of the details to know why Rust is proving more intractable than Perl though.

          1. 6

            I don’t know enough of the details to know why Rust is proving more intractable than Perl though

            There is a big difference between C, Perl, Python on the one side and Rust on the other.

            The former have a concept of a “search path”: a global namespace where all libraries live. That’s the include path for C, PYTHONPATH for Python, and @INC for Perl. To install a library, you put it into some blessed directory on the file system, and it becomes globally available. The corollary is that everyone is using the same version of a library: if you try to install two different versions, you’ll get a name conflict.

            Rust doesn’t have a global search path / global namespace. “Installing a Rust library” is not a thing. Instead, when you build a piece of Rust software, you need to explicitly specify the path to every dependency. Naturally, doing this “by hand” is hard, so the build system (Cargo) has a lot of machinery for wiring a set of interdependent crates together.
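
            A quick shell sketch of the contrast (the rustc invocation is illustrative of what Cargo generates under the hood; the .rlib path is made up):

```shell
# Perl and Python resolve libraries through a global search path at run time:
perl -e 'print join("\n", @INC), "\n"'     # Perl's library search path
python3 -c 'import sys; print(sys.path)'   # Python's equivalent

# Rust has no such global namespace. rustc is handed an explicit path for
# every dependency; Cargo automates emitting flags like this (path made up):
#   rustc main.rs --extern serde=target/debug/deps/libserde-1a2b3c.rlib
```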

            1. 2

              there’s a global namespace where all libraries live

              Yes, this is one of the biggest differences. Python, Perl, etc. come out of the Unix-y C-based tradition of not having a concept of an “application” you run or a “project” you work on, but instead only of a library search path that’s assumed to be shared by all programs in that language, or at best per-user unique so that one user’s set of libraries doesn’t pollute everyone else’s.

              Python has trended away from this and toward isolating each application/project – that’s the point of virtual environments – but does so by just creating a per-virtualenv search path.

              More recently-developed languages like Rust have avoided ever using the shared-search-path approach in the first place, and instead isolate everything by default, with its own project-local copies of all dependencies.

              (the amount of code generation/specialization that happens at compile time in Rust for things like generics is a separate issue, but one that distros – with their ability to already handle C++ – should in theory not have trouble with)

          2. 4

            This is why people keep talking about distros being stuck on twenty-years-ago’s way of building software. Or, really, stuck on C’s way of building software. C doesn’t come with a compiler, or a build configuration tool, or a standard way to specify dependencies and make sure they’re present and available either during build or at runtime. C is more or less just a spec for what the code ought to do when it’s run. So distros, and everybody else doing development in C, have come up with their own implementations for all of that, and grown used to that way of doing things.

            More than that, I’d say distro package managers are effectively language-specific package managers built around autotools+C.

        1. 7

          this is remarkable!

          for the sake of my understanding, what are the other popular options for installing a drop-in C/C++ cross compiler? A long time ago I used Sourcery CodeBench, but I think that was a paid product

          1. 7

            Clang is a cross-compiler out of the box, you just need headers and libraries for the target. Assembling a sysroot for a Linux or BSD system is pretty trivial, just copy /usr/{local}/include and /usr/{local}/lib and point clang at it. Just pass a --sysroot={path-to-the-sysroot} and -target {target triple of the target} and you’ve got cross compilation. Of course, if you want any other libraries then you’ll also need to install them. Fortunately, most *NIX packaging systems are just tar or cpio archives, so you can just extract the ones you want in your sysroot.
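
            For the record, a minimal sketch of that workflow (the target triple, host name, and paths are examples, not something canonical):

```shell
# Assemble a sysroot by copying headers and libraries from the target
# (hypothetical host "target-box"; add /usr/local/* too if you use it):
mkdir -p sysroot/usr
scp -r target-box:/usr/include sysroot/usr/include
scp -r target-box:/usr/lib     sysroot/usr/lib

# Cross-compile with stock clang; no separate cross toolchain required:
clang --target=aarch64-linux-gnu --sysroot="$PWD/sysroot" -o hello hello.c
```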

            It’s much harder for the Mac. The license for the Apple headers, linker files, and everything else that you need explicitly prohibits this kind of use. I couldn’t see anything in the Zig documentation that explains how they get around this. Hopefully they’re not just violating Apple’s license agreement…

            1. 3

              Zig bundles Darwin’s libc, which is licensed under APSL 2.0 (see: https://opensource.apple.com/source/Libc/Libc-1044.1.2/APPLE_LICENSE.auto.html, for example).

              APSL 2.0 is both FSF and OSI approved (see https://en.wikipedia.org/wiki/Apple_Public_Source_License), which makes me doubt that this statement is correct:

              The license for the Apple headers, linker files, and everything else that you need, explicitly prohibit this kind of use.

              That said, if you have more insight, I’m definitely interested in learning more.

              1. 1

                I remember some discussion about these topics on Guix mailing lists, arguing convincingly why Guix/Darwin isn’t feasible for licensing issues. Might have been this: https://lists.nongnu.org/archive/html/guix-devel/2017-10/msg00216.html

              2. 1

                The license for the Apple headers, linker files, and everything else that you need, explicitly prohibit this kind of use.

                Can’t we doubt the legal validity of such prohibition? Copyright often doesn’t apply where it would otherwise prevent interoperability. That’s why we have third party printer cartridges, for instance.

                1. 2

                  No, interoperability is an affirmative defence against copyright infringement but it’s up to a court to decide whether it applies.

              3. 4

                When writing the blog post I googled a bit about cgo specifically and the only seemingly general solution for Go I found was xgo (https://github.com/karalabe/xgo).

                1. 2

                  This version of xgo does not seem to be maintained anymore, I think most xgo users now use https://github.com/techknowlogick/xgo

                  I use it myself, and although the tool is very heavy, it works pretty reliably and does what is advertised.

                  1. 2

                    Thanks for mentioning this @m90. I’ve been maintaining my fork for a while, and just last night I automated creating PRs when new versions of Go are detected, to reduce the time to a new release even more.

                2. 3

                  https://github.com/pololu/nixcrpkgs will let you write nix expressions that will be reproducibly cross-compiled, but you also need to learn nix to use it. The initial setup and the learning curve are a lot more demanding than zig cc and zig c++.

                  1. 3

                    Clang IIRC comes with all triplets (that specify the target, like powerpc-gnu-linux or whatever) enabled OOTB. You can then just specify what triplet you want to build for.

                    1. 2

                      But it does not include the typical build environment of the target platform. You still need to provide that. Zig seems to bundle a libc for each target.

                      1. 2

                        I have to wonder how viable this will be when your targets become more broad than Windows/Linux/Mac…

                        1. 6

                          I think the tier system provides some answers.

                          1. 3

                            One of the points there is that libc is available when cross-compiling.

                            On *NIX platforms, there are a bunch of things statically linked into every executable that provide what you need for things like getting to main. These used to be problematic for anything other than GCC to use, because the GCC exemption to GPLv2 only allowed you to ignore the GPL if the thing that inserted them into your program was GCC. In GCC 4.3 and later, the GPLv3 exemption extended this to any ‘eligible compilation process’, which allows them to be used by other compilers / linkers. I believe most *BSD systems now use code from NetBSD (which rewrote a lot of the CSU stuff) and LLVM’s compiler-rt. All of these are permissively licensed.

                            If you’re dynamically linking, you don’t actually need the libc binary, you just need something that has the same symbols. Apple’s ld64 supports a text file format here so that Apple doesn’t have to ship all of the .dylib files for every version of macOS and iOS in their SDKs. On ELF platforms, you can do a trick where you strip everything except the dynamic symbol tables from the .so files: the linker will still consume them and produce a binary that works if you put it on a filesystem with the original .so.
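
                            A small demonstration of that ELF trick, assuming a Unix-like system with a C compiler (file and symbol names are made up): plain strip drops the static symbol table and debug info, but the dynamic symbol table survives because the runtime loader needs it, so the stripped library still works as a link-time stub:

```shell
# Tiny library plus a program that links against it (names are made up):
echo 'int answer(void) { return 42; }' > foo.c
printf 'int answer(void);\nint main(void){return answer()==42?0:1;}\n' > main.c

cc -shared -fPIC foo.c -o libfoo.so
strip libfoo.so                    # drops .symtab/debug info; .dynsym stays
cc main.c -o main -L. -lfoo        # linking against the stripped "stub" works
LD_LIBRARY_PATH=. ./main           # runs against the real (here: same) library
```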

                            As far as I am aware, macOS does not support static linking for libc. They don’t ship a libc.a, and their libc.dylib links against libSystem.dylib, which is the public system call interface (and does change between minor revisions, which broke every single Go program, because Go ignored the rules). If I understand correctly, a bunch of the files that you need to link a macOS or iOS program have a license that says that you may only use them on a Mac. This is why the Visual Studio Mac target needs a Mac connected on the network to remotely access and compile on, rather than cross-compiling on a Windows host.

                            I understand technically how to build a cross-compile C/C++ toolchain: I’ve done it many times before. The thing I struggle with on Zig is how they do so without violating a particularly litigious company’s license terms.

                            1. 2

                              This elucidates a lot of my concerns better than I could have. I have a lot of reservations about the static linking mindset people get themselves into with newer languages.

                              To be specific on the issue you bring up: Most systems that aren’t Linux either heavily discourage static libc or ban it - and their libcs are consistent unlike Linux’s, so there’s not much point in static libc. libc as an import library that links to the real one makes a lot of sense there.

                  1. 8

                    Tomorrow seems to be a very bad day for all those poor souls who didn’t have the time/resources to switch to py3 yet. Fortunately it can easily be fixed with pip<21, but it will definitely add some additional grey hairs to some heads.

                    1. 7

                      As one of those poor souls, thanks. We have eight years of legacy code that Just Works and so seldom gets touched, and a major 3rd party framework dependency that hasn’t updated to Python 3 either. We just got permission and funding to form a new engineering sub-group to try to deal with this sort of thing, and upper management is already implicitly co-opting it to chase new shinies.

                      1. 9

                        Python 3.0 was released in 2008. I personally find it hard to feel sympathy for anyone who couldn’t find time in the last twelve years to update their code, especially if it’s code they are still using today. Even more so for anyone who intentionally started a Python 2 project after the 3.0 ecosystem had matured.

                        1. 9

                          Python 2.7 was released in 2010, Python 3.3 in 2012. Python 2.6’s last release was in 2013; only from that date could people easily release stuff compatible with both Python 2 and Python 3. You may also want to take into consideration the end-of-support dates of some of the distributions shipping Python 2.6 and not Python 2.7 (like Debian Squeeze, 2016).

                          I am not saying that 8 years is too fast, but Python 3.0 release date is mostly irrelevant as the ecosystem didn’t use it.

                          1. 7

                            Python 3.0 was not something you wanted to use; it took several releases before Python 3 was really ready for people to write programs on. Then it took longer for good versions of Python 3 to propagate into distributions (especially long term distributions), and then it took longer for people to port packages and libraries to Python 3, and so on and so forth. It has definitely not been twelve years since the ecosystem matured.

                            Some people do enough with Python that it’s sensible for them to build and maintain their own Python infrastructure, so always had the latest Python 3. Many people do not and so used supplied Python versions, and may well have stable Python code that just works and they haven’t touched in years (perhaps because they are script-level infrastructure that just sits there working, instead of production frontend things that are under constant evolution because business needs keep changing).

                            1. 4

                              Some of our toolchain broke in the last few weeks. We ported to python3 ages ago, but chunks of infrastructure still support both, and some even still default to 2. The virtualenv binary in Ubuntu 18.04 does that; and that’s a still-supported Ubuntu version, and the default runner for GitHub CI.

                              I think python2-related pain will continue for years to come even for people who have done the due diligence on their own code.

                              1. 4

                                Small tip regarding virtualenv: since Python 3.3, equivalent functionality comes bundled with Python as the venv module, so you can just use python -m venv instead of virtualenv; then you are certain it matches the Python version you are using.
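
                                For instance (assuming a Unix shell; the directory name .venv is just a convention):

```shell
python3 -m venv .venv          # venv ships with the interpreter since 3.3
. .venv/bin/activate
python --version               # matches the python3 that created the venv
python -m pip --version        # pip scoped to this environment
deactivate
```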

                                1. 1

                                  virtualenv has some nice features which do not exist in venv. One example is the activate_this.py script, which can be used for configuring a remote environment, similar to what pytest_cloud does.

                                  1. 1

                                    virtualenv has some nice features which do not exist for venv

                                    Huh, thanks for pointing that out. I haven’t been writing so much Python in the last few years, and I totally thought venv and virtualenv were the same thing.

                              2. 4

                                Consider, at a minimum, the existence of PyPy; PyPy’s own position is that PyPy will support Python 2.7 forever because PyPy is written in RPython, a strict subset of Python 2.7.

                                Sympathy is not required; what you’re missing out on is an understanding that Python is not wholly under control of the Python Software Foundation. By repeatedly neglecting PyPy, the PSF has effectively forced them to create their own parallel Python 2 infrastructure; when PyPI finally makes changes which prevent Python 2 code from deploying, then we may see PyPy grow even more tooling and possibly even services to compensate.

                                It is easy for me to recognize in your words an inkling of contempt for Python 2 users.

                                1. 21

                                  Every time you hop into one of these threads, you frame it in a way which implies you think various entities are obligated to maintain a Python 2 interpreter, infrastructure for supporting Python 2 interpreters, and versions of third-party packages which stay compatible with Python 2, for all of eternity.

                                  Judging from that last thread, you seem to think I am one of the people who has that obligation. Could you please, clearly, state to me the nature of this obligation – is its basis legal? moral? something else? – along with its origin and the means by which you assume the right to impose it on me.

                                  I ask because I cannot begin to fathom where such an obligation would come from, nor do I understand why you insist on labeling it “contempt” when other people choose not to maintain software for you, in the exact form you personally prefer, for free, forever, anymore.

                                  1. 2

                                    Your sympathy, including any effort or obligation that you might imagine, is not required. I don’t know how to put it any more clearly to you: You have ended up on the winning side of a political contest within the PSF, and you are antagonizing members of the community who lost for no other reason than that you want the political divide to deepen.

                                    Maybe, to get some perspective, try replacing “Python 2” with “Perl 5” and “Python 3” with “Raku”; that particular community resolved their political divide recently and stopped trying to replace each other. Another option for perspective: You talk about “these threads”; what are these threads for, exactly? I didn’t leave a top-level comment on this comment thread; I didn’t summon you for the explicit purpose of flamewar.

                                    Finally, why not reread the linked thread? I not only was clearly the loser in that discussion, but I also explained that I personally am not permanently tied to Python 2, and that I’m trying to leave the ecosystem altogether in order to avoid these political problems. Your proposed idea of obligation towards me is completely imagined and meant to make you seem like a victim.

                                    Here are some quotes which I think display contempt towards Python 2 and its users, from the previous thread (including your original post) and also the thread before that one:

                                    If PyPy wants to internally maintain the interpreter they use to bootstrap, I don’t care one way or another. But if PyPy wants that to also turn into broad advertisement of a supported Python 2 interpreter for general use, I hope they’d consider the effect it will have on other people.

                                    Want to keep python 2 alive? Step up and do it.

                                    What do you propose they do then? Extend Python 2 support forever and let Python 2 slow down Python 3 development for all time?

                                    That’s them choosing and forever staying on a specific dependency. … Is it really that difficult for Python programmers to rewrite one Python program in the newer version of Python? … Seems more fair for the project that wants the dependency to be the one reworking it.

                                    The PyPy project, for example, is currently dependent on a Python 2 interpreter to bootstrap and so will be maintaining their own either for as long as PyPy exists, or for as long as it takes to migrate to bootstrapping on Python 3 (which they seem to think is either not feasible, or not something they want to do).

                                    He’s having a tantrum. … If you’re not on 3, it’s either a big ball of mud that should’ve been incrementally rewritten/rearchitected (thus exposing bad design) or you expected an ecosystem to stay in stasis forever.

                                    I’m not going to even bother with your “mother loved you best” vis a vis PyPy.

                                    You’re so wrapped up in inventing enemies that heap contempt on you, but it’s just fellow engineers raising their eyebrows at someone being overly dramatic. Lol contempt. 😂😂😂

                                    If I didn’t already have a long history of knowing other PyPy people, for example, I’d be coming away with a pretty negative view of the project from my interactions with you.

                                    What emotional word would you use to describe the timbre of these attitudes? None of this has to do with maintainership; I don’t think that you maintain any packages which I directly require. I’m not asking for any programming effort from you. Indeed, if you’re not a CPython core developer either, then you don’t have the ability to work on this; you are also a bystander. I don’t want sympathy; I want empathy.

                                    1. 6

                                      You have ended up on the winning side of a political contest within the PSF, and you are antagonizing members of the community who lost for no other reason than that you want the political divide to deepen.

                                      And this is where the problem lies. Your behavior in the previous thread, and here, makes clear that your approach is to insult, attack, or otherwise insinuate evil motives to anyone who disagrees with you.

                                      Here are some quotes which I think display contempt towards Python 2 and its users

                                      First of all, it’s not exactly courteous to mix and match quotes from multiple users without sourcing them to who said each one. If anyone wants to click through to the actual thread, they’ll find a rather different picture of, say, my engagement with you. But let’s be clear about this “contempt”.

                                      In the original post, I said:

                                      The PyPy project, for example, is currently dependent on a Python 2 interpreter to bootstrap and so will be maintaining their own either for as long as PyPy exists, or for as long as it takes to migrate to bootstrapping on Python 3 (which they seem to think is either not feasible, or not something they want to do).

                                      You quoted this and replied:

                                      This quote is emblematic of the contempt that you display towards Python users.

                                      I remain confused as to what was contemptuous about that. You yourself have confirmed that PyPy is in fact dependent on a Python 2 interpreter, and your own comments seem to indicate there is no plan to migrate away from that dependency. It’s simply a statement of fact. And the context of the quote you pulled was a section exploring the difference between “Python 2” the interpreter, and “Python 2” the ecosystem of third-party packages. Here’s the full context:

                                      Unfortunately for that argument, Python 2 was much more than just the interpreter. It was also a large ecosystem of packages people used with the interpreter, and a community of people who maintained and contributed to those packages. I don’t doubt the PyPy team are willing to maintain a Python 2 interpreter, and that people who don’t want to port to Python 3 could switch to the PyPy project’s interpreter in order to have a supported Python 2 interpreter. But a lot of those people would continue to use other packages, too, and as far as I’m aware the PyPy team hasn’t also volunteered to maintain Python 2 versions of all those packages.

                                      So there’s a sense in which I want to push back against that messaging from PyPy folks and other groups who say they’ll maintain “Python 2” for years to come, but really just mean they’ll maintain an interpreter. If they keep loudly announcing “don’t listen to the Python core team, Python 2 is still supported”, they’ll be creating additional burdens for a lot of other people: end users are going to go file bug reports and other support requests to third-party projects that no longer support Python 2, because they heard “Python 2 is still supported”, and thus will feel entitled to have their favorite packages still work.

                                      Even if all those requests get immediately closed with “this project doesn’t support Python 2 anymore”, it’s still going to take up the time of maintainers, and it’s going to make the people who file the requests angry because now they’ll feel someone must be lying to them — either Python 2 is dead or it isn’t! — and they’ll probably take that anger out on whatever target happens to be handy. Which is not going to be good.

                                      This is why I made comments asking you to consider the effect of your preferred stance on other people (i.e., on package maintainers). This is why I repeated my point in the comments of the previous thread, that an interpreter is a necessary but not sufficient condition for saying “Python 2 is still supported”. I don’t think these are controversial statements, but apparently you do. I don’t understand why.

                                      I also still don’t understand comments of yours like this one:

                                      Frankly, I think that you show your hand when you say “really important packages like NumPy/SciPy.” That’s the direction that you want Python to go in.

                                      Again, this is just a statement of fact. There are a lot of people using Python for a lot of use cases, and many of those use cases are dependent on certain domain-specific libraries. As I said in full:

                                      So regardless of whether I use them or not, NumPy and SciPy are important packages. Just as Jupyter (née IPython) notebooks are important, even though I don’t personally use them. Just as the ML/AI packages are important even though I don’t use them. Just as Flask and SQLAlchemy are important packages, even though I don’t use them. Python’s continued success as a language comes from the large community of people using it for different things. The fact that there are large numbers of people using Python for not-my-use-case with not-the-libraries-I-use is a really good thing!

                                      Your words certainly imply you think it’s a bad thing that there are, for example, people using NumPy and SciPy, or at least that you think that’s a bad direction for Python to go in. I do not understand why, and you’ve offered no explanation other than to hand-wave it as “contempt” and “denigration”.

                                      But really the thing I do not understand is this:

                                      You have ended up on the winning side of a political contest within the PSF

                                      You seem to think that “the PSF” and/or some other group of people or entities in the Python world are your enemy, because they chose to move to Python 3 and to stop dedicating their own time and resources to maintaining compatibility with and support for Python 2. The only way that this would make any sense is if those entities had some sort of obligation, to you or to others, to continue maintaining compatibility with and support for Python 2. Hence I have asked you for an explanation of the nature and origin of that obligation so that I can try to understand the real root of why you seem to be so angry about this.

                                      Admittedly I don’t have high hopes for getting such an explanation, given what happened last time around, but maybe this time?

                                      1. 4

                                        Your behavior in the previous thread, and here, makes clear that your approach is to insult, attack, or otherwise insinuate evil motives to anyone who disagrees with you.

                                        As Corbin has said themselves multiple times, they are not a nice person. So unfortunately you can’t really expect anything better than this.

                              3. 2

                                Why will tomorrow be a bad day? pip will continue to work. They’re just going to stop releasing updates for it.

                                1. 1

                                  From my OpenStack experience – many automated gates could go south, because they could do something like pip install pip --upgrade, hence dropping support for py2. I know that whoever is involved in this conundrum should know better and should introduce some checks. But I also know that we’re all human, and hence prone to making errors.

                                  1. 2

                                    pip install pip --upgrade should still work, unless the pip team screwed something up.

                                    When you upload something to PyPI, you can specify a minimum supported Python version. So Python 2.7 users will get the latest version that still supports Python 2.

                                    And indeed, if you go to https://pypi.org/project/pip/ you will see “Requires: Python >= 3.6”, so I expect things will Just Work for most Python 2 users.
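
                                    As a rough, illustrative sketch of that selection rule (a toy model, not pip’s actual resolver; 20.3.4 really was the last pip release supporting Python 2, but the index structure here is made up):

```python
def best_release(releases, interpreter):
    """Return the newest release whose minimum-Python bound is
    satisfied by the given interpreter version, or None."""
    candidates = [version for version, min_py in releases.items()
                  if min_py is None or interpreter >= min_py]
    return max(candidates) if candidates else None

# Hypothetical release index: version -> minimum Python version.
releases = {
    (20, 3, 4): (2, 7),  # last pip release that supports Python 2
    (21, 0):    (3, 6),  # published with "Requires-Python >= 3.6"
}

print(best_release(releases, (2, 7)))  # -> (20, 3, 4)
print(best_release(releases, (3, 8)))  # -> (21, 0)
```

                                    Real installers compare full PEP 440 version specifiers rather than simple tuples, but the effect is the same: an old interpreter keeps resolving to the last compatible release.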

                              1. 1

                                The Vary header should always be sent. Otherwise, the non-301 response could be cached.

                                1. 2

                                  Until a few years back, I was also running a dedicated Hetzner server. From an availability point of view, this was a bit of a source of stress, as everything was running on a single server which would get problems from time to time (the most common being a hard disk failure, promptly fixed by Hetzner’s technical team). I am now using several VPSes, as that gives me redundancy. Sure, you don’t get as much memory and CPU for the same price.

                                  1. 4

                                    Yeah, I’m aware it’s putting a lot of eggs in one basket, however given most of the important services are either stateless or excessively backed up I’m not practically concerned.

                                    1. 2

                                      most common being a hard disk failure

                                      This is why I’m using a KVM host with managed SSD RAID 10 and guaranteed CPU, memory, and network*. Yes, you will always get somewhat more performance on a bare-metal system you own, but I haven’t had to work around a broken disk or system since 2012 on my personal host. I still have enough performance for multiple services and 3 bigger game servers + VoIP. The only downtime I had was ~1h when the whole node broke and my system got transferred to another host, but I didn’t have to do anything for it. That way I haven’t had any problems, even on the services that need to run 24/7 or people will notice.

                                      *And I don’t mean a managed server, that’d be far too expensive. Just something like this.

                                    1. 1

                                      I am using this alias:

                                      branches = "!f() { git for-each-ref --sort=committerdate refs/heads/$1** refs/heads/$1* refs/heads/$1/** --format='%(HEAD) %(color:yellow)%(refname:short)%(color:reset) — %(contents:subject) (%(color:green)%(committerdate:relative)%(color:reset))'; }; f"
                                      

                                      As I am using prefixes for branch names, I can do git branches fix/ for example.

                                      1. 14

                                        I don’t really agree with your take on webfonts; I use it on my personal website because … I think it just looks nice. Call that “branding” if you will, but I don’t think there’s anything wrong with making your site look nice according to your personal aesthetics. You may dislike these aesthetics, but I don’t really subscribe to the “functionality over form”-ethos.

                                        This adds about 140k, which is comparatively a lot considering the actual content is ~35k (13k gzip’d, and depending on page length), but it’s not really that large and it’s optional – the site will work fine without it. It’s still small enough to be in the “250k club”.

                                        One of the great features of the web is that you can override the site’s choices if you don’t like them. This is what “reader mode” is all about, but if I don’t like the font size, colour, or style I tend to use a little bookmarklet, which preserves the layout and such:

                                        javascript:(function() {
                                            document.querySelectorAll('p, li, div').forEach(function(n) {
                                                n.style.color = '#000';
                                                n.style.font = '500 16px/1.7em sans-serif';
                                            });
                                        })();
                                        

                                        This is really simple, but works well on ~90% of websites, the biggest omission being that it doesn’t deal well with dark background colours (should probably spend some time on that, since I really hate “dark mode” and will typically just close a site with a dark background as it’s so hard to read for me). You can get a lot more advanced with stuff like Helperbird if you want.

                                        More practically, there are differences between font metrics on various platforms, and “word” in font A may render at 20px wide, but 22px wide in font B. I use DejaVu Sans instead of Arial, which is a bit wider, and some labels and the like can break. And “12px” in one font may be quite legible, but difficult to read in another font. Different fonts also work best with different line-height values. These aren’t really big issues on personal sites and the like, but if you build a UI then it can become an issue, and just relying on whatever the user wants via font-family: sans-serif doesn’t really work all that well.

                                        1. 9

                                          In my opinion, web fonts kind of suck. You’re adding multiple seconds of load time before the text shows up, and if the network happens to be slow, you’ll get a flash of text in one font before it switches to the web font. Just not a very good experience. The number of times I’ve sat there for seconds waiting for text to show up after everything else has loaded, due to the extra web font request, is infuriating.

                                          Just trust the user’s browser to have good default fonts.

                                          Of course this might not be the best idea for web apps with things like labels which must be of a given size, as you point out. But this is for text-based websites like blogs.

                                          1. 3

                                            Yeah, this can be an issue, but you can improve on that with font-display (optional or fallback). With the default of block there’s a 3s blocking timeout, but with fallback it’s more like 100ms, which is a reasonable compromise. Also, not loading 3 different webfonts in regular, bold, and italics helps (you can definitely overdo it), but ~140k is still reasonable even for slower connections (especially if you self-host it so you don’t have the extra overhead of DNS/TLS/HTTP).

                                          2. 6

                                            I don’t think there’s anything wrong with making your site look nice to your personal aesthetics.

                                            I totally agree with you on this; however, I don’t think the solution is to use a webfont. My own websites use “monospace” and “sans-serif” as the only fonts, and then I tweak my system/browser settings (using fontconfig) so they map to my personal font tastes. This way, my website looks exactly as I want it to, and everyone can make it look however they want by setting their own font mapping. Here is my ~/.config/fontconfig/fonts.conf:

                                            <?xml version="1.0" encoding="UTF-8"?>
                                            <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
                                            <fontconfig>
                                              <!-- Override the ugly helvetica font too… -->
                                              <alias>
                                                <family>Helvetica</family>
                                                <prefer><family>IBM Plex Sans</family></prefer>
                                              </alias>
                                              <alias>
                                                <family>sans-serif</family>
                                                <prefer><family>IBM Plex Sans</family></prefer>
                                              </alias>
                                              <alias>
                                                <family>monospace</family>
                                                <prefer><family>IBM Plex Mono</family></prefer>
                                              </alias>
                                            </fontconfig>
                                            
                                            1. 6

                                              But then it looks nice for the 0.1% that take the effort to do this (and can stomach fontconfig’s XML), and not for the vast majority of people reading your site.

                                              CC: @Seirdy

                                              1. 7

                                                Chances are the vast majority of people won’t share your aesthetic sensibilities and will appreciate seeing a font they are familiar with (and that looks good with their system settings and screen resolution)

                                                1. 8

                                                  The problem is that the vast majority of people (probably even the majority of users on this site, including me) don’t bother setting fonts in their browser. Plus, fonts can convey a style/mood for the site, which is nice to see when well-done.

                                                  Plus plus, power users can have their browser ignore fonts on a website, not download them, and just use their chosen fonts.

                                                  1. 2

                                                    The problem is that the vast majority of people (probably even the majority of users on this site, including me) don’t bother setting fonts in their browser.

                                                    I figured this response would come up, so I addressed it in a dedicated section.

                                                    Fonts can convey a style/mood for the site, which is nice to see when well-done.

                                                    This makes sense for more complex websites, but I’d argue that textual websites should be personalized with content rather than form.

                                                    Plus plus, power users can have their browser ignore fonts on a website, not download them, and just use their chosen fonts.

                                                    I addressed this in another dedicated section.

                                                    1. 3

                                                      I personally think that things like Font Awesome are a problematic hack. The very fact that system fonts don’t have a good fallback for the symbols should really tell us something—namely that we’re abusing fonts. That said, if you need Font Awesome, just set the browser to a patched font with FA symbols.

                                                      Back to the main discussion, though. Likely the vast majority of people on the internet enjoy a nice-looking web page—i.e. a website that employs good graphical design. And the authors who want to design them often don’t do it for branding, which implies a desire for recognition, but rather for quality. The authors of “simple” pages (not companies) just want to think their own site looks nice. So there’s a demand for these stylized sites, and the creators of them don’t have to have some kind of evil, manipulative intention.

                                                      In general, asking people to stop loading custom fonts makes me think that browsers need more work, or that some layer other than the site itself is at fault. The whole point of a browser is that it will enable you to browse and view the Web how you want. If a website suggesting a font to you breaks everything, then we have bigger problems.

                                                  2. 4

                                                    While there is a place for consistency, I think there’s also a place for personal style. For example, when visiting someone’s house, I like to see that it has its own style, different than mine. And in a similar way, I like seeing people’s styles when visiting their personal websites. It makes them more memorable to me than if they all looked the same, like Medium or FB posts. And it can also give me some nice ideas to steal and try out on my own projects/house.

                                                    But I guess there’s something to say about consistency too. After all, books can “all look the same” in the sense that they are only text, but of course their content can be vastly different. So, I don’t know, maybe these two different viewpoints can co-exist? Maybe have some personal presentation, but also let readers easily “extract” the content into their own format/devices?

                                                    I feel I’ve been doing something like this for a while: when reading articles on my computer, I usually use the page’s style instead of the browser reader mode, but sometimes I send the article to an e-reader to read later (using this extension), which strips all the website’s personalized styles and converts the article into an ebook-friendly format.

                                                    1. 3

                                                      Thus far the only positive feedback I’ve got is “hey, looks nice”, and the only negative feedback has been on some minor technical issues. Personally I’m rarely bothered by a webfont unless it’s some crazy stuff like a very thin font (or a low-contrast colour). This probably applies to most people; the people who are really bothered by it are very few, and they all hang out on sites like Lobsters. I don’t think that’s representative of all people viewing your site.

                                                      1. 1

                                                        What is the DPI of your monitor?

                                                        1. 1

                                                          I used a 14” 1366x768 screen up until a few weeks ago, and now I have a 26” 1920x1080 screen. I’m too lazy to calculate the exact DPI, but neither is very high. I’m not sure how this is important though?

                                                          1. 1

                                                            Serif fonts aren’t usually as good on low-DPI screens, which is why computers tend to use sans-serif fonts.

                                                            1. 2

                                                              Only at small sizes, and not even with all serif fonts. “Serif fonts don’t work well on low DPI screens” is extremely simplistic to the point of uselessness. And it also has nothing to do with webfonts (actually, with webfonts you can choose a serif font that you know will work well for your particular use case).

                                                              1. 1

                                                                Only at small sizes, and not even with all serif fonts. “Serif fonts don’t work well on low DPI screens” is extremely simplistic to the point of uselessness.

                                                                True, but browser-default sans-serif fonts are also far more likely to display well than custom fonts, in my experience. Far too many websites with custom fonts end up being harder to read. For example:

                                                                Thus far the only feedback I’ve got “hey, looks nice”, and the only negative feedback has been on some minor technical issues.

                                                                Your site looks pretty good (I knew you seemed familiar!). I’m not sure I’m a fan of the font choice, though.

                                                                Here’s a side-by-side comparison of how your website looks on my laptop with its default fonts and with my preferred sans-serif font: https://seirdy.one/misc/arp242.net_fonts.png

                                                                I didn’t zoom out; that’s how it looks out of the box on my default browser with my dark-mode addon. The italic text–especially the small italic text–is much more readable in sans-serif. That isn’t even the lowest-resolution screen I use; if I used one of the campus library’s spares, the italic text would probably be unreadable.

                                                                Low-res screen users aren’t the only demographic with strong font preferences. Another example: dyslexic users might set their preferred font to Dyslexie; it’s not the prettiest, but it does help with this disability.

                                                                Branding isn’t evil, but it can have big trade-offs. I’d rather have text-centric websites distinguish themselves through content, and allow users to dictate presentation.

                                                                Also: would you be okay with me updating the article to include the screenshot of your website? I’ll pair it with a shout-out since I have found some interesting content there in the past.

                                                                1. 2

                                                                  Low-res screen users aren’t the only demographic with strong font preferences. Another example: dyslexic users might set their preferred font to Dyslexie; it’s not the prettiest, but it does help with this disability.

                                                                  Sure, I appreciate that, but one of the great features of the web is that it’s all so easy to modify with client-side tools. For example with the bookmarklet I posted earlier, or the myriad of extensions that are out there. I’m not “forcing” my font preferences on anyone, or “dictating” the presentation, they’re just a default that’s fairly easy to alter if you want, just as you modified the colour scheme.

                                                                  Arguably, browsers should capitalize a bit more on this; the “reader mode” that Firefox and Safari come with is a bit too invasive IMO, since it mucks with the layout too much. I worked a bit on a less invasive widget/extension last year which swaps out just the colours and fonts, but never finished it.

                                                                  Branding isn’t evil, but it can have big trade-offs.

                                                                  I don’t really see it as “branding”; it’s just a font I like. I set up my email client to use the same font, as well as the “light-weight reader mode” extension that I mentioned earlier.

                                                                  I wouldn’t use a non-standard font like this on a product website or the like by the way, but my personal website is my, well, personal space. While “conveying ideas” is certainly part of the reason I write stuff on there, the main reason is more about personal development and ordering my own thoughts on matters. I was happily putting stuff up for years without anyone taking much notice or reading it.

                                                                  If I had my way then I’d also enable the old-fashioned ligatures on the main body text (right now it’s just for headers), as I feel it looks handsome 😅 But since most people aren’t used to it, it’s probably more distracting than anything else.

                                                                  Also: would you be okay with me updating the article to include the screenshot of your website? I’ll pair it with a shout-out since I have found some interesting content there in the past.

                                                                  Yeah, sure, I have no problems with that. Thanks for asking.

                                                            2. 1

                                                              IME web font hinting is worse than hinting on native fonts, which is more of a problem on low DPI screens. But I guess people get used to it.

                                                              1. 1

                                                                Can’t say I ever noticed this, but to be honest I don’t really have a good eye for these kind of things either. I’m happy with just “DVD quality” for films too 🤷‍♂️

                                                      2. 1

                                                        I addressed this in the article in a dedicated section.

                                                        Furthermore, fontconfig isn’t the only way to change fonts; most browsers’ settings pages let you change default fonts with a drop-down menu that’s a bit less intimidating for non-technical users.

                                                    2. 3

                                                      Edit: I updated the article to contain this information. Diff.

                                                      I agree that the default sans-serif font family doesn’t look good, which is why I change it in my computer’s gtk3/fontconfig settings. Now every website that uses sans-serif will have my preferred font. By setting your default font family to sans-serif, you automatically use the font family of the user’s browser. Users who care very much about fonts will be pleased to see their favorite font automatically selected.

                                                      I should address this more clearly. Thanks for the feedback!

                                                      1. 1

                                                        As for the size, you can consider subsetting your font with pysubset. On my website, I get 23k for the text font (regular and italic) and 7k for the code font (regular). It also enables you to start from a font with extensive coverage to get most characters.

                                                        Another argument for a web font is that you can ensure the text font and the code font match. You can’t really get that with a font stack, and you definitely don’t get it with the default fonts. Many websites apply a size correction to the code font to make it fit better with the text font. Code fonts also often get a background to make it less obvious that there is a mismatch. There is also often a difference in boldness. In my case, I have matched both the boldness and the x-height of the text and code fonts, and I find it easier to read this way.
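
                                                        As a rough sketch of the input subsetting works from, here’s a stdlib-only snippet that collects the distinct characters your pages actually use and formats them as the comma-separated U+ list subsetting tools accept (e.g. fonttools’ pyftsubset and its --unicodes flag; the tool name and flag syntax are assumptions to check against whichever subsetter you use):

```python
def used_codepoints(pages):
    """Collect every distinct character across a site's text and
    format the set as a comma-separated list of U+XXXX codepoints."""
    chars = set()
    for text in pages:
        chars.update(text)
    return ",".join(f"U+{ord(c):04X}" for c in sorted(chars))

print(used_codepoints(["Hello", "naïve"]))
# -> U+0048,U+0061,U+0065,U+006C,U+006E,U+006F,U+0076,U+00EF
```

                                                        Feeding only these codepoints to the subsetter is how you get a small file out of a font with extensive coverage.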

                                                        1. 1

                                                          you definitely don’t get that with the default fonts.

                                                          Most OSes and browsers set default sans-serif and monospace fonts that fit together, and users are free to change these if they like.

                                                          The article also explicitly addressed how to change colors (such as code backgrounds) properly. Nowhere in the article did I mention font sizes. Maybe I wasn’t clear enough.

                                                          The article itself practices what it preaches; it uses default fonts without the resizing issues you describe, at least on the graphical browser engines I tested (Blink, Webkit, Gecko, Links, Netsurf, Dillo, and Trident [Trident tested via webpagetest.org]).

                                                          1. 1

                                                            Most browsers default to Courier and Helvetica or a variation (Courier New and Arial). They absolutely do not match. Courier is a slab serif font. Remove the gray background and you should see there is quite a mismatch between the two.

                                                            1. 1

                                                              I’ve since replaced the grey background with a soft border to better accommodate users who override the default fg/bg colors; try reloading and skipping the browser cache. I don’t see too great a mismatch, but this probably comes down to personal preference. And the best way to satisfy users with the strongest personal preferences is to use their preferred fonts.

                                                        2. 1

                                                          The problem with very many sites using web fonts is that a) they don’t specify any fallback fonts, or at least b) they are not designed with the fallback fonts in mind.

                                                          I block all web fonts (except icon fonts, which don’t even get me started…) in my browser, and many sites look terrible.

                                                          1. 2

                                                            That just seems like an implementation defect rather than a problem with the concept of webfonts. I always test with webfonts disabled to make sure the results are reasonable.

                                                        1. 1

                                                          Optimized images. You also might want to use HTML’s <picture> element, using jpg/png as a fallback for more efficient formats such as WebP or AVIF. Use tools such as oxipng to optimize images.

                                                          I usually use optipng, and have recently heard of others, but this is the first time I’ve seen oxipng. That made me wonder what the state-of-the-art image optimiser is. Does anyone have experience with this?

                                                          1. 2

                                                            AFAICT the best you can do right now is pngquant followed by advpng. I once wrote a deployment pipeline containing the commands/options pngquant -Q 90 -v -o … && advpng -z4 …

                                                            Pngquant dithers the PNG to a few-colour representation. It does nothing for some PNGs and compresses others by a large factor. If you want a soundbite: Pngquant reduces the fidelity of window frames in screenshots, while leaving all the substantive matter unaffected.

                                                            This is lossy, so you should store the original asset so you can generate other/better formats later. Here’s a random lossy screenshot, just as a sample. Note the window borders.

                                                            Advpng spends a lot of time trying different compression settings; the output is an ordinary PNG. It’s not lossy. Oxipng might beat advpng, I’d love to see a comparison of the two on a large image set.

                                                            1. 1

                                                              In my tests, oxipng -Z -o max --strip all (my preferred solution for lossless png compression) seems to beat advpng. I updated the article to reference pngquant, thanks! Diff

                                                              1. 1

                                                                Excellent. Thanks for the update, but particularly for that testing.

                                                            2. 1

                                                              Edit: updated the article with more info. Diff.

                                                              Oxipng is basically a parallelized optipng that optionally supports the better (and much slower) Zopfli compression algorithm to get the smallest size possible.

                                                              I should also add a link to the manpage for cwebp, the standard webp encoder that comes with libwebp-tools, and jpegoptim. I use jpegoptim -s, cwebp -lossless -m 6 -z 9, and oxipng -Z -o max --strip all to optimize my images.
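
                                                              To batch those over a set of files, here’s a small sketch that only builds the command lines (the flags are the ones above; tool availability is assumed, and nothing is executed, so you can inspect the commands before feeding them to subprocess.run):

```python
from pathlib import Path

# Flag sets taken from the commands discussed above; assumes the
# tools are installed and on PATH.
OPTIMIZERS = {
    ".png":  ["oxipng", "-Z", "-o", "max", "--strip", "all"],
    ".jpg":  ["jpegoptim", "-s"],
    ".jpeg": ["jpegoptim", "-s"],
}

def optimize_commands(paths):
    """Build the optimizer argv for each image, skipping extensions
    we have no tool for."""
    cmds = []
    for p in map(Path, paths):
        tool = OPTIMIZERS.get(p.suffix.lower())
        if tool:
            cmds.append(tool + [str(p)])
    return cmds

for cmd in optimize_commands(["shot.png", "photo.jpg", "anim.gif"]):
    print(" ".join(cmd))
```

                                                              The same table could grow a cwebp entry for producing WebP copies alongside the originals.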

                                                              1. 1

                                                                For many PNGs, you can also consider pngquant, which will reduce the palette. In my opinion, it’s really good and only has a small loss of quality. This also helps WebP compression.

                                                                1. 1

                                                                  Updated. Thanks! Diff

                                                              1. 9

                                                                This is all very cool, but it is 2020 and we still can’t share mathematics through the web (or gopher and gemini, for that matter) without using images, scripts, or SVG. Considering that the web was envisioned as a way for academics and scientists to share knowledge, this is saddening. We should already have a lightweight way of sharing maths by now, readable using text-only tools such as Lynx or w3m.

                                                                1. 3

                                                                  a lightweight way of sharing maths by now, readable using text only tools such as Lynx or w3m.

                                                                  How would this work in Lynx? Terminals have very limited graphical drawing capabilities, so more complex equations would be difficult or even impossible to render in Lynx, I think.

                                                                  1. 2

                                                                    I agree, it’s a sad state of affairs that the best approach for math on the web is to add JavaScript which parses and renders latex.

                                                                    1. 1

                                                                      Why not store the rendered product and serve it directly?

                                                                      1. 2

                                                                        It’s possible obviously, but it sucks that the web has no native support for math.

                                                                        I can’t, for example, copy/paste parts of an <img> tag. I can’t, easily and safely, let users in a comment form write math expressions. The web’s lack of support for math is, I assume, one of the big reasons why places like lobste.rs and reddit don’t support math expressions in comments. If MathML could’ve been relied on, I’m sure someone would’ve popularized a Markdown-flavor with math support already.

                                                                    2. 2

You can use KaTeX; when pre-rendered and served with math fonts, it doesn’t require JavaScript.

                                                                      1. 1

                                                                        You have multiple options:

                                                                        1. Use PDF/LaTeX. These can have complex formatting, hyperlinks, and equations.

                                                                        2. Use MathML, the standard for equations in HTML.

                                                                        3. Use unicode characters, like tex-conceal.vim does. I don’t recommend using more than a tiny amount of this.
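A tiny sketch of the Unicode-substitution approach; the mapping below is a small hand-picked subset for illustration, not tex-conceal.vim’s actual table:

```python
# Hand-picked subset of TeX-command -> Unicode substitutions,
# in the spirit of tex-conceal.vim (not its actual table).
SUBS = {
    r"\alpha": "α",
    r"\beta": "β",
    r"\pi": "π",
    r"\times": "×",
    r"\leq": "≤",
    r"\infty": "∞",
}

def unicodeify(tex: str) -> str:
    # Naive sequential replacement; fine for a handful of distinct commands.
    for cmd, ch in SUBS.items():
        tex = tex.replace(cmd, ch)
    return tex

print(unicodeify(r"0 \leq \alpha \times \pi < \infty"))  # 0 ≤ α × π < ∞
```

This works in any terminal with a decent font, but as noted it breaks down quickly for anything with nontrivial layout (fractions, sub/superscripts, matrices).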

                                                                        1. 6

                                                                          Use PDF/LaTeX. These can have complex formatting, hyperlinks, and equations.

                                                                          This is not web.

                                                                          Use MathML, the standard for equations in HTML

                                                                          Only Firefox renders MathML, and the page size gets huge very fast.

                                                                          Use unicode characters, like tex-conceal.vim does. I don’t recommend using more than a tiny amount of this.

                                                                          Not good enough when we want to write complex formulas with a lot of symbols.

                                                                          1. 1

                                                                            Yeah, I agree that all three options are far from ideal. But those are our options that don’t resort to scripting (that I’m aware of; please LMK if there are others).

                                                                            Websites like Wikipedia seem to use MathML when it’s available, falling back to images. This approach seems to be the best available.

                                                                      1. 1

Great article! Does 1080p work with a Pi 4, or can none of the Pis reach this resolution while maintaining an appropriate framerate?

                                                                        1. 3

It is hard to compare: not all screenshots use the same font size or line height. Moreover, they are rendered at a super high resolution, then downscaled. I suspect this makes some fonts look bolder than they should.

                                                                          1. 15

                                                                            Most of this is just empty and ill-informed ranting about protocol differences.
                                                                            Having said that, there’s one portion that I want to respond to.

The Address Resolution Protocol is the protocol used in IPv4 to translate internet layer addresses (IP addresses) into link layer addresses (MAC addresses).

                                                                            This protocol relies heavily on broadcast. Well wouldn’t you know it, IPv6 has no broadcast. There’s an “all nodes” link-local multicast which does effectively the same thing, but it’s not the same thing.

                                                                            But besides that there’s really nothing to it, other than the fact that we had to create an entirely new protocol to do the exact same job as an already existing protocol because someone removed broadcast, the second most common type of traffic.

                                                                            ARP uses broadcasts because it was defined in 1982.
                                                                            Many older protocols use broadcast instead of multicast because either multicast didn’t exist or wasn’t reliable.
                                                                            If the ARP protocol had been designed last week then it would use a multicast Ethernet address.
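To make the multicast point concrete: NDP doesn’t even need all-nodes for address resolution. Neighbor Solicitations go to a solicited-node multicast group derived from the low 24 bits of the target address (RFC 4291), which maps to a `33:33`-prefixed Ethernet multicast MAC (RFC 2464), so only a tiny subset of hosts is interrupted. A quick sketch of the mapping:

```python
import ipaddress

def solicited_node(addr: str) -> tuple:
    """Return the solicited-node multicast IPv6 address for `addr` and the
    Ethernet multicast MAC it maps to (per RFC 4291 and RFC 2464)."""
    a = ipaddress.IPv6Address(addr)
    low24 = int(a) & 0xFFFFFF  # last 24 bits of the target address
    sn = ipaddress.IPv6Address(int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24)
    # IPv6 multicast maps to MAC 33:33 + the low 32 bits of the group address.
    low32 = int(sn) & 0xFFFFFFFF
    mac = "33:33:" + ":".join(f"{(low32 >> s) & 0xFF:02x}" for s in (24, 16, 8, 0))
    return str(sn), mac

print(solicited_node("2607:f0d0:1002:51::4"))
# ('ff02::1:ff00:4', '33:33:ff:00:00:04')
```

Compare that with ARP, where every single host on the segment has to process every broadcast request.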

                                                                            1. 3

NDP is not a brand new protocol per se, as it is part of ICMPv6. This is far more elegant than a special non-IP protocol to manage IP neighbors. Moreover, NDP also provides router discovery.

                                                                              1. 1

                                                                                ARP was, and, according to protocol, still is, able to resolve more than just IP network addresses. I don’t think that’s a fault of IP or ARP, that’s just the age of the original specifications showing.

                                                                                1. 3

                                                                                  Sure, but this is an additional protocol while ICMPv6 bundles neighbor discovery, router discovery, diagnostics and error reporting in a single protocol with clear semantics. Why keep ARP while you already have a protocol in the stack able to do this job? Also, because of the non-IP nature of ARP, some tools just don’t work with it (eg iptables vs arptables, until it was unified with nftables).

                                                                                  1. 1

                                                                                    Now I know everyone is just going to call this an issue that only I have, but I see a description like that for ICMPv6 and I think “is this doing too much?”

It’s nice to have one protocol that bundles everything, but generally (with some exceptions) the more you bundle into one unit, the less effective it is at each of those parts. And I don’t want to see ICMPv6 have so much bundled up that we eventually split off separate protocols that do its job better, basically going back to square one.

                                                                                    1. 1

Are we bundling too much stuff in UDP or in TCP? TCP is a reliable, connection-oriented stream protocol with application multiplexing; UDP is an unreliable, connection-less, message-oriented protocol with application multiplexing; ICMP is a simple control protocol for network signalling (no reliability, no streaming, no application multiplexing). It’s the right tool to implement neighbor discovery. Separate protocols are a burden for all implementations.

As for “so much bundled up”, it’s not like much was added to ICMPv6 in 30 years: https://www.iana.org/assignments/icmpv6-parameters/icmpv6-parameters.xhtml.

Also, you complain that NDP is a brand new protocol; then, when I say it is ICMPv6, you complain it is not a separate protocol.

                                                                                      1. 1

I’m not saying that’s what it is now, I’m saying what it could become with a lot more widespread adoption. Most things don’t get innovated on and changed when they’re barely deployed.

                                                                              2. 3

                                                                                ARP uses broadcasts because it was defined in 1982.

                                                                                If you read The Alto: A Personal Computer it talks about the development of Ethernet and says, somewhat tellingly, (quoting from memory, possibly slightly wrong) ‘It’s possible to imagine a network with as many as tens of thousands of computers’. It’s easy to forget, in a world where we’re struggling with four billion addresses not being sufficient for a single network, that some of the core technologies were built on the assumption that a 16-bit address is probably sufficient for pretty much anything and so designing more than that was providing massive amounts of headroom for future expansion.

                                                                                1. 2

                                                                                  And yet, when Ethernet, a local area network, was standardised in the early 80s, it had a 48-bit address space, whereas TCP/IP, a wide-area network, only had 32 bits to address the entire world.

                                                                                  We’re running the entire world on a proof-of-concept stack.
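A quick back-of-the-envelope comparison of the address spaces involved makes the design assumptions visible:

```python
# Rough scale of the address spaces mentioned above, as plain integers.
print(2**16)            # 65536 - "tens of thousands of computers"
print(2**32)            # 4294967296 - the entire IPv4 address space
print(2**48)            # 281474976710656 - Ethernet MAC addresses
print(f"{2**128:.2e}")  # 3.40e+38 - IPv6
```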

                                                                                  1. 1

                                                                                    We’re running the entire world on a proof-of-concept stack.

                                                                                    I do not disagree in the slightest.

                                                                                2. 1

                                                                                  My point was this: Why remove broadcast, then re-work everything to use an “everyone” multicast group, which is just broadcast with extra steps?

Functionally, there is no real difference between “broadcast” and “all nodes” multicast. There’s some benefit to having only two addressing modes to deal with instead of three, but at the end of the day, we could take “all-nodes”, rebrand it as the one designated “broadcast” address instead of saying “set all the bits in the host field”, and it would work exactly the same.

                                                                                  1. 1

                                                                                    My point was this: Why remove broadcast, then re-work everything to use an “everyone” multicast group, which is just broadcast with extra steps?

                                                                                    I would turn this question around. Since broadcast can be simulated with multicast, why have broadcasts at all? There’s no advantage to having both.

If you want to know why this change was made, then I would try searching the IETF’s IPng mailing list archives and look at the IPng proposals. SIPP, the proposal that became IPv6, was an evolution of SIP. You could also try emailing Steve Deering (the designer of SIP).

                                                                                    1. 1

                                                                                      I do understand your point. And, with the current implementation that’s what’s been done, I get that.

My point being: why not label the “broadcast” (airquotes) group here as real broadcast? It’s the multicast addressing scheme using a fixed address, but protocols that expected broadcast (ARP) would just need to change their semantics for IPv6 instead of being cast aside completely.

                                                                                  2. 1

Not to word this as an attack: the entire article is going to be re-written shortly, regardless. Even the spell checker gave up on this one. If there’s anything else you’d like to specifically point out as empty or ill-informed, I’d love to hear it and take it into consideration.

                                                                                    1. 2

                                                                                      If there’s any other things you’d like to specifically point out as empty or ill-informed I’d love to hear it to take into consideration.

                                                                                      I’ll start with a disclaimer and a couple caveats.

• It’s been quite a while since I’ve worked with layer 3 or layer 4 on a regular basis.
                                                                                      • I haven’t used IPv6 in anger.
                                                                                      • I won’t respond to your entire post; I’ve hit my time box.

                                                                                      You might also consider reading the comments at HN.

                                                                                      First Paragraph

                                                                                      IPv6 was a draft in 1997(!), and became a real Internet Standard in 2017.

                                                                                      IETF standards terminology is confusing.

                                                                                      • Proposed Standard - stable and has a well-reviewed specification.
                                                                                      • Draft Standard - like a proposed standard but more mature. This category was discontinued in 2011.
                                                                                      • Standard - stable, mature, and has a well-reviewed specification. It can take a number of years for a standard to reach this category.

Proposed Standards and Draft Standards are just as real as Standards; they’re just newer. IPv6 was a draft standard in 1998; it simply took years for it to reach the Standard category.

                                                                                      Allocation Issues

                                                                                      Even though the entire “special” address assignments are exactly 1.271% of the entire IPv6 address space, we’re still allocating giant swathes of addresses. History repeats itself, you can see that right here.

                                                                                      I don’t see this as repeating the mistakes of IPv4. Addressing requirements only increase over time; allocations in a new protocol should be generous to allow for new applications and larger networks. It’s much better to reserve 1.271% now than reserve 0.5% and be stuck with an undersized allocation. To paraphrase an old saying, there are only two sizes of allocations - too large and too small.
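Working out the numbers shows how little is actually being given up: even that 1.271% of “special” assignments is astronomically large in absolute terms.

```python
# 1.271% of the IPv6 space, computed exactly with integer arithmetic.
special = 2**128 * 1271 // 100000
print(special > 2**32)            # True - bigger than all of IPv4
print(special // 2**32 > 10**27)  # True - holds IPv4 over 10^27 times
```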

                                                                                      Address Representation

                                                                                      I agree that IPv6 addresses are longer and harder to type than IPv4. The same would be true for any IPv4 alternative, though. Hexadecimal addresses are easy to convert to binary and seem like the least worst option.

                                                                                      URLs

                                                                                      To connect to a raw IPv6 address, you wrap it in square brackets. To connect to 2607:f0d0:1002:51::4 directly, that’s http://[2607:f0d0:1002:51::4]/ Why is this a thing?!.

                                                                                      The first URI RFC was in 1994. It used “:” to specify the port number. IPv6 was still a work in progress at the time, it’s not too surprising that there’s a conflict here. URLs have since evolved into a monster specification.
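The bracket syntax exists precisely because the address’s own colons would otherwise be ambiguous with the port separator, and standard URL parsers handle it; for example, with Python’s stdlib:

```python
from urllib.parse import urlsplit

# The brackets disambiguate the address's colons from the port separator.
u = urlsplit("http://[2607:f0d0:1002:51::4]:8080/index.html")
print(u.hostname)  # 2607:f0d0:1002:51::4
print(u.port)      # 8080
```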

                                                                                      DNS

                                                                                      Unless you have your own DNS server (actually not that hard) that’s configured, you’re still manually typing addresses. Of course if you have, say, pfSense managing your network, every static DHCP lease will be registered in DNS, but it has to take a DHCP lease. And if this device doesn’t… well, I hope you don’t mind typing that out by hand to connect so you can configure it.

You could also use multicast DNS, hosts files, or LLMNR for local name resolution. I would avoid LLMNR, but either of the others would work. IPv6 does require more use of DHCP than IPv4. Any device with an IPv6 stack and no DHCP client is unfit for use. There’s also the old standby of assigning devices mnemonic MACs.

                                                                                      That is insane. The IPv6 rDNS TLD is just ip6.arpa, and the IP part is… every single hex digit, reversed.

                                                                                      It’s longer and uglier but consistent with IPv4.
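Nobody is expected to type these by hand anyway; Python’s stdlib, for instance, will spell out the `ip6.arpa` name (every nibble, reversed) for you:

```python
import ipaddress

# The stdlib generates the reverse DNS name: every hex nibble, reversed.
addr = ipaddress.IPv6Address("2607:f0d0:1002:51::4")
print(addr.reverse_pointer)
# 4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.5.0.0.2.0.0.1.0.d.0.f.7.0.6.2.ip6.arpa
```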

                                                                                      NAT

                                                                                      Don’t make a virtue out of a necessity. NAT is ubiquitous in IPv4 networks because addresses are scarce. IPv6 addresses are plentiful enough that it isn’t needed.

                                                                                      In this sense, all unknown traffic is dropped, and traffic that I have NAT rules for are also allowed past the firewall. This is a “default drop” system. Nothing gets through unless I say so.

NAT doesn’t guard against compromised internal systems reaching out, e.g. via UPnP or NAT slipstreaming. Allowing internal hosts to initiate arbitrary connections is a security risk for IPv4 or IPv6; default permit is an anti-pattern. Your network, your rules, but NAT isn’t a substitute for a firewall.

                                                                                      ARP / NDP

                                                                                      I agree with @vbernat’s opinion on this topic. ARP was fine when it was introduced but NDP is a better solution to the problem it solves.

                                                                                      One Other Random Remark

                                                                                      And this is just the sign that you’ve made a stupid protocol: Enabling IPv6 on the LAN side of Sophos UTM 9 causes the WAN side to lose it’s link. …Why? I can’t even make my local network IPv6 for link-local communications because then the machine can’t connect to the rest of the internet.

                                                                                      Why is IPv6 to blame for a buggy/broken implementation? Poorly behaved hosts are a fact of life on networks.

                                                                                      1. 1

I’ll take that into consideration… Also, that “One Other Random Remark” section wasn’t meant to blame IPv6; I’m just pointing out the headache it’s given me just getting any IPv6 support over here.

I can clearly see a couple of sections are worded such that people are consistently missing what I intended to say, and I’ll update that.

                                                                                  1. 1

                                                                                    While NixOS 20.09 includes a derivation for Isso, it is unfortunately broken and relies on Python 2

                                                                                    Will be fixed shortly! The fix already landed in unstable, but wasn’t backported to 20.09 yet.

                                                                                    1. 1

                                                                                      Thanks! Then, I’ll have to figure out how to convert an application to a package as the former cannot be used in a buildEnv.

                                                                                    1. 1

What was painful with the comments? Just on the previous post, you link to a comment that was supposedly helpful but is not here any more.

                                                                                      1. 1

                                                                                        Managing them - replying to them, filtering spam, moderation etc. I want people to be able to provide feedback though, so by having a guestbook and not a comment form on every page, hopefully it’s a nice balance.

                                                                                        I’ll edit that post to include the comment. Thanks!

                                                                                      1. 2

I wonder if it really makes sense to use home-manager for single-user systems? Or rather, I haven’t tried this, but I think it actually doesn’t make sense, and I wonder if folks can confirm or reject my suspicions.

                                                                                        My understanding is that home-manager achieves two somewhat orthogonal goals:

                                                                                        • First, it manages user-packages (as opposed to system-wide packages)
                                                                                        • Second, it manages dotfiles

Using home-manager, you can install packages without sudo, and they will be available only for your user and won’t be available for root, for example. I think this makes a ton of sense on multi-user systems where you don’t want to give out sudo access. But for a personal laptop, there seems to be little material difference between adding a package to the global configuration.nix and adding it to home-manager’s per-user analogue?

                                                                                        For dotfiles, the need to switch configs to update them seems like it adds a lot of friction. I don’t see why this is better than just storing the dotfiles themselves in a git repo and symlinking them (or just making ~ a git repository with git clone my-dotfiles-repo --separate-git-dir ~/config-repo trick).

                                                                                        Am I overlooking some of the benefits of home-manager?

                                                                                        1. 6

                                                                                          Post author here! I find it conceptually nicer to be able to shove most of my config in my ‘per-user’ configuration, as opposed to a bunch of separate dotfiles that I then have to manage the symlinking for myself.

                                                                                          The friction is definitely a downside, but most of my config I update infrequently enough that it doesn’t matter, and a lot of programs will have a “use this as the config file” option; my mechanism for tweaking kitty is to grab the config file path by looking at ps, copy it into /tmp/kitty.conf, run kitty -c /tmp/kitty.conf until I’m happy, then copy my changes back into my config.

                                                                                          I do agree that doing per-user installation isn’t super useful on a single-user system. This is why I have /etc/nixos owned by my non-root user, so I don’t have to sudo every time I want to edit it (though I do still have to sudo to rebuild the changes).

                                                                                          1. 2

                                                                                            I would like to thank you for the post, I’m totally copying the autorandr and awesome setup (and a bunch of other things too =])

                                                                                          2. 4

What is neat about managing dotfiles with Nix is being able to reference derivations. You need to run autorandr as part of a script? Just say ${pkgs.autorandr}/bin/autorandr and it won’t clutter your PATH.

                                                                                            1. 2

                                                                                              My understanding is that home-manager achieves two somewhat orthogonal goals:

                                                                                              As @vbernat said, they are not completely orthogonal, since you can refer to files in (package) output paths in configuration files, which is really nice.

                                                                                              I think this makes a ton of sense in multi-user systems, where you don’t want to give sudo access to. But for a personal laptop, it seems like there’s little material difference between adding a package to global configuration.nix vs adding it to home-manager’s per-user analogue?

                                                                                              For me, the large benefit is that I can use home-manager on both NixOS and non-NixOS systems. E.g., in my previous job I used several Ubuntu compute nodes. By using home-manager, I could have exactly the same user environment as my NixOS machines.

                                                                                              1. 2

                                                                                                home-manager has a complexity cost so that needs to be weighed in for sure.

Typically the vim config can get complicated because it has to be adjusted to work across various environments. Since home-manager provides both the package and the config, this is much simplified. I remember having to tune vim constantly before.

                                                                                              1. 2

An alternative would be to self-host and subset the fonts you use. pyftsubset from fonttools can do that. I am always reluctant to use only system fonts as it makes it hard to match the body font with the monospace font.
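For the sake of illustration, a pyftsubset invocation might look like the following sketch; the font file names are made-up examples, and the range shown keeps only Basic Latin:

```python
# Hypothetical sketch of a pyftsubset (fontTools) invocation: keep only the
# Basic Latin range and emit WOFF2. File names here are made-up examples.
import shlex

def subset_cmd(font: str, out: str, unicodes: str = "U+0020-007E") -> list:
    return [
        "pyftsubset", font,
        f"--unicodes={unicodes}",
        "--flavor=woff2",
        f"--output-file={out}",
    ]

print(shlex.join(subset_cmd("Body-Regular.ttf", "body.woff2")))
```

Subsetting to the glyphs you actually use typically shrinks a font file dramatically, which takes much of the sting out of self-hosting.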

                                                                                                1. 1

That’s a good idea, I didn’t know that was a thing. I might give it a try in the future.

                                                                                                1. 4

                                                                                                  This experiment with home manager coincided with a regular low point in my life in which I try to use emacs. This comes with quite a lot of .emacs changes as I use Lisp for the only thing it’s ever been good for; configuring a text editor in the most complicated way imaginable. Now, the dotfiles that home manager (or Nix) puts on our systems are read-only, so every change would involve changing the source file and running home-manager switch.

When making big configuration changes, like setting up an initial Emacs configuration, why not first do it in .emacs mutably and then later formalize it as a home-manager configuration? No need to be absolutist ;). I carried over my Emacs configuration from my prior macOS/Arch/Fedora use, so it never was a big issue [1]. Most other programs have such simple configurations that I am mostly done after at most two or three home-manager switch runs.

The first one was never a big attraction. Home manager lets us define a list of packages to install in home.packages. If we control the system configuration, we can achieve the same by defining those packages in users.users.$username.packages.

                                                                                                  I think one of the nice things about home-manager and NixOS is that you often don’t have to specify a list of packages at all, unless you want them to be globally available on the system or as a user. E.g. if you want to use pass somewhere, there is no need to rely on pass being installed. You can just do something along the lines of the following in a Nix string:

                                                                                                  "${restic}/bin/restic --password-command='${pass}/bin/pass Storage/Restic'"
                                                                                                  

                                                                                                  And the full store path of restic and pass will be used through antiquotation. E.g.:

                                                                                                  "/nix/store/654w9743wxdhwyb24si6q0s139pz8jpl-restic-0.9.6/bin/restic --password-command='/nix/store/0i4vyzgfl1dzr12r94z44in84p9ikc6v-password-store-1.7.3/bin/pass Storage/Restic'"
                                                                                                  

                                                                                                  The given store paths will automatically be fetched/built if needed.

                                                                                                  [1] I did rewrite my Emacs configuration in Nix, using rycee’s hm-init: https://github.com/danieldk/nix-home/blob/master/cfg/emacs.nix

                                                                                                  1. 1

                                                                                                    I am a bit uncomfortable wrapping random configuration files in here-strings just to be able to use ${restic}. I wish there was some clever way to use ${pkgs.restic} in arbitrary files that would be copied to the correct location without using here-strings. The reason is mostly that you don’t get any help from your editor when editing here-strings.

                                                                                                    1. 3

                                                                                                      I wish there was some clever way to be able to use ${pkgs.restic} in arbitrary files that would be copied to the correct location without using here-strings.

                                                                                                      There is. You can use @varName@ in your configuration file and then use substitute/substituteInPlace/substituteAll to replace the variables with a string that comes from Nix, such as a store path.
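                                                                                                      For example, a minimal derivation along these lines (the file name backup.conf and the @restic@ placeholder are hypothetical):

```nix
# Sketch: backup.conf contains a line such as
#   @restic@/bin/restic backup /home
# and the build substitutes the real store path for @restic@.
{ stdenv, restic }:
stdenv.mkDerivation {
  name = "backup-config";
  dontUnpack = true;
  installPhase = ''
    substitute ${./backup.conf} $out --subst-var-by restic ${restic}
  '';
}
```

This keeps the configuration in a plain file your editor understands, while the store path is injected at build time.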

                                                                                                  1. 2

                                                                                                    Not sure if this is the right approach. Notably, I don’t think you should conflate routing and switching. Switching has a learning phase while routing does not. Moreover, routing needs to update the source MAC address as well.

                                                                                                    1. 8

                                                                                                      I wish they’d included an avif at a similar size as the jpeg used in the first F1 comparisons.

                                                                                                      The ‘26’ in front of the car is barely visible in anything other than the jpeg, but the jpeg is ~70kb, so would avif at ~70kb match jpeg on that? Do better? Worse? I can’t tell.

                                                                                                      1. 6

                                                                                                        What really stands out to me is how some details are totally unaffected. The Red Bull sticker looks almost identical in the avif, but the 26, which is almost as big, becomes a complete smudge.

                                                                                                        1. 4

                                                                                                          AVIF has a novel technique of predicting color from brightness. This makes encoding of color cheaper overall, and helps it have super sharp edges of colored areas without any fringing.

                                                                                                          However, the red-blue “26” text is only a difference in hue, but not brightness, so the luma channel doesn’t help it get predicted and sharpened. The encoder should have been smarter about this and compensated for it.

                                                                                                          1. 1

                                                                                                            The markings on the road are another smudge.

                                                                                                          2. 1

                                                                                                            Just a bit below, there is the same image as a 20 KB JPEG (to be compared with the 20 KB AVIF).

                                                                                                            1. 2

                                                                                                              Sure, I’ve seen that, and it is neat, but there’s no 70KB AVIF to be compared with the 70KB JPEG, so that I can see whether AVIF is still better at that size.

                                                                                                              1. 2

                                                                                                                For a lot of things, the optimisation you want is best quality meeting this size/bandwidth goal, rather than lowest size meeting this quality goal. If a 70 KiB JPEG meets your size/bandwidth requirements, then it would be interesting to see how much quality increases going to a 70 KiB AVIF.
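                                                                                                                That “best quality within a byte budget” search can be sketched as a binary search over the encoder’s quality setting, assuming encoded size grows monotonically with quality. The encoder below is a stand-in stub, not a real codec:

```python
def best_quality_under_budget(encode, budget, lo=1, hi=100):
    """Binary-search the highest quality whose encoded size fits the budget.

    `encode(q)` returns the compressed size in bytes at quality q and is
    assumed to be monotonically non-decreasing in q.
    """
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if encode(mid) <= budget:
            best = mid          # fits: try a higher quality
            lo = mid + 1
        else:
            hi = mid - 1        # too big: lower the quality
    return best

# Stand-in encoder: pretend size grows linearly with quality.
fake_encode = lambda q: 1000 * q
print(best_quality_under_budget(fake_encode, 70_000))  # → 70
```

                                                                                                                In practice encode() would invoke a real encoder (write a JPEG or AVIF at quality q and measure the file size); real size is not perfectly monotone in quality, but it is close enough for this to work well.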

                                                                                                            1. 1

                                                                                                              You seem to have some kind of stand for your computer on the floor. What is it?

                                                                                                              1. 1

                                                                                                                And why, if I may ask

                                                                                                                1. 3

                                                                                                                  Just a cheap tower riser stand to keep my computer off the carpet and so improve airflow/reduce dust. Also gives me slightly more footroom.